Sep 4 17:18:48.051359 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024
Sep 4 17:18:48.051382 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:18:48.051394 kernel: BIOS-provided physical RAM map:
Sep 4 17:18:48.051400 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 17:18:48.051406 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 17:18:48.051413 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 17:18:48.051420 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Sep 4 17:18:48.051426 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Sep 4 17:18:48.051432 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 17:18:48.051441 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 17:18:48.051447 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 17:18:48.051453 kernel: NX (Execute Disable) protection: active
Sep 4 17:18:48.051460 kernel: APIC: Static calls initialized
Sep 4 17:18:48.051466 kernel: SMBIOS 2.8 present.
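An aside for readers working with a capture like this: the `BIOS-e820` entries above are mechanically parseable, which is handy for checking how much RAM the firmware actually exposed as usable. A minimal Python sketch (the regex and the sample lines are taken from the log above; end addresses are inclusive):

```python
import re

# Matches kernel e820 lines such as:
#   BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log_lines):
    """Sum the sizes of all 'usable' e820 ranges (end address is inclusive)."""
    total = 0
    for line in log_lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1
    return total

# The two usable ranges reported in this boot log:
log = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved",
    "BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable",
]
print(usable_bytes(log) // 1024)  # prints 2571763 (KiB usable)
```

The result (~2.5 GiB) lines up with the `Memory: 2428452K/2571756K available` line later in the log, modulo ranges the kernel trims itself (e.g. page 0).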
Sep 4 17:18:48.051474 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 4 17:18:48.051483 kernel: Hypervisor detected: KVM
Sep 4 17:18:48.051490 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 17:18:48.051501 kernel: kvm-clock: using sched offset of 2781963184 cycles
Sep 4 17:18:48.051508 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 17:18:48.051515 kernel: tsc: Detected 2794.744 MHz processor
Sep 4 17:18:48.051525 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 17:18:48.051532 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 17:18:48.051539 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Sep 4 17:18:48.051548 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 17:18:48.051558 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 17:18:48.051567 kernel: Using GB pages for direct mapping
Sep 4 17:18:48.051577 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:18:48.051585 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Sep 4 17:18:48.051595 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:18:48.051603 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:18:48.051610 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:18:48.051623 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 4 17:18:48.051631 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:18:48.051642 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:18:48.051649 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:18:48.051656 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Sep 4 17:18:48.051663 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Sep 4 17:18:48.051670 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 4 17:18:48.051677 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Sep 4 17:18:48.051684 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Sep 4 17:18:48.051699 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Sep 4 17:18:48.051709 kernel: No NUMA configuration found
Sep 4 17:18:48.051716 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Sep 4 17:18:48.051723 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Sep 4 17:18:48.051732 kernel: Zone ranges:
Sep 4 17:18:48.051742 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 17:18:48.051749 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Sep 4 17:18:48.051759 kernel: Normal empty
Sep 4 17:18:48.051766 kernel: Movable zone start for each node
Sep 4 17:18:48.051773 kernel: Early memory node ranges
Sep 4 17:18:48.051783 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 17:18:48.051792 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Sep 4 17:18:48.051799 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Sep 4 17:18:48.051807 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 17:18:48.051814 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 17:18:48.051821 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Sep 4 17:18:48.051836 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 17:18:48.051849 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 17:18:48.051856 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 17:18:48.051863 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 17:18:48.051873 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 17:18:48.051882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 17:18:48.051889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 17:18:48.051897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 17:18:48.051907 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 17:18:48.051917 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 17:18:48.051924 kernel: TSC deadline timer available
Sep 4 17:18:48.051935 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 4 17:18:48.051945 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 17:18:48.051953 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 17:18:48.051960 kernel: kvm-guest: setup PV sched yield
Sep 4 17:18:48.051967 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Sep 4 17:18:48.051974 kernel: Booting paravirtualized kernel on KVM
Sep 4 17:18:48.051982 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 17:18:48.051989 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 17:18:48.052002 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Sep 4 17:18:48.052009 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Sep 4 17:18:48.052017 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 17:18:48.052024 kernel: kvm-guest: PV spinlocks enabled
Sep 4 17:18:48.052031 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 17:18:48.052039 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:18:48.052047 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:18:48.052054 kernel: random: crng init done
Sep 4 17:18:48.052064 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:18:48.052071 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:18:48.052078 kernel: Fallback order for Node 0: 0
Sep 4 17:18:48.052086 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Sep 4 17:18:48.052093 kernel: Policy zone: DMA32
Sep 4 17:18:48.052100 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:18:48.052108 kernel: Memory: 2428452K/2571756K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 143044K reserved, 0K cma-reserved)
Sep 4 17:18:48.052115 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 17:18:48.052123 kernel: ftrace: allocating 37670 entries in 148 pages
Sep 4 17:18:48.052132 kernel: ftrace: allocated 148 pages with 3 groups
Sep 4 17:18:48.052139 kernel: Dynamic Preempt: voluntary
Sep 4 17:18:48.052146 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:18:48.052154 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:18:48.052162 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 17:18:48.052169 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:18:48.052177 kernel: Rude variant of Tasks RCU enabled.
Sep 4 17:18:48.052184 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:18:48.052191 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:18:48.052201 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 17:18:48.052208 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 17:18:48.052216 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
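The `Kernel command line:` entry above is a flat `key=value` token list (note that dracut prepends `rootflags=rw mount.usrflags=ro`, so those appear twice with identical values). A minimal Python sketch of parsing it into a dict, with the caveat that it ignores quoted values containing spaces, which the real kernel parser handles:

```python
def parse_cmdline(cmdline):
    """Split a kernel command line into a dict; bare flags map to True.
    Later duplicates overwrite earlier ones (harmless here: both copies match)."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

# Abbreviated command line from the log above:
cmdline = (
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "mount.usr=/dev/mapper/usr rootflags=rw mount.usrflags=ro consoleblank=0 "
    "root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected"
)
opts = parse_cmdline(cmdline)
print(opts["root"], opts["console"])  # prints: LABEL=ROOT ttyS0,115200
```

On a live system the same string is readable from `/proc/cmdline`, which is also where the "will be passed to user space" parameters such as `BOOT_IMAGE` end up visible.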
Sep 4 17:18:48.052223 kernel: Console: colour VGA+ 80x25
Sep 4 17:18:48.052230 kernel: printk: console [ttyS0] enabled
Sep 4 17:18:48.052237 kernel: ACPI: Core revision 20230628
Sep 4 17:18:48.052258 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 17:18:48.052266 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 17:18:48.052273 kernel: x2apic enabled
Sep 4 17:18:48.052283 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 17:18:48.052302 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 17:18:48.052310 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 17:18:48.052320 kernel: kvm-guest: setup PV IPIs
Sep 4 17:18:48.052327 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 17:18:48.052335 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 4 17:18:48.052342 kernel: Calibrating delay loop (skipped) preset value.. 5589.48 BogoMIPS (lpj=2794744)
Sep 4 17:18:48.052350 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 17:18:48.052367 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 17:18:48.052375 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 17:18:48.052382 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 17:18:48.052390 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 17:18:48.052400 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 4 17:18:48.052407 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep 4 17:18:48.052415 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 17:18:48.052422 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 17:18:48.052430 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 17:18:48.052440 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 17:18:48.052448 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 17:18:48.052459 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 17:18:48.052467 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 17:18:48.052474 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 17:18:48.052482 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 17:18:48.052490 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 17:18:48.052497 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 17:18:48.052507 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 17:18:48.052515 kernel: Freeing SMP alternatives memory: 32K
Sep 4 17:18:48.052523 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:18:48.052531 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 17:18:48.052538 kernel: SELinux: Initializing.
Sep 4 17:18:48.052546 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:18:48.052554 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:18:48.052561 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 17:18:48.052569 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:18:48.052579 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:18:48.052587 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:18:48.052594 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 17:18:48.052602 kernel: ... version: 0
Sep 4 17:18:48.052609 kernel: ... bit width: 48
Sep 4 17:18:48.052617 kernel: ... generic registers: 6
Sep 4 17:18:48.052624 kernel: ... value mask: 0000ffffffffffff
Sep 4 17:18:48.052632 kernel: ... max period: 00007fffffffffff
Sep 4 17:18:48.052639 kernel: ... fixed-purpose events: 0
Sep 4 17:18:48.052649 kernel: ... event mask: 000000000000003f
Sep 4 17:18:48.052657 kernel: signal: max sigframe size: 1776
Sep 4 17:18:48.052665 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:18:48.052672 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:18:48.052680 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:18:48.052688 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 17:18:48.052695 kernel: .... node #0, CPUs: #1 #2 #3
Sep 4 17:18:48.052705 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 17:18:48.052713 kernel: smpboot: Max logical packages: 1
Sep 4 17:18:48.052723 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS)
Sep 4 17:18:48.052730 kernel: devtmpfs: initialized
Sep 4 17:18:48.052738 kernel: x86/mm: Memory block size: 128MB
Sep 4 17:18:48.052746 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:18:48.052753 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 17:18:48.052761 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:18:48.052769 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:18:48.052776 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:18:48.052784 kernel: audit: type=2000 audit(1725470327.226:1): state=initialized audit_enabled=0 res=1
Sep 4 17:18:48.052794 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:18:48.052801 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 17:18:48.052809 kernel: cpuidle: using governor menu
Sep 4 17:18:48.052817 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:18:48.052824 kernel: dca service started, version 1.12.1
Sep 4 17:18:48.052840 kernel: PCI: Using configuration type 1 for base access
Sep 4 17:18:48.052848 kernel: PCI: Using configuration type 1 for extended access
Sep 4 17:18:48.052855 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 17:18:48.052863 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:18:48.052873 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:18:48.052881 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:18:48.052889 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:18:48.052896 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:18:48.052904 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:18:48.052911 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:18:48.052919 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:18:48.052926 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:18:48.052934 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 17:18:48.052944 kernel: ACPI: Interpreter enabled
Sep 4 17:18:48.052951 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 17:18:48.052959 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 17:18:48.052967 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 17:18:48.052975 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 17:18:48.052988 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 4 17:18:48.052997 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 17:18:48.053203 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:18:48.053220 kernel: acpiphp: Slot [3] registered
Sep 4 17:18:48.053228 kernel: acpiphp: Slot [4] registered
Sep 4 17:18:48.053236 kernel: acpiphp: Slot [5] registered
Sep 4 17:18:48.053269 kernel: acpiphp: Slot [6] registered
Sep 4 17:18:48.053277 kernel: acpiphp: Slot [7] registered
Sep 4 17:18:48.053285 kernel: acpiphp: Slot [8] registered
Sep 4 17:18:48.053292 kernel: acpiphp: Slot [9] registered
Sep 4 17:18:48.053300 kernel: acpiphp: Slot [10] registered
Sep 4 17:18:48.053307 kernel: acpiphp: Slot [11] registered
Sep 4 17:18:48.053318 kernel: acpiphp: Slot [12] registered
Sep 4 17:18:48.053325 kernel: acpiphp: Slot [13] registered
Sep 4 17:18:48.053333 kernel: acpiphp: Slot [14] registered
Sep 4 17:18:48.053340 kernel: acpiphp: Slot [15] registered
Sep 4 17:18:48.053348 kernel: acpiphp: Slot [16] registered
Sep 4 17:18:48.053355 kernel: acpiphp: Slot [17] registered
Sep 4 17:18:48.053362 kernel: acpiphp: Slot [18] registered
Sep 4 17:18:48.053370 kernel: acpiphp: Slot [19] registered
Sep 4 17:18:48.053377 kernel: acpiphp: Slot [20] registered
Sep 4 17:18:48.053385 kernel: acpiphp: Slot [21] registered
Sep 4 17:18:48.053394 kernel: acpiphp: Slot [22] registered
Sep 4 17:18:48.053402 kernel: acpiphp: Slot [23] registered
Sep 4 17:18:48.053410 kernel: acpiphp: Slot [24] registered
Sep 4 17:18:48.053417 kernel: acpiphp: Slot [25] registered
Sep 4 17:18:48.053424 kernel: acpiphp: Slot [26] registered
Sep 4 17:18:48.053432 kernel: acpiphp: Slot [27] registered
Sep 4 17:18:48.053439 kernel: acpiphp: Slot [28] registered
Sep 4 17:18:48.053447 kernel: acpiphp: Slot [29] registered
Sep 4 17:18:48.053454 kernel: acpiphp: Slot [30] registered
Sep 4 17:18:48.053464 kernel: acpiphp: Slot [31] registered
Sep 4 17:18:48.053471 kernel: PCI host bridge to bus 0000:00
Sep 4 17:18:48.053630 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 17:18:48.053765 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 17:18:48.053895 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 17:18:48.054013 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Sep 4 17:18:48.054141 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 4 17:18:48.054279 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 17:18:48.054440 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 4 17:18:48.054587 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 4 17:18:48.054734 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 4 17:18:48.054871 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Sep 4 17:18:48.055000 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 4 17:18:48.055126 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 4 17:18:48.055287 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 4 17:18:48.055418 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 4 17:18:48.055573 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 4 17:18:48.055707 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 4 17:18:48.055842 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 4 17:18:48.055984 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Sep 4 17:18:48.056144 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 4 17:18:48.056289 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 4 17:18:48.056419 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 4 17:18:48.056544 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 17:18:48.056688 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Sep 4 17:18:48.056818 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Sep 4 17:18:48.056957 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 4 17:18:48.057092 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 4 17:18:48.057240 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Sep 4 17:18:48.057413 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Sep 4 17:18:48.057540 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 4 17:18:48.057667 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 4 17:18:48.057812 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Sep 4 17:18:48.057950 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Sep 4 17:18:48.058082 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 4 17:18:48.058211 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 4 17:18:48.058391 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 4 17:18:48.058403 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 17:18:48.058411 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 17:18:48.058419 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 17:18:48.058426 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 17:18:48.058434 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 17:18:48.058446 kernel: iommu: Default domain type: Translated
Sep 4 17:18:48.058453 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 17:18:48.058461 kernel: PCI: Using ACPI for IRQ routing
Sep 4 17:18:48.058469 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 17:18:48.058476 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 17:18:48.058484 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Sep 4 17:18:48.058607 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 4 17:18:48.058732 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 4 17:18:48.058863 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 17:18:48.058877 kernel: vgaarb: loaded
Sep 4 17:18:48.058885 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 17:18:48.058893 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 17:18:48.058900 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 17:18:48.058908 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:18:48.058916 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:18:48.058924 kernel: pnp: PnP ACPI init
Sep 4 17:18:48.059068 kernel: pnp 00:02: [dma 2]
Sep 4 17:18:48.059084 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 17:18:48.059092 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 17:18:48.059099 kernel: NET: Registered PF_INET protocol family
Sep 4 17:18:48.059107 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:18:48.059115 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 17:18:48.059122 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:18:48.059130 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:18:48.059138 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 17:18:48.059146 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 17:18:48.059156 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:18:48.059164 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:18:48.059171 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:18:48.059179 kernel: NET: Registered PF_XDP protocol family
Sep 4 17:18:48.059325 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 17:18:48.059441 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 17:18:48.059554 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 17:18:48.059668 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Sep 4 17:18:48.059814 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 4 17:18:48.059960 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 4 17:18:48.060089 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 17:18:48.060100 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:18:48.060107 kernel: Initialise system trusted keyrings
Sep 4 17:18:48.060115 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 17:18:48.060123 kernel: Key type asymmetric registered
Sep 4 17:18:48.060131 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:18:48.060139 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 17:18:48.060151 kernel: io scheduler mq-deadline registered
Sep 4 17:18:48.060159 kernel: io scheduler kyber registered
Sep 4 17:18:48.060166 kernel: io scheduler bfq registered
Sep 4 17:18:48.060174 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 17:18:48.060182 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 4 17:18:48.060190 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 4 17:18:48.060198 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 4 17:18:48.060205 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:18:48.060213 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 17:18:48.060223 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 17:18:48.060231 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 17:18:48.060239 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 17:18:48.060300 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 17:18:48.060445 kernel: rtc_cmos 00:05: RTC can wake from S4
Sep 4 17:18:48.060566 kernel: rtc_cmos 00:05: registered as rtc0
Sep 4 17:18:48.060684 kernel: rtc_cmos 00:05: setting system clock to 2024-09-04T17:18:47 UTC (1725470327)
Sep 4 17:18:48.060801 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 4 17:18:48.060816 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 17:18:48.060823 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:18:48.060839 kernel: Segment Routing with IPv6
Sep 4 17:18:48.060848 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:18:48.060855 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:18:48.060863 kernel: Key type dns_resolver registered
Sep 4 17:18:48.060871 kernel: IPI shorthand broadcast: enabled
Sep 4 17:18:48.060878 kernel: sched_clock: Marking stable (884005459, 102405955)->(1049395430, -62984016)
Sep 4 17:18:48.060886 kernel: registered taskstats version 1
Sep 4 17:18:48.060896 kernel: Loading compiled-in X.509 certificates
Sep 4 17:18:48.060904 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02'
Sep 4 17:18:48.060912 kernel: Key type .fscrypt registered
Sep 4 17:18:48.060919 kernel: Key type fscrypt-provisioning registered
Sep 4 17:18:48.060927 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:18:48.060935 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:18:48.060942 kernel: ima: No architecture policies found
Sep 4 17:18:48.060949 kernel: clk: Disabling unused clocks
Sep 4 17:18:48.060960 kernel: Freeing unused kernel image (initmem) memory: 49336K
Sep 4 17:18:48.060968 kernel: Write protecting the kernel read-only data: 36864k
Sep 4 17:18:48.060976 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Sep 4 17:18:48.060983 kernel: Run /init as init process
Sep 4 17:18:48.060991 kernel: with arguments:
Sep 4 17:18:48.060998 kernel: /init
Sep 4 17:18:48.061006 kernel: with environment:
Sep 4 17:18:48.061013 kernel: HOME=/
Sep 4 17:18:48.061037 kernel: TERM=linux
Sep 4 17:18:48.061047 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:18:48.061060 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:18:48.061069 systemd[1]: Detected virtualization kvm.
Sep 4 17:18:48.061078 systemd[1]: Detected architecture x86-64.
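The `systemd 255 running in system mode (...)` entry encodes compile-time features as `+NAME` (enabled) and `-NAME` (disabled) tokens. When auditing captures like this one, that string can be split mechanically; a minimal sketch (the abbreviated feature string below is excerpted from the log above):

```python
def split_features(feature_string):
    """Partition a systemd feature string like '+PAM -ACL ...' into
    (enabled, disabled) sets, skipping key=value tokens such as
    'default-hierarchy=unified'."""
    enabled, disabled = set(), set()
    for tok in feature_string.split():
        if tok.startswith("+"):
            enabled.add(tok[1:])
        elif tok.startswith("-"):
            disabled.add(tok[1:])
    return enabled, disabled

features = "+PAM +AUDIT +SELINUX -APPARMOR +IMA -FIDO2 +TPM2 default-hierarchy=unified"
on, off = split_features(features)
print(sorted(off))  # prints: ['APPARMOR', 'FIDO2']
```

This is how one can quickly confirm, for instance, that this Flatcar build of systemd was compiled with SELinux and TPM2 support but without AppArmor.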
Sep 4 17:18:48.061086 systemd[1]: Running in initrd.
Sep 4 17:18:48.061094 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:18:48.061102 systemd[1]: Hostname set to .
Sep 4 17:18:48.061113 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:18:48.061122 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:18:48.061130 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:18:48.061139 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:18:48.061148 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:18:48.061157 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:18:48.061165 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:18:48.061174 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:18:48.061186 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:18:48.061195 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:18:48.061203 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:18:48.061212 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:18:48.061220 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:18:48.061228 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:18:48.061236 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:18:48.061271 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:18:48.061280 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:18:48.061288 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:18:48.061297 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:18:48.061305 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:18:48.061314 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:18:48.061333 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:18:48.061343 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:18:48.061351 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:18:48.061373 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:18:48.061399 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:18:48.061417 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:18:48.061426 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:18:48.061454 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:18:48.061489 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:18:48.061500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:18:48.061524 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:18:48.061535 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:18:48.061544 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:18:48.061553 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:18:48.061591 systemd-journald[193]: Collecting audit messages is disabled.
Sep 4 17:18:48.061610 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:18:48.061622 systemd-journald[193]: Journal started
Sep 4 17:18:48.061639 systemd-journald[193]: Runtime Journal (/run/log/journal/fe28b03a3d404ac59a13930d08a77883) is 6.0M, max 48.4M, 42.3M free.
Sep 4 17:18:48.063308 systemd-modules-load[194]: Inserted module 'overlay'
Sep 4 17:18:48.087277 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:18:48.088465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:18:48.101271 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:18:48.103546 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 4 17:18:48.104485 kernel: Bridge firewalling registered
Sep 4 17:18:48.105509 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:18:48.106587 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:18:48.110355 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:18:48.111093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:18:48.113753 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:18:48.127418 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:18:48.130471 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:18:48.131095 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:18:48.133837 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:18:48.144445 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:18:48.146192 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:18:48.161671 dracut-cmdline[228]: dracut-dracut-053
Sep 4 17:18:48.165779 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:18:48.183036 systemd-resolved[230]: Positive Trust Anchors:
Sep 4 17:18:48.183052 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:18:48.183083 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:18:48.185759 systemd-resolved[230]: Defaulting to hostname 'linux'.
Sep 4 17:18:48.187030 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:18:48.187894 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:18:48.275288 kernel: SCSI subsystem initialized
Sep 4 17:18:48.286273 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:18:48.300274 kernel: iscsi: registered transport (tcp)
Sep 4 17:18:48.326610 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:18:48.326643 kernel: QLogic iSCSI HBA Driver
Sep 4 17:18:48.380268 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:18:48.391381 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:18:48.420703 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:18:48.420752 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:18:48.421769 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:18:48.468294 kernel: raid6: avx2x4 gen() 30348 MB/s
Sep 4 17:18:48.485283 kernel: raid6: avx2x2 gen() 30735 MB/s
Sep 4 17:18:48.520752 kernel: raid6: avx2x1 gen() 25983 MB/s
Sep 4 17:18:48.520777 kernel: raid6: using algorithm avx2x2 gen() 30735 MB/s
Sep 4 17:18:48.538364 kernel: raid6: .... xor() 19920 MB/s, rmw enabled
Sep 4 17:18:48.538428 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 17:18:48.564279 kernel: xor: automatically using best checksumming function avx
Sep 4 17:18:48.748289 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:18:48.762383 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:18:48.770496 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:18:48.786777 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Sep 4 17:18:48.792487 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:18:48.798623 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:18:48.813829 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Sep 4 17:18:48.845556 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:18:48.853432 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:18:48.923489 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:18:48.936106 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:18:48.946752 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:18:48.951159 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:18:48.954540 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:18:48.957470 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:18:48.963273 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 4 17:18:48.970472 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:18:48.976857 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 17:18:48.978744 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 17:18:48.993289 kernel: libata version 3.00 loaded.
Sep 4 17:18:48.993947 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:18:48.998273 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 4 17:18:49.013117 kernel: scsi host0: ata_piix
Sep 4 17:18:49.013382 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:18:49.013396 kernel: GPT:9289727 != 19775487
Sep 4 17:18:49.013406 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:18:49.013416 kernel: GPT:9289727 != 19775487
Sep 4 17:18:49.013426 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:18:49.013439 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:18:49.009550 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:18:49.009700 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:18:49.017264 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 17:18:49.017287 kernel: scsi host1: ata_piix
Sep 4 17:18:49.019601 kernel: AES CTR mode by8 optimization enabled
Sep 4 17:18:49.019619 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Sep 4 17:18:49.019635 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Sep 4 17:18:49.020235 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:18:49.024019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:18:49.025203 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:18:49.027760 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:18:49.036550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:18:49.089018 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:18:49.105531 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:18:49.124028 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:18:49.176306 kernel: ata2: found unknown device (class 0)
Sep 4 17:18:49.178274 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 4 17:18:49.180269 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 4 17:18:49.217666 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (461)
Sep 4 17:18:49.217720 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (475)
Sep 4 17:18:49.225596 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 4 17:18:49.225914 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 17:18:49.228343 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 17:18:49.237055 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 17:18:49.248727 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 17:18:49.249231 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 17:18:49.255350 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Sep 4 17:18:49.255629 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:18:49.272427 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:18:49.492432 disk-uuid[561]: Primary Header is updated.
Sep 4 17:18:49.492432 disk-uuid[561]: Secondary Entries is updated.
Sep 4 17:18:49.492432 disk-uuid[561]: Secondary Header is updated.
Sep 4 17:18:49.496053 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:18:49.500275 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:18:50.500321 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:18:50.500378 disk-uuid[566]: The operation has completed successfully.
Sep 4 17:18:50.539528 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:18:50.539651 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:18:50.568409 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:18:50.572106 sh[577]: Success
Sep 4 17:18:50.625273 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 4 17:18:50.657987 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:18:50.688667 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:18:50.692809 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:18:50.704213 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602
Sep 4 17:18:50.704259 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:18:50.704270 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:18:50.704288 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:18:50.704958 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:18:50.709281 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:18:50.710296 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:18:50.722405 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:18:50.724048 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:18:50.733014 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:18:50.733047 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:18:50.733058 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:18:50.736284 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:18:50.745952 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:18:50.747083 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:18:50.757026 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:18:50.763436 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:18:50.849405 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:18:50.856431 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:18:50.877176 ignition[675]: Ignition 2.18.0
Sep 4 17:18:50.877193 ignition[675]: Stage: fetch-offline
Sep 4 17:18:50.877263 ignition[675]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:18:50.877275 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:18:50.880101 systemd-networkd[767]: lo: Link UP
Sep 4 17:18:50.877498 ignition[675]: parsed url from cmdline: ""
Sep 4 17:18:50.880106 systemd-networkd[767]: lo: Gained carrier
Sep 4 17:18:50.877502 ignition[675]: no config URL provided
Sep 4 17:18:50.881953 systemd-networkd[767]: Enumeration completed
Sep 4 17:18:50.877508 ignition[675]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:18:50.882037 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:18:50.877518 ignition[675]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:18:50.882455 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:18:50.877544 ignition[675]: op(1): [started] loading QEMU firmware config module
Sep 4 17:18:50.882459 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:18:50.877550 ignition[675]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 17:18:50.883601 systemd-networkd[767]: eth0: Link UP
Sep 4 17:18:50.887522 ignition[675]: op(1): [finished] loading QEMU firmware config module
Sep 4 17:18:50.883605 systemd-networkd[767]: eth0: Gained carrier
Sep 4 17:18:50.887555 ignition[675]: QEMU firmware config was not found. Ignoring...
Sep 4 17:18:50.883613 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:18:50.885564 systemd[1]: Reached target network.target - Network.
Sep 4 17:18:50.905293 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 17:18:50.944896 ignition[675]: parsing config with SHA512: bb4d13793f3dc376b9d1b66c3ac7baa0a980c952d1569d1f9d4711e1e30241fcd8d1eb4f6fb09e910b2f3f14a3cd23ea2ec4238cb58cf87d39a852c742b24754
Sep 4 17:18:50.950223 unknown[675]: fetched base config from "system"
Sep 4 17:18:50.950263 unknown[675]: fetched user config from "qemu"
Sep 4 17:18:50.951423 ignition[675]: fetch-offline: fetch-offline passed
Sep 4 17:18:50.951528 ignition[675]: Ignition finished successfully
Sep 4 17:18:50.953498 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:18:50.955929 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 17:18:50.964499 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:18:50.996545 ignition[774]: Ignition 2.18.0
Sep 4 17:18:50.996556 ignition[774]: Stage: kargs
Sep 4 17:18:50.996727 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:18:50.996740 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:18:50.998886 ignition[774]: kargs: kargs passed
Sep 4 17:18:50.998946 ignition[774]: Ignition finished successfully
Sep 4 17:18:51.003327 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:18:51.018453 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:18:51.032997 ignition[782]: Ignition 2.18.0
Sep 4 17:18:51.033016 ignition[782]: Stage: disks
Sep 4 17:18:51.033302 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:18:51.033317 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:18:51.034592 ignition[782]: disks: disks passed
Sep 4 17:18:51.037392 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:18:51.034650 ignition[782]: Ignition finished successfully
Sep 4 17:18:51.038930 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:18:51.040808 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:18:51.042838 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:18:51.044949 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:18:51.047223 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:18:51.066610 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:18:51.085911 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:18:51.094058 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:18:51.108331 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:18:51.226293 kernel: EXT4-fs (vda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none.
Sep 4 17:18:51.227068 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:18:51.228138 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:18:51.239376 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:18:51.241351 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:18:51.243626 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:18:51.243686 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:18:51.249575 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Sep 4 17:18:51.243714 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:18:51.253355 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:18:51.253372 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:18:51.253382 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:18:51.255266 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:18:51.257473 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:18:51.264405 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:18:51.266391 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:18:51.309064 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:18:51.323500 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:18:51.330286 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:18:51.335264 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:18:51.426965 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:18:51.439359 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:18:51.443034 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:18:51.449272 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:18:51.472176 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:18:51.481703 ignition[915]: INFO : Ignition 2.18.0
Sep 4 17:18:51.481703 ignition[915]: INFO : Stage: mount
Sep 4 17:18:51.483489 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:18:51.483489 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:18:51.483489 ignition[915]: INFO : mount: mount passed
Sep 4 17:18:51.483489 ignition[915]: INFO : Ignition finished successfully
Sep 4 17:18:51.485293 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:18:51.544401 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:18:51.702645 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:18:51.715400 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:18:51.722999 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (929)
Sep 4 17:18:51.723067 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:18:51.723083 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:18:51.723850 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:18:51.727310 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:18:51.728453 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:18:51.754642 ignition[946]: INFO : Ignition 2.18.0
Sep 4 17:18:51.754642 ignition[946]: INFO : Stage: files
Sep 4 17:18:51.756881 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:18:51.756881 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:18:51.756881 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:18:51.756881 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:18:51.756881 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:18:51.764331 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:18:51.764331 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:18:51.764331 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:18:51.764331 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:18:51.764331 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 17:18:51.760049 unknown[946]: wrote ssh authorized keys file for user: core
Sep 4 17:18:51.869754 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:18:51.991471 systemd-networkd[767]: eth0: Gained IPv6LL
Sep 4 17:18:52.016195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:18:52.016195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:18:52.021361 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:18:52.021361 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:18:52.021361 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:18:52.021361 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:18:52.021361 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:18:52.021361 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:18:52.021361 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:18:52.021361 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:18:52.021361 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:18:52.021361 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:18:52.021361 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:18:52.021361 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:18:52.021361 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Sep 4 17:18:52.401561 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:18:52.895316 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:18:52.895316 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:18:52.899890 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:18:52.899890 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:18:52.899890 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:18:52.899890 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 4 17:18:52.899890 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:18:52.899890 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:18:52.899890 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 4 17:18:52.899890 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:18:52.939604 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:18:52.946489 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:18:52.948369 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:18:52.948369 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:18:52.948369 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:18:52.948369 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:18:52.948369 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:18:52.948369 ignition[946]: INFO : files: files passed
Sep 4 17:18:52.948369 ignition[946]: INFO : Ignition finished successfully
Sep 4 17:18:52.960608 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:18:52.973472 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:18:52.974834 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:18:52.979551 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:18:52.979685 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:18:52.985667 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 17:18:52.988604 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:18:52.988604 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:18:52.993085 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:18:52.991097 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:18:52.993338 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:18:53.011395 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:18:53.042592 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:18:53.042743 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:18:53.045439 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:18:53.047868 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:18:53.049109 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:18:53.050009 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:18:53.069797 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:18:53.083428 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:18:53.095505 systemd[1]: Stopped target network.target - Network.
Sep 4 17:18:53.097433 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:18:53.097970 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:18:53.098547 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:18:53.098921 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:18:53.099055 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:18:53.104892 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:18:53.105264 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:18:53.105822 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:18:53.106194 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:18:53.106771 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:18:53.107164 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:18:53.107767 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:18:53.108126 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:18:53.108686 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:18:53.109059 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:18:53.109621 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:18:53.109760 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:18:53.129597 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:18:53.130823 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:18:53.133017 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:18:53.133163 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:18:53.135459 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:18:53.135635 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:18:53.137745 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:18:53.137868 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:18:53.140123 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:18:53.141858 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:18:53.145318 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:18:53.147619 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:18:53.149653 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:18:53.152376 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:18:53.152490 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:18:53.154449 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:18:53.154552 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:18:53.156619 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:18:53.156762 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:18:53.158954 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:18:53.159066 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:18:53.175552 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:18:53.178469 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:18:53.180092 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:18:53.181631 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:18:53.183564 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:18:53.183847 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:18:53.187047 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:18:53.187161 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:18:53.187387 systemd-networkd[767]: eth0: DHCPv6 lease lost
Sep 4 17:18:53.193323 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:18:53.193515 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:18:53.198208 ignition[1001]: INFO : Ignition 2.18.0
Sep 4 17:18:53.198208 ignition[1001]: INFO : Stage: umount
Sep 4 17:18:53.198208 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:18:53.198208 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:18:53.198208 ignition[1001]: INFO : umount: umount passed
Sep 4 17:18:53.198208 ignition[1001]: INFO : Ignition finished successfully
Sep 4 17:18:53.196577 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:18:53.196722 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:18:53.200367 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:18:53.200492 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:18:53.206134 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:18:53.206337 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:18:53.211825 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:18:53.212322 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:18:53.212377 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:18:53.213749 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:18:53.213829 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:18:53.215078 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:18:53.215154 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:18:53.217239 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:18:53.217332 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:18:53.219836 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:18:53.219929 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:18:53.232432 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:18:53.234443 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:18:53.234526 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:18:53.236870 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:18:53.236930 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:18:53.239323 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:18:53.239394 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:18:53.241370 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:18:53.241430 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:18:53.243711 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:18:53.256488 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:18:53.256684 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:18:53.266211 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:18:53.266429 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:18:53.267120 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:18:53.267173 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:18:53.270067 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:18:53.270111 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:18:53.270535 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:18:53.270587 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:18:53.271234 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:18:53.271298 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:18:53.272060 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:18:53.272111 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:18:53.273738 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:18:53.283142 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:18:53.283268 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:18:53.286340 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 17:18:53.286427 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:18:53.286807 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:18:53.286857 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:18:53.291625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:18:53.291694 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:18:53.292900 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:18:53.293022 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:18:53.453366 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:18:53.453541 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:18:53.455895 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:18:53.457726 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:18:53.457798 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:18:53.469443 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:18:53.478413 systemd[1]: Switching root.
Sep 4 17:18:53.510993 systemd-journald[193]: Journal stopped
Sep 4 17:18:54.984659 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:18:54.984717 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:18:54.984731 kernel: SELinux: policy capability open_perms=1
Sep 4 17:18:54.984743 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:18:54.984759 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:18:54.984770 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:18:54.984782 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:18:54.984793 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:18:54.984804 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:18:54.984821 kernel: audit: type=1403 audit(1725470334.101:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:18:54.984837 systemd[1]: Successfully loaded SELinux policy in 41.396ms.
Sep 4 17:18:54.984864 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.535ms.
Sep 4 17:18:54.984880 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:18:54.984893 systemd[1]: Detected virtualization kvm.
Sep 4 17:18:54.984905 systemd[1]: Detected architecture x86-64.
Sep 4 17:18:54.984917 systemd[1]: Detected first boot.
Sep 4 17:18:54.984930 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:18:54.984942 zram_generator::config[1044]: No configuration found.
Sep 4 17:18:54.984955 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:18:54.984968 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:18:54.984980 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:18:54.984994 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:18:54.985008 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:18:54.985021 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:18:54.985033 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:18:54.985045 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:18:54.985057 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:18:54.985070 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:18:54.985082 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:18:54.985097 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:18:54.985109 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:18:54.985122 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:18:54.985134 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:18:54.985147 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:18:54.985159 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:18:54.985172 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:18:54.985184 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:18:54.985196 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:18:54.985211 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:18:54.985223 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:18:54.985235 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:18:54.985346 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:18:54.985360 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:18:54.985374 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:18:54.985386 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:18:54.985399 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:18:54.985414 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:18:54.985426 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:18:54.985439 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:18:54.985451 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:18:54.985464 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:18:54.985476 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:18:54.985488 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:18:54.985501 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:18:54.985513 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:18:54.985528 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:18:54.985540 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:18:54.985553 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:18:54.985565 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:18:54.985577 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:18:54.985589 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:18:54.985602 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:18:54.985614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:18:54.985630 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:18:54.985645 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:18:54.985667 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:18:54.985679 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:18:54.985692 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:18:54.985704 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:18:54.985716 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:18:54.985729 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:18:54.985741 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:18:54.985756 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:18:54.985768 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:18:54.985780 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:18:54.985792 kernel: fuse: init (API version 7.39)
Sep 4 17:18:54.985803 kernel: loop: module loaded
Sep 4 17:18:54.985815 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:18:54.985827 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:18:54.985840 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:18:54.985852 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:18:54.985866 kernel: ACPI: bus type drm_connector registered
Sep 4 17:18:54.985878 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:18:54.985909 systemd-journald[1113]: Collecting audit messages is disabled.
Sep 4 17:18:54.985931 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:18:54.985945 systemd[1]: Stopped verity-setup.service.
Sep 4 17:18:54.985957 systemd-journald[1113]: Journal started
Sep 4 17:18:54.985986 systemd-journald[1113]: Runtime Journal (/run/log/journal/fe28b03a3d404ac59a13930d08a77883) is 6.0M, max 48.4M, 42.3M free.
Sep 4 17:18:54.704668 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:18:54.723545 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 17:18:54.724013 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:18:54.989342 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:18:54.992759 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:18:54.993556 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:18:54.994789 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:18:54.996070 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:18:54.997185 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:18:54.998414 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:18:54.999716 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:18:55.000997 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:18:55.002471 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:18:55.004098 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:18:55.004284 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:18:55.005786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:18:55.005953 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:18:55.007418 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:18:55.007584 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:18:55.009098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:18:55.009281 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:18:55.010825 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:18:55.010991 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:18:55.012407 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:18:55.012568 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:18:55.013966 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:18:55.015411 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:18:55.017139 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:18:55.083476 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:18:55.087594 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:18:55.100370 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:18:55.102902 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:18:55.104172 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:18:55.104212 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:18:55.106719 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:18:55.109534 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:18:55.112091 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:18:55.113431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:18:55.115630 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:18:55.119689 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:18:55.121395 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:18:55.124473 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:18:55.126062 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:18:55.130062 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:18:55.134960 systemd-journald[1113]: Time spent on flushing to /var/log/journal/fe28b03a3d404ac59a13930d08a77883 is 39.179ms for 947 entries.
Sep 4 17:18:55.134960 systemd-journald[1113]: System Journal (/var/log/journal/fe28b03a3d404ac59a13930d08a77883) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:18:55.202039 systemd-journald[1113]: Received client request to flush runtime journal.
Sep 4 17:18:55.202135 kernel: loop0: detected capacity change from 0 to 209816
Sep 4 17:18:55.202160 kernel: block loop0: the capability attribute has been deprecated.
Sep 4 17:18:55.136463 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:18:55.142838 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:18:55.148460 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:18:55.153663 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:18:55.176422 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:18:55.178574 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:18:55.180513 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:18:55.186007 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:18:55.191589 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:18:55.207693 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:18:55.216142 udevadm[1160]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 4 17:18:55.228306 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Sep 4 17:18:55.228327 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Sep 4 17:18:55.241742 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:18:55.248271 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:18:55.254414 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:18:55.257025 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:18:55.258063 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:18:55.262155 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:18:55.278299 kernel: loop1: detected capacity change from 0 to 139904
Sep 4 17:18:55.321719 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:18:55.334290 kernel: loop2: detected capacity change from 0 to 80568
Sep 4 17:18:55.334499 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:18:55.362326 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Sep 4 17:18:55.362346 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Sep 4 17:18:55.369338 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:18:55.387436 kernel: loop3: detected capacity change from 0 to 209816
Sep 4 17:18:55.395382 kernel: loop4: detected capacity change from 0 to 139904
Sep 4 17:18:55.406278 kernel: loop5: detected capacity change from 0 to 80568
Sep 4 17:18:55.413878 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 4 17:18:55.414561 (sd-merge)[1184]: Merged extensions into '/usr'.
Sep 4 17:18:55.418812 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:18:55.418828 systemd[1]: Reloading...
Sep 4 17:18:55.482274 zram_generator::config[1208]: No configuration found.
Sep 4 17:18:55.625261 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:18:55.641033 ldconfig[1153]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:18:55.679144 systemd[1]: Reloading finished in 259 ms.
Sep 4 17:18:55.715593 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:18:55.717212 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:18:55.729452 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:18:55.731347 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:18:55.755328 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:18:55.755343 systemd[1]: Reloading...
Sep 4 17:18:55.812682 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:18:55.813090 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:18:55.814119 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:18:55.814451 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Sep 4 17:18:55.814529 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Sep 4 17:18:55.818951 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:18:55.818968 systemd-tmpfiles[1246]: Skipping /boot
Sep 4 17:18:55.828323 zram_generator::config[1273]: No configuration found.
Sep 4 17:18:55.834520 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:18:55.834536 systemd-tmpfiles[1246]: Skipping /boot
Sep 4 17:18:55.946127 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:18:55.996486 systemd[1]: Reloading finished in 240 ms.
Sep 4 17:18:56.016042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:18:56.030517 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:18:56.034504 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:18:56.037377 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:18:56.043416 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:18:56.048962 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:18:56.058752 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:18:56.061636 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:18:56.061810 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:18:56.065339 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:18:56.067664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:18:56.071644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:18:56.072926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:18:56.073028 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:18:56.074059 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:18:56.076522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:18:56.076983 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:18:56.080036 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:18:56.080234 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:18:56.082086 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:18:56.082311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:18:56.084333 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:18:56.097400 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:18:56.100361 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:18:56.100731 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:18:56.101729 augenrules[1337]: No rules
Sep 4 17:18:56.112941 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:18:56.115822 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:18:56.118462 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:18:56.119798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:18:56.124449 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:18:56.127594 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:18:56.129409 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:18:56.131398 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:18:56.134069 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:18:56.136363 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:18:56.146818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:18:56.147298 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:18:56.149587 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:18:56.149919 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:18:56.152405 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:18:56.152722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:18:56.159438 systemd-udevd[1347]: Using default interface naming scheme 'v255'.
Sep 4 17:18:56.161467 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:18:56.167605 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:18:56.168064 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:18:56.180554 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:18:56.183339 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:18:56.188434 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:18:56.195621 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:18:56.197040 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:18:56.197114 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:18:56.197137 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:18:56.197491 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:18:56.199560 systemd[1]: Finished ensure-sysext.service. Sep 4 17:18:56.201394 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:18:56.201584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:18:56.203743 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:18:56.203927 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:18:56.205768 systemd-resolved[1312]: Positive Trust Anchors: Sep 4 17:18:56.205792 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:18:56.205838 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:18:56.211387 systemd-resolved[1312]: Defaulting to hostname 'linux'. 
Sep 4 17:18:56.217976 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:18:56.218235 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:18:56.220004 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:18:56.223183 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:18:56.223506 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:18:56.244489 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1364) Sep 4 17:18:56.237125 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:18:56.249614 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:18:56.251439 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:18:56.251532 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:18:56.253844 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 17:18:56.259147 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 17:18:56.318275 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1381) Sep 4 17:18:56.627096 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 4 17:18:56.634277 kernel: ACPI: button: Power Button [PWRF] Sep 4 17:18:56.646140 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 4 17:18:56.657107 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 4 17:18:56.683041 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 4 17:18:56.682415 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:18:56.691759 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:18:56.694926 systemd-networkd[1388]: lo: Link UP Sep 4 17:18:56.694943 systemd-networkd[1388]: lo: Gained carrier Sep 4 17:18:56.697102 systemd-networkd[1388]: Enumeration completed Sep 4 17:18:56.697235 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:18:56.697692 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:18:56.697706 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:18:56.698875 systemd[1]: Reached target network.target - Network. Sep 4 17:18:56.700370 systemd-networkd[1388]: eth0: Link UP Sep 4 17:18:56.700386 systemd-networkd[1388]: eth0: Gained carrier Sep 4 17:18:56.700405 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:18:56.712521 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:18:56.716035 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:18:56.722683 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:18:56.751276 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 17:18:56.776529 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Sep 4 17:18:56.778080 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:18:57.324177 systemd-resolved[1312]: Clock change detected. Flushing caches. Sep 4 17:18:57.324301 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 17:18:57.324408 systemd-timesyncd[1389]: Initial clock synchronization to Wed 2024-09-04 17:18:57.323985 UTC. Sep 4 17:18:57.419024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:18:57.547003 kernel: kvm_amd: TSC scaling supported Sep 4 17:18:57.547109 kernel: kvm_amd: Nested Virtualization enabled Sep 4 17:18:57.547171 kernel: kvm_amd: Nested Paging enabled Sep 4 17:18:57.548028 kernel: kvm_amd: LBR virtualization supported Sep 4 17:18:57.548060 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 17:18:57.548604 kernel: kvm_amd: Virtual GIF supported Sep 4 17:18:57.646241 kernel: EDAC MC: Ver: 3.0.0 Sep 4 17:18:57.701585 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:18:57.715780 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:18:57.732487 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:18:57.769585 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:18:57.774461 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:18:57.777034 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:18:57.783199 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:18:57.784893 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:18:57.789670 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Sep 4 17:18:57.791804 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:18:57.794357 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:18:57.796211 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:18:57.796259 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:18:57.798400 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:18:57.807806 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:18:57.812406 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:18:57.825643 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:18:57.829837 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:18:57.832134 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:18:57.833472 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:18:57.834651 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:18:57.836081 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:18:57.836128 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:18:57.837806 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:18:57.843772 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:18:57.844109 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:18:57.846802 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:18:57.853712 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 4 17:18:57.863704 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:18:57.868739 jq[1422]: false Sep 4 17:18:57.870293 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:18:57.876676 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:18:57.880736 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:18:57.888747 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:18:57.904556 extend-filesystems[1423]: Found loop3 Sep 4 17:18:57.904556 extend-filesystems[1423]: Found loop4 Sep 4 17:18:57.904556 extend-filesystems[1423]: Found loop5 Sep 4 17:18:57.904556 extend-filesystems[1423]: Found sr0 Sep 4 17:18:57.904556 extend-filesystems[1423]: Found vda Sep 4 17:18:57.904556 extend-filesystems[1423]: Found vda1 Sep 4 17:18:57.904556 extend-filesystems[1423]: Found vda2 Sep 4 17:18:57.904556 extend-filesystems[1423]: Found vda3 Sep 4 17:18:57.904556 extend-filesystems[1423]: Found usr Sep 4 17:18:57.904556 extend-filesystems[1423]: Found vda4 Sep 4 17:18:57.904556 extend-filesystems[1423]: Found vda6 Sep 4 17:18:57.904556 extend-filesystems[1423]: Found vda7 Sep 4 17:18:57.904556 extend-filesystems[1423]: Found vda9 Sep 4 17:18:57.904556 extend-filesystems[1423]: Checking size of /dev/vda9 Sep 4 17:18:57.904736 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:18:57.906983 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:18:57.908169 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:18:57.913109 systemd[1]: Starting update-engine.service - Update Engine... 
Sep 4 17:18:57.916826 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:18:57.919378 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:18:57.922041 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:18:57.922283 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:18:57.927405 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:18:57.927916 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:18:57.946735 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:18:57.947039 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:18:57.964231 extend-filesystems[1423]: Resized partition /dev/vda9 Sep 4 17:18:57.965894 dbus-daemon[1421]: [system] SELinux support is enabled Sep 4 17:18:57.966121 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:18:57.970756 jq[1438]: true Sep 4 17:18:57.975693 extend-filesystems[1455]: resize2fs 1.47.0 (5-Feb-2023) Sep 4 17:18:57.979600 update_engine[1436]: I0904 17:18:57.979462 1436 main.cc:92] Flatcar Update Engine starting Sep 4 17:18:57.981909 update_engine[1436]: I0904 17:18:57.981551 1436 update_check_scheduler.cc:74] Next update check in 2m2s Sep 4 17:18:57.982265 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:18:57.987677 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 4 17:18:58.010629 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 17:18:58.010762 tar[1442]: linux-amd64/helm Sep 4 17:18:57.987752 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:18:58.032550 jq[1456]: true Sep 4 17:18:57.989438 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:18:57.989466 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:18:57.995959 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:18:58.020735 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:18:58.062726 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1375) Sep 4 17:18:58.129732 systemd-logind[1435]: Watching system buttons on /dev/input/event1 (Power Button) Sep 4 17:18:58.129761 systemd-logind[1435]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 17:18:58.131363 systemd-logind[1435]: New seat seat0. Sep 4 17:18:58.132317 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:18:58.136200 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:18:58.193919 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:18:58.206990 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:18:58.236279 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:18:58.249988 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:18:58.250280 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:18:58.253542 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 17:18:58.270682 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Sep 4 17:18:58.276000 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:34100.service - OpenSSH per-connection server daemon (10.0.0.1:34100). Sep 4 17:18:58.280334 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:18:58.335561 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 17:18:58.335561 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:18:58.335561 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 17:18:58.348258 extend-filesystems[1423]: Resized filesystem in /dev/vda9 Sep 4 17:18:58.337045 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:18:58.337326 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:18:58.377450 bash[1475]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:18:58.379931 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:18:58.382649 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:18:58.415876 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:18:58.449228 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:18:58.461033 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:18:58.464692 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:18:58.576343 sshd[1493]: Accepted publickey for core from 10.0.0.1 port 34100 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:18:58.582608 sshd[1493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:18:58.611911 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:18:58.612710 systemd-logind[1435]: New session 1 of user core. Sep 4 17:18:58.629325 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Sep 4 17:18:58.679890 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:18:58.729472 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:18:58.744475 systemd-networkd[1388]: eth0: Gained IPv6LL Sep 4 17:18:58.748211 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:18:58.751902 (systemd)[1510]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:18:58.754522 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:18:58.779418 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 17:18:58.790617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:18:58.796897 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:18:58.895087 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 17:18:58.895512 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 17:18:58.903914 containerd[1453]: time="2024-09-04T17:18:58.896804562Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Sep 4 17:18:58.904381 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:18:58.924858 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:18:58.955623 containerd[1453]: time="2024-09-04T17:18:58.955274027Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:18:58.955623 containerd[1453]: time="2024-09-04T17:18:58.955343196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:18:58.963419 containerd[1453]: time="2024-09-04T17:18:58.960897200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:18:58.963419 containerd[1453]: time="2024-09-04T17:18:58.963415046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:18:58.965168 containerd[1453]: time="2024-09-04T17:18:58.965008386Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:18:58.965168 containerd[1453]: time="2024-09-04T17:18:58.965051256Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:18:58.965273 containerd[1453]: time="2024-09-04T17:18:58.965195828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:18:58.965371 containerd[1453]: time="2024-09-04T17:18:58.965346731Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:18:58.965419 containerd[1453]: time="2024-09-04T17:18:58.965373261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:18:58.965581 containerd[1453]: time="2024-09-04T17:18:58.965553980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:18:58.966066 containerd[1453]: time="2024-09-04T17:18:58.966030544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:18:58.966106 containerd[1453]: time="2024-09-04T17:18:58.966068716Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 4 17:18:58.966134 containerd[1453]: time="2024-09-04T17:18:58.966118519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:18:58.966399 containerd[1453]: time="2024-09-04T17:18:58.966367076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:18:58.967186 containerd[1453]: time="2024-09-04T17:18:58.967152470Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:18:58.967296 containerd[1453]: time="2024-09-04T17:18:58.967266885Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 4 17:18:58.967336 containerd[1453]: time="2024-09-04T17:18:58.967290820Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:18:58.987167 containerd[1453]: time="2024-09-04T17:18:58.986683836Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:18:58.987167 containerd[1453]: time="2024-09-04T17:18:58.986771230Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:18:58.987167 containerd[1453]: time="2024-09-04T17:18:58.986792069Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:18:58.987167 containerd[1453]: time="2024-09-04T17:18:58.986880435Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Sep 4 17:18:58.987167 containerd[1453]: time="2024-09-04T17:18:58.986902627Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:18:58.987167 containerd[1453]: time="2024-09-04T17:18:58.986917535Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:18:58.987167 containerd[1453]: time="2024-09-04T17:18:58.986932974Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:18:58.987167 containerd[1453]: time="2024-09-04T17:18:58.987167384Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:18:58.987167 containerd[1453]: time="2024-09-04T17:18:58.987188563Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:18:58.987545 containerd[1453]: time="2024-09-04T17:18:58.987207238Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:18:58.987545 containerd[1453]: time="2024-09-04T17:18:58.987232957Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:18:58.987545 containerd[1453]: time="2024-09-04T17:18:58.987252423Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:18:58.987545 containerd[1453]: time="2024-09-04T17:18:58.987276629Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:18:58.987545 containerd[1453]: time="2024-09-04T17:18:58.987294232Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:18:58.987545 containerd[1453]: time="2024-09-04T17:18:58.987311735Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Sep 4 17:18:58.987545 containerd[1453]: time="2024-09-04T17:18:58.987330209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:18:58.987545 containerd[1453]: time="2024-09-04T17:18:58.987348233Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:18:58.987545 containerd[1453]: time="2024-09-04T17:18:58.987366287Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:18:58.987545 containerd[1453]: time="2024-09-04T17:18:58.987386906Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:18:58.987788 containerd[1453]: time="2024-09-04T17:18:58.987557967Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:18:58.987879 containerd[1453]: time="2024-09-04T17:18:58.987848722Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:18:58.987919 containerd[1453]: time="2024-09-04T17:18:58.987888988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.987919 containerd[1453]: time="2024-09-04T17:18:58.987906962Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:18:58.987993 containerd[1453]: time="2024-09-04T17:18:58.987937749Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:18:58.988032 containerd[1453]: time="2024-09-04T17:18:58.988005326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Sep 4 17:18:58.988032 containerd[1453]: time="2024-09-04T17:18:58.988023380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.988086 containerd[1453]: time="2024-09-04T17:18:58.988039370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.988345 containerd[1453]: time="2024-09-04T17:18:58.988056212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.988345 containerd[1453]: time="2024-09-04T17:18:58.988268881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.988345 containerd[1453]: time="2024-09-04T17:18:58.988288087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.988345 containerd[1453]: time="2024-09-04T17:18:58.988305219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.988345 containerd[1453]: time="2024-09-04T17:18:58.988320027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.988345 containerd[1453]: time="2024-09-04T17:18:58.988337730Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:18:58.988586 containerd[1453]: time="2024-09-04T17:18:58.988559436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.988638 containerd[1453]: time="2024-09-04T17:18:58.988589282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.988638 containerd[1453]: time="2024-09-04T17:18:58.988605653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Sep 4 17:18:58.988638 containerd[1453]: time="2024-09-04T17:18:58.988622455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.988638 containerd[1453]: time="2024-09-04T17:18:58.988637503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.988732 containerd[1453]: time="2024-09-04T17:18:58.988655857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.988732 containerd[1453]: time="2024-09-04T17:18:58.988671386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.988732 containerd[1453]: time="2024-09-04T17:18:58.988685082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:18:58.992005 containerd[1453]: time="2024-09-04T17:18:58.991702154Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] 
NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:18:58.992005 containerd[1453]: time="2024-09-04T17:18:58.991800369Z" level=info msg="Connect containerd service" Sep 4 17:18:58.992005 containerd[1453]: time="2024-09-04T17:18:58.991853468Z" level=info msg="using legacy CRI server" Sep 4 17:18:58.992005 containerd[1453]: time="2024-09-04T17:18:58.991863567Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:18:58.992005 containerd[1453]: time="2024-09-04T17:18:58.991971380Z" level=info msg="Get image 
filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:18:58.993696 containerd[1453]: time="2024-09-04T17:18:58.992983549Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:18:58.993696 containerd[1453]: time="2024-09-04T17:18:58.993071875Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:18:58.993696 containerd[1453]: time="2024-09-04T17:18:58.993191530Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:18:58.993696 containerd[1453]: time="2024-09-04T17:18:58.993210044Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:18:58.993696 containerd[1453]: time="2024-09-04T17:18:58.993138210Z" level=info msg="Start subscribing containerd event" Sep 4 17:18:58.993696 containerd[1453]: time="2024-09-04T17:18:58.993323918Z" level=info msg="Start recovering state" Sep 4 17:18:58.993696 containerd[1453]: time="2024-09-04T17:18:58.993533381Z" level=info msg="Start event monitor" Sep 4 17:18:58.993696 containerd[1453]: time="2024-09-04T17:18:58.993552076Z" level=info msg="Start snapshots syncer" Sep 4 17:18:58.993696 containerd[1453]: time="2024-09-04T17:18:58.993563698Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:18:58.993696 containerd[1453]: time="2024-09-04T17:18:58.993573136Z" level=info msg="Start streaming server" Sep 4 17:18:58.994013 containerd[1453]: time="2024-09-04T17:18:58.993932641Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:18:58.994799 containerd[1453]: time="2024-09-04T17:18:58.994271817Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:18:58.994799 containerd[1453]: time="2024-09-04T17:18:58.994346197Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:18:58.994570 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:18:59.001601 containerd[1453]: time="2024-09-04T17:18:58.996467989Z" level=info msg="containerd successfully booted in 0.104036s" Sep 4 17:18:59.017781 systemd[1510]: Queued start job for default target default.target. Sep 4 17:18:59.040656 systemd[1510]: Created slice app.slice - User Application Slice. Sep 4 17:18:59.040687 systemd[1510]: Reached target paths.target - Paths. Sep 4 17:18:59.040701 systemd[1510]: Reached target timers.target - Timers. Sep 4 17:18:59.043379 systemd[1510]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:18:59.064967 systemd[1510]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:18:59.065056 systemd[1510]: Reached target sockets.target - Sockets. Sep 4 17:18:59.065074 systemd[1510]: Reached target basic.target - Basic System. Sep 4 17:18:59.065133 systemd[1510]: Reached target default.target - Main User Target. Sep 4 17:18:59.065179 systemd[1510]: Startup finished in 288ms. Sep 4 17:18:59.067389 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:18:59.130470 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:18:59.312714 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:55614.service - OpenSSH per-connection server daemon (10.0.0.1:55614). 
Sep 4 17:18:59.349660 tar[1442]: linux-amd64/LICENSE Sep 4 17:18:59.349660 tar[1442]: linux-amd64/README.md Sep 4 17:18:59.361530 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 55614 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:18:59.362430 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:18:59.364777 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:18:59.372195 systemd-logind[1435]: New session 2 of user core. Sep 4 17:18:59.383743 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:18:59.440961 sshd[1541]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:59.450529 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:55614.service: Deactivated successfully. Sep 4 17:18:59.452282 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:18:59.454046 systemd-logind[1435]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:18:59.467821 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:55616.service - OpenSSH per-connection server daemon (10.0.0.1:55616). Sep 4 17:18:59.470245 systemd-logind[1435]: Removed session 2. Sep 4 17:18:59.517842 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 55616 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:18:59.520066 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:18:59.524272 systemd-logind[1435]: New session 3 of user core. Sep 4 17:18:59.529644 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:18:59.628633 sshd[1551]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:59.633365 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:55616.service: Deactivated successfully. Sep 4 17:18:59.635745 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:18:59.636375 systemd-logind[1435]: Session 3 logged out. Waiting for processes to exit. 
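The `failed to load cni during init` error that containerd logged above means `/etc/cni/net.d` was empty at startup, which is expected on a node that has not yet joined a cluster and had a network plugin installed. For reference, a minimal bridge conflist of the kind a CNI plugin would later drop into that directory might look like the following; the network name, bridge name, and subnet are illustrative, not taken from this host:

```json
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}
```

Once a file like this exists under `/etc/cni/net.d`, the CRI plugin's "cni network conf syncer" (started a few entries above) picks it up without a containerd restart.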
Sep 4 17:18:59.637412 systemd-logind[1435]: Removed session 3. Sep 4 17:19:00.109753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:19:00.155286 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:19:00.157143 systemd[1]: Startup finished in 1.039s (kernel) + 6.320s (initrd) + 5.553s (userspace) = 12.913s. Sep 4 17:19:00.170063 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:19:00.853433 kubelet[1562]: E0904 17:19:00.853328 1562 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:19:00.858366 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:19:00.858587 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:19:00.858935 systemd[1]: kubelet.service: Consumed 1.670s CPU time. Sep 4 17:19:09.639394 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:49396.service - OpenSSH per-connection server daemon (10.0.0.1:49396). Sep 4 17:19:09.674570 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 49396 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:19:09.676267 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:19:09.680268 systemd-logind[1435]: New session 4 of user core. Sep 4 17:19:09.689758 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:19:09.747321 sshd[1577]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:09.769073 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:49396.service: Deactivated successfully. 
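The kubelet crash above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is the normal state of a node where `kubeadm init`/`kubeadm join` has not yet run: kubeadm writes that file during bootstrap. As a rough sketch of what it contains once written — values illustrative, though the `systemd` cgroup driver and the `/etc/kubernetes/pki/ca.crt` client-CA path do appear later in this same log:

```yaml
# /var/lib/kubelet/config.yaml -- normally generated by kubeadm; sketch only
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
```

Until that file exists, the kubelet will keep exiting with status 1 and systemd will keep restarting it, as the rest of this log shows.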
Sep 4 17:19:09.771173 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:19:09.772661 systemd-logind[1435]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:19:09.784956 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:49410.service - OpenSSH per-connection server daemon (10.0.0.1:49410). Sep 4 17:19:09.785949 systemd-logind[1435]: Removed session 4. Sep 4 17:19:09.814426 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 49410 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:19:09.815769 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:19:09.820000 systemd-logind[1435]: New session 5 of user core. Sep 4 17:19:09.829619 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:19:09.879283 sshd[1584]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:09.886208 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:49410.service: Deactivated successfully. Sep 4 17:19:09.888016 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:19:09.889782 systemd-logind[1435]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:19:09.904769 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:49424.service - OpenSSH per-connection server daemon (10.0.0.1:49424). Sep 4 17:19:09.905739 systemd-logind[1435]: Removed session 5. Sep 4 17:19:09.933654 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 49424 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:19:09.935069 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:19:09.938774 systemd-logind[1435]: New session 6 of user core. Sep 4 17:19:09.955684 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:19:10.009959 sshd[1592]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:10.020218 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:49424.service: Deactivated successfully. 
Sep 4 17:19:10.022021 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:19:10.023462 systemd-logind[1435]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:19:10.047833 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:49434.service - OpenSSH per-connection server daemon (10.0.0.1:49434). Sep 4 17:19:10.048804 systemd-logind[1435]: Removed session 6. Sep 4 17:19:10.075485 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 49434 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:19:10.076908 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:19:10.080708 systemd-logind[1435]: New session 7 of user core. Sep 4 17:19:10.090624 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:19:10.147026 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:19:10.147323 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:19:10.165291 sudo[1602]: pam_unix(sudo:session): session closed for user root Sep 4 17:19:10.167026 sshd[1599]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:10.184275 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:49434.service: Deactivated successfully. Sep 4 17:19:10.186035 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:19:10.187457 systemd-logind[1435]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:19:10.188990 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:49442.service - OpenSSH per-connection server daemon (10.0.0.1:49442). Sep 4 17:19:10.189830 systemd-logind[1435]: Removed session 7. Sep 4 17:19:10.222282 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 49442 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:19:10.223695 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:19:10.227585 systemd-logind[1435]: New session 8 of user core. 
Sep 4 17:19:10.237633 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:19:10.290082 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:19:10.290376 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:19:10.293856 sudo[1611]: pam_unix(sudo:session): session closed for user root Sep 4 17:19:10.299818 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:19:10.300103 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:19:10.318749 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:19:10.320527 auditctl[1614]: No rules Sep 4 17:19:10.321816 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:19:10.322051 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:19:10.323955 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:19:10.353487 augenrules[1632]: No rules Sep 4 17:19:10.355353 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:19:10.356766 sudo[1610]: pam_unix(sudo:session): session closed for user root Sep 4 17:19:10.358491 sshd[1607]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:10.375138 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:49442.service: Deactivated successfully. Sep 4 17:19:10.376974 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:19:10.378440 systemd-logind[1435]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:19:10.387825 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:49456.service - OpenSSH per-connection server daemon (10.0.0.1:49456). Sep 4 17:19:10.388819 systemd-logind[1435]: Removed session 8. 
Sep 4 17:19:10.415769 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 49456 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:19:10.417297 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:19:10.420886 systemd-logind[1435]: New session 9 of user core. Sep 4 17:19:10.430632 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:19:10.483603 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:19:10.483896 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:19:10.594988 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:19:10.595123 (dockerd)[1655]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:19:11.063633 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:19:11.072830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:19:11.292170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
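The `Scheduled restart job, restart counter is at 1` entry above is systemd automatically relaunching `kubelet.service` after its exit-code failure; that behavior comes from `Restart=` in the unit. The directives below are assumed, not read from this host, but the roughly ten-second gap between each failure and the next restart in this log is consistent with `RestartSec=10`:

```ini
# kubelet.service drop-in, illustrative -- not copied from this machine
[Service]
Restart=always
RestartSec=10
```

With `Restart=always`, systemd retries indefinitely, which is why the missing-config crash recurs throughout the log rather than leaving the unit dead.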
Sep 4 17:19:11.332737 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:19:11.339892 dockerd[1655]: time="2024-09-04T17:19:11.339827848Z" level=info msg="Starting up" Sep 4 17:19:12.587529 kubelet[1672]: E0904 17:19:12.587447 1672 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:19:12.595242 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:19:12.595465 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:19:13.005640 dockerd[1655]: time="2024-09-04T17:19:13.005438672Z" level=info msg="Loading containers: start." Sep 4 17:19:13.429541 kernel: Initializing XFRM netlink socket Sep 4 17:19:13.517153 systemd-networkd[1388]: docker0: Link UP Sep 4 17:19:13.539424 dockerd[1655]: time="2024-09-04T17:19:13.539384894Z" level=info msg="Loading containers: done." Sep 4 17:19:13.588240 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1802313777-merged.mount: Deactivated successfully. 
Sep 4 17:19:13.592166 dockerd[1655]: time="2024-09-04T17:19:13.592112954Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:19:13.592380 dockerd[1655]: time="2024-09-04T17:19:13.592349417Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Sep 4 17:19:13.592534 dockerd[1655]: time="2024-09-04T17:19:13.592514557Z" level=info msg="Daemon has completed initialization" Sep 4 17:19:13.627464 dockerd[1655]: time="2024-09-04T17:19:13.627375594Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:19:13.627681 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:19:14.338750 containerd[1453]: time="2024-09-04T17:19:14.338700158Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\"" Sep 4 17:19:15.072664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1086867824.mount: Deactivated successfully. 
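The containerd and dockerd entries above are logfmt-style (`time=… level=… msg=…` key/value pairs). When scanning a transcript like this one, a single field can be pulled out with plain `sed`; a small sketch:

```shell
# Extract the level= field from a logfmt-style daemon log line.
line='time="2024-09-04T17:19:13.592514557Z" level=info msg="Daemon has completed initialization"'
printf '%s\n' "$line" | sed -n 's/.*level=\([a-z]*\) .*/\1/p'   # prints "info"
```

The same pattern works for `msg=` or any other unquoted key, and composes with `grep level=error` to isolate the failures in a long boot log.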
Sep 4 17:19:16.231904 containerd[1453]: time="2024-09-04T17:19:16.231838472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:16.232725 containerd[1453]: time="2024-09-04T17:19:16.232667648Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=34530735" Sep 4 17:19:16.234028 containerd[1453]: time="2024-09-04T17:19:16.233988818Z" level=info msg="ImageCreate event name:\"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:16.237020 containerd[1453]: time="2024-09-04T17:19:16.236977918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:16.238235 containerd[1453]: time="2024-09-04T17:19:16.238207545Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"34527535\" in 1.899452715s" Sep 4 17:19:16.238299 containerd[1453]: time="2024-09-04T17:19:16.238246238Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\"" Sep 4 17:19:16.260455 containerd[1453]: time="2024-09-04T17:19:16.260409253Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\"" Sep 4 17:19:18.455638 containerd[1453]: time="2024-09-04T17:19:18.455559670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:18.456527 containerd[1453]: time="2024-09-04T17:19:18.456412170Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=31849709" Sep 4 17:19:18.458573 containerd[1453]: time="2024-09-04T17:19:18.458516559Z" level=info msg="ImageCreate event name:\"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:18.463816 containerd[1453]: time="2024-09-04T17:19:18.463486226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:18.465056 containerd[1453]: time="2024-09-04T17:19:18.465023051Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"33399655\" in 2.204563923s" Sep 4 17:19:18.465056 containerd[1453]: time="2024-09-04T17:19:18.465061322Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\"" Sep 4 17:19:18.490118 containerd[1453]: time="2024-09-04T17:19:18.490071050Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\"" Sep 4 17:19:20.007082 containerd[1453]: time="2024-09-04T17:19:20.007017002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:20.008564 containerd[1453]: time="2024-09-04T17:19:20.008518260Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=17097777" Sep 4 17:19:20.009909 containerd[1453]: time="2024-09-04T17:19:20.009873704Z" level=info msg="ImageCreate event name:\"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:20.013222 containerd[1453]: time="2024-09-04T17:19:20.013184147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:20.014326 containerd[1453]: time="2024-09-04T17:19:20.014270786Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"18647741\" in 1.524166904s" Sep 4 17:19:20.014326 containerd[1453]: time="2024-09-04T17:19:20.014318265Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\"" Sep 4 17:19:20.035114 containerd[1453]: time="2024-09-04T17:19:20.035065483Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\"" Sep 4 17:19:21.717836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount347209634.mount: Deactivated successfully. 
Sep 4 17:19:22.195658 containerd[1453]: time="2024-09-04T17:19:22.195580617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:22.229639 containerd[1453]: time="2024-09-04T17:19:22.229550691Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=28303449" Sep 4 17:19:22.251178 containerd[1453]: time="2024-09-04T17:19:22.251114963Z" level=info msg="ImageCreate event name:\"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:22.254782 containerd[1453]: time="2024-09-04T17:19:22.254736941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:22.255776 containerd[1453]: time="2024-09-04T17:19:22.255713073Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"28302468\" in 2.220590472s" Sep 4 17:19:22.255812 containerd[1453]: time="2024-09-04T17:19:22.255780178Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\"" Sep 4 17:19:22.278086 containerd[1453]: time="2024-09-04T17:19:22.278038002Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:19:22.741804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:19:22.749693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 4 17:19:22.750889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3833477349.mount: Deactivated successfully. Sep 4 17:19:22.751271 containerd[1453]: time="2024-09-04T17:19:22.751213941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:22.752392 containerd[1453]: time="2024-09-04T17:19:22.752347188Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Sep 4 17:19:22.754168 containerd[1453]: time="2024-09-04T17:19:22.754093235Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:22.756533 containerd[1453]: time="2024-09-04T17:19:22.756484373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:22.760006 containerd[1453]: time="2024-09-04T17:19:22.758446054Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 480.363639ms" Sep 4 17:19:22.760141 containerd[1453]: time="2024-09-04T17:19:22.760018566Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:19:22.785326 containerd[1453]: time="2024-09-04T17:19:22.785045807Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:19:22.905964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:19:22.912010 (kubelet)[1927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:19:22.968243 kubelet[1927]: E0904 17:19:22.968062 1927 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:19:22.973897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:19:22.974125 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:19:24.044685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3721014877.mount: Deactivated successfully. Sep 4 17:19:26.002302 containerd[1453]: time="2024-09-04T17:19:26.002239619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:26.003137 containerd[1453]: time="2024-09-04T17:19:26.003094023Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Sep 4 17:19:26.004376 containerd[1453]: time="2024-09-04T17:19:26.004341314Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:26.007638 containerd[1453]: time="2024-09-04T17:19:26.007609978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:26.008661 containerd[1453]: time="2024-09-04T17:19:26.008614173Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id 
\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.223525466s" Sep 4 17:19:26.008703 containerd[1453]: time="2024-09-04T17:19:26.008666651Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 17:19:26.033337 containerd[1453]: time="2024-09-04T17:19:26.033301375Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Sep 4 17:19:28.002123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1812625810.mount: Deactivated successfully. Sep 4 17:19:29.180606 containerd[1453]: time="2024-09-04T17:19:29.180542111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:29.181547 containerd[1453]: time="2024-09-04T17:19:29.181516690Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Sep 4 17:19:29.182865 containerd[1453]: time="2024-09-04T17:19:29.182834113Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:29.187197 containerd[1453]: time="2024-09-04T17:19:29.187153670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:29.187948 containerd[1453]: time="2024-09-04T17:19:29.187888549Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 3.154554041s" Sep 4 17:19:29.188023 containerd[1453]: time="2024-09-04T17:19:29.187949844Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Sep 4 17:19:31.476796 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:19:31.486800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:19:31.505113 systemd[1]: Reloading requested from client PID 2075 ('systemctl') (unit session-9.scope)... Sep 4 17:19:31.505130 systemd[1]: Reloading... Sep 4 17:19:31.600270 zram_generator::config[2112]: No configuration found. Sep 4 17:19:31.927957 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:19:32.028603 systemd[1]: Reloading finished in 523 ms. Sep 4 17:19:32.085733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:19:32.088539 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:19:32.091968 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:19:32.092337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:19:32.094668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:19:32.250256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:19:32.256631 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:19:32.301676 kubelet[2162]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:19:32.301676 kubelet[2162]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:19:32.301676 kubelet[2162]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:19:32.303426 kubelet[2162]: I0904 17:19:32.303365 2162 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:19:32.671891 kubelet[2162]: I0904 17:19:32.671841 2162 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:19:32.671891 kubelet[2162]: I0904 17:19:32.671877 2162 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:19:32.672154 kubelet[2162]: I0904 17:19:32.672138 2162 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:19:32.685427 kubelet[2162]: E0904 17:19:32.685382 2162 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:32.686149 kubelet[2162]: I0904 17:19:32.686109 2162 dynamic_cafile_content.go:157] "Starting 
controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:19:32.697429 kubelet[2162]: I0904 17:19:32.697391 2162 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:19:32.698861 kubelet[2162]: I0904 17:19:32.698835 2162 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:19:32.699054 kubelet[2162]: I0904 17:19:32.699021 2162 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:19:32.699154 kubelet[2162]: I0904 17:19:32.699058 2162 topology_manager.go:138] "Creating 
topology manager with none policy" Sep 4 17:19:32.699154 kubelet[2162]: I0904 17:19:32.699068 2162 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:19:32.701805 kubelet[2162]: I0904 17:19:32.701769 2162 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:19:32.703078 kubelet[2162]: I0904 17:19:32.703051 2162 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:19:32.703078 kubelet[2162]: I0904 17:19:32.703075 2162 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:19:32.703131 kubelet[2162]: I0904 17:19:32.703108 2162 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:19:32.703201 kubelet[2162]: I0904 17:19:32.703176 2162 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:19:32.705391 kubelet[2162]: I0904 17:19:32.705345 2162 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:19:32.705917 kubelet[2162]: W0904 17:19:32.705805 2162 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:32.705917 kubelet[2162]: W0904 17:19:32.705841 2162 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:32.705917 kubelet[2162]: E0904 17:19:32.705864 2162 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:32.705917 kubelet[2162]: E0904 17:19:32.705882 2162 
reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:32.707729 kubelet[2162]: W0904 17:19:32.707687 2162 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:19:32.708608 kubelet[2162]: I0904 17:19:32.708352 2162 server.go:1232] "Started kubelet" Sep 4 17:19:32.708737 kubelet[2162]: I0904 17:19:32.708710 2162 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:19:32.709229 kubelet[2162]: I0904 17:19:32.709205 2162 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:19:32.709277 kubelet[2162]: I0904 17:19:32.709266 2162 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:19:32.709429 kubelet[2162]: E0904 17:19:32.709416 2162 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:19:32.709475 kubelet[2162]: E0904 17:19:32.709437 2162 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:19:32.710818 kubelet[2162]: I0904 17:19:32.710530 2162 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:19:32.713532 kubelet[2162]: I0904 17:19:32.711157 2162 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:19:32.713532 kubelet[2162]: I0904 17:19:32.711722 2162 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:19:32.713532 kubelet[2162]: E0904 17:19:32.712039 2162 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17f21a297bbda11a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 19, 32, 708327706, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 19, 32, 708327706, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.55:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.55:6443: connect: connection refused'(may retry after sleeping) Sep 4 17:19:32.713532 kubelet[2162]: W0904 17:19:32.712898 2162 reflector.go:535] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:32.713780 kubelet[2162]: E0904 17:19:32.712945 2162 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:32.713780 kubelet[2162]: I0904 17:19:32.713221 2162 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:19:32.713780 kubelet[2162]: E0904 17:19:32.713257 2162 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:19:32.713780 kubelet[2162]: I0904 17:19:32.713329 2162 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:19:32.714736 kubelet[2162]: E0904 17:19:32.714715 2162 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms" Sep 4 17:19:32.737156 kubelet[2162]: I0904 17:19:32.737115 2162 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:19:32.739423 kubelet[2162]: I0904 17:19:32.739397 2162 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:19:32.739623 kubelet[2162]: I0904 17:19:32.739592 2162 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:19:32.739931 kubelet[2162]: I0904 17:19:32.739905 2162 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:19:32.740089 kubelet[2162]: E0904 17:19:32.740071 2162 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:19:32.741217 kubelet[2162]: W0904 17:19:32.741177 2162 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:32.741217 kubelet[2162]: E0904 17:19:32.741221 2162 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:32.756421 kubelet[2162]: I0904 17:19:32.756387 2162 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:19:32.756421 kubelet[2162]: I0904 17:19:32.756405 2162 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:19:32.756421 kubelet[2162]: I0904 17:19:32.756427 2162 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:19:32.814852 kubelet[2162]: I0904 17:19:32.814810 2162 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:19:32.815244 kubelet[2162]: E0904 17:19:32.815205 2162 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 4 17:19:32.840256 kubelet[2162]: E0904 17:19:32.840225 2162 kubelet.go:2327] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Sep 4 17:19:32.916070 kubelet[2162]: E0904 17:19:32.916013 2162 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms" Sep 4 17:19:33.018334 kubelet[2162]: I0904 17:19:33.018184 2162 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:19:33.018624 kubelet[2162]: E0904 17:19:33.018603 2162 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 4 17:19:33.040384 kubelet[2162]: E0904 17:19:33.040350 2162 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:19:33.316879 kubelet[2162]: E0904 17:19:33.316834 2162 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms" Sep 4 17:19:33.329179 kubelet[2162]: I0904 17:19:33.329140 2162 policy_none.go:49] "None policy: Start" Sep 4 17:19:33.330002 kubelet[2162]: I0904 17:19:33.329952 2162 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:19:33.330002 kubelet[2162]: I0904 17:19:33.329988 2162 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:19:33.337976 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:19:33.352336 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:19:33.355519 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 17:19:33.372928 kubelet[2162]: I0904 17:19:33.372767 2162 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:19:33.373274 kubelet[2162]: I0904 17:19:33.373167 2162 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:19:33.373883 kubelet[2162]: E0904 17:19:33.373856 2162 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 17:19:33.420149 kubelet[2162]: I0904 17:19:33.420081 2162 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:19:33.420475 kubelet[2162]: E0904 17:19:33.420454 2162 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 4 17:19:33.440778 kubelet[2162]: I0904 17:19:33.440711 2162 topology_manager.go:215] "Topology Admit Handler" podUID="1a50c3762c2f35755445fd4fe44f0c74" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:19:33.442401 kubelet[2162]: I0904 17:19:33.442359 2162 topology_manager.go:215] "Topology Admit Handler" podUID="f5bf8d52acd7337c82951a97b42c345d" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:19:33.443322 kubelet[2162]: I0904 17:19:33.443292 2162 topology_manager.go:215] "Topology Admit Handler" podUID="cacd2a680dbc59f99275412e0ba6e38b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:19:33.451359 systemd[1]: Created slice kubepods-burstable-pod1a50c3762c2f35755445fd4fe44f0c74.slice - libcontainer container kubepods-burstable-pod1a50c3762c2f35755445fd4fe44f0c74.slice. 
Sep 4 17:19:33.468857 kubelet[2162]: E0904 17:19:33.468743 2162 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17f21a297bbda11a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 19, 32, 708327706, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 19, 32, 708327706, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.55:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.55:6443: connect: connection refused'(may retry after sleeping) Sep 4 17:19:33.471635 systemd[1]: Created slice kubepods-burstable-podf5bf8d52acd7337c82951a97b42c345d.slice - libcontainer container kubepods-burstable-podf5bf8d52acd7337c82951a97b42c345d.slice. Sep 4 17:19:33.496167 systemd[1]: Created slice kubepods-burstable-podcacd2a680dbc59f99275412e0ba6e38b.slice - libcontainer container kubepods-burstable-podcacd2a680dbc59f99275412e0ba6e38b.slice. 
Sep 4 17:19:33.518605 kubelet[2162]: I0904 17:19:33.518567 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a50c3762c2f35755445fd4fe44f0c74-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a50c3762c2f35755445fd4fe44f0c74\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:33.518605 kubelet[2162]: I0904 17:19:33.518604 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a50c3762c2f35755445fd4fe44f0c74-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a50c3762c2f35755445fd4fe44f0c74\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:33.518605 kubelet[2162]: I0904 17:19:33.518624 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:33.518605 kubelet[2162]: I0904 17:19:33.518643 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:33.518893 kubelet[2162]: I0904 17:19:33.518663 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacd2a680dbc59f99275412e0ba6e38b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cacd2a680dbc59f99275412e0ba6e38b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:19:33.518893 
kubelet[2162]: I0904 17:19:33.518681 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a50c3762c2f35755445fd4fe44f0c74-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a50c3762c2f35755445fd4fe44f0c74\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:33.518893 kubelet[2162]: I0904 17:19:33.518699 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:33.518893 kubelet[2162]: I0904 17:19:33.518732 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:33.518893 kubelet[2162]: I0904 17:19:33.518755 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:33.718819 kubelet[2162]: W0904 17:19:33.718659 2162 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:33.718819 kubelet[2162]: E0904 17:19:33.718739 2162 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:33.770179 kubelet[2162]: E0904 17:19:33.770129 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:33.770900 containerd[1453]: time="2024-09-04T17:19:33.770848952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a50c3762c2f35755445fd4fe44f0c74,Namespace:kube-system,Attempt:0,}" Sep 4 17:19:33.775283 kubelet[2162]: E0904 17:19:33.775236 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:33.775807 containerd[1453]: time="2024-09-04T17:19:33.775760148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f5bf8d52acd7337c82951a97b42c345d,Namespace:kube-system,Attempt:0,}" Sep 4 17:19:33.799393 kubelet[2162]: E0904 17:19:33.799318 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:33.800034 containerd[1453]: time="2024-09-04T17:19:33.799983714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cacd2a680dbc59f99275412e0ba6e38b,Namespace:kube-system,Attempt:0,}" Sep 4 17:19:33.982877 kubelet[2162]: W0904 17:19:33.982725 2162 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:33.982877 kubelet[2162]: 
E0904 17:19:33.982782 2162 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:33.999139 kubelet[2162]: W0904 17:19:33.999096 2162 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:33.999139 kubelet[2162]: E0904 17:19:33.999132 2162 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:34.118319 kubelet[2162]: E0904 17:19:34.118286 2162 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="1.6s" Sep 4 17:19:34.170105 kubelet[2162]: W0904 17:19:34.170000 2162 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:34.170105 kubelet[2162]: E0904 17:19:34.170102 2162 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:34.222683 kubelet[2162]: I0904 17:19:34.222654 2162 
kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:19:34.222951 kubelet[2162]: E0904 17:19:34.222934 2162 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 4 17:19:34.650579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1819265433.mount: Deactivated successfully. Sep 4 17:19:34.657942 containerd[1453]: time="2024-09-04T17:19:34.657898965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:19:34.659076 containerd[1453]: time="2024-09-04T17:19:34.659020914Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:19:34.659937 containerd[1453]: time="2024-09-04T17:19:34.659910623Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:19:34.660632 containerd[1453]: time="2024-09-04T17:19:34.660594320Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 17:19:34.661374 containerd[1453]: time="2024-09-04T17:19:34.661309938Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:19:34.662185 containerd[1453]: time="2024-09-04T17:19:34.662146956Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:19:34.663271 containerd[1453]: time="2024-09-04T17:19:34.663212327Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:19:34.667348 containerd[1453]: time="2024-09-04T17:19:34.667317757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:19:34.668352 containerd[1453]: time="2024-09-04T17:19:34.668309690Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 892.447778ms" Sep 4 17:19:34.669943 containerd[1453]: time="2024-09-04T17:19:34.669889727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 869.780736ms" Sep 4 17:19:34.671054 containerd[1453]: time="2024-09-04T17:19:34.671000885Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 900.046322ms" Sep 4 17:19:34.811613 kubelet[2162]: E0904 17:19:34.811547 2162 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.55:6443: connect: connection refused Sep 4 17:19:34.878649 containerd[1453]: time="2024-09-04T17:19:34.873833250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:34.878649 containerd[1453]: time="2024-09-04T17:19:34.874033070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:34.878649 containerd[1453]: time="2024-09-04T17:19:34.874075881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:34.878649 containerd[1453]: time="2024-09-04T17:19:34.874089758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:34.890539 containerd[1453]: time="2024-09-04T17:19:34.890300105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:34.890539 containerd[1453]: time="2024-09-04T17:19:34.890452986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:34.890852 containerd[1453]: time="2024-09-04T17:19:34.890690267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:34.890852 containerd[1453]: time="2024-09-04T17:19:34.890713881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:34.925138 systemd[1]: Started cri-containerd-606dca337616f402c0dfcfc86a26e29490b83cafa60e50ae569cac4a4c2c446f.scope - libcontainer container 606dca337616f402c0dfcfc86a26e29490b83cafa60e50ae569cac4a4c2c446f. Sep 4 17:19:34.926404 containerd[1453]: time="2024-09-04T17:19:34.926081419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:34.926404 containerd[1453]: time="2024-09-04T17:19:34.926165459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:34.926404 containerd[1453]: time="2024-09-04T17:19:34.926206136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:34.926404 containerd[1453]: time="2024-09-04T17:19:34.926220282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:34.931890 systemd[1]: Started cri-containerd-e65f5e2ff5368ff1a41baca5435c0afe19d96d81e0243d3f3960e59daf0e147c.scope - libcontainer container e65f5e2ff5368ff1a41baca5435c0afe19d96d81e0243d3f3960e59daf0e147c. Sep 4 17:19:34.973663 systemd[1]: Started cri-containerd-b8f858102a59c2f29b55a560ecd2ad951992d1dd00e805407d8a1963fa7f20e1.scope - libcontainer container b8f858102a59c2f29b55a560ecd2ad951992d1dd00e805407d8a1963fa7f20e1. 
Sep 4 17:19:34.988437 containerd[1453]: time="2024-09-04T17:19:34.987935053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a50c3762c2f35755445fd4fe44f0c74,Namespace:kube-system,Attempt:0,} returns sandbox id \"e65f5e2ff5368ff1a41baca5435c0afe19d96d81e0243d3f3960e59daf0e147c\"" Sep 4 17:19:34.990156 kubelet[2162]: E0904 17:19:34.990031 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:34.993108 containerd[1453]: time="2024-09-04T17:19:34.992897910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cacd2a680dbc59f99275412e0ba6e38b,Namespace:kube-system,Attempt:0,} returns sandbox id \"606dca337616f402c0dfcfc86a26e29490b83cafa60e50ae569cac4a4c2c446f\"" Sep 4 17:19:34.993428 kubelet[2162]: E0904 17:19:34.993412 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:34.996783 containerd[1453]: time="2024-09-04T17:19:34.996658186Z" level=info msg="CreateContainer within sandbox \"e65f5e2ff5368ff1a41baca5435c0afe19d96d81e0243d3f3960e59daf0e147c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:19:34.997218 containerd[1453]: time="2024-09-04T17:19:34.997114551Z" level=info msg="CreateContainer within sandbox \"606dca337616f402c0dfcfc86a26e29490b83cafa60e50ae569cac4a4c2c446f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:19:35.021161 containerd[1453]: time="2024-09-04T17:19:35.021106758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f5bf8d52acd7337c82951a97b42c345d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8f858102a59c2f29b55a560ecd2ad951992d1dd00e805407d8a1963fa7f20e1\"" Sep 4 17:19:35.022008 
kubelet[2162]: E0904 17:19:35.021981 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:35.024238 containerd[1453]: time="2024-09-04T17:19:35.024161692Z" level=info msg="CreateContainer within sandbox \"b8f858102a59c2f29b55a560ecd2ad951992d1dd00e805407d8a1963fa7f20e1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:19:35.028586 containerd[1453]: time="2024-09-04T17:19:35.028546326Z" level=info msg="CreateContainer within sandbox \"e65f5e2ff5368ff1a41baca5435c0afe19d96d81e0243d3f3960e59daf0e147c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6276e4a8178b97596bde00b0b69726985419ff64d811e63731c561a2cccecfd9\"" Sep 4 17:19:35.029705 containerd[1453]: time="2024-09-04T17:19:35.029662893Z" level=info msg="CreateContainer within sandbox \"606dca337616f402c0dfcfc86a26e29490b83cafa60e50ae569cac4a4c2c446f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a14127eb6c5b548f5ecc28405e68cc03869ff22242b591b1f2a9094b0d895251\"" Sep 4 17:19:35.029877 containerd[1453]: time="2024-09-04T17:19:35.029838856Z" level=info msg="StartContainer for \"6276e4a8178b97596bde00b0b69726985419ff64d811e63731c561a2cccecfd9\"" Sep 4 17:19:35.040796 containerd[1453]: time="2024-09-04T17:19:35.039966651Z" level=info msg="StartContainer for \"a14127eb6c5b548f5ecc28405e68cc03869ff22242b591b1f2a9094b0d895251\"" Sep 4 17:19:35.049383 containerd[1453]: time="2024-09-04T17:19:35.048939214Z" level=info msg="CreateContainer within sandbox \"b8f858102a59c2f29b55a560ecd2ad951992d1dd00e805407d8a1963fa7f20e1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1d59443c5877e3c47fb7e429fd7d82876ecdbb623a0ef8f352394edb23bf1214\"" Sep 4 17:19:35.049945 containerd[1453]: time="2024-09-04T17:19:35.049922047Z" level=info msg="StartContainer for 
\"1d59443c5877e3c47fb7e429fd7d82876ecdbb623a0ef8f352394edb23bf1214\"" Sep 4 17:19:35.070801 systemd[1]: Started cri-containerd-a14127eb6c5b548f5ecc28405e68cc03869ff22242b591b1f2a9094b0d895251.scope - libcontainer container a14127eb6c5b548f5ecc28405e68cc03869ff22242b591b1f2a9094b0d895251. Sep 4 17:19:35.086650 systemd[1]: Started cri-containerd-6276e4a8178b97596bde00b0b69726985419ff64d811e63731c561a2cccecfd9.scope - libcontainer container 6276e4a8178b97596bde00b0b69726985419ff64d811e63731c561a2cccecfd9. Sep 4 17:19:35.090418 systemd[1]: Started cri-containerd-1d59443c5877e3c47fb7e429fd7d82876ecdbb623a0ef8f352394edb23bf1214.scope - libcontainer container 1d59443c5877e3c47fb7e429fd7d82876ecdbb623a0ef8f352394edb23bf1214. Sep 4 17:19:35.130649 containerd[1453]: time="2024-09-04T17:19:35.130568776Z" level=info msg="StartContainer for \"a14127eb6c5b548f5ecc28405e68cc03869ff22242b591b1f2a9094b0d895251\" returns successfully" Sep 4 17:19:35.144290 containerd[1453]: time="2024-09-04T17:19:35.144234088Z" level=info msg="StartContainer for \"6276e4a8178b97596bde00b0b69726985419ff64d811e63731c561a2cccecfd9\" returns successfully" Sep 4 17:19:35.148549 containerd[1453]: time="2024-09-04T17:19:35.148477384Z" level=info msg="StartContainer for \"1d59443c5877e3c47fb7e429fd7d82876ecdbb623a0ef8f352394edb23bf1214\" returns successfully" Sep 4 17:19:35.754361 kubelet[2162]: E0904 17:19:35.754319 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:35.762988 kubelet[2162]: E0904 17:19:35.762955 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:35.772865 kubelet[2162]: E0904 17:19:35.772831 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:35.825212 kubelet[2162]: I0904 17:19:35.825186 2162 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:19:36.633871 kubelet[2162]: E0904 17:19:36.633832 2162 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 17:19:36.707835 kubelet[2162]: I0904 17:19:36.707766 2162 apiserver.go:52] "Watching apiserver" Sep 4 17:19:36.713706 kubelet[2162]: I0904 17:19:36.713650 2162 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:19:36.726109 kubelet[2162]: I0904 17:19:36.726059 2162 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Sep 4 17:19:36.818369 kubelet[2162]: E0904 17:19:36.818314 2162 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:36.818871 kubelet[2162]: E0904 17:19:36.818852 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:37.391737 kubelet[2162]: E0904 17:19:37.391697 2162 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:37.392162 kubelet[2162]: E0904 17:19:37.392121 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:39.353185 systemd[1]: Reloading requested from client PID 2437 ('systemctl') (unit session-9.scope)... Sep 4 17:19:39.353204 systemd[1]: Reloading... 
Sep 4 17:19:39.439482 zram_generator::config[2475]: No configuration found. Sep 4 17:19:39.560269 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:19:39.651952 systemd[1]: Reloading finished in 298 ms. Sep 4 17:19:39.701427 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:19:39.722184 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:19:39.722578 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:19:39.722636 systemd[1]: kubelet.service: Consumed 1.023s CPU time, 117.0M memory peak, 0B memory swap peak. Sep 4 17:19:39.731896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:19:39.882864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:19:39.888325 (kubelet)[2519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:19:39.971436 kubelet[2519]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:19:39.971436 kubelet[2519]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:19:39.971436 kubelet[2519]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:19:39.971870 kubelet[2519]: I0904 17:19:39.971373 2519 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:19:39.977466 kubelet[2519]: I0904 17:19:39.977415 2519 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:19:39.977466 kubelet[2519]: I0904 17:19:39.977452 2519 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:19:39.977719 kubelet[2519]: I0904 17:19:39.977692 2519 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:19:39.979310 kubelet[2519]: I0904 17:19:39.979283 2519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:19:39.980419 kubelet[2519]: I0904 17:19:39.980367 2519 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:19:39.987852 kubelet[2519]: I0904 17:19:39.987820 2519 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:19:39.988383 kubelet[2519]: I0904 17:19:39.988357 2519 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:19:39.988561 kubelet[2519]: I0904 17:19:39.988535 2519 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:19:39.988647 kubelet[2519]: I0904 17:19:39.988566 2519 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:19:39.988647 kubelet[2519]: I0904 17:19:39.988576 2519 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:19:39.988647 kubelet[2519]: I0904 
17:19:39.988625 2519 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:19:39.988751 kubelet[2519]: I0904 17:19:39.988725 2519 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:19:39.988751 kubelet[2519]: I0904 17:19:39.988739 2519 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:19:39.988803 kubelet[2519]: I0904 17:19:39.988764 2519 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:19:39.988803 kubelet[2519]: I0904 17:19:39.988797 2519 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:19:39.989954 kubelet[2519]: I0904 17:19:39.989929 2519 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:19:39.991956 kubelet[2519]: I0904 17:19:39.991929 2519 server.go:1232] "Started kubelet" Sep 4 17:19:39.992393 kubelet[2519]: I0904 17:19:39.992269 2519 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:19:39.992949 kubelet[2519]: I0904 17:19:39.992928 2519 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:19:39.993389 kubelet[2519]: I0904 17:19:39.993374 2519 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:19:39.996014 kubelet[2519]: E0904 17:19:39.995548 2519 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:19:39.996014 kubelet[2519]: E0904 17:19:39.995589 2519 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:19:40.002209 kubelet[2519]: I0904 17:19:40.002161 2519 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:19:40.003045 kubelet[2519]: I0904 17:19:40.002555 2519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:19:40.003045 kubelet[2519]: I0904 17:19:40.002704 2519 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:19:40.005092 kubelet[2519]: I0904 17:19:40.005044 2519 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:19:40.005249 kubelet[2519]: I0904 17:19:40.005220 2519 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:19:40.014369 kubelet[2519]: I0904 17:19:40.014331 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:19:40.016493 kubelet[2519]: I0904 17:19:40.016475 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:19:40.016493 kubelet[2519]: I0904 17:19:40.016516 2519 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:19:40.016493 kubelet[2519]: I0904 17:19:40.016537 2519 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:19:40.016493 kubelet[2519]: E0904 17:19:40.016586 2519 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:19:40.079518 kubelet[2519]: I0904 17:19:40.079458 2519 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:19:40.079734 kubelet[2519]: I0904 17:19:40.079550 2519 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:19:40.079734 kubelet[2519]: I0904 17:19:40.079576 2519 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:19:40.079790 kubelet[2519]: I0904 17:19:40.079779 2519 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:19:40.079815 kubelet[2519]: I0904 
17:19:40.079800 2519 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:19:40.079815 kubelet[2519]: I0904 17:19:40.079809 2519 policy_none.go:49] "None policy: Start" Sep 4 17:19:40.080514 kubelet[2519]: I0904 17:19:40.080463 2519 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:19:40.080565 kubelet[2519]: I0904 17:19:40.080545 2519 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:19:40.080828 kubelet[2519]: I0904 17:19:40.080810 2519 state_mem.go:75] "Updated machine memory state" Sep 4 17:19:40.086271 kubelet[2519]: I0904 17:19:40.086253 2519 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:19:40.087120 kubelet[2519]: I0904 17:19:40.086820 2519 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:19:40.116760 kubelet[2519]: I0904 17:19:40.116702 2519 topology_manager.go:215] "Topology Admit Handler" podUID="1a50c3762c2f35755445fd4fe44f0c74" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:19:40.116948 kubelet[2519]: I0904 17:19:40.116840 2519 topology_manager.go:215] "Topology Admit Handler" podUID="f5bf8d52acd7337c82951a97b42c345d" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:19:40.116948 kubelet[2519]: I0904 17:19:40.116877 2519 topology_manager.go:215] "Topology Admit Handler" podUID="cacd2a680dbc59f99275412e0ba6e38b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:19:40.195058 kubelet[2519]: I0904 17:19:40.195005 2519 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:19:40.306800 kubelet[2519]: I0904 17:19:40.306642 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a50c3762c2f35755445fd4fe44f0c74-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a50c3762c2f35755445fd4fe44f0c74\") " 
pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:40.306800 kubelet[2519]: I0904 17:19:40.306682 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a50c3762c2f35755445fd4fe44f0c74-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a50c3762c2f35755445fd4fe44f0c74\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:40.306800 kubelet[2519]: I0904 17:19:40.306706 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a50c3762c2f35755445fd4fe44f0c74-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a50c3762c2f35755445fd4fe44f0c74\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:40.306800 kubelet[2519]: I0904 17:19:40.306727 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:40.306800 kubelet[2519]: I0904 17:19:40.306745 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:40.307053 kubelet[2519]: I0904 17:19:40.306819 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacd2a680dbc59f99275412e0ba6e38b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cacd2a680dbc59f99275412e0ba6e38b\") " 
pod="kube-system/kube-scheduler-localhost" Sep 4 17:19:40.307053 kubelet[2519]: I0904 17:19:40.306916 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:40.307053 kubelet[2519]: I0904 17:19:40.306977 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:40.307053 kubelet[2519]: I0904 17:19:40.307025 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:40.348295 kubelet[2519]: I0904 17:19:40.348254 2519 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Sep 4 17:19:40.349080 kubelet[2519]: I0904 17:19:40.348406 2519 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Sep 4 17:19:40.423329 kubelet[2519]: E0904 17:19:40.423272 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:40.424349 kubelet[2519]: E0904 17:19:40.424291 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:40.424562 kubelet[2519]: E0904 17:19:40.424428 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:40.989825 kubelet[2519]: I0904 17:19:40.989768 2519 apiserver.go:52] "Watching apiserver" Sep 4 17:19:41.005461 kubelet[2519]: I0904 17:19:41.005438 2519 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:19:41.033149 kubelet[2519]: E0904 17:19:41.031617 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:41.033149 kubelet[2519]: E0904 17:19:41.032182 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:41.069959 kubelet[2519]: E0904 17:19:41.069469 2519 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:41.070139 kubelet[2519]: E0904 17:19:41.070126 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:41.114782 kubelet[2519]: I0904 17:19:41.114634 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.1145800829999999 podCreationTimestamp="2024-09-04 17:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:19:41.114291929 +0000 UTC m=+1.221554119" watchObservedRunningTime="2024-09-04 17:19:41.114580083 +0000 UTC m=+1.221842274" Sep 4 17:19:41.314604 
kubelet[2519]: I0904 17:19:41.312287 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.312245981 podCreationTimestamp="2024-09-04 17:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:19:41.12861597 +0000 UTC m=+1.235878161" watchObservedRunningTime="2024-09-04 17:19:41.312245981 +0000 UTC m=+1.419508171" Sep 4 17:19:41.314604 kubelet[2519]: I0904 17:19:41.312414 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.312394522 podCreationTimestamp="2024-09-04 17:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:19:41.312160559 +0000 UTC m=+1.419422749" watchObservedRunningTime="2024-09-04 17:19:41.312394522 +0000 UTC m=+1.419656712" Sep 4 17:19:42.037850 kubelet[2519]: E0904 17:19:42.037280 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:42.039327 kubelet[2519]: E0904 17:19:42.039297 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:43.245136 update_engine[1436]: I0904 17:19:43.245074 1436 update_attempter.cc:509] Updating boot flags... 
Sep 4 17:19:43.290551 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2593) Sep 4 17:19:43.351917 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2595) Sep 4 17:19:45.507778 sudo[1644]: pam_unix(sudo:session): session closed for user root Sep 4 17:19:45.632557 sshd[1640]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:45.636951 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:49456.service: Deactivated successfully. Sep 4 17:19:45.639157 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:19:45.639417 systemd[1]: session-9.scope: Consumed 4.783s CPU time, 137.9M memory peak, 0B memory swap peak. Sep 4 17:19:45.639937 systemd-logind[1435]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:19:45.640819 systemd-logind[1435]: Removed session 9. Sep 4 17:19:49.765738 kubelet[2519]: E0904 17:19:49.765689 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:49.861986 kubelet[2519]: E0904 17:19:49.861901 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:50.051008 kubelet[2519]: E0904 17:19:50.050781 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:50.051008 kubelet[2519]: E0904 17:19:50.050949 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:52.039454 kubelet[2519]: E0904 17:19:52.039416 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:54.269139 kubelet[2519]: I0904 17:19:54.268939 2519 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:19:54.269724 containerd[1453]: time="2024-09-04T17:19:54.269681608Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:19:54.271909 kubelet[2519]: I0904 17:19:54.270261 2519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:19:55.187321 kubelet[2519]: I0904 17:19:55.187254 2519 topology_manager.go:215] "Topology Admit Handler" podUID="8d45e4b3-a204-4369-8cc9-38ef0f81afeb" podNamespace="kube-system" podName="kube-proxy-fstxw" Sep 4 17:19:55.196884 systemd[1]: Created slice kubepods-besteffort-pod8d45e4b3_a204_4369_8cc9_38ef0f81afeb.slice - libcontainer container kubepods-besteffort-pod8d45e4b3_a204_4369_8cc9_38ef0f81afeb.slice. Sep 4 17:19:55.303582 kubelet[2519]: I0904 17:19:55.303542 2519 topology_manager.go:215] "Topology Admit Handler" podUID="02c0e40f-c50a-455f-89b8-f4840ff8be98" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-f7tbg" Sep 4 17:19:55.309260 kubelet[2519]: I0904 17:19:55.309199 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d45e4b3-a204-4369-8cc9-38ef0f81afeb-lib-modules\") pod \"kube-proxy-fstxw\" (UID: \"8d45e4b3-a204-4369-8cc9-38ef0f81afeb\") " pod="kube-system/kube-proxy-fstxw" Sep 4 17:19:55.309260 kubelet[2519]: I0904 17:19:55.309246 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hwp6\" (UniqueName: \"kubernetes.io/projected/8d45e4b3-a204-4369-8cc9-38ef0f81afeb-kube-api-access-6hwp6\") pod \"kube-proxy-fstxw\" (UID: \"8d45e4b3-a204-4369-8cc9-38ef0f81afeb\") " pod="kube-system/kube-proxy-fstxw" 
Sep 4 17:19:55.309588 kubelet[2519]: I0904 17:19:55.309295 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8d45e4b3-a204-4369-8cc9-38ef0f81afeb-kube-proxy\") pod \"kube-proxy-fstxw\" (UID: \"8d45e4b3-a204-4369-8cc9-38ef0f81afeb\") " pod="kube-system/kube-proxy-fstxw"
Sep 4 17:19:55.309588 kubelet[2519]: I0904 17:19:55.309319 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d45e4b3-a204-4369-8cc9-38ef0f81afeb-xtables-lock\") pod \"kube-proxy-fstxw\" (UID: \"8d45e4b3-a204-4369-8cc9-38ef0f81afeb\") " pod="kube-system/kube-proxy-fstxw"
Sep 4 17:19:55.311907 systemd[1]: Created slice kubepods-besteffort-pod02c0e40f_c50a_455f_89b8_f4840ff8be98.slice - libcontainer container kubepods-besteffort-pod02c0e40f_c50a_455f_89b8_f4840ff8be98.slice.
Sep 4 17:19:55.409715 kubelet[2519]: I0904 17:19:55.409653 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt4pp\" (UniqueName: \"kubernetes.io/projected/02c0e40f-c50a-455f-89b8-f4840ff8be98-kube-api-access-tt4pp\") pod \"tigera-operator-5d56685c77-f7tbg\" (UID: \"02c0e40f-c50a-455f-89b8-f4840ff8be98\") " pod="tigera-operator/tigera-operator-5d56685c77-f7tbg"
Sep 4 17:19:55.409866 kubelet[2519]: I0904 17:19:55.409725 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/02c0e40f-c50a-455f-89b8-f4840ff8be98-var-lib-calico\") pod \"tigera-operator-5d56685c77-f7tbg\" (UID: \"02c0e40f-c50a-455f-89b8-f4840ff8be98\") " pod="tigera-operator/tigera-operator-5d56685c77-f7tbg"
Sep 4 17:19:55.515602 kubelet[2519]: E0904 17:19:55.515477 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:19:55.516045 containerd[1453]: time="2024-09-04T17:19:55.516004682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fstxw,Uid:8d45e4b3-a204-4369-8cc9-38ef0f81afeb,Namespace:kube-system,Attempt:0,}"
Sep 4 17:19:55.542975 containerd[1453]: time="2024-09-04T17:19:55.542740471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:19:55.542975 containerd[1453]: time="2024-09-04T17:19:55.542869795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:19:55.542975 containerd[1453]: time="2024-09-04T17:19:55.542901525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:19:55.542975 containerd[1453]: time="2024-09-04T17:19:55.542918777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:19:55.564682 systemd[1]: Started cri-containerd-340ca035c5334c2a382a4360ed91ff2261d6622e296d8454057c2973a0f1ca83.scope - libcontainer container 340ca035c5334c2a382a4360ed91ff2261d6622e296d8454057c2973a0f1ca83.
Sep 4 17:19:55.589425 containerd[1453]: time="2024-09-04T17:19:55.589378890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fstxw,Uid:8d45e4b3-a204-4369-8cc9-38ef0f81afeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"340ca035c5334c2a382a4360ed91ff2261d6622e296d8454057c2973a0f1ca83\""
Sep 4 17:19:55.590290 kubelet[2519]: E0904 17:19:55.590270 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:19:55.592199 containerd[1453]: time="2024-09-04T17:19:55.592102176Z" level=info msg="CreateContainer within sandbox \"340ca035c5334c2a382a4360ed91ff2261d6622e296d8454057c2973a0f1ca83\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 17:19:55.612261 containerd[1453]: time="2024-09-04T17:19:55.612196666Z" level=info msg="CreateContainer within sandbox \"340ca035c5334c2a382a4360ed91ff2261d6622e296d8454057c2973a0f1ca83\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"14663464529237683af92e982f2d78d052393807d77e9e503afe751d1ff696e8\""
Sep 4 17:19:55.612667 containerd[1453]: time="2024-09-04T17:19:55.612624651Z" level=info msg="StartContainer for \"14663464529237683af92e982f2d78d052393807d77e9e503afe751d1ff696e8\""
Sep 4 17:19:55.615017 containerd[1453]: time="2024-09-04T17:19:55.614974605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-f7tbg,Uid:02c0e40f-c50a-455f-89b8-f4840ff8be98,Namespace:tigera-operator,Attempt:0,}"
Sep 4 17:19:55.643308 containerd[1453]: time="2024-09-04T17:19:55.643180372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:19:55.643572 containerd[1453]: time="2024-09-04T17:19:55.643305286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:19:55.643572 containerd[1453]: time="2024-09-04T17:19:55.643330143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:19:55.643572 containerd[1453]: time="2024-09-04T17:19:55.643343970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:19:55.644699 systemd[1]: Started cri-containerd-14663464529237683af92e982f2d78d052393807d77e9e503afe751d1ff696e8.scope - libcontainer container 14663464529237683af92e982f2d78d052393807d77e9e503afe751d1ff696e8.
Sep 4 17:19:55.668663 systemd[1]: Started cri-containerd-d0991bf230a9dd1bccf8d738519d580eb5f55afa52b1bcd9d9f2326694db0db6.scope - libcontainer container d0991bf230a9dd1bccf8d738519d580eb5f55afa52b1bcd9d9f2326694db0db6.
Sep 4 17:19:55.690832 containerd[1453]: time="2024-09-04T17:19:55.690731068Z" level=info msg="StartContainer for \"14663464529237683af92e982f2d78d052393807d77e9e503afe751d1ff696e8\" returns successfully"
Sep 4 17:19:55.719140 containerd[1453]: time="2024-09-04T17:19:55.719078570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-f7tbg,Uid:02c0e40f-c50a-455f-89b8-f4840ff8be98,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d0991bf230a9dd1bccf8d738519d580eb5f55afa52b1bcd9d9f2326694db0db6\""
Sep 4 17:19:55.721435 containerd[1453]: time="2024-09-04T17:19:55.721070229Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Sep 4 17:19:56.070439 kubelet[2519]: E0904 17:19:56.070404 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:19:56.076772 kubelet[2519]: I0904 17:19:56.076649 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fstxw" podStartSLOduration=1.076608588 podCreationTimestamp="2024-09-04 17:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:19:56.076247659 +0000 UTC m=+16.183509859" watchObservedRunningTime="2024-09-04 17:19:56.076608588 +0000 UTC m=+16.183870768"
Sep 4 17:19:56.426677 systemd[1]: run-containerd-runc-k8s.io-340ca035c5334c2a382a4360ed91ff2261d6622e296d8454057c2973a0f1ca83-runc.x16QDX.mount: Deactivated successfully.
Sep 4 17:19:56.963082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640952336.mount: Deactivated successfully.
Sep 4 17:19:57.506715 containerd[1453]: time="2024-09-04T17:19:57.506641311Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:57.507624 containerd[1453]: time="2024-09-04T17:19:57.507583012Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136521"
Sep 4 17:19:57.508881 containerd[1453]: time="2024-09-04T17:19:57.508835258Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:57.511801 containerd[1453]: time="2024-09-04T17:19:57.511761464Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:57.512536 containerd[1453]: time="2024-09-04T17:19:57.512481148Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.791335286s"
Sep 4 17:19:57.512536 containerd[1453]: time="2024-09-04T17:19:57.512528246Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\""
Sep 4 17:19:57.514174 containerd[1453]: time="2024-09-04T17:19:57.514127185Z" level=info msg="CreateContainer within sandbox \"d0991bf230a9dd1bccf8d738519d580eb5f55afa52b1bcd9d9f2326694db0db6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 4 17:19:57.529249 containerd[1453]: time="2024-09-04T17:19:57.529209602Z" level=info msg="CreateContainer within sandbox \"d0991bf230a9dd1bccf8d738519d580eb5f55afa52b1bcd9d9f2326694db0db6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b3ec8001b741707355d6814f9e9fba109f619cf4222de61e228376c902b7e3a5\""
Sep 4 17:19:57.529776 containerd[1453]: time="2024-09-04T17:19:57.529743646Z" level=info msg="StartContainer for \"b3ec8001b741707355d6814f9e9fba109f619cf4222de61e228376c902b7e3a5\""
Sep 4 17:19:57.563692 systemd[1]: Started cri-containerd-b3ec8001b741707355d6814f9e9fba109f619cf4222de61e228376c902b7e3a5.scope - libcontainer container b3ec8001b741707355d6814f9e9fba109f619cf4222de61e228376c902b7e3a5.
Sep 4 17:19:57.591856 containerd[1453]: time="2024-09-04T17:19:57.591808339Z" level=info msg="StartContainer for \"b3ec8001b741707355d6814f9e9fba109f619cf4222de61e228376c902b7e3a5\" returns successfully"
Sep 4 17:19:58.082325 kubelet[2519]: I0904 17:19:58.081731 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-f7tbg" podStartSLOduration=1.2891671310000001 podCreationTimestamp="2024-09-04 17:19:55 +0000 UTC" firstStartedPulling="2024-09-04 17:19:55.720341878 +0000 UTC m=+15.827604058" lastFinishedPulling="2024-09-04 17:19:57.512846325 +0000 UTC m=+17.620108515" observedRunningTime="2024-09-04 17:19:58.08006717 +0000 UTC m=+18.187329390" watchObservedRunningTime="2024-09-04 17:19:58.081671588 +0000 UTC m=+18.188933788"
Sep 4 17:20:00.458359 kubelet[2519]: I0904 17:20:00.458316 2519 topology_manager.go:215] "Topology Admit Handler" podUID="81aee03a-dc21-436a-aa6c-c5141275829f" podNamespace="calico-system" podName="calico-typha-59878dd8db-shzdh"
Sep 4 17:20:00.473615 systemd[1]: Created slice kubepods-besteffort-pod81aee03a_dc21_436a_aa6c_c5141275829f.slice - libcontainer container kubepods-besteffort-pod81aee03a_dc21_436a_aa6c_c5141275829f.slice.
Sep 4 17:20:00.509526 kubelet[2519]: I0904 17:20:00.509443 2519 topology_manager.go:215] "Topology Admit Handler" podUID="f6e2c536-43a3-4f9d-93e3-70f5b511580d" podNamespace="calico-system" podName="calico-node-74chb"
Sep 4 17:20:00.520483 systemd[1]: Created slice kubepods-besteffort-podf6e2c536_43a3_4f9d_93e3_70f5b511580d.slice - libcontainer container kubepods-besteffort-podf6e2c536_43a3_4f9d_93e3_70f5b511580d.slice.
Sep 4 17:20:00.616096 kubelet[2519]: I0904 17:20:00.616055 2519 topology_manager.go:215] "Topology Admit Handler" podUID="ec74825e-1f06-4e2a-b769-94c881521a0f" podNamespace="calico-system" podName="csi-node-driver-vmblz"
Sep 4 17:20:00.616413 kubelet[2519]: E0904 17:20:00.616387 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmblz" podUID="ec74825e-1f06-4e2a-b769-94c881521a0f"
Sep 4 17:20:00.642630 kubelet[2519]: I0904 17:20:00.642588 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81aee03a-dc21-436a-aa6c-c5141275829f-tigera-ca-bundle\") pod \"calico-typha-59878dd8db-shzdh\" (UID: \"81aee03a-dc21-436a-aa6c-c5141275829f\") " pod="calico-system/calico-typha-59878dd8db-shzdh"
Sep 4 17:20:00.642630 kubelet[2519]: I0904 17:20:00.642632 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f6e2c536-43a3-4f9d-93e3-70f5b511580d-var-run-calico\") pod \"calico-node-74chb\" (UID: \"f6e2c536-43a3-4f9d-93e3-70f5b511580d\") " pod="calico-system/calico-node-74chb"
Sep 4 17:20:00.642630 kubelet[2519]: I0904 17:20:00.642664 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f6e2c536-43a3-4f9d-93e3-70f5b511580d-node-certs\") pod \"calico-node-74chb\" (UID: \"f6e2c536-43a3-4f9d-93e3-70f5b511580d\") " pod="calico-system/calico-node-74chb"
Sep 4 17:20:00.642942 kubelet[2519]: I0904 17:20:00.642682 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/81aee03a-dc21-436a-aa6c-c5141275829f-typha-certs\") pod \"calico-typha-59878dd8db-shzdh\" (UID: \"81aee03a-dc21-436a-aa6c-c5141275829f\") " pod="calico-system/calico-typha-59878dd8db-shzdh"
Sep 4 17:20:00.642942 kubelet[2519]: I0904 17:20:00.642702 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6n2p\" (UniqueName: \"kubernetes.io/projected/81aee03a-dc21-436a-aa6c-c5141275829f-kube-api-access-m6n2p\") pod \"calico-typha-59878dd8db-shzdh\" (UID: \"81aee03a-dc21-436a-aa6c-c5141275829f\") " pod="calico-system/calico-typha-59878dd8db-shzdh"
Sep 4 17:20:00.642942 kubelet[2519]: I0904 17:20:00.642719 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6e2c536-43a3-4f9d-93e3-70f5b511580d-lib-modules\") pod \"calico-node-74chb\" (UID: \"f6e2c536-43a3-4f9d-93e3-70f5b511580d\") " pod="calico-system/calico-node-74chb"
Sep 4 17:20:00.642942 kubelet[2519]: I0904 17:20:00.642743 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6e2c536-43a3-4f9d-93e3-70f5b511580d-tigera-ca-bundle\") pod \"calico-node-74chb\" (UID: \"f6e2c536-43a3-4f9d-93e3-70f5b511580d\") " pod="calico-system/calico-node-74chb"
Sep 4 17:20:00.642942 kubelet[2519]: I0904 17:20:00.642762 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f6e2c536-43a3-4f9d-93e3-70f5b511580d-flexvol-driver-host\") pod \"calico-node-74chb\" (UID: \"f6e2c536-43a3-4f9d-93e3-70f5b511580d\") " pod="calico-system/calico-node-74chb"
Sep 4 17:20:00.643079 kubelet[2519]: I0904 17:20:00.642780 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f6e2c536-43a3-4f9d-93e3-70f5b511580d-policysync\") pod \"calico-node-74chb\" (UID: \"f6e2c536-43a3-4f9d-93e3-70f5b511580d\") " pod="calico-system/calico-node-74chb"
Sep 4 17:20:00.643079 kubelet[2519]: I0904 17:20:00.642800 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f6e2c536-43a3-4f9d-93e3-70f5b511580d-var-lib-calico\") pod \"calico-node-74chb\" (UID: \"f6e2c536-43a3-4f9d-93e3-70f5b511580d\") " pod="calico-system/calico-node-74chb"
Sep 4 17:20:00.643079 kubelet[2519]: I0904 17:20:00.642817 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6e2c536-43a3-4f9d-93e3-70f5b511580d-xtables-lock\") pod \"calico-node-74chb\" (UID: \"f6e2c536-43a3-4f9d-93e3-70f5b511580d\") " pod="calico-system/calico-node-74chb"
Sep 4 17:20:00.643079 kubelet[2519]: I0904 17:20:00.642835 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f6e2c536-43a3-4f9d-93e3-70f5b511580d-cni-bin-dir\") pod \"calico-node-74chb\" (UID: \"f6e2c536-43a3-4f9d-93e3-70f5b511580d\") " pod="calico-system/calico-node-74chb"
Sep 4 17:20:00.643079 kubelet[2519]: I0904 17:20:00.642855 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f6e2c536-43a3-4f9d-93e3-70f5b511580d-cni-net-dir\") pod \"calico-node-74chb\" (UID: \"f6e2c536-43a3-4f9d-93e3-70f5b511580d\") " pod="calico-system/calico-node-74chb"
Sep 4 17:20:00.643215 kubelet[2519]: I0904 17:20:00.642870 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f6e2c536-43a3-4f9d-93e3-70f5b511580d-cni-log-dir\") pod \"calico-node-74chb\" (UID: \"f6e2c536-43a3-4f9d-93e3-70f5b511580d\") " pod="calico-system/calico-node-74chb"
Sep 4 17:20:00.643215 kubelet[2519]: I0904 17:20:00.642887 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9fdd\" (UniqueName: \"kubernetes.io/projected/f6e2c536-43a3-4f9d-93e3-70f5b511580d-kube-api-access-q9fdd\") pod \"calico-node-74chb\" (UID: \"f6e2c536-43a3-4f9d-93e3-70f5b511580d\") " pod="calico-system/calico-node-74chb"
Sep 4 17:20:00.745930 kubelet[2519]: I0904 17:20:00.743423 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ec74825e-1f06-4e2a-b769-94c881521a0f-socket-dir\") pod \"csi-node-driver-vmblz\" (UID: \"ec74825e-1f06-4e2a-b769-94c881521a0f\") " pod="calico-system/csi-node-driver-vmblz"
Sep 4 17:20:00.745930 kubelet[2519]: I0904 17:20:00.743465 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ec74825e-1f06-4e2a-b769-94c881521a0f-registration-dir\") pod \"csi-node-driver-vmblz\" (UID: \"ec74825e-1f06-4e2a-b769-94c881521a0f\") " pod="calico-system/csi-node-driver-vmblz"
Sep 4 17:20:00.745930 kubelet[2519]: I0904 17:20:00.743485 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ec74825e-1f06-4e2a-b769-94c881521a0f-varrun\") pod \"csi-node-driver-vmblz\" (UID: \"ec74825e-1f06-4e2a-b769-94c881521a0f\") " pod="calico-system/csi-node-driver-vmblz"
Sep 4 17:20:00.745930 kubelet[2519]: I0904 17:20:00.743557 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec74825e-1f06-4e2a-b769-94c881521a0f-kubelet-dir\") pod \"csi-node-driver-vmblz\" (UID: \"ec74825e-1f06-4e2a-b769-94c881521a0f\") " pod="calico-system/csi-node-driver-vmblz"
Sep 4 17:20:00.745930 kubelet[2519]: I0904 17:20:00.743577 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dhj\" (UniqueName: \"kubernetes.io/projected/ec74825e-1f06-4e2a-b769-94c881521a0f-kube-api-access-z4dhj\") pod \"csi-node-driver-vmblz\" (UID: \"ec74825e-1f06-4e2a-b769-94c881521a0f\") " pod="calico-system/csi-node-driver-vmblz"
Sep 4 17:20:00.748751 kubelet[2519]: E0904 17:20:00.748719 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.748751 kubelet[2519]: W0904 17:20:00.748745 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.751555 kubelet[2519]: E0904 17:20:00.749359 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.751555 kubelet[2519]: W0904 17:20:00.749373 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.751555 kubelet[2519]: E0904 17:20:00.749395 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.751555 kubelet[2519]: E0904 17:20:00.749473 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.751555 kubelet[2519]: E0904 17:20:00.749742 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.751555 kubelet[2519]: W0904 17:20:00.749779 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.751555 kubelet[2519]: E0904 17:20:00.749807 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.751555 kubelet[2519]: E0904 17:20:00.750150 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.751555 kubelet[2519]: W0904 17:20:00.750158 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.751555 kubelet[2519]: E0904 17:20:00.750169 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.751981 kubelet[2519]: E0904 17:20:00.750410 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.751981 kubelet[2519]: W0904 17:20:00.750418 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.751981 kubelet[2519]: E0904 17:20:00.750428 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.751981 kubelet[2519]: E0904 17:20:00.751397 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.751981 kubelet[2519]: W0904 17:20:00.751406 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.751981 kubelet[2519]: E0904 17:20:00.751418 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.758711 kubelet[2519]: E0904 17:20:00.757560 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.758711 kubelet[2519]: W0904 17:20:00.757580 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.758711 kubelet[2519]: E0904 17:20:00.757603 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.764307 kubelet[2519]: E0904 17:20:00.764224 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.764307 kubelet[2519]: W0904 17:20:00.764244 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.764307 kubelet[2519]: E0904 17:20:00.764267 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.780376 kubelet[2519]: E0904 17:20:00.780338 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:00.781120 containerd[1453]: time="2024-09-04T17:20:00.781067159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59878dd8db-shzdh,Uid:81aee03a-dc21-436a-aa6c-c5141275829f,Namespace:calico-system,Attempt:0,}"
Sep 4 17:20:00.818002 containerd[1453]: time="2024-09-04T17:20:00.816704150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:20:00.818002 containerd[1453]: time="2024-09-04T17:20:00.816754134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:20:00.818002 containerd[1453]: time="2024-09-04T17:20:00.816770374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:20:00.818002 containerd[1453]: time="2024-09-04T17:20:00.816780403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:20:00.823896 kubelet[2519]: E0904 17:20:00.823870 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:00.825447 containerd[1453]: time="2024-09-04T17:20:00.825396348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-74chb,Uid:f6e2c536-43a3-4f9d-93e3-70f5b511580d,Namespace:calico-system,Attempt:0,}"
Sep 4 17:20:00.839219 systemd[1]: Started cri-containerd-bc40b9de8b053db9687ae1bda17cb1ebf660133f62ad4897d609f81d164626ed.scope - libcontainer container bc40b9de8b053db9687ae1bda17cb1ebf660133f62ad4897d609f81d164626ed.
Sep 4 17:20:00.844690 kubelet[2519]: E0904 17:20:00.844634 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.844690 kubelet[2519]: W0904 17:20:00.844671 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.844804 kubelet[2519]: E0904 17:20:00.844701 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.845046 kubelet[2519]: E0904 17:20:00.845026 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.845046 kubelet[2519]: W0904 17:20:00.845039 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.845746 kubelet[2519]: E0904 17:20:00.845063 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.845746 kubelet[2519]: E0904 17:20:00.845277 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.845746 kubelet[2519]: W0904 17:20:00.845284 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.845746 kubelet[2519]: E0904 17:20:00.845313 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.845746 kubelet[2519]: E0904 17:20:00.845576 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.845746 kubelet[2519]: W0904 17:20:00.845584 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.845746 kubelet[2519]: E0904 17:20:00.845600 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.845986 kubelet[2519]: E0904 17:20:00.845862 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.845986 kubelet[2519]: W0904 17:20:00.845878 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.845986 kubelet[2519]: E0904 17:20:00.845906 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.846887 kubelet[2519]: E0904 17:20:00.846813 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.847396 kubelet[2519]: W0904 17:20:00.847080 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.847396 kubelet[2519]: E0904 17:20:00.847105 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.847396 kubelet[2519]: E0904 17:20:00.847395 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.847779 kubelet[2519]: W0904 17:20:00.847406 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.847779 kubelet[2519]: E0904 17:20:00.847421 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.848085 kubelet[2519]: E0904 17:20:00.847947 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.848699 kubelet[2519]: W0904 17:20:00.848585 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.848699 kubelet[2519]: E0904 17:20:00.848613 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.848980 kubelet[2519]: E0904 17:20:00.848947 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.848980 kubelet[2519]: W0904 17:20:00.848967 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.849117 kubelet[2519]: E0904 17:20:00.849094 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.849213 kubelet[2519]: E0904 17:20:00.849198 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.849213 kubelet[2519]: W0904 17:20:00.849210 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.849385 kubelet[2519]: E0904 17:20:00.849294 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.849439 kubelet[2519]: E0904 17:20:00.849411 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.849439 kubelet[2519]: W0904 17:20:00.849419 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.850558 kubelet[2519]: E0904 17:20:00.850526 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.850732 kubelet[2519]: E0904 17:20:00.850715 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.850732 kubelet[2519]: W0904 17:20:00.850730 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.850886 kubelet[2519]: E0904 17:20:00.850851 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:00.851079 kubelet[2519]: E0904 17:20:00.850976 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:00.851079 kubelet[2519]: W0904 17:20:00.850984 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:00.851079 kubelet[2519]: E0904 17:20:00.851042 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 4 17:20:00.851390 kubelet[2519]: E0904 17:20:00.851370 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:00.851390 kubelet[2519]: W0904 17:20:00.851382 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:00.851532 kubelet[2519]: E0904 17:20:00.851476 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:00.851616 kubelet[2519]: E0904 17:20:00.851601 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:00.851616 kubelet[2519]: W0904 17:20:00.851612 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:00.851786 kubelet[2519]: E0904 17:20:00.851760 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:00.852085 kubelet[2519]: E0904 17:20:00.852067 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:00.852085 kubelet[2519]: W0904 17:20:00.852082 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:00.852202 kubelet[2519]: E0904 17:20:00.852185 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:00.852450 kubelet[2519]: E0904 17:20:00.852426 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:00.852450 kubelet[2519]: W0904 17:20:00.852443 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:00.853263 kubelet[2519]: E0904 17:20:00.852576 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:00.853263 kubelet[2519]: E0904 17:20:00.852835 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:00.853263 kubelet[2519]: W0904 17:20:00.852844 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:00.853263 kubelet[2519]: E0904 17:20:00.852859 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:00.853263 kubelet[2519]: E0904 17:20:00.853122 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:00.853263 kubelet[2519]: W0904 17:20:00.853130 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:00.853263 kubelet[2519]: E0904 17:20:00.853153 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:00.853434 kubelet[2519]: E0904 17:20:00.853397 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:00.853434 kubelet[2519]: W0904 17:20:00.853406 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:00.853482 kubelet[2519]: E0904 17:20:00.853461 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:00.854187 kubelet[2519]: E0904 17:20:00.853657 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:00.854187 kubelet[2519]: W0904 17:20:00.853669 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:00.854187 kubelet[2519]: E0904 17:20:00.853771 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:00.854187 kubelet[2519]: E0904 17:20:00.853923 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:00.854187 kubelet[2519]: W0904 17:20:00.853931 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:00.854187 kubelet[2519]: E0904 17:20:00.853945 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:00.854360 kubelet[2519]: E0904 17:20:00.854201 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:00.854360 kubelet[2519]: W0904 17:20:00.854210 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:00.854360 kubelet[2519]: E0904 17:20:00.854232 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:00.854445 kubelet[2519]: E0904 17:20:00.854428 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:00.854445 kubelet[2519]: W0904 17:20:00.854440 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:00.854646 kubelet[2519]: E0904 17:20:00.854452 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:00.854777 kubelet[2519]: E0904 17:20:00.854750 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:00.854777 kubelet[2519]: W0904 17:20:00.854768 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:00.854848 kubelet[2519]: E0904 17:20:00.854780 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:00.866380 containerd[1453]: time="2024-09-04T17:20:00.866265217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:20:00.866380 containerd[1453]: time="2024-09-04T17:20:00.866324278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:00.866380 containerd[1453]: time="2024-09-04T17:20:00.866355567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:20:00.866700 containerd[1453]: time="2024-09-04T17:20:00.866369854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:00.867463 kubelet[2519]: E0904 17:20:00.867340 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:00.867463 kubelet[2519]: W0904 17:20:00.867360 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:00.867463 kubelet[2519]: E0904 17:20:00.867379 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:00.892908 systemd[1]: Started cri-containerd-1c07b32a2d2069f286a2038fcc1c326a7d4fe785c086dca48e03ba3a5183f0c4.scope - libcontainer container 1c07b32a2d2069f286a2038fcc1c326a7d4fe785c086dca48e03ba3a5183f0c4. 
Sep 4 17:20:00.896413 containerd[1453]: time="2024-09-04T17:20:00.896331611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59878dd8db-shzdh,Uid:81aee03a-dc21-436a-aa6c-c5141275829f,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc40b9de8b053db9687ae1bda17cb1ebf660133f62ad4897d609f81d164626ed\"" Sep 4 17:20:00.897669 kubelet[2519]: E0904 17:20:00.897631 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:00.900049 containerd[1453]: time="2024-09-04T17:20:00.900005961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:20:00.924791 containerd[1453]: time="2024-09-04T17:20:00.924734657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-74chb,Uid:f6e2c536-43a3-4f9d-93e3-70f5b511580d,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c07b32a2d2069f286a2038fcc1c326a7d4fe785c086dca48e03ba3a5183f0c4\"" Sep 4 17:20:00.925988 kubelet[2519]: E0904 17:20:00.925966 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:02.017001 kubelet[2519]: E0904 17:20:02.016956 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmblz" podUID="ec74825e-1f06-4e2a-b769-94c881521a0f" Sep 4 17:20:04.017144 kubelet[2519]: E0904 17:20:04.017085 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmblz" 
podUID="ec74825e-1f06-4e2a-b769-94c881521a0f" Sep 4 17:20:05.210213 containerd[1453]: time="2024-09-04T17:20:05.210146067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:05.210922 containerd[1453]: time="2024-09-04T17:20:05.210838939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 17:20:05.212167 containerd[1453]: time="2024-09-04T17:20:05.212086694Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:05.214416 containerd[1453]: time="2024-09-04T17:20:05.214380055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:05.215147 containerd[1453]: time="2024-09-04T17:20:05.215091222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 4.315035888s" Sep 4 17:20:05.215147 containerd[1453]: time="2024-09-04T17:20:05.215142168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 17:20:05.215939 containerd[1453]: time="2024-09-04T17:20:05.215887508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:20:05.229151 containerd[1453]: time="2024-09-04T17:20:05.228892763Z" level=info msg="CreateContainer within sandbox 
\"bc40b9de8b053db9687ae1bda17cb1ebf660133f62ad4897d609f81d164626ed\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:20:05.499092 containerd[1453]: time="2024-09-04T17:20:05.498870338Z" level=info msg="CreateContainer within sandbox \"bc40b9de8b053db9687ae1bda17cb1ebf660133f62ad4897d609f81d164626ed\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cdf985f9edc7f27bfb6dcafc91126eb3a696bad16fa4f4bb20e4d2e9c9931216\"" Sep 4 17:20:05.499947 containerd[1453]: time="2024-09-04T17:20:05.499757174Z" level=info msg="StartContainer for \"cdf985f9edc7f27bfb6dcafc91126eb3a696bad16fa4f4bb20e4d2e9c9931216\"" Sep 4 17:20:05.542679 systemd[1]: Started cri-containerd-cdf985f9edc7f27bfb6dcafc91126eb3a696bad16fa4f4bb20e4d2e9c9931216.scope - libcontainer container cdf985f9edc7f27bfb6dcafc91126eb3a696bad16fa4f4bb20e4d2e9c9931216. Sep 4 17:20:05.588369 containerd[1453]: time="2024-09-04T17:20:05.587691624Z" level=info msg="StartContainer for \"cdf985f9edc7f27bfb6dcafc91126eb3a696bad16fa4f4bb20e4d2e9c9931216\" returns successfully" Sep 4 17:20:06.017778 kubelet[2519]: E0904 17:20:06.017696 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmblz" podUID="ec74825e-1f06-4e2a-b769-94c881521a0f" Sep 4 17:20:06.098247 kubelet[2519]: E0904 17:20:06.098202 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:06.113103 kubelet[2519]: I0904 17:20:06.113059 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-59878dd8db-shzdh" podStartSLOduration=1.796071569 podCreationTimestamp="2024-09-04 17:20:00 +0000 UTC" firstStartedPulling="2024-09-04 
17:20:00.899385855 +0000 UTC m=+21.006648045" lastFinishedPulling="2024-09-04 17:20:05.215535567 +0000 UTC m=+25.322797757" observedRunningTime="2024-09-04 17:20:06.107351889 +0000 UTC m=+26.214614089" watchObservedRunningTime="2024-09-04 17:20:06.112221281 +0000 UTC m=+26.219483491" Sep 4 17:20:06.196629 kubelet[2519]: E0904 17:20:06.196577 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.196629 kubelet[2519]: W0904 17:20:06.196605 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.196629 kubelet[2519]: E0904 17:20:06.196635 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.196984 kubelet[2519]: E0904 17:20:06.196944 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.196984 kubelet[2519]: W0904 17:20:06.196970 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.197074 kubelet[2519]: E0904 17:20:06.197008 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.197348 kubelet[2519]: E0904 17:20:06.197312 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.197348 kubelet[2519]: W0904 17:20:06.197325 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.197348 kubelet[2519]: E0904 17:20:06.197340 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.197685 kubelet[2519]: E0904 17:20:06.197654 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.197685 kubelet[2519]: W0904 17:20:06.197671 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.197685 kubelet[2519]: E0904 17:20:06.197686 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.197956 kubelet[2519]: E0904 17:20:06.197934 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.197956 kubelet[2519]: W0904 17:20:06.197946 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.198048 kubelet[2519]: E0904 17:20:06.197962 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.198193 kubelet[2519]: E0904 17:20:06.198164 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.198193 kubelet[2519]: W0904 17:20:06.198176 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.198193 kubelet[2519]: E0904 17:20:06.198189 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.198452 kubelet[2519]: E0904 17:20:06.198419 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.198452 kubelet[2519]: W0904 17:20:06.198436 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.198452 kubelet[2519]: E0904 17:20:06.198453 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.199252 kubelet[2519]: E0904 17:20:06.199219 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.199252 kubelet[2519]: W0904 17:20:06.199233 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.199252 kubelet[2519]: E0904 17:20:06.199247 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.199567 kubelet[2519]: E0904 17:20:06.199530 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.199567 kubelet[2519]: W0904 17:20:06.199553 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.199567 kubelet[2519]: E0904 17:20:06.199569 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.199853 kubelet[2519]: E0904 17:20:06.199819 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.199853 kubelet[2519]: W0904 17:20:06.199840 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.199956 kubelet[2519]: E0904 17:20:06.199858 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.200132 kubelet[2519]: E0904 17:20:06.200101 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.200132 kubelet[2519]: W0904 17:20:06.200114 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.200132 kubelet[2519]: E0904 17:20:06.200128 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.200372 kubelet[2519]: E0904 17:20:06.200341 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.200372 kubelet[2519]: W0904 17:20:06.200355 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.200372 kubelet[2519]: E0904 17:20:06.200369 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.200630 kubelet[2519]: E0904 17:20:06.200608 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.200630 kubelet[2519]: W0904 17:20:06.200620 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.200738 kubelet[2519]: E0904 17:20:06.200637 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.200919 kubelet[2519]: E0904 17:20:06.200889 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.200919 kubelet[2519]: W0904 17:20:06.200905 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.201056 kubelet[2519]: E0904 17:20:06.200930 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.201706 kubelet[2519]: E0904 17:20:06.201671 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.201706 kubelet[2519]: W0904 17:20:06.201684 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.201706 kubelet[2519]: E0904 17:20:06.201698 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.286833 kubelet[2519]: E0904 17:20:06.286702 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.286833 kubelet[2519]: W0904 17:20:06.286722 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.286833 kubelet[2519]: E0904 17:20:06.286743 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.287020 kubelet[2519]: E0904 17:20:06.286975 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.287020 kubelet[2519]: W0904 17:20:06.286983 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.287020 kubelet[2519]: E0904 17:20:06.287001 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.287208 kubelet[2519]: E0904 17:20:06.287193 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.287208 kubelet[2519]: W0904 17:20:06.287203 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.287278 kubelet[2519]: E0904 17:20:06.287216 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.287646 kubelet[2519]: E0904 17:20:06.287614 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.287718 kubelet[2519]: W0904 17:20:06.287664 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.287868 kubelet[2519]: E0904 17:20:06.287716 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.288012 kubelet[2519]: E0904 17:20:06.287988 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.288012 kubelet[2519]: W0904 17:20:06.288000 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.288012 kubelet[2519]: E0904 17:20:06.288016 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.288222 kubelet[2519]: E0904 17:20:06.288207 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.288222 kubelet[2519]: W0904 17:20:06.288221 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.288290 kubelet[2519]: E0904 17:20:06.288241 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.289939 kubelet[2519]: E0904 17:20:06.289922 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.289939 kubelet[2519]: W0904 17:20:06.289938 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.290031 kubelet[2519]: E0904 17:20:06.289988 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.290194 kubelet[2519]: E0904 17:20:06.290173 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.290194 kubelet[2519]: W0904 17:20:06.290192 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.290301 kubelet[2519]: E0904 17:20:06.290266 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.290422 kubelet[2519]: E0904 17:20:06.290408 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.290422 kubelet[2519]: W0904 17:20:06.290419 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.290476 kubelet[2519]: E0904 17:20:06.290439 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.290673 kubelet[2519]: E0904 17:20:06.290658 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.290673 kubelet[2519]: W0904 17:20:06.290671 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.290733 kubelet[2519]: E0904 17:20:06.290687 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.290914 kubelet[2519]: E0904 17:20:06.290900 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.290914 kubelet[2519]: W0904 17:20:06.290911 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.290972 kubelet[2519]: E0904 17:20:06.290927 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.291301 kubelet[2519]: E0904 17:20:06.291262 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.291301 kubelet[2519]: W0904 17:20:06.291291 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.291386 kubelet[2519]: E0904 17:20:06.291319 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.291612 kubelet[2519]: E0904 17:20:06.291588 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.291612 kubelet[2519]: W0904 17:20:06.291601 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.291612 kubelet[2519]: E0904 17:20:06.291617 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.291853 kubelet[2519]: E0904 17:20:06.291831 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.291853 kubelet[2519]: W0904 17:20:06.291843 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.291937 kubelet[2519]: E0904 17:20:06.291859 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.292158 kubelet[2519]: E0904 17:20:06.292128 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.292158 kubelet[2519]: W0904 17:20:06.292140 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.292251 kubelet[2519]: E0904 17:20:06.292180 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.292443 kubelet[2519]: E0904 17:20:06.292409 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.292491 kubelet[2519]: W0904 17:20:06.292451 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.292491 kubelet[2519]: E0904 17:20:06.292477 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.292743 kubelet[2519]: E0904 17:20:06.292726 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.292743 kubelet[2519]: W0904 17:20:06.292738 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.292823 kubelet[2519]: E0904 17:20:06.292750 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:06.293158 kubelet[2519]: E0904 17:20:06.293125 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:06.293158 kubelet[2519]: W0904 17:20:06.293143 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:06.293158 kubelet[2519]: E0904 17:20:06.293155 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:06.774311 containerd[1453]: time="2024-09-04T17:20:06.774245433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:06.775028 containerd[1453]: time="2024-09-04T17:20:06.774976828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 17:20:06.776066 containerd[1453]: time="2024-09-04T17:20:06.776030517Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:06.778334 containerd[1453]: time="2024-09-04T17:20:06.778295995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:06.779096 containerd[1453]: time="2024-09-04T17:20:06.779055192Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.563115947s" Sep 4 17:20:06.779122 containerd[1453]: time="2024-09-04T17:20:06.779098514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 17:20:06.780744 containerd[1453]: time="2024-09-04T17:20:06.780714902Z" level=info msg="CreateContainer within sandbox \"1c07b32a2d2069f286a2038fcc1c326a7d4fe785c086dca48e03ba3a5183f0c4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:20:06.798281 containerd[1453]: time="2024-09-04T17:20:06.798228968Z" level=info msg="CreateContainer within sandbox \"1c07b32a2d2069f286a2038fcc1c326a7d4fe785c086dca48e03ba3a5183f0c4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bd4412b419f4c3e503e4c29ac18c33760c22eacbab035cfc57c35cf1c82dc546\"" Sep 4 17:20:06.798843 containerd[1453]: time="2024-09-04T17:20:06.798773862Z" level=info msg="StartContainer for \"bd4412b419f4c3e503e4c29ac18c33760c22eacbab035cfc57c35cf1c82dc546\"" Sep 4 17:20:06.827705 systemd[1]: Started cri-containerd-bd4412b419f4c3e503e4c29ac18c33760c22eacbab035cfc57c35cf1c82dc546.scope - libcontainer container bd4412b419f4c3e503e4c29ac18c33760c22eacbab035cfc57c35cf1c82dc546. Sep 4 17:20:06.860846 containerd[1453]: time="2024-09-04T17:20:06.860794183Z" level=info msg="StartContainer for \"bd4412b419f4c3e503e4c29ac18c33760c22eacbab035cfc57c35cf1c82dc546\" returns successfully" Sep 4 17:20:06.870467 systemd[1]: cri-containerd-bd4412b419f4c3e503e4c29ac18c33760c22eacbab035cfc57c35cf1c82dc546.scope: Deactivated successfully. 
Sep 4 17:20:06.938994 containerd[1453]: time="2024-09-04T17:20:06.938921366Z" level=info msg="shim disconnected" id=bd4412b419f4c3e503e4c29ac18c33760c22eacbab035cfc57c35cf1c82dc546 namespace=k8s.io Sep 4 17:20:06.938994 containerd[1453]: time="2024-09-04T17:20:06.938986187Z" level=warning msg="cleaning up after shim disconnected" id=bd4412b419f4c3e503e4c29ac18c33760c22eacbab035cfc57c35cf1c82dc546 namespace=k8s.io Sep 4 17:20:06.938994 containerd[1453]: time="2024-09-04T17:20:06.938994653Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:20:07.100362 kubelet[2519]: E0904 17:20:07.100333 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:07.101195 containerd[1453]: time="2024-09-04T17:20:07.101158519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:20:07.102709 kubelet[2519]: I0904 17:20:07.102461 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:20:07.103951 kubelet[2519]: E0904 17:20:07.103170 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:07.224862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd4412b419f4c3e503e4c29ac18c33760c22eacbab035cfc57c35cf1c82dc546-rootfs.mount: Deactivated successfully. 
Sep 4 17:20:08.017707 kubelet[2519]: E0904 17:20:08.017669 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmblz" podUID="ec74825e-1f06-4e2a-b769-94c881521a0f" Sep 4 17:20:10.017570 kubelet[2519]: E0904 17:20:10.017530 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmblz" podUID="ec74825e-1f06-4e2a-b769-94c881521a0f" Sep 4 17:20:12.017027 kubelet[2519]: E0904 17:20:12.016942 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmblz" podUID="ec74825e-1f06-4e2a-b769-94c881521a0f" Sep 4 17:20:12.792471 containerd[1453]: time="2024-09-04T17:20:12.792415250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:12.793112 containerd[1453]: time="2024-09-04T17:20:12.793076191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 17:20:12.794160 containerd[1453]: time="2024-09-04T17:20:12.794121575Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:12.796266 containerd[1453]: time="2024-09-04T17:20:12.796230217Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:12.796884 containerd[1453]: time="2024-09-04T17:20:12.796858817Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 5.695656546s" Sep 4 17:20:12.796921 containerd[1453]: time="2024-09-04T17:20:12.796884866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 17:20:12.798412 containerd[1453]: time="2024-09-04T17:20:12.798389973Z" level=info msg="CreateContainer within sandbox \"1c07b32a2d2069f286a2038fcc1c326a7d4fe785c086dca48e03ba3a5183f0c4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:20:12.814736 containerd[1453]: time="2024-09-04T17:20:12.814698640Z" level=info msg="CreateContainer within sandbox \"1c07b32a2d2069f286a2038fcc1c326a7d4fe785c086dca48e03ba3a5183f0c4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"68c8b50abd89c10ed7541db30172e97fffe9c35715a850c946e06816914cfb5e\"" Sep 4 17:20:12.815258 containerd[1453]: time="2024-09-04T17:20:12.815110133Z" level=info msg="StartContainer for \"68c8b50abd89c10ed7541db30172e97fffe9c35715a850c946e06816914cfb5e\"" Sep 4 17:20:12.851636 systemd[1]: Started cri-containerd-68c8b50abd89c10ed7541db30172e97fffe9c35715a850c946e06816914cfb5e.scope - libcontainer container 68c8b50abd89c10ed7541db30172e97fffe9c35715a850c946e06816914cfb5e. 
Sep 4 17:20:12.880859 containerd[1453]: time="2024-09-04T17:20:12.880806448Z" level=info msg="StartContainer for \"68c8b50abd89c10ed7541db30172e97fffe9c35715a850c946e06816914cfb5e\" returns successfully" Sep 4 17:20:13.112180 kubelet[2519]: E0904 17:20:13.112131 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:13.423586 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:41196.service - OpenSSH per-connection server daemon (10.0.0.1:41196). Sep 4 17:20:13.476531 sshd[3248]: Accepted publickey for core from 10.0.0.1 port 41196 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:13.472475 sshd[3248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:13.482251 systemd-logind[1435]: New session 10 of user core. Sep 4 17:20:13.488669 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:20:14.017660 kubelet[2519]: E0904 17:20:14.017612 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmblz" podUID="ec74825e-1f06-4e2a-b769-94c881521a0f" Sep 4 17:20:14.087245 sshd[3248]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:14.091795 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:41196.service: Deactivated successfully. Sep 4 17:20:14.094333 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:20:14.095102 systemd-logind[1435]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:20:14.096247 systemd-logind[1435]: Removed session 10. 
Sep 4 17:20:14.113945 kubelet[2519]: E0904 17:20:14.113914 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:14.197650 containerd[1453]: time="2024-09-04T17:20:14.197590930Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:20:14.200653 systemd[1]: cri-containerd-68c8b50abd89c10ed7541db30172e97fffe9c35715a850c946e06816914cfb5e.scope: Deactivated successfully. Sep 4 17:20:14.207567 kubelet[2519]: I0904 17:20:14.206912 2519 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Sep 4 17:20:14.227153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68c8b50abd89c10ed7541db30172e97fffe9c35715a850c946e06816914cfb5e-rootfs.mount: Deactivated successfully. 
Sep 4 17:20:14.237716 kubelet[2519]: I0904 17:20:14.237677 2519 topology_manager.go:215] "Topology Admit Handler" podUID="84407aac-cc69-469e-80db-ff90c5b8ed8a" podNamespace="kube-system" podName="coredns-5dd5756b68-zxfg9" Sep 4 17:20:14.238315 kubelet[2519]: I0904 17:20:14.238275 2519 topology_manager.go:215] "Topology Admit Handler" podUID="66be1aa6-468b-46e9-8f41-49d472ad634c" podNamespace="kube-system" podName="coredns-5dd5756b68-4rqwj" Sep 4 17:20:14.238786 kubelet[2519]: I0904 17:20:14.238443 2519 topology_manager.go:215] "Topology Admit Handler" podUID="3134e9e3-9d9c-4e6c-af7f-e379eb17a941" podNamespace="calico-system" podName="calico-kube-controllers-79cc849bb-qnk7z" Sep 4 17:20:14.242054 containerd[1453]: time="2024-09-04T17:20:14.241997390Z" level=info msg="shim disconnected" id=68c8b50abd89c10ed7541db30172e97fffe9c35715a850c946e06816914cfb5e namespace=k8s.io Sep 4 17:20:14.242222 containerd[1453]: time="2024-09-04T17:20:14.242176927Z" level=warning msg="cleaning up after shim disconnected" id=68c8b50abd89c10ed7541db30172e97fffe9c35715a850c946e06816914cfb5e namespace=k8s.io Sep 4 17:20:14.242222 containerd[1453]: time="2024-09-04T17:20:14.242195081Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:20:14.247877 systemd[1]: Created slice kubepods-burstable-pod84407aac_cc69_469e_80db_ff90c5b8ed8a.slice - libcontainer container kubepods-burstable-pod84407aac_cc69_469e_80db_ff90c5b8ed8a.slice. Sep 4 17:20:14.257545 systemd[1]: Created slice kubepods-besteffort-pod3134e9e3_9d9c_4e6c_af7f_e379eb17a941.slice - libcontainer container kubepods-besteffort-pod3134e9e3_9d9c_4e6c_af7f_e379eb17a941.slice. 
Sep 4 17:20:14.263377 containerd[1453]: time="2024-09-04T17:20:14.263295456Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:20:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 17:20:14.266750 systemd[1]: Created slice kubepods-burstable-pod66be1aa6_468b_46e9_8f41_49d472ad634c.slice - libcontainer container kubepods-burstable-pod66be1aa6_468b_46e9_8f41_49d472ad634c.slice. Sep 4 17:20:14.336152 kubelet[2519]: I0904 17:20:14.336109 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66be1aa6-468b-46e9-8f41-49d472ad634c-config-volume\") pod \"coredns-5dd5756b68-4rqwj\" (UID: \"66be1aa6-468b-46e9-8f41-49d472ad634c\") " pod="kube-system/coredns-5dd5756b68-4rqwj" Sep 4 17:20:14.336152 kubelet[2519]: I0904 17:20:14.336161 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc78r\" (UniqueName: \"kubernetes.io/projected/84407aac-cc69-469e-80db-ff90c5b8ed8a-kube-api-access-rc78r\") pod \"coredns-5dd5756b68-zxfg9\" (UID: \"84407aac-cc69-469e-80db-ff90c5b8ed8a\") " pod="kube-system/coredns-5dd5756b68-zxfg9" Sep 4 17:20:14.336357 kubelet[2519]: I0904 17:20:14.336180 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84407aac-cc69-469e-80db-ff90c5b8ed8a-config-volume\") pod \"coredns-5dd5756b68-zxfg9\" (UID: \"84407aac-cc69-469e-80db-ff90c5b8ed8a\") " pod="kube-system/coredns-5dd5756b68-zxfg9" Sep 4 17:20:14.336357 kubelet[2519]: I0904 17:20:14.336199 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj9bd\" (UniqueName: 
\"kubernetes.io/projected/66be1aa6-468b-46e9-8f41-49d472ad634c-kube-api-access-zj9bd\") pod \"coredns-5dd5756b68-4rqwj\" (UID: \"66be1aa6-468b-46e9-8f41-49d472ad634c\") " pod="kube-system/coredns-5dd5756b68-4rqwj" Sep 4 17:20:14.336357 kubelet[2519]: I0904 17:20:14.336222 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3134e9e3-9d9c-4e6c-af7f-e379eb17a941-tigera-ca-bundle\") pod \"calico-kube-controllers-79cc849bb-qnk7z\" (UID: \"3134e9e3-9d9c-4e6c-af7f-e379eb17a941\") " pod="calico-system/calico-kube-controllers-79cc849bb-qnk7z" Sep 4 17:20:14.336437 kubelet[2519]: I0904 17:20:14.336374 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5sxh\" (UniqueName: \"kubernetes.io/projected/3134e9e3-9d9c-4e6c-af7f-e379eb17a941-kube-api-access-h5sxh\") pod \"calico-kube-controllers-79cc849bb-qnk7z\" (UID: \"3134e9e3-9d9c-4e6c-af7f-e379eb17a941\") " pod="calico-system/calico-kube-controllers-79cc849bb-qnk7z" Sep 4 17:20:14.552838 kubelet[2519]: E0904 17:20:14.552786 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:14.553683 containerd[1453]: time="2024-09-04T17:20:14.553576444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zxfg9,Uid:84407aac-cc69-469e-80db-ff90c5b8ed8a,Namespace:kube-system,Attempt:0,}" Sep 4 17:20:14.562297 containerd[1453]: time="2024-09-04T17:20:14.562240579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79cc849bb-qnk7z,Uid:3134e9e3-9d9c-4e6c-af7f-e379eb17a941,Namespace:calico-system,Attempt:0,}" Sep 4 17:20:14.569622 kubelet[2519]: E0904 17:20:14.569588 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:14.570541 containerd[1453]: time="2024-09-04T17:20:14.570095523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4rqwj,Uid:66be1aa6-468b-46e9-8f41-49d472ad634c,Namespace:kube-system,Attempt:0,}" Sep 4 17:20:14.641851 containerd[1453]: time="2024-09-04T17:20:14.640226533Z" level=error msg="Failed to destroy network for sandbox \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:14.641851 containerd[1453]: time="2024-09-04T17:20:14.640668995Z" level=error msg="encountered an error cleaning up failed sandbox \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:14.641851 containerd[1453]: time="2024-09-04T17:20:14.640718528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79cc849bb-qnk7z,Uid:3134e9e3-9d9c-4e6c-af7f-e379eb17a941,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:14.641851 containerd[1453]: time="2024-09-04T17:20:14.640982924Z" level=error msg="Failed to destroy network for sandbox \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:14.642134 kubelet[2519]: E0904 17:20:14.640999 2519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:14.642134 kubelet[2519]: E0904 17:20:14.641076 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79cc849bb-qnk7z" Sep 4 17:20:14.642134 kubelet[2519]: E0904 17:20:14.641102 2519 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79cc849bb-qnk7z" Sep 4 17:20:14.642284 kubelet[2519]: E0904 17:20:14.641154 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79cc849bb-qnk7z_calico-system(3134e9e3-9d9c-4e6c-af7f-e379eb17a941)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79cc849bb-qnk7z_calico-system(3134e9e3-9d9c-4e6c-af7f-e379eb17a941)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79cc849bb-qnk7z" podUID="3134e9e3-9d9c-4e6c-af7f-e379eb17a941" Sep 4 17:20:14.642612 containerd[1453]: time="2024-09-04T17:20:14.642493041Z" level=error msg="encountered an error cleaning up failed sandbox \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:14.642775 containerd[1453]: time="2024-09-04T17:20:14.642658421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zxfg9,Uid:84407aac-cc69-469e-80db-ff90c5b8ed8a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:14.642927 kubelet[2519]: E0904 17:20:14.642875 2519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:14.642927 kubelet[2519]: E0904 17:20:14.642923 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-zxfg9" Sep 4 17:20:14.643111 kubelet[2519]: E0904 17:20:14.642946 2519 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-zxfg9" Sep 4 17:20:14.643111 kubelet[2519]: E0904 17:20:14.643001 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-zxfg9_kube-system(84407aac-cc69-469e-80db-ff90c5b8ed8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-zxfg9_kube-system(84407aac-cc69-469e-80db-ff90c5b8ed8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-zxfg9" podUID="84407aac-cc69-469e-80db-ff90c5b8ed8a" Sep 4 17:20:14.655874 containerd[1453]: time="2024-09-04T17:20:14.655811357Z" level=error msg="Failed to destroy network for sandbox \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 4 17:20:14.656283 containerd[1453]: time="2024-09-04T17:20:14.656240693Z" level=error msg="encountered an error cleaning up failed sandbox \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:14.656353 containerd[1453]: time="2024-09-04T17:20:14.656302499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4rqwj,Uid:66be1aa6-468b-46e9-8f41-49d472ad634c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:14.656601 kubelet[2519]: E0904 17:20:14.656562 2519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:14.656680 kubelet[2519]: E0904 17:20:14.656616 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-4rqwj" Sep 4 17:20:14.656680 kubelet[2519]: E0904 
17:20:14.656638 2519 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-4rqwj" Sep 4 17:20:14.656760 kubelet[2519]: E0904 17:20:14.656698 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-4rqwj_kube-system(66be1aa6-468b-46e9-8f41-49d472ad634c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-4rqwj_kube-system(66be1aa6-468b-46e9-8f41-49d472ad634c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-4rqwj" podUID="66be1aa6-468b-46e9-8f41-49d472ad634c" Sep 4 17:20:15.116005 kubelet[2519]: I0904 17:20:15.115956 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Sep 4 17:20:15.116544 containerd[1453]: time="2024-09-04T17:20:15.116392699Z" level=info msg="StopPodSandbox for \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\"" Sep 4 17:20:15.116847 containerd[1453]: time="2024-09-04T17:20:15.116810312Z" level=info msg="Ensure that sandbox 8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a in task-service has been cleanup successfully" Sep 4 17:20:15.118689 kubelet[2519]: E0904 17:20:15.117951 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:15.118689 kubelet[2519]: I0904 17:20:15.118465 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Sep 4 17:20:15.118842 containerd[1453]: time="2024-09-04T17:20:15.118574687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:20:15.119228 containerd[1453]: time="2024-09-04T17:20:15.119185865Z" level=info msg="StopPodSandbox for \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\"" Sep 4 17:20:15.119520 containerd[1453]: time="2024-09-04T17:20:15.119430023Z" level=info msg="Ensure that sandbox b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1 in task-service has been cleanup successfully" Sep 4 17:20:15.121394 kubelet[2519]: I0904 17:20:15.120967 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Sep 4 17:20:15.121563 containerd[1453]: time="2024-09-04T17:20:15.121532392Z" level=info msg="StopPodSandbox for \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\"" Sep 4 17:20:15.121742 containerd[1453]: time="2024-09-04T17:20:15.121712691Z" level=info msg="Ensure that sandbox 4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b in task-service has been cleanup successfully" Sep 4 17:20:15.148574 containerd[1453]: time="2024-09-04T17:20:15.148512708Z" level=error msg="StopPodSandbox for \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\" failed" error="failed to destroy network for sandbox \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 
17:20:15.148906 kubelet[2519]: E0904 17:20:15.148861 2519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Sep 4 17:20:15.148960 kubelet[2519]: E0904 17:20:15.148936 2519 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a"} Sep 4 17:20:15.149012 kubelet[2519]: E0904 17:20:15.148972 2519 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"84407aac-cc69-469e-80db-ff90c5b8ed8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:20:15.149012 kubelet[2519]: E0904 17:20:15.149003 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"84407aac-cc69-469e-80db-ff90c5b8ed8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-zxfg9" podUID="84407aac-cc69-469e-80db-ff90c5b8ed8a" Sep 4 17:20:15.153774 
containerd[1453]: time="2024-09-04T17:20:15.153721221Z" level=error msg="StopPodSandbox for \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\" failed" error="failed to destroy network for sandbox \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:15.153961 kubelet[2519]: E0904 17:20:15.153934 2519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Sep 4 17:20:15.153961 kubelet[2519]: E0904 17:20:15.153964 2519 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b"} Sep 4 17:20:15.154145 kubelet[2519]: E0904 17:20:15.153990 2519 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3134e9e3-9d9c-4e6c-af7f-e379eb17a941\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:20:15.154145 kubelet[2519]: E0904 17:20:15.154018 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3134e9e3-9d9c-4e6c-af7f-e379eb17a941\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79cc849bb-qnk7z" podUID="3134e9e3-9d9c-4e6c-af7f-e379eb17a941" Sep 4 17:20:15.158392 containerd[1453]: time="2024-09-04T17:20:15.158298688Z" level=error msg="StopPodSandbox for \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\" failed" error="failed to destroy network for sandbox \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:15.158443 kubelet[2519]: E0904 17:20:15.158422 2519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Sep 4 17:20:15.158480 kubelet[2519]: E0904 17:20:15.158445 2519 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1"} Sep 4 17:20:15.158480 kubelet[2519]: E0904 17:20:15.158472 2519 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66be1aa6-468b-46e9-8f41-49d472ad634c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network 
for sandbox \\\"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:20:15.158609 kubelet[2519]: E0904 17:20:15.158493 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66be1aa6-468b-46e9-8f41-49d472ad634c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-4rqwj" podUID="66be1aa6-468b-46e9-8f41-49d472ad634c" Sep 4 17:20:15.227251 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a-shm.mount: Deactivated successfully. Sep 4 17:20:16.024306 systemd[1]: Created slice kubepods-besteffort-podec74825e_1f06_4e2a_b769_94c881521a0f.slice - libcontainer container kubepods-besteffort-podec74825e_1f06_4e2a_b769_94c881521a0f.slice. 
Sep 4 17:20:16.026563 containerd[1453]: time="2024-09-04T17:20:16.026489377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmblz,Uid:ec74825e-1f06-4e2a-b769-94c881521a0f,Namespace:calico-system,Attempt:0,}" Sep 4 17:20:16.098873 containerd[1453]: time="2024-09-04T17:20:16.098792595Z" level=error msg="Failed to destroy network for sandbox \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:16.099808 containerd[1453]: time="2024-09-04T17:20:16.099440812Z" level=error msg="encountered an error cleaning up failed sandbox \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:16.099808 containerd[1453]: time="2024-09-04T17:20:16.099539027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmblz,Uid:ec74825e-1f06-4e2a-b769-94c881521a0f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:16.101682 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b-shm.mount: Deactivated successfully. 
Sep 4 17:20:16.101823 kubelet[2519]: E0904 17:20:16.101716 2519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:16.101823 kubelet[2519]: E0904 17:20:16.101782 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vmblz" Sep 4 17:20:16.101823 kubelet[2519]: E0904 17:20:16.101805 2519 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vmblz" Sep 4 17:20:16.101968 kubelet[2519]: E0904 17:20:16.101868 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vmblz_calico-system(ec74825e-1f06-4e2a-b769-94c881521a0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vmblz_calico-system(ec74825e-1f06-4e2a-b769-94c881521a0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vmblz" podUID="ec74825e-1f06-4e2a-b769-94c881521a0f" Sep 4 17:20:16.123950 kubelet[2519]: I0904 17:20:16.123912 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Sep 4 17:20:16.125081 containerd[1453]: time="2024-09-04T17:20:16.124575619Z" level=info msg="StopPodSandbox for \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\"" Sep 4 17:20:16.125081 containerd[1453]: time="2024-09-04T17:20:16.124808146Z" level=info msg="Ensure that sandbox bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b in task-service has been cleanup successfully" Sep 4 17:20:16.152578 containerd[1453]: time="2024-09-04T17:20:16.152487132Z" level=error msg="StopPodSandbox for \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\" failed" error="failed to destroy network for sandbox \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:16.152791 kubelet[2519]: E0904 17:20:16.152760 2519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Sep 4 17:20:16.152887 kubelet[2519]: E0904 17:20:16.152806 2519 kuberuntime_manager.go:1380] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b"} Sep 4 17:20:16.152887 kubelet[2519]: E0904 17:20:16.152838 2519 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec74825e-1f06-4e2a-b769-94c881521a0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:20:16.152887 kubelet[2519]: E0904 17:20:16.152866 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec74825e-1f06-4e2a-b769-94c881521a0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vmblz" podUID="ec74825e-1f06-4e2a-b769-94c881521a0f" Sep 4 17:20:19.106887 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:52172.service - OpenSSH per-connection server daemon (10.0.0.1:52172). Sep 4 17:20:19.142978 sshd[3532]: Accepted publickey for core from 10.0.0.1 port 52172 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:19.144836 sshd[3532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:19.150861 systemd-logind[1435]: New session 11 of user core. Sep 4 17:20:19.156941 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 4 17:20:19.286234 sshd[3532]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:19.289636 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:52172.service: Deactivated successfully. Sep 4 17:20:19.292347 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:20:19.295104 systemd-logind[1435]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:20:19.296167 systemd-logind[1435]: Removed session 11. Sep 4 17:20:20.418767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1466226929.mount: Deactivated successfully. Sep 4 17:20:21.438702 containerd[1453]: time="2024-09-04T17:20:21.438632253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:21.440174 containerd[1453]: time="2024-09-04T17:20:21.440057911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 17:20:21.444836 containerd[1453]: time="2024-09-04T17:20:21.444802490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 6.326192998s" Sep 4 17:20:21.444836 containerd[1453]: time="2024-09-04T17:20:21.444835662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 17:20:21.452138 containerd[1453]: time="2024-09-04T17:20:21.452086107Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:21.453221 containerd[1453]: time="2024-09-04T17:20:21.452813663Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:21.455023 containerd[1453]: time="2024-09-04T17:20:21.454836642Z" level=info msg="CreateContainer within sandbox \"1c07b32a2d2069f286a2038fcc1c326a7d4fe785c086dca48e03ba3a5183f0c4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:20:21.477438 containerd[1453]: time="2024-09-04T17:20:21.477379903Z" level=info msg="CreateContainer within sandbox \"1c07b32a2d2069f286a2038fcc1c326a7d4fe785c086dca48e03ba3a5183f0c4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"55f3a5138d811d2c0c0e8723f8f1a3d20d565dd858050085f406ffdc14a5dac4\"" Sep 4 17:20:21.479805 containerd[1453]: time="2024-09-04T17:20:21.477981292Z" level=info msg="StartContainer for \"55f3a5138d811d2c0c0e8723f8f1a3d20d565dd858050085f406ffdc14a5dac4\"" Sep 4 17:20:21.554762 systemd[1]: Started cri-containerd-55f3a5138d811d2c0c0e8723f8f1a3d20d565dd858050085f406ffdc14a5dac4.scope - libcontainer container 55f3a5138d811d2c0c0e8723f8f1a3d20d565dd858050085f406ffdc14a5dac4. Sep 4 17:20:21.591778 containerd[1453]: time="2024-09-04T17:20:21.591719147Z" level=info msg="StartContainer for \"55f3a5138d811d2c0c0e8723f8f1a3d20d565dd858050085f406ffdc14a5dac4\" returns successfully" Sep 4 17:20:21.679681 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:20:21.679820 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Sep 4 17:20:22.139141 kubelet[2519]: E0904 17:20:22.139104 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:22.242225 kubelet[2519]: I0904 17:20:22.242015 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-74chb" podStartSLOduration=1.723426755 podCreationTimestamp="2024-09-04 17:20:00 +0000 UTC" firstStartedPulling="2024-09-04 17:20:00.926588303 +0000 UTC m=+21.033850483" lastFinishedPulling="2024-09-04 17:20:21.445128983 +0000 UTC m=+41.552391174" observedRunningTime="2024-09-04 17:20:22.241704683 +0000 UTC m=+42.348966903" watchObservedRunningTime="2024-09-04 17:20:22.241967446 +0000 UTC m=+42.349229636" Sep 4 17:20:24.298462 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:52188.service - OpenSSH per-connection server daemon (10.0.0.1:52188). Sep 4 17:20:24.336469 sshd[3739]: Accepted publickey for core from 10.0.0.1 port 52188 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:24.338162 sshd[3739]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:24.342740 systemd-logind[1435]: New session 12 of user core. Sep 4 17:20:24.357627 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:20:24.500308 sshd[3739]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:24.505401 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:52188.service: Deactivated successfully. Sep 4 17:20:24.507829 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:20:24.508544 systemd-logind[1435]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:20:24.509568 systemd-logind[1435]: Removed session 12. 
Sep 4 17:20:27.017922 containerd[1453]: time="2024-09-04T17:20:27.017794392Z" level=info msg="StopPodSandbox for \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\"" Sep 4 17:20:27.265420 kubelet[2519]: I0904 17:20:27.265359 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:20:27.267724 kubelet[2519]: E0904 17:20:27.265963 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:27.425255 containerd[1453]: 2024-09-04 17:20:27.346 [INFO][3822] k8s.go 608: Cleaning up netns ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Sep 4 17:20:27.425255 containerd[1453]: 2024-09-04 17:20:27.346 [INFO][3822] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" iface="eth0" netns="/var/run/netns/cni-d85ec4ce-7d83-121e-4f5f-ecb7e32e3549" Sep 4 17:20:27.425255 containerd[1453]: 2024-09-04 17:20:27.347 [INFO][3822] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" iface="eth0" netns="/var/run/netns/cni-d85ec4ce-7d83-121e-4f5f-ecb7e32e3549" Sep 4 17:20:27.425255 containerd[1453]: 2024-09-04 17:20:27.347 [INFO][3822] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" iface="eth0" netns="/var/run/netns/cni-d85ec4ce-7d83-121e-4f5f-ecb7e32e3549" Sep 4 17:20:27.425255 containerd[1453]: 2024-09-04 17:20:27.347 [INFO][3822] k8s.go 615: Releasing IP address(es) ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Sep 4 17:20:27.425255 containerd[1453]: 2024-09-04 17:20:27.348 [INFO][3822] utils.go 188: Calico CNI releasing IP address ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Sep 4 17:20:27.425255 containerd[1453]: 2024-09-04 17:20:27.411 [INFO][3854] ipam_plugin.go 417: Releasing address using handleID ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" HandleID="k8s-pod-network.bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Workload="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:27.425255 containerd[1453]: 2024-09-04 17:20:27.411 [INFO][3854] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:27.425255 containerd[1453]: 2024-09-04 17:20:27.411 [INFO][3854] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:27.425255 containerd[1453]: 2024-09-04 17:20:27.418 [WARNING][3854] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" HandleID="k8s-pod-network.bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Workload="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:27.425255 containerd[1453]: 2024-09-04 17:20:27.418 [INFO][3854] ipam_plugin.go 445: Releasing address using workloadID ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" HandleID="k8s-pod-network.bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Workload="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:27.425255 containerd[1453]: 2024-09-04 17:20:27.419 [INFO][3854] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:27.425255 containerd[1453]: 2024-09-04 17:20:27.422 [INFO][3822] k8s.go 621: Teardown processing complete. ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Sep 4 17:20:27.425814 containerd[1453]: time="2024-09-04T17:20:27.425464697Z" level=info msg="TearDown network for sandbox \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\" successfully" Sep 4 17:20:27.425814 containerd[1453]: time="2024-09-04T17:20:27.425527666Z" level=info msg="StopPodSandbox for \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\" returns successfully" Sep 4 17:20:27.428281 systemd[1]: run-netns-cni\x2dd85ec4ce\x2d7d83\x2d121e\x2d4f5f\x2decb7e32e3549.mount: Deactivated successfully. 
Sep 4 17:20:27.436598 containerd[1453]: time="2024-09-04T17:20:27.436572652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmblz,Uid:ec74825e-1f06-4e2a-b769-94c881521a0f,Namespace:calico-system,Attempt:1,}" Sep 4 17:20:27.548850 systemd-networkd[1388]: calie64bca0b135: Link UP Sep 4 17:20:27.549845 systemd-networkd[1388]: calie64bca0b135: Gained carrier Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.472 [INFO][3868] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.482 [INFO][3868] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vmblz-eth0 csi-node-driver- calico-system ec74825e-1f06-4e2a-b769-94c881521a0f 783 0 2024-09-04 17:20:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-vmblz eth0 default [] [] [kns.calico-system ksa.calico-system.default] calie64bca0b135 [] []}} ContainerID="e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" Namespace="calico-system" Pod="csi-node-driver-vmblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmblz-" Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.482 [INFO][3868] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" Namespace="calico-system" Pod="csi-node-driver-vmblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.510 [INFO][3875] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" 
HandleID="k8s-pod-network.e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" Workload="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.518 [INFO][3875] ipam_plugin.go 270: Auto assigning IP ContainerID="e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" HandleID="k8s-pod-network.e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" Workload="localhost-k8s-csi--node--driver--vmblz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027c8f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vmblz", "timestamp":"2024-09-04 17:20:27.510655711 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.518 [INFO][3875] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.518 [INFO][3875] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.518 [INFO][3875] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.519 [INFO][3875] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" host="localhost" Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.523 [INFO][3875] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.526 [INFO][3875] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.528 [INFO][3875] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.530 [INFO][3875] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.530 [INFO][3875] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" host="localhost" Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.531 [INFO][3875] ipam.go 1685: Creating new handle: k8s-pod-network.e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32 Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.533 [INFO][3875] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" host="localhost" Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.538 [INFO][3875] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" host="localhost" Sep 4 
17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.538 [INFO][3875] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" host="localhost" Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.538 [INFO][3875] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:27.562032 containerd[1453]: 2024-09-04 17:20:27.538 [INFO][3875] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" HandleID="k8s-pod-network.e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" Workload="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:27.562956 containerd[1453]: 2024-09-04 17:20:27.541 [INFO][3868] k8s.go 386: Populated endpoint ContainerID="e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" Namespace="calico-system" Pod="csi-node-driver-vmblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmblz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vmblz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ec74825e-1f06-4e2a-b769-94c881521a0f", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vmblz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie64bca0b135", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:27.562956 containerd[1453]: 2024-09-04 17:20:27.541 [INFO][3868] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" Namespace="calico-system" Pod="csi-node-driver-vmblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:27.562956 containerd[1453]: 2024-09-04 17:20:27.541 [INFO][3868] dataplane_linux.go 68: Setting the host side veth name to calie64bca0b135 ContainerID="e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" Namespace="calico-system" Pod="csi-node-driver-vmblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:27.562956 containerd[1453]: 2024-09-04 17:20:27.548 [INFO][3868] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" Namespace="calico-system" Pod="csi-node-driver-vmblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:27.562956 containerd[1453]: 2024-09-04 17:20:27.549 [INFO][3868] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" Namespace="calico-system" Pod="csi-node-driver-vmblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmblz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vmblz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ec74825e-1f06-4e2a-b769-94c881521a0f", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32", Pod:"csi-node-driver-vmblz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie64bca0b135", MAC:"c2:33:b8:76:3b:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:27.562956 containerd[1453]: 2024-09-04 17:20:27.557 [INFO][3868] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32" Namespace="calico-system" Pod="csi-node-driver-vmblz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:27.601922 containerd[1453]: time="2024-09-04T17:20:27.601296194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:20:27.601922 containerd[1453]: time="2024-09-04T17:20:27.601881603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:27.601922 containerd[1453]: time="2024-09-04T17:20:27.601896791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:20:27.601922 containerd[1453]: time="2024-09-04T17:20:27.601905939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:27.627660 systemd[1]: Started cri-containerd-e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32.scope - libcontainer container e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32. Sep 4 17:20:27.631822 kubelet[2519]: I0904 17:20:27.631794 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:20:27.632454 kubelet[2519]: E0904 17:20:27.632437 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:27.642515 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:20:27.665195 containerd[1453]: time="2024-09-04T17:20:27.665126132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmblz,Uid:ec74825e-1f06-4e2a-b769-94c881521a0f,Namespace:calico-system,Attempt:1,} returns sandbox id \"e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32\"" Sep 4 17:20:27.666683 containerd[1453]: time="2024-09-04T17:20:27.666430882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:20:28.018224 containerd[1453]: time="2024-09-04T17:20:28.018155901Z" level=info msg="StopPodSandbox for 
\"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\"" Sep 4 17:20:28.097873 containerd[1453]: 2024-09-04 17:20:28.059 [INFO][3998] k8s.go 608: Cleaning up netns ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Sep 4 17:20:28.097873 containerd[1453]: 2024-09-04 17:20:28.060 [INFO][3998] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" iface="eth0" netns="/var/run/netns/cni-bfa13b11-893e-3e95-f32f-59d49837dc62" Sep 4 17:20:28.097873 containerd[1453]: 2024-09-04 17:20:28.060 [INFO][3998] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" iface="eth0" netns="/var/run/netns/cni-bfa13b11-893e-3e95-f32f-59d49837dc62" Sep 4 17:20:28.097873 containerd[1453]: 2024-09-04 17:20:28.060 [INFO][3998] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" iface="eth0" netns="/var/run/netns/cni-bfa13b11-893e-3e95-f32f-59d49837dc62" Sep 4 17:20:28.097873 containerd[1453]: 2024-09-04 17:20:28.060 [INFO][3998] k8s.go 615: Releasing IP address(es) ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Sep 4 17:20:28.097873 containerd[1453]: 2024-09-04 17:20:28.060 [INFO][3998] utils.go 188: Calico CNI releasing IP address ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Sep 4 17:20:28.097873 containerd[1453]: 2024-09-04 17:20:28.083 [INFO][4006] ipam_plugin.go 417: Releasing address using handleID ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" HandleID="k8s-pod-network.8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Workload="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:28.097873 containerd[1453]: 2024-09-04 17:20:28.083 [INFO][4006] ipam_plugin.go 358: About 
to acquire host-wide IPAM lock. Sep 4 17:20:28.097873 containerd[1453]: 2024-09-04 17:20:28.083 [INFO][4006] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:28.097873 containerd[1453]: 2024-09-04 17:20:28.090 [WARNING][4006] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" HandleID="k8s-pod-network.8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Workload="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:28.097873 containerd[1453]: 2024-09-04 17:20:28.090 [INFO][4006] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" HandleID="k8s-pod-network.8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Workload="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:28.097873 containerd[1453]: 2024-09-04 17:20:28.091 [INFO][4006] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:28.097873 containerd[1453]: 2024-09-04 17:20:28.094 [INFO][3998] k8s.go 621: Teardown processing complete. 
ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Sep 4 17:20:28.098430 containerd[1453]: time="2024-09-04T17:20:28.098074903Z" level=info msg="TearDown network for sandbox \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\" successfully" Sep 4 17:20:28.098430 containerd[1453]: time="2024-09-04T17:20:28.098106392Z" level=info msg="StopPodSandbox for \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\" returns successfully" Sep 4 17:20:28.098661 kubelet[2519]: E0904 17:20:28.098453 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:28.099017 containerd[1453]: time="2024-09-04T17:20:28.098963682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zxfg9,Uid:84407aac-cc69-469e-80db-ff90c5b8ed8a,Namespace:kube-system,Attempt:1,}" Sep 4 17:20:28.154247 kubelet[2519]: E0904 17:20:28.154215 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:28.396091 systemd-networkd[1388]: calia626b61c1dc: Link UP Sep 4 17:20:28.396262 systemd-networkd[1388]: calia626b61c1dc: Gained carrier Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.133 [INFO][4014] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.143 [INFO][4014] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--zxfg9-eth0 coredns-5dd5756b68- kube-system 84407aac-cc69-469e-80db-ff90c5b8ed8a 799 0 2024-09-04 17:19:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost 
coredns-5dd5756b68-zxfg9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia626b61c1dc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" Namespace="kube-system" Pod="coredns-5dd5756b68-zxfg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--zxfg9-" Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.143 [INFO][4014] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" Namespace="kube-system" Pod="coredns-5dd5756b68-zxfg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.176 [INFO][4027] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" HandleID="k8s-pod-network.62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" Workload="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.183 [INFO][4027] ipam_plugin.go 270: Auto assigning IP ContainerID="62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" HandleID="k8s-pod-network.62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" Workload="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000136ac0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-zxfg9", "timestamp":"2024-09-04 17:20:28.176265269 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.183 [INFO][4027] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.183 [INFO][4027] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.183 [INFO][4027] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.184 [INFO][4027] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" host="localhost" Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.188 [INFO][4027] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.192 [INFO][4027] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.194 [INFO][4027] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.196 [INFO][4027] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.196 [INFO][4027] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" host="localhost" Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.197 [INFO][4027] ipam.go 1685: Creating new handle: k8s-pod-network.62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.386 [INFO][4027] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" host="localhost" Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.390 [INFO][4027] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] 
block=192.168.88.128/26 handle="k8s-pod-network.62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" host="localhost" Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.390 [INFO][4027] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" host="localhost" Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.390 [INFO][4027] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:28.431388 containerd[1453]: 2024-09-04 17:20:28.390 [INFO][4027] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" HandleID="k8s-pod-network.62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" Workload="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:28.432708 containerd[1453]: 2024-09-04 17:20:28.393 [INFO][4014] k8s.go 386: Populated endpoint ContainerID="62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" Namespace="kube-system" Pod="coredns-5dd5756b68-zxfg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--zxfg9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"84407aac-cc69-469e-80db-ff90c5b8ed8a", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-zxfg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia626b61c1dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:28.432708 containerd[1453]: 2024-09-04 17:20:28.394 [INFO][4014] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" Namespace="kube-system" Pod="coredns-5dd5756b68-zxfg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:28.432708 containerd[1453]: 2024-09-04 17:20:28.394 [INFO][4014] dataplane_linux.go 68: Setting the host side veth name to calia626b61c1dc ContainerID="62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" Namespace="kube-system" Pod="coredns-5dd5756b68-zxfg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:28.432708 containerd[1453]: 2024-09-04 17:20:28.395 [INFO][4014] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" Namespace="kube-system" Pod="coredns-5dd5756b68-zxfg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:28.432708 containerd[1453]: 
2024-09-04 17:20:28.396 [INFO][4014] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" Namespace="kube-system" Pod="coredns-5dd5756b68-zxfg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--zxfg9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"84407aac-cc69-469e-80db-ff90c5b8ed8a", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a", Pod:"coredns-5dd5756b68-zxfg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia626b61c1dc", MAC:"ee:5f:31:c7:69:14", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:28.432708 containerd[1453]: 2024-09-04 17:20:28.426 [INFO][4014] k8s.go 500: Wrote updated endpoint to datastore ContainerID="62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a" Namespace="kube-system" Pod="coredns-5dd5756b68-zxfg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:28.432295 systemd[1]: run-netns-cni\x2dbfa13b11\x2d893e\x2d3e95\x2df32f\x2d59d49837dc62.mount: Deactivated successfully. Sep 4 17:20:28.568529 kernel: bpftool[4090]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:20:28.601330 containerd[1453]: time="2024-09-04T17:20:28.600568614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:20:28.601330 containerd[1453]: time="2024-09-04T17:20:28.601308804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:28.601831 containerd[1453]: time="2024-09-04T17:20:28.601342097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:20:28.601831 containerd[1453]: time="2024-09-04T17:20:28.601362355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:28.634659 systemd[1]: Started cri-containerd-62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a.scope - libcontainer container 62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a. 
Sep 4 17:20:28.649152 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:20:28.675955 containerd[1453]: time="2024-09-04T17:20:28.675704085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zxfg9,Uid:84407aac-cc69-469e-80db-ff90c5b8ed8a,Namespace:kube-system,Attempt:1,} returns sandbox id \"62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a\"" Sep 4 17:20:28.677304 kubelet[2519]: E0904 17:20:28.677090 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:28.682251 containerd[1453]: time="2024-09-04T17:20:28.681793147Z" level=info msg="CreateContainer within sandbox \"62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:20:28.707087 containerd[1453]: time="2024-09-04T17:20:28.707000922Z" level=info msg="CreateContainer within sandbox \"62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea2b35f21e0951ebc56f60addbd07f74decc1cf2a3eecdd053e98efa80d46714\"" Sep 4 17:20:28.707707 containerd[1453]: time="2024-09-04T17:20:28.707673705Z" level=info msg="StartContainer for \"ea2b35f21e0951ebc56f60addbd07f74decc1cf2a3eecdd053e98efa80d46714\"" Sep 4 17:20:28.743657 systemd[1]: Started cri-containerd-ea2b35f21e0951ebc56f60addbd07f74decc1cf2a3eecdd053e98efa80d46714.scope - libcontainer container ea2b35f21e0951ebc56f60addbd07f74decc1cf2a3eecdd053e98efa80d46714. 
Sep 4 17:20:28.777087 containerd[1453]: time="2024-09-04T17:20:28.776982116Z" level=info msg="StartContainer for \"ea2b35f21e0951ebc56f60addbd07f74decc1cf2a3eecdd053e98efa80d46714\" returns successfully" Sep 4 17:20:28.840051 systemd-networkd[1388]: vxlan.calico: Link UP Sep 4 17:20:28.840060 systemd-networkd[1388]: vxlan.calico: Gained carrier Sep 4 17:20:29.018460 containerd[1453]: time="2024-09-04T17:20:29.017903075Z" level=info msg="StopPodSandbox for \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\"" Sep 4 17:20:29.111457 containerd[1453]: 2024-09-04 17:20:29.069 [INFO][4227] k8s.go 608: Cleaning up netns ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Sep 4 17:20:29.111457 containerd[1453]: 2024-09-04 17:20:29.069 [INFO][4227] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" iface="eth0" netns="/var/run/netns/cni-ce3e14bf-78c7-7f13-0f6f-92527faaf411" Sep 4 17:20:29.111457 containerd[1453]: 2024-09-04 17:20:29.069 [INFO][4227] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" iface="eth0" netns="/var/run/netns/cni-ce3e14bf-78c7-7f13-0f6f-92527faaf411" Sep 4 17:20:29.111457 containerd[1453]: 2024-09-04 17:20:29.070 [INFO][4227] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" iface="eth0" netns="/var/run/netns/cni-ce3e14bf-78c7-7f13-0f6f-92527faaf411" Sep 4 17:20:29.111457 containerd[1453]: 2024-09-04 17:20:29.070 [INFO][4227] k8s.go 615: Releasing IP address(es) ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Sep 4 17:20:29.111457 containerd[1453]: 2024-09-04 17:20:29.070 [INFO][4227] utils.go 188: Calico CNI releasing IP address ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Sep 4 17:20:29.111457 containerd[1453]: 2024-09-04 17:20:29.095 [INFO][4236] ipam_plugin.go 417: Releasing address using handleID ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" HandleID="k8s-pod-network.4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Workload="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:29.111457 containerd[1453]: 2024-09-04 17:20:29.095 [INFO][4236] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:29.111457 containerd[1453]: 2024-09-04 17:20:29.095 [INFO][4236] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:29.111457 containerd[1453]: 2024-09-04 17:20:29.102 [WARNING][4236] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" HandleID="k8s-pod-network.4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Workload="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:29.111457 containerd[1453]: 2024-09-04 17:20:29.102 [INFO][4236] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" HandleID="k8s-pod-network.4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Workload="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:29.111457 containerd[1453]: 2024-09-04 17:20:29.104 [INFO][4236] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:29.111457 containerd[1453]: 2024-09-04 17:20:29.107 [INFO][4227] k8s.go 621: Teardown processing complete. ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Sep 4 17:20:29.112018 containerd[1453]: time="2024-09-04T17:20:29.111665282Z" level=info msg="TearDown network for sandbox \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\" successfully" Sep 4 17:20:29.112018 containerd[1453]: time="2024-09-04T17:20:29.111703895Z" level=info msg="StopPodSandbox for \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\" returns successfully" Sep 4 17:20:29.112471 containerd[1453]: time="2024-09-04T17:20:29.112440727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79cc849bb-qnk7z,Uid:3134e9e3-9d9c-4e6c-af7f-e379eb17a941,Namespace:calico-system,Attempt:1,}" Sep 4 17:20:29.170299 kubelet[2519]: E0904 17:20:29.170239 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:29.182034 kubelet[2519]: I0904 17:20:29.181983 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/coredns-5dd5756b68-zxfg9" podStartSLOduration=34.181922682 podCreationTimestamp="2024-09-04 17:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:20:29.180340712 +0000 UTC m=+49.287602912" watchObservedRunningTime="2024-09-04 17:20:29.181922682 +0000 UTC m=+49.289184872" Sep 4 17:20:29.281579 systemd-networkd[1388]: cali57348b23f37: Link UP Sep 4 17:20:29.281851 systemd-networkd[1388]: cali57348b23f37: Gained carrier Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.179 [INFO][4267] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0 calico-kube-controllers-79cc849bb- calico-system 3134e9e3-9d9c-4e6c-af7f-e379eb17a941 813 0 2024-09-04 17:20:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79cc849bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-79cc849bb-qnk7z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali57348b23f37 [] []}} ContainerID="0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" Namespace="calico-system" Pod="calico-kube-controllers-79cc849bb-qnk7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-" Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.180 [INFO][4267] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" Namespace="calico-system" Pod="calico-kube-controllers-79cc849bb-qnk7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.231 [INFO][4286] 
ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" HandleID="k8s-pod-network.0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" Workload="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.244 [INFO][4286] ipam_plugin.go 270: Auto assigning IP ContainerID="0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" HandleID="k8s-pod-network.0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" Workload="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000366fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-79cc849bb-qnk7z", "timestamp":"2024-09-04 17:20:29.231265236 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.244 [INFO][4286] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.244 [INFO][4286] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.244 [INFO][4286] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.246 [INFO][4286] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" host="localhost" Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.251 [INFO][4286] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.255 [INFO][4286] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.256 [INFO][4286] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.258 [INFO][4286] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.258 [INFO][4286] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" host="localhost" Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.260 [INFO][4286] ipam.go 1685: Creating new handle: k8s-pod-network.0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54 Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.263 [INFO][4286] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" host="localhost" Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.272 [INFO][4286] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" host="localhost" Sep 4 
17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.272 [INFO][4286] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" host="localhost" Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.272 [INFO][4286] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:29.296873 containerd[1453]: 2024-09-04 17:20:29.272 [INFO][4286] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" HandleID="k8s-pod-network.0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" Workload="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:29.297533 containerd[1453]: 2024-09-04 17:20:29.277 [INFO][4267] k8s.go 386: Populated endpoint ContainerID="0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" Namespace="calico-system" Pod="calico-kube-controllers-79cc849bb-qnk7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0", GenerateName:"calico-kube-controllers-79cc849bb-", Namespace:"calico-system", SelfLink:"", UID:"3134e9e3-9d9c-4e6c-af7f-e379eb17a941", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79cc849bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-79cc849bb-qnk7z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali57348b23f37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:29.297533 containerd[1453]: 2024-09-04 17:20:29.277 [INFO][4267] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" Namespace="calico-system" Pod="calico-kube-controllers-79cc849bb-qnk7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:29.297533 containerd[1453]: 2024-09-04 17:20:29.277 [INFO][4267] dataplane_linux.go 68: Setting the host side veth name to cali57348b23f37 ContainerID="0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" Namespace="calico-system" Pod="calico-kube-controllers-79cc849bb-qnk7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:29.297533 containerd[1453]: 2024-09-04 17:20:29.281 [INFO][4267] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" Namespace="calico-system" Pod="calico-kube-controllers-79cc849bb-qnk7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:29.297533 containerd[1453]: 2024-09-04 17:20:29.282 [INFO][4267] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" Namespace="calico-system" 
Pod="calico-kube-controllers-79cc849bb-qnk7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0", GenerateName:"calico-kube-controllers-79cc849bb-", Namespace:"calico-system", SelfLink:"", UID:"3134e9e3-9d9c-4e6c-af7f-e379eb17a941", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79cc849bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54", Pod:"calico-kube-controllers-79cc849bb-qnk7z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali57348b23f37", MAC:"0a:12:be:ab:77:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:29.297533 containerd[1453]: 2024-09-04 17:20:29.292 [INFO][4267] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54" Namespace="calico-system" Pod="calico-kube-controllers-79cc849bb-qnk7z" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:29.357560 containerd[1453]: time="2024-09-04T17:20:29.356378452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:20:29.357560 containerd[1453]: time="2024-09-04T17:20:29.356435750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:29.357560 containerd[1453]: time="2024-09-04T17:20:29.356463231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:20:29.357560 containerd[1453]: time="2024-09-04T17:20:29.356476616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:29.366345 systemd-networkd[1388]: calie64bca0b135: Gained IPv6LL Sep 4 17:20:29.378709 systemd[1]: Started cri-containerd-0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54.scope - libcontainer container 0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54. Sep 4 17:20:29.396143 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:20:29.422372 containerd[1453]: time="2024-09-04T17:20:29.422318286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79cc849bb-qnk7z,Uid:3134e9e3-9d9c-4e6c-af7f-e379eb17a941,Namespace:calico-system,Attempt:1,} returns sandbox id \"0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54\"" Sep 4 17:20:29.431391 systemd[1]: run-netns-cni\x2dce3e14bf\x2d78c7\x2d7f13\x2d0f6f\x2d92527faaf411.mount: Deactivated successfully. Sep 4 17:20:29.514272 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:54894.service - OpenSSH per-connection server daemon (10.0.0.1:54894). 
Sep 4 17:20:29.632337 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 54894 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:29.634686 sshd[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:29.640065 systemd-logind[1435]: New session 13 of user core. Sep 4 17:20:29.650770 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:20:29.774055 containerd[1453]: time="2024-09-04T17:20:29.773986671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:29.792378 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:54906.service - OpenSSH per-connection server daemon (10.0.0.1:54906). Sep 4 17:20:29.802789 containerd[1453]: time="2024-09-04T17:20:29.802711468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:20:29.827663 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 54906 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:29.829219 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:29.833608 systemd-logind[1435]: New session 14 of user core. Sep 4 17:20:29.847755 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 4 17:20:29.866514 containerd[1453]: time="2024-09-04T17:20:29.866444289Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:29.929450 containerd[1453]: time="2024-09-04T17:20:29.929295154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:29.930702 containerd[1453]: time="2024-09-04T17:20:29.929802977Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 2.263338342s" Sep 4 17:20:29.930702 containerd[1453]: time="2024-09-04T17:20:29.929836700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:20:29.931703 containerd[1453]: time="2024-09-04T17:20:29.931678308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:20:29.932584 containerd[1453]: time="2024-09-04T17:20:29.932547961Z" level=info msg="CreateContainer within sandbox \"e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:20:29.941695 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL Sep 4 17:20:29.992628 sshd[4362]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:29.997621 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:54894.service: Deactivated successfully. Sep 4 17:20:29.999747 systemd[1]: session-13.scope: Deactivated successfully. 
Sep 4 17:20:30.000388 systemd-logind[1435]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:20:30.001279 systemd-logind[1435]: Removed session 13. Sep 4 17:20:30.005685 systemd-networkd[1388]: calia626b61c1dc: Gained IPv6LL Sep 4 17:20:30.018543 containerd[1453]: time="2024-09-04T17:20:30.018273379Z" level=info msg="StopPodSandbox for \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\"" Sep 4 17:20:30.172617 kubelet[2519]: E0904 17:20:30.172578 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:30.246544 containerd[1453]: 2024-09-04 17:20:30.117 [INFO][4404] k8s.go 608: Cleaning up netns ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Sep 4 17:20:30.246544 containerd[1453]: 2024-09-04 17:20:30.118 [INFO][4404] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" iface="eth0" netns="/var/run/netns/cni-eb0a9c31-f200-7f51-11aa-62ff383decf4" Sep 4 17:20:30.246544 containerd[1453]: 2024-09-04 17:20:30.118 [INFO][4404] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" iface="eth0" netns="/var/run/netns/cni-eb0a9c31-f200-7f51-11aa-62ff383decf4" Sep 4 17:20:30.246544 containerd[1453]: 2024-09-04 17:20:30.118 [INFO][4404] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" iface="eth0" netns="/var/run/netns/cni-eb0a9c31-f200-7f51-11aa-62ff383decf4" Sep 4 17:20:30.246544 containerd[1453]: 2024-09-04 17:20:30.118 [INFO][4404] k8s.go 615: Releasing IP address(es) ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Sep 4 17:20:30.246544 containerd[1453]: 2024-09-04 17:20:30.118 [INFO][4404] utils.go 188: Calico CNI releasing IP address ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Sep 4 17:20:30.246544 containerd[1453]: 2024-09-04 17:20:30.224 [INFO][4413] ipam_plugin.go 417: Releasing address using handleID ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" HandleID="k8s-pod-network.b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Workload="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:30.246544 containerd[1453]: 2024-09-04 17:20:30.224 [INFO][4413] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:30.246544 containerd[1453]: 2024-09-04 17:20:30.224 [INFO][4413] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:30.246544 containerd[1453]: 2024-09-04 17:20:30.234 [WARNING][4413] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" HandleID="k8s-pod-network.b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Workload="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:30.246544 containerd[1453]: 2024-09-04 17:20:30.234 [INFO][4413] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" HandleID="k8s-pod-network.b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Workload="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:30.246544 containerd[1453]: 2024-09-04 17:20:30.239 [INFO][4413] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:30.246544 containerd[1453]: 2024-09-04 17:20:30.243 [INFO][4404] k8s.go 621: Teardown processing complete. ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Sep 4 17:20:30.248133 containerd[1453]: time="2024-09-04T17:20:30.248096986Z" level=info msg="TearDown network for sandbox \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\" successfully" Sep 4 17:20:30.248319 containerd[1453]: time="2024-09-04T17:20:30.248203947Z" level=info msg="StopPodSandbox for \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\" returns successfully" Sep 4 17:20:30.248672 kubelet[2519]: E0904 17:20:30.248649 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:30.249525 containerd[1453]: time="2024-09-04T17:20:30.249368985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4rqwj,Uid:66be1aa6-468b-46e9-8f41-49d472ad634c,Namespace:kube-system,Attempt:1,}" Sep 4 17:20:30.250459 systemd[1]: run-netns-cni\x2deb0a9c31\x2df200\x2d7f51\x2d11aa\x2d62ff383decf4.mount: Deactivated successfully. 
Sep 4 17:20:30.322777 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:54912.service - OpenSSH per-connection server daemon (10.0.0.1:54912). Sep 4 17:20:30.372439 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 54912 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:30.374543 sshd[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:30.379467 systemd-logind[1435]: New session 15 of user core. Sep 4 17:20:30.389817 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:20:30.453744 systemd-networkd[1388]: cali57348b23f37: Gained IPv6LL Sep 4 17:20:30.529622 sshd[4375]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:30.534625 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:54906.service: Deactivated successfully. Sep 4 17:20:30.537739 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:20:30.538513 systemd-logind[1435]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:20:30.539398 systemd-logind[1435]: Removed session 14. Sep 4 17:20:30.869136 sshd[4427]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:30.874985 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:54912.service: Deactivated successfully. Sep 4 17:20:30.876996 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:20:30.877674 systemd-logind[1435]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:20:30.878530 systemd-logind[1435]: Removed session 15. 
Sep 4 17:20:31.174896 kubelet[2519]: E0904 17:20:31.174761 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:31.334659 containerd[1453]: time="2024-09-04T17:20:31.334588662Z" level=info msg="CreateContainer within sandbox \"e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"eb8434b7c1db9794e4d5ba89d610da32912b3fac8241790942873478de252d67\"" Sep 4 17:20:31.335412 containerd[1453]: time="2024-09-04T17:20:31.335384245Z" level=info msg="StartContainer for \"eb8434b7c1db9794e4d5ba89d610da32912b3fac8241790942873478de252d67\"" Sep 4 17:20:31.374738 systemd[1]: Started cri-containerd-eb8434b7c1db9794e4d5ba89d610da32912b3fac8241790942873478de252d67.scope - libcontainer container eb8434b7c1db9794e4d5ba89d610da32912b3fac8241790942873478de252d67. Sep 4 17:20:31.628088 containerd[1453]: time="2024-09-04T17:20:31.628013871Z" level=info msg="StartContainer for \"eb8434b7c1db9794e4d5ba89d610da32912b3fac8241790942873478de252d67\" returns successfully" Sep 4 17:20:31.927288 systemd-networkd[1388]: calia2208b29cc6: Link UP Sep 4 17:20:31.927527 systemd-networkd[1388]: calia2208b29cc6: Gained carrier Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.834 [INFO][4479] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--4rqwj-eth0 coredns-5dd5756b68- kube-system 66be1aa6-468b-46e9-8f41-49d472ad634c 831 0 2024-09-04 17:19:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-4rqwj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia2208b29cc6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 
9153 0 }] []}} ContainerID="f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" Namespace="kube-system" Pod="coredns-5dd5756b68-4rqwj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4rqwj-" Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.834 [INFO][4479] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" Namespace="kube-system" Pod="coredns-5dd5756b68-4rqwj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.859 [INFO][4493] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" HandleID="k8s-pod-network.f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" Workload="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.890 [INFO][4493] ipam_plugin.go 270: Auto assigning IP ContainerID="f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" HandleID="k8s-pod-network.f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" Workload="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002927e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-4rqwj", "timestamp":"2024-09-04 17:20:31.859140547 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.890 [INFO][4493] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.890 [INFO][4493] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.890 [INFO][4493] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.892 [INFO][4493] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" host="localhost" Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.895 [INFO][4493] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.898 [INFO][4493] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.899 [INFO][4493] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.901 [INFO][4493] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.901 [INFO][4493] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" host="localhost" Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.902 [INFO][4493] ipam.go 1685: Creating new handle: k8s-pod-network.f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8 Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.905 [INFO][4493] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" host="localhost" Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.921 [INFO][4493] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" host="localhost" Sep 4 
17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.921 [INFO][4493] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" host="localhost" Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.921 [INFO][4493] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:31.971169 containerd[1453]: 2024-09-04 17:20:31.921 [INFO][4493] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" HandleID="k8s-pod-network.f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" Workload="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:31.971772 containerd[1453]: 2024-09-04 17:20:31.924 [INFO][4479] k8s.go 386: Populated endpoint ContainerID="f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" Namespace="kube-system" Pod="coredns-5dd5756b68-4rqwj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--4rqwj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"66be1aa6-468b-46e9-8f41-49d472ad634c", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-4rqwj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2208b29cc6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:31.971772 containerd[1453]: 2024-09-04 17:20:31.924 [INFO][4479] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" Namespace="kube-system" Pod="coredns-5dd5756b68-4rqwj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:31.971772 containerd[1453]: 2024-09-04 17:20:31.924 [INFO][4479] dataplane_linux.go 68: Setting the host side veth name to calia2208b29cc6 ContainerID="f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" Namespace="kube-system" Pod="coredns-5dd5756b68-4rqwj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:31.971772 containerd[1453]: 2024-09-04 17:20:31.927 [INFO][4479] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" Namespace="kube-system" Pod="coredns-5dd5756b68-4rqwj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:31.971772 containerd[1453]: 2024-09-04 17:20:31.927 [INFO][4479] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" Namespace="kube-system" Pod="coredns-5dd5756b68-4rqwj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--4rqwj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"66be1aa6-468b-46e9-8f41-49d472ad634c", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8", Pod:"coredns-5dd5756b68-4rqwj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2208b29cc6", MAC:"8e:0d:af:65:cb:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:31.971772 containerd[1453]: 2024-09-04 17:20:31.967 [INFO][4479] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8" Namespace="kube-system" Pod="coredns-5dd5756b68-4rqwj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:32.000441 containerd[1453]: time="2024-09-04T17:20:31.999794247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:20:32.000441 containerd[1453]: time="2024-09-04T17:20:31.999883594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:32.000441 containerd[1453]: time="2024-09-04T17:20:31.999918199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:20:32.000441 containerd[1453]: time="2024-09-04T17:20:31.999955159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:32.027667 systemd[1]: Started cri-containerd-f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8.scope - libcontainer container f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8. 
Sep 4 17:20:32.044597 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:20:32.076283 containerd[1453]: time="2024-09-04T17:20:32.076217575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4rqwj,Uid:66be1aa6-468b-46e9-8f41-49d472ad634c,Namespace:kube-system,Attempt:1,} returns sandbox id \"f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8\"" Sep 4 17:20:32.080580 kubelet[2519]: E0904 17:20:32.078405 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:32.085620 containerd[1453]: time="2024-09-04T17:20:32.085556997Z" level=info msg="CreateContainer within sandbox \"f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:20:32.126311 containerd[1453]: time="2024-09-04T17:20:32.126250600Z" level=info msg="CreateContainer within sandbox \"f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ab55f1f3696497fa5e975746648a5654378b5c3b80eeef2d68db7cd79647660\"" Sep 4 17:20:32.127014 containerd[1453]: time="2024-09-04T17:20:32.126981853Z" level=info msg="StartContainer for \"6ab55f1f3696497fa5e975746648a5654378b5c3b80eeef2d68db7cd79647660\"" Sep 4 17:20:32.174639 systemd[1]: Started cri-containerd-6ab55f1f3696497fa5e975746648a5654378b5c3b80eeef2d68db7cd79647660.scope - libcontainer container 6ab55f1f3696497fa5e975746648a5654378b5c3b80eeef2d68db7cd79647660. 
Sep 4 17:20:32.207458 containerd[1453]: time="2024-09-04T17:20:32.207051305Z" level=info msg="StartContainer for \"6ab55f1f3696497fa5e975746648a5654378b5c3b80eeef2d68db7cd79647660\" returns successfully" Sep 4 17:20:33.141686 systemd-networkd[1388]: calia2208b29cc6: Gained IPv6LL Sep 4 17:20:33.195638 kubelet[2519]: E0904 17:20:33.195594 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:33.227618 kubelet[2519]: I0904 17:20:33.227530 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-4rqwj" podStartSLOduration=38.227458633 podCreationTimestamp="2024-09-04 17:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:20:33.226234446 +0000 UTC m=+53.333496646" watchObservedRunningTime="2024-09-04 17:20:33.227458633 +0000 UTC m=+53.334720823" Sep 4 17:20:33.700594 containerd[1453]: time="2024-09-04T17:20:33.700529983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:33.701254 containerd[1453]: time="2024-09-04T17:20:33.701178171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 17:20:33.702532 containerd[1453]: time="2024-09-04T17:20:33.702484032Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:33.704982 containerd[1453]: time="2024-09-04T17:20:33.704945283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 4 17:20:33.706319 containerd[1453]: time="2024-09-04T17:20:33.705477582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.773766433s" Sep 4 17:20:33.706319 containerd[1453]: time="2024-09-04T17:20:33.705532345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:20:33.707142 containerd[1453]: time="2024-09-04T17:20:33.707105487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:20:33.719372 containerd[1453]: time="2024-09-04T17:20:33.719315008Z" level=info msg="CreateContainer within sandbox \"0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:20:33.738977 containerd[1453]: time="2024-09-04T17:20:33.738910072Z" level=info msg="CreateContainer within sandbox \"0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6b088c021273b0421aced7389c628005f178a1e76d3d3a158c6e934c2b62b027\"" Sep 4 17:20:33.740743 containerd[1453]: time="2024-09-04T17:20:33.739697381Z" level=info msg="StartContainer for \"6b088c021273b0421aced7389c628005f178a1e76d3d3a158c6e934c2b62b027\"" Sep 4 17:20:33.778681 systemd[1]: Started cri-containerd-6b088c021273b0421aced7389c628005f178a1e76d3d3a158c6e934c2b62b027.scope - libcontainer container 6b088c021273b0421aced7389c628005f178a1e76d3d3a158c6e934c2b62b027. 
Sep 4 17:20:34.072463 containerd[1453]: time="2024-09-04T17:20:34.072419825Z" level=info msg="StartContainer for \"6b088c021273b0421aced7389c628005f178a1e76d3d3a158c6e934c2b62b027\" returns successfully" Sep 4 17:20:34.199991 kubelet[2519]: E0904 17:20:34.199847 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:34.209635 kubelet[2519]: I0904 17:20:34.209587 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79cc849bb-qnk7z" podStartSLOduration=29.927564931 podCreationTimestamp="2024-09-04 17:20:00 +0000 UTC" firstStartedPulling="2024-09-04 17:20:29.424053043 +0000 UTC m=+49.531315233" lastFinishedPulling="2024-09-04 17:20:33.706030039 +0000 UTC m=+53.813292249" observedRunningTime="2024-09-04 17:20:34.208416891 +0000 UTC m=+54.315679081" watchObservedRunningTime="2024-09-04 17:20:34.209541947 +0000 UTC m=+54.316804137" Sep 4 17:20:35.202239 kubelet[2519]: E0904 17:20:35.202155 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:35.659536 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:54918.service - OpenSSH per-connection server daemon (10.0.0.1:54918). Sep 4 17:20:35.785630 sshd[4678]: Accepted publickey for core from 10.0.0.1 port 54918 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:35.787424 sshd[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:35.791849 systemd-logind[1435]: New session 16 of user core. Sep 4 17:20:35.801643 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 4 17:20:35.815454 containerd[1453]: time="2024-09-04T17:20:35.815399979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:35.823174 containerd[1453]: time="2024-09-04T17:20:35.823054626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:20:35.827893 containerd[1453]: time="2024-09-04T17:20:35.827821681Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:35.845068 containerd[1453]: time="2024-09-04T17:20:35.844978187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:35.846079 containerd[1453]: time="2024-09-04T17:20:35.845985214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.138841213s" Sep 4 17:20:35.846079 containerd[1453]: time="2024-09-04T17:20:35.846041212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:20:35.848390 containerd[1453]: time="2024-09-04T17:20:35.848332581Z" level=info msg="CreateContainer within sandbox \"e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:20:35.954933 sshd[4678]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:35.959848 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:54918.service: Deactivated successfully. Sep 4 17:20:35.962507 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:20:35.963884 systemd-logind[1435]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:20:35.972768 containerd[1453]: time="2024-09-04T17:20:35.972708423Z" level=info msg="CreateContainer within sandbox \"e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7c45a4304c8be8be939f56a80d748c82185413c61cc1f54f4dd683d7bb8c0b25\"" Sep 4 17:20:35.973012 systemd-logind[1435]: Removed session 16. Sep 4 17:20:35.973764 containerd[1453]: time="2024-09-04T17:20:35.973712143Z" level=info msg="StartContainer for \"7c45a4304c8be8be939f56a80d748c82185413c61cc1f54f4dd683d7bb8c0b25\"" Sep 4 17:20:36.007687 systemd[1]: Started cri-containerd-7c45a4304c8be8be939f56a80d748c82185413c61cc1f54f4dd683d7bb8c0b25.scope - libcontainer container 7c45a4304c8be8be939f56a80d748c82185413c61cc1f54f4dd683d7bb8c0b25. 
Sep 4 17:20:36.087987 containerd[1453]: time="2024-09-04T17:20:36.087941667Z" level=info msg="StartContainer for \"7c45a4304c8be8be939f56a80d748c82185413c61cc1f54f4dd683d7bb8c0b25\" returns successfully" Sep 4 17:20:36.123880 kubelet[2519]: I0904 17:20:36.123820 2519 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:20:36.123880 kubelet[2519]: I0904 17:20:36.123857 2519 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:20:36.255729 kubelet[2519]: I0904 17:20:36.244471 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-vmblz" podStartSLOduration=28.064074786 podCreationTimestamp="2024-09-04 17:20:00 +0000 UTC" firstStartedPulling="2024-09-04 17:20:27.666197404 +0000 UTC m=+47.773459594" lastFinishedPulling="2024-09-04 17:20:35.84653687 +0000 UTC m=+55.953799060" observedRunningTime="2024-09-04 17:20:36.242670974 +0000 UTC m=+56.349933164" watchObservedRunningTime="2024-09-04 17:20:36.244414252 +0000 UTC m=+56.351676442" Sep 4 17:20:40.003858 containerd[1453]: time="2024-09-04T17:20:40.003785348Z" level=info msg="StopPodSandbox for \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\"" Sep 4 17:20:40.077256 containerd[1453]: 2024-09-04 17:20:40.044 [WARNING][4759] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vmblz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ec74825e-1f06-4e2a-b769-94c881521a0f", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32", Pod:"csi-node-driver-vmblz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie64bca0b135", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:40.077256 containerd[1453]: 2024-09-04 17:20:40.044 [INFO][4759] k8s.go 608: Cleaning up netns ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Sep 4 17:20:40.077256 containerd[1453]: 2024-09-04 17:20:40.044 [INFO][4759] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" iface="eth0" netns="" Sep 4 17:20:40.077256 containerd[1453]: 2024-09-04 17:20:40.044 [INFO][4759] k8s.go 615: Releasing IP address(es) ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Sep 4 17:20:40.077256 containerd[1453]: 2024-09-04 17:20:40.044 [INFO][4759] utils.go 188: Calico CNI releasing IP address ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Sep 4 17:20:40.077256 containerd[1453]: 2024-09-04 17:20:40.065 [INFO][4769] ipam_plugin.go 417: Releasing address using handleID ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" HandleID="k8s-pod-network.bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Workload="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:40.077256 containerd[1453]: 2024-09-04 17:20:40.065 [INFO][4769] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:40.077256 containerd[1453]: 2024-09-04 17:20:40.065 [INFO][4769] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:40.077256 containerd[1453]: 2024-09-04 17:20:40.070 [WARNING][4769] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" HandleID="k8s-pod-network.bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Workload="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:40.077256 containerd[1453]: 2024-09-04 17:20:40.070 [INFO][4769] ipam_plugin.go 445: Releasing address using workloadID ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" HandleID="k8s-pod-network.bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Workload="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:40.077256 containerd[1453]: 2024-09-04 17:20:40.072 [INFO][4769] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:20:40.077256 containerd[1453]: 2024-09-04 17:20:40.074 [INFO][4759] k8s.go 621: Teardown processing complete. ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Sep 4 17:20:40.077848 containerd[1453]: time="2024-09-04T17:20:40.077299170Z" level=info msg="TearDown network for sandbox \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\" successfully" Sep 4 17:20:40.077848 containerd[1453]: time="2024-09-04T17:20:40.077330110Z" level=info msg="StopPodSandbox for \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\" returns successfully" Sep 4 17:20:40.077909 containerd[1453]: time="2024-09-04T17:20:40.077886811Z" level=info msg="RemovePodSandbox for \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\"" Sep 4 17:20:40.080015 containerd[1453]: time="2024-09-04T17:20:40.079987867Z" level=info msg="Forcibly stopping sandbox \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\"" Sep 4 17:20:40.266543 containerd[1453]: 2024-09-04 17:20:40.182 [WARNING][4792] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vmblz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ec74825e-1f06-4e2a-b769-94c881521a0f", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e93679c9571621515a7aa2c838ff361b9b5ccafe684dbbde3fe15ecc982f9f32", Pod:"csi-node-driver-vmblz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie64bca0b135", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:40.266543 containerd[1453]: 2024-09-04 17:20:40.182 [INFO][4792] k8s.go 608: Cleaning up netns ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Sep 4 17:20:40.266543 containerd[1453]: 2024-09-04 17:20:40.182 [INFO][4792] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" iface="eth0" netns="" Sep 4 17:20:40.266543 containerd[1453]: 2024-09-04 17:20:40.182 [INFO][4792] k8s.go 615: Releasing IP address(es) ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Sep 4 17:20:40.266543 containerd[1453]: 2024-09-04 17:20:40.182 [INFO][4792] utils.go 188: Calico CNI releasing IP address ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Sep 4 17:20:40.266543 containerd[1453]: 2024-09-04 17:20:40.223 [INFO][4799] ipam_plugin.go 417: Releasing address using handleID ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" HandleID="k8s-pod-network.bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Workload="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:40.266543 containerd[1453]: 2024-09-04 17:20:40.223 [INFO][4799] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:40.266543 containerd[1453]: 2024-09-04 17:20:40.223 [INFO][4799] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:40.266543 containerd[1453]: 2024-09-04 17:20:40.242 [WARNING][4799] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" HandleID="k8s-pod-network.bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Workload="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:40.266543 containerd[1453]: 2024-09-04 17:20:40.242 [INFO][4799] ipam_plugin.go 445: Releasing address using workloadID ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" HandleID="k8s-pod-network.bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Workload="localhost-k8s-csi--node--driver--vmblz-eth0" Sep 4 17:20:40.266543 containerd[1453]: 2024-09-04 17:20:40.250 [INFO][4799] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:20:40.266543 containerd[1453]: 2024-09-04 17:20:40.256 [INFO][4792] k8s.go 621: Teardown processing complete. ContainerID="bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b" Sep 4 17:20:40.266543 containerd[1453]: time="2024-09-04T17:20:40.265754224Z" level=info msg="TearDown network for sandbox \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\" successfully" Sep 4 17:20:40.533871 containerd[1453]: time="2024-09-04T17:20:40.532740412Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:20:40.533871 containerd[1453]: time="2024-09-04T17:20:40.532877907Z" level=info msg="RemovePodSandbox \"bd79e10664e98aaee113bbce82c3663e4414de262bb9912ffc1727c4ad51323b\" returns successfully" Sep 4 17:20:40.533871 containerd[1453]: time="2024-09-04T17:20:40.533533179Z" level=info msg="StopPodSandbox for \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\"" Sep 4 17:20:40.807932 containerd[1453]: 2024-09-04 17:20:40.687 [WARNING][4823] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--4rqwj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"66be1aa6-468b-46e9-8f41-49d472ad634c", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8", Pod:"coredns-5dd5756b68-4rqwj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2208b29cc6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:40.807932 containerd[1453]: 2024-09-04 17:20:40.688 [INFO][4823] k8s.go 608: Cleaning up netns 
ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Sep 4 17:20:40.807932 containerd[1453]: 2024-09-04 17:20:40.688 [INFO][4823] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" iface="eth0" netns="" Sep 4 17:20:40.807932 containerd[1453]: 2024-09-04 17:20:40.688 [INFO][4823] k8s.go 615: Releasing IP address(es) ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Sep 4 17:20:40.807932 containerd[1453]: 2024-09-04 17:20:40.688 [INFO][4823] utils.go 188: Calico CNI releasing IP address ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Sep 4 17:20:40.807932 containerd[1453]: 2024-09-04 17:20:40.757 [INFO][4830] ipam_plugin.go 417: Releasing address using handleID ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" HandleID="k8s-pod-network.b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Workload="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:40.807932 containerd[1453]: 2024-09-04 17:20:40.757 [INFO][4830] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:40.807932 containerd[1453]: 2024-09-04 17:20:40.757 [INFO][4830] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:40.807932 containerd[1453]: 2024-09-04 17:20:40.775 [WARNING][4830] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" HandleID="k8s-pod-network.b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Workload="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:40.807932 containerd[1453]: 2024-09-04 17:20:40.775 [INFO][4830] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" HandleID="k8s-pod-network.b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Workload="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:40.807932 containerd[1453]: 2024-09-04 17:20:40.782 [INFO][4830] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:40.807932 containerd[1453]: 2024-09-04 17:20:40.790 [INFO][4823] k8s.go 621: Teardown processing complete. ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Sep 4 17:20:40.807932 containerd[1453]: time="2024-09-04T17:20:40.807247113Z" level=info msg="TearDown network for sandbox \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\" successfully" Sep 4 17:20:40.812321 containerd[1453]: time="2024-09-04T17:20:40.811930442Z" level=info msg="StopPodSandbox for \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\" returns successfully" Sep 4 17:20:40.818376 containerd[1453]: time="2024-09-04T17:20:40.816554176Z" level=info msg="RemovePodSandbox for \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\"" Sep 4 17:20:40.818376 containerd[1453]: time="2024-09-04T17:20:40.816619994Z" level=info msg="Forcibly stopping sandbox \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\"" Sep 4 17:20:40.985389 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:55888.service - OpenSSH per-connection server daemon (10.0.0.1:55888). 
Sep 4 17:20:40.998833 containerd[1453]: 2024-09-04 17:20:40.904 [WARNING][4851] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--4rqwj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"66be1aa6-468b-46e9-8f41-49d472ad634c", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f811506b01b92fe28de24dcec061bf4f0e0aee6cf3b98d3bf85a123bea8704d8", Pod:"coredns-5dd5756b68-4rqwj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2208b29cc6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, 
HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:40.998833 containerd[1453]: 2024-09-04 17:20:40.904 [INFO][4851] k8s.go 608: Cleaning up netns ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Sep 4 17:20:40.998833 containerd[1453]: 2024-09-04 17:20:40.904 [INFO][4851] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" iface="eth0" netns="" Sep 4 17:20:40.998833 containerd[1453]: 2024-09-04 17:20:40.904 [INFO][4851] k8s.go 615: Releasing IP address(es) ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Sep 4 17:20:40.998833 containerd[1453]: 2024-09-04 17:20:40.904 [INFO][4851] utils.go 188: Calico CNI releasing IP address ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Sep 4 17:20:40.998833 containerd[1453]: 2024-09-04 17:20:40.949 [INFO][4859] ipam_plugin.go 417: Releasing address using handleID ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" HandleID="k8s-pod-network.b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Workload="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:40.998833 containerd[1453]: 2024-09-04 17:20:40.950 [INFO][4859] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:40.998833 containerd[1453]: 2024-09-04 17:20:40.950 [INFO][4859] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:40.998833 containerd[1453]: 2024-09-04 17:20:40.964 [WARNING][4859] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" HandleID="k8s-pod-network.b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Workload="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:40.998833 containerd[1453]: 2024-09-04 17:20:40.964 [INFO][4859] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" HandleID="k8s-pod-network.b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Workload="localhost-k8s-coredns--5dd5756b68--4rqwj-eth0" Sep 4 17:20:40.998833 containerd[1453]: 2024-09-04 17:20:40.976 [INFO][4859] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:40.998833 containerd[1453]: 2024-09-04 17:20:40.993 [INFO][4851] k8s.go 621: Teardown processing complete. ContainerID="b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1" Sep 4 17:20:40.999407 containerd[1453]: time="2024-09-04T17:20:40.998869857Z" level=info msg="TearDown network for sandbox \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\" successfully" Sep 4 17:20:41.010013 containerd[1453]: time="2024-09-04T17:20:41.009953506Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:20:41.010717 containerd[1453]: time="2024-09-04T17:20:41.010588658Z" level=info msg="RemovePodSandbox \"b6ce38edf83a61327e15a8839cc6d8683e4afa0c6a12ed19bf3c944cdf3d80e1\" returns successfully" Sep 4 17:20:41.016182 containerd[1453]: time="2024-09-04T17:20:41.011260681Z" level=info msg="StopPodSandbox for \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\"" Sep 4 17:20:41.094601 sshd[4868]: Accepted publickey for core from 10.0.0.1 port 55888 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:41.096833 sshd[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:41.109715 systemd-logind[1435]: New session 17 of user core. Sep 4 17:20:41.118973 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:20:41.161791 containerd[1453]: 2024-09-04 17:20:41.083 [WARNING][4882] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0", GenerateName:"calico-kube-controllers-79cc849bb-", Namespace:"calico-system", SelfLink:"", UID:"3134e9e3-9d9c-4e6c-af7f-e379eb17a941", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79cc849bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54", Pod:"calico-kube-controllers-79cc849bb-qnk7z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali57348b23f37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:41.161791 containerd[1453]: 2024-09-04 17:20:41.083 [INFO][4882] k8s.go 608: Cleaning up netns ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Sep 4 17:20:41.161791 containerd[1453]: 2024-09-04 17:20:41.083 [INFO][4882] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" iface="eth0" netns="" Sep 4 17:20:41.161791 containerd[1453]: 2024-09-04 17:20:41.083 [INFO][4882] k8s.go 615: Releasing IP address(es) ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Sep 4 17:20:41.161791 containerd[1453]: 2024-09-04 17:20:41.086 [INFO][4882] utils.go 188: Calico CNI releasing IP address ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Sep 4 17:20:41.161791 containerd[1453]: 2024-09-04 17:20:41.125 [INFO][4891] ipam_plugin.go 417: Releasing address using handleID ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" HandleID="k8s-pod-network.4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Workload="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:41.161791 containerd[1453]: 2024-09-04 17:20:41.127 [INFO][4891] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:20:41.161791 containerd[1453]: 2024-09-04 17:20:41.130 [INFO][4891] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:41.161791 containerd[1453]: 2024-09-04 17:20:41.145 [WARNING][4891] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" HandleID="k8s-pod-network.4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Workload="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:41.161791 containerd[1453]: 2024-09-04 17:20:41.145 [INFO][4891] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" HandleID="k8s-pod-network.4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Workload="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:41.161791 containerd[1453]: 2024-09-04 17:20:41.148 [INFO][4891] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:41.161791 containerd[1453]: 2024-09-04 17:20:41.157 [INFO][4882] k8s.go 621: Teardown processing complete. 
ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Sep 4 17:20:41.161791 containerd[1453]: time="2024-09-04T17:20:41.161670348Z" level=info msg="TearDown network for sandbox \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\" successfully" Sep 4 17:20:41.161791 containerd[1453]: time="2024-09-04T17:20:41.161701017Z" level=info msg="StopPodSandbox for \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\" returns successfully" Sep 4 17:20:41.162426 containerd[1453]: time="2024-09-04T17:20:41.162170199Z" level=info msg="RemovePodSandbox for \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\"" Sep 4 17:20:41.162426 containerd[1453]: time="2024-09-04T17:20:41.162203564Z" level=info msg="Forcibly stopping sandbox \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\"" Sep 4 17:20:41.353940 containerd[1453]: 2024-09-04 17:20:41.243 [WARNING][4916] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0", GenerateName:"calico-kube-controllers-79cc849bb-", Namespace:"calico-system", SelfLink:"", UID:"3134e9e3-9d9c-4e6c-af7f-e379eb17a941", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79cc849bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0015fd37b7cbe89cdafc8efd7ccfcc5b33315d87ba23f033ac29212c5ae28f54", Pod:"calico-kube-controllers-79cc849bb-qnk7z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali57348b23f37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:41.353940 containerd[1453]: 2024-09-04 17:20:41.243 [INFO][4916] k8s.go 608: Cleaning up netns ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Sep 4 17:20:41.353940 containerd[1453]: 2024-09-04 17:20:41.243 [INFO][4916] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" iface="eth0" netns="" Sep 4 17:20:41.353940 containerd[1453]: 2024-09-04 17:20:41.243 [INFO][4916] k8s.go 615: Releasing IP address(es) ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Sep 4 17:20:41.353940 containerd[1453]: 2024-09-04 17:20:41.243 [INFO][4916] utils.go 188: Calico CNI releasing IP address ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Sep 4 17:20:41.353940 containerd[1453]: 2024-09-04 17:20:41.284 [INFO][4929] ipam_plugin.go 417: Releasing address using handleID ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" HandleID="k8s-pod-network.4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Workload="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:41.353940 containerd[1453]: 2024-09-04 17:20:41.284 [INFO][4929] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:41.353940 containerd[1453]: 2024-09-04 17:20:41.284 [INFO][4929] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:41.353940 containerd[1453]: 2024-09-04 17:20:41.296 [WARNING][4929] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" HandleID="k8s-pod-network.4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Workload="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:41.353940 containerd[1453]: 2024-09-04 17:20:41.297 [INFO][4929] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" HandleID="k8s-pod-network.4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Workload="localhost-k8s-calico--kube--controllers--79cc849bb--qnk7z-eth0" Sep 4 17:20:41.353940 containerd[1453]: 2024-09-04 17:20:41.301 [INFO][4929] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:41.353940 containerd[1453]: 2024-09-04 17:20:41.313 [INFO][4916] k8s.go 621: Teardown processing complete. ContainerID="4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b" Sep 4 17:20:41.353940 containerd[1453]: time="2024-09-04T17:20:41.353856272Z" level=info msg="TearDown network for sandbox \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\" successfully" Sep 4 17:20:41.402435 sshd[4868]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:41.408301 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:55888.service: Deactivated successfully. Sep 4 17:20:41.411871 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:20:41.412879 containerd[1453]: time="2024-09-04T17:20:41.412781841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:20:41.412964 containerd[1453]: time="2024-09-04T17:20:41.412905209Z" level=info msg="RemovePodSandbox \"4d8acab24997d55194b9a0b07c399d530762a5e53e989a6bf81d64c15ed30c1b\" returns successfully" Sep 4 17:20:41.415163 containerd[1453]: time="2024-09-04T17:20:41.414971905Z" level=info msg="StopPodSandbox for \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\"" Sep 4 17:20:41.417206 systemd-logind[1435]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:20:41.420726 systemd-logind[1435]: Removed session 17. Sep 4 17:20:41.577333 containerd[1453]: 2024-09-04 17:20:41.506 [WARNING][4954] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--zxfg9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"84407aac-cc69-469e-80db-ff90c5b8ed8a", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a", Pod:"coredns-5dd5756b68-zxfg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia626b61c1dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:41.577333 containerd[1453]: 2024-09-04 17:20:41.506 [INFO][4954] k8s.go 608: Cleaning up netns ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Sep 4 17:20:41.577333 containerd[1453]: 2024-09-04 17:20:41.506 [INFO][4954] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" iface="eth0" netns="" Sep 4 17:20:41.577333 containerd[1453]: 2024-09-04 17:20:41.506 [INFO][4954] k8s.go 615: Releasing IP address(es) ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Sep 4 17:20:41.577333 containerd[1453]: 2024-09-04 17:20:41.506 [INFO][4954] utils.go 188: Calico CNI releasing IP address ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Sep 4 17:20:41.577333 containerd[1453]: 2024-09-04 17:20:41.545 [INFO][4961] ipam_plugin.go 417: Releasing address using handleID ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" HandleID="k8s-pod-network.8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Workload="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:41.577333 containerd[1453]: 2024-09-04 17:20:41.545 [INFO][4961] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:20:41.577333 containerd[1453]: 2024-09-04 17:20:41.545 [INFO][4961] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:41.577333 containerd[1453]: 2024-09-04 17:20:41.556 [WARNING][4961] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" HandleID="k8s-pod-network.8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Workload="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:41.577333 containerd[1453]: 2024-09-04 17:20:41.556 [INFO][4961] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" HandleID="k8s-pod-network.8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Workload="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:41.577333 containerd[1453]: 2024-09-04 17:20:41.569 [INFO][4961] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:41.577333 containerd[1453]: 2024-09-04 17:20:41.573 [INFO][4954] k8s.go 621: Teardown processing complete. 
ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Sep 4 17:20:41.578292 containerd[1453]: time="2024-09-04T17:20:41.577363641Z" level=info msg="TearDown network for sandbox \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\" successfully" Sep 4 17:20:41.578292 containerd[1453]: time="2024-09-04T17:20:41.577395592Z" level=info msg="StopPodSandbox for \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\" returns successfully" Sep 4 17:20:41.578292 containerd[1453]: time="2024-09-04T17:20:41.577963726Z" level=info msg="RemovePodSandbox for \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\"" Sep 4 17:20:41.578292 containerd[1453]: time="2024-09-04T17:20:41.577995477Z" level=info msg="Forcibly stopping sandbox \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\"" Sep 4 17:20:41.791042 containerd[1453]: 2024-09-04 17:20:41.714 [WARNING][4982] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--zxfg9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"84407aac-cc69-469e-80db-ff90c5b8ed8a", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62529d95dd81e6bd17fc295b37cc5f9e8cf1582deec4203faebfd5cb18d42d2a", Pod:"coredns-5dd5756b68-zxfg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia626b61c1dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:41.791042 containerd[1453]: 2024-09-04 17:20:41.714 [INFO][4982] k8s.go 608: Cleaning up netns 
ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Sep 4 17:20:41.791042 containerd[1453]: 2024-09-04 17:20:41.714 [INFO][4982] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" iface="eth0" netns="" Sep 4 17:20:41.791042 containerd[1453]: 2024-09-04 17:20:41.714 [INFO][4982] k8s.go 615: Releasing IP address(es) ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Sep 4 17:20:41.791042 containerd[1453]: 2024-09-04 17:20:41.714 [INFO][4982] utils.go 188: Calico CNI releasing IP address ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Sep 4 17:20:41.791042 containerd[1453]: 2024-09-04 17:20:41.766 [INFO][4990] ipam_plugin.go 417: Releasing address using handleID ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" HandleID="k8s-pod-network.8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Workload="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:41.791042 containerd[1453]: 2024-09-04 17:20:41.766 [INFO][4990] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:41.791042 containerd[1453]: 2024-09-04 17:20:41.767 [INFO][4990] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:41.791042 containerd[1453]: 2024-09-04 17:20:41.777 [WARNING][4990] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" HandleID="k8s-pod-network.8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Workload="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:41.791042 containerd[1453]: 2024-09-04 17:20:41.777 [INFO][4990] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" HandleID="k8s-pod-network.8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Workload="localhost-k8s-coredns--5dd5756b68--zxfg9-eth0" Sep 4 17:20:41.791042 containerd[1453]: 2024-09-04 17:20:41.780 [INFO][4990] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:41.791042 containerd[1453]: 2024-09-04 17:20:41.786 [INFO][4982] k8s.go 621: Teardown processing complete. ContainerID="8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a" Sep 4 17:20:41.791042 containerd[1453]: time="2024-09-04T17:20:41.790988313Z" level=info msg="TearDown network for sandbox \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\" successfully" Sep 4 17:20:41.799937 containerd[1453]: time="2024-09-04T17:20:41.799867851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:20:41.799937 containerd[1453]: time="2024-09-04T17:20:41.799942514Z" level=info msg="RemovePodSandbox \"8c55574e7a26f983a2ea118590d4c54e6b11bb83ab6c5e8044eb17755832369a\" returns successfully" Sep 4 17:20:46.409638 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:60470.service - OpenSSH per-connection server daemon (10.0.0.1:60470). 
Sep 4 17:20:46.445701 sshd[5021]: Accepted publickey for core from 10.0.0.1 port 60470 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:46.447385 sshd[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:46.451210 systemd-logind[1435]: New session 18 of user core. Sep 4 17:20:46.466645 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:20:46.660146 sshd[5021]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:46.664279 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:60470.service: Deactivated successfully. Sep 4 17:20:46.666875 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:20:46.667596 systemd-logind[1435]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:20:46.668613 systemd-logind[1435]: Removed session 18. Sep 4 17:20:51.671768 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:60484.service - OpenSSH per-connection server daemon (10.0.0.1:60484). Sep 4 17:20:51.706517 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 60484 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:51.708110 sshd[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:51.712051 systemd-logind[1435]: New session 19 of user core. Sep 4 17:20:51.720638 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:20:51.832829 sshd[5047]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:51.837552 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:60484.service: Deactivated successfully. Sep 4 17:20:51.840604 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:20:51.841382 systemd-logind[1435]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:20:51.842354 systemd-logind[1435]: Removed session 19. Sep 4 17:20:56.844966 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:44310.service - OpenSSH per-connection server daemon (10.0.0.1:44310). 
Sep 4 17:20:56.879747 sshd[5065]: Accepted publickey for core from 10.0.0.1 port 44310 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:56.881663 sshd[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:56.886108 systemd-logind[1435]: New session 20 of user core. Sep 4 17:20:56.893626 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:20:57.010803 sshd[5065]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:57.025857 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:44310.service: Deactivated successfully. Sep 4 17:20:57.027997 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:20:57.029851 systemd-logind[1435]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:20:57.036861 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:44324.service - OpenSSH per-connection server daemon (10.0.0.1:44324). Sep 4 17:20:57.038006 systemd-logind[1435]: Removed session 20. Sep 4 17:20:57.068781 sshd[5079]: Accepted publickey for core from 10.0.0.1 port 44324 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:57.070597 sshd[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:57.075004 systemd-logind[1435]: New session 21 of user core. Sep 4 17:20:57.084622 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:20:57.356420 sshd[5079]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:57.366570 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:44324.service: Deactivated successfully. Sep 4 17:20:57.368815 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:20:57.370573 systemd-logind[1435]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:20:57.375813 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:44328.service - OpenSSH per-connection server daemon (10.0.0.1:44328). Sep 4 17:20:57.377281 systemd-logind[1435]: Removed session 21. 
Sep 4 17:20:57.408758 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 44328 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:57.410236 sshd[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:57.414739 systemd-logind[1435]: New session 22 of user core. Sep 4 17:20:57.424636 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:20:57.710071 kubelet[2519]: E0904 17:20:57.709921 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:58.494585 sshd[5091]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:58.509768 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:44328.service: Deactivated successfully. Sep 4 17:20:58.511954 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:20:58.513629 systemd-logind[1435]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:20:58.518819 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:44332.service - OpenSSH per-connection server daemon (10.0.0.1:44332). Sep 4 17:20:58.520027 systemd-logind[1435]: Removed session 22. Sep 4 17:20:58.566793 sshd[5134]: Accepted publickey for core from 10.0.0.1 port 44332 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:58.568551 sshd[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:58.572860 systemd-logind[1435]: New session 23 of user core. Sep 4 17:20:58.580720 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:20:58.913148 sshd[5134]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:58.924245 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:44332.service: Deactivated successfully. Sep 4 17:20:58.926856 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:20:58.929577 systemd-logind[1435]: Session 23 logged out. 
Waiting for processes to exit. Sep 4 17:20:58.935825 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:44344.service - OpenSSH per-connection server daemon (10.0.0.1:44344). Sep 4 17:20:58.937063 systemd-logind[1435]: Removed session 23. Sep 4 17:20:58.971450 sshd[5147]: Accepted publickey for core from 10.0.0.1 port 44344 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:20:58.973810 sshd[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:58.979136 systemd-logind[1435]: New session 24 of user core. Sep 4 17:20:58.999830 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:20:59.116463 sshd[5147]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:59.121670 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:44344.service: Deactivated successfully. Sep 4 17:20:59.124703 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:20:59.125583 systemd-logind[1435]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:20:59.126683 systemd-logind[1435]: Removed session 24. Sep 4 17:21:00.265783 update_engine[1436]: I0904 17:21:00.265699 1436 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 4 17:21:00.265783 update_engine[1436]: I0904 17:21:00.265761 1436 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 4 17:21:00.266341 update_engine[1436]: I0904 17:21:00.266311 1436 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 4 17:21:00.266967 update_engine[1436]: I0904 17:21:00.266937 1436 omaha_request_params.cc:62] Current group set to stable Sep 4 17:21:00.267722 update_engine[1436]: I0904 17:21:00.267695 1436 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 4 17:21:00.267722 update_engine[1436]: I0904 17:21:00.267706 1436 update_attempter.cc:643] Scheduling an action processor start. 
Sep 4 17:21:00.267833 update_engine[1436]: I0904 17:21:00.267731 1436 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 4 17:21:00.267833 update_engine[1436]: I0904 17:21:00.267792 1436 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 4 17:21:00.267913 update_engine[1436]: I0904 17:21:00.267866 1436 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 4 17:21:00.267913 update_engine[1436]: I0904 17:21:00.267874 1436 omaha_request_action.cc:272] Request: Sep 4 17:21:00.267913 update_engine[1436]: I0904 17:21:00.267882 1436 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:21:00.271672 update_engine[1436]: I0904 17:21:00.270519 1436 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:21:00.271672 update_engine[1436]: I0904 17:21:00.270836 1436 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 17:21:00.273053 locksmithd[1458]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 4 17:21:00.297420 update_engine[1436]: E0904 17:21:00.297381 1436 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:21:00.297482 update_engine[1436]: I0904 17:21:00.297437 1436 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 4 17:21:04.129150 systemd[1]: Started sshd@24-10.0.0.55:22-10.0.0.1:44348.service - OpenSSH per-connection server daemon (10.0.0.1:44348). 
Sep 4 17:21:04.169110 sshd[5169]: Accepted publickey for core from 10.0.0.1 port 44348 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:21:04.171063 sshd[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:21:04.176450 systemd-logind[1435]: New session 25 of user core. Sep 4 17:21:04.181709 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:21:04.302415 sshd[5169]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:04.306541 systemd[1]: sshd@24-10.0.0.55:22-10.0.0.1:44348.service: Deactivated successfully. Sep 4 17:21:04.309037 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:21:04.309908 systemd-logind[1435]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:21:04.310960 systemd-logind[1435]: Removed session 25. Sep 4 17:21:09.018073 kubelet[2519]: E0904 17:21:09.018030 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:09.314341 systemd[1]: Started sshd@25-10.0.0.55:22-10.0.0.1:51392.service - OpenSSH per-connection server daemon (10.0.0.1:51392). Sep 4 17:21:09.347376 sshd[5190]: Accepted publickey for core from 10.0.0.1 port 51392 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:21:09.348910 sshd[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:21:09.352740 systemd-logind[1435]: New session 26 of user core. Sep 4 17:21:09.362644 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:21:09.480192 sshd[5190]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:09.484675 systemd[1]: sshd@25-10.0.0.55:22-10.0.0.1:51392.service: Deactivated successfully. Sep 4 17:21:09.487084 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:21:09.487890 systemd-logind[1435]: Session 26 logged out. 
Waiting for processes to exit. Sep 4 17:21:09.489079 systemd-logind[1435]: Removed session 26. Sep 4 17:21:10.193302 kubelet[2519]: I0904 17:21:10.193252 2519 topology_manager.go:215] "Topology Admit Handler" podUID="aa94ba73-11fc-4f77-ab59-83ae6d16f701" podNamespace="calico-apiserver" podName="calico-apiserver-7f75ddf75c-gv2zs" Sep 4 17:21:10.206231 systemd[1]: Created slice kubepods-besteffort-podaa94ba73_11fc_4f77_ab59_83ae6d16f701.slice - libcontainer container kubepods-besteffort-podaa94ba73_11fc_4f77_ab59_83ae6d16f701.slice. Sep 4 17:21:10.244584 update_engine[1436]: I0904 17:21:10.244536 1436 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:21:10.244977 update_engine[1436]: I0904 17:21:10.244776 1436 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:21:10.244977 update_engine[1436]: I0904 17:21:10.244953 1436 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 17:21:10.260842 update_engine[1436]: E0904 17:21:10.260805 1436 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:21:10.260928 update_engine[1436]: I0904 17:21:10.260854 1436 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 4 17:21:10.329476 kubelet[2519]: I0904 17:21:10.329417 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa94ba73-11fc-4f77-ab59-83ae6d16f701-calico-apiserver-certs\") pod \"calico-apiserver-7f75ddf75c-gv2zs\" (UID: \"aa94ba73-11fc-4f77-ab59-83ae6d16f701\") " pod="calico-apiserver/calico-apiserver-7f75ddf75c-gv2zs" Sep 4 17:21:10.329476 kubelet[2519]: I0904 17:21:10.329467 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrrpz\" (UniqueName: \"kubernetes.io/projected/aa94ba73-11fc-4f77-ab59-83ae6d16f701-kube-api-access-mrrpz\") pod \"calico-apiserver-7f75ddf75c-gv2zs\" (UID: 
\"aa94ba73-11fc-4f77-ab59-83ae6d16f701\") " pod="calico-apiserver/calico-apiserver-7f75ddf75c-gv2zs" Sep 4 17:21:10.430725 kubelet[2519]: E0904 17:21:10.430678 2519 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 17:21:10.431549 kubelet[2519]: E0904 17:21:10.431515 2519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa94ba73-11fc-4f77-ab59-83ae6d16f701-calico-apiserver-certs podName:aa94ba73-11fc-4f77-ab59-83ae6d16f701 nodeName:}" failed. No retries permitted until 2024-09-04 17:21:10.930749552 +0000 UTC m=+91.038011742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/aa94ba73-11fc-4f77-ab59-83ae6d16f701-calico-apiserver-certs") pod "calico-apiserver-7f75ddf75c-gv2zs" (UID: "aa94ba73-11fc-4f77-ab59-83ae6d16f701") : secret "calico-apiserver-certs" not found Sep 4 17:21:11.113910 containerd[1453]: time="2024-09-04T17:21:11.113847698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f75ddf75c-gv2zs,Uid:aa94ba73-11fc-4f77-ab59-83ae6d16f701,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:21:11.567666 systemd-networkd[1388]: calidd813a7eb5c: Link UP Sep 4 17:21:11.567891 systemd-networkd[1388]: calidd813a7eb5c: Gained carrier Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.493 [INFO][5216] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f75ddf75c--gv2zs-eth0 calico-apiserver-7f75ddf75c- calico-apiserver aa94ba73-11fc-4f77-ab59-83ae6d16f701 1115 0 2024-09-04 17:21:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f75ddf75c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-7f75ddf75c-gv2zs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd813a7eb5c [] []}} ContainerID="2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" Namespace="calico-apiserver" Pod="calico-apiserver-7f75ddf75c-gv2zs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f75ddf75c--gv2zs-" Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.493 [INFO][5216] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" Namespace="calico-apiserver" Pod="calico-apiserver-7f75ddf75c-gv2zs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f75ddf75c--gv2zs-eth0" Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.521 [INFO][5229] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" HandleID="k8s-pod-network.2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" Workload="localhost-k8s-calico--apiserver--7f75ddf75c--gv2zs-eth0" Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.529 [INFO][5229] ipam_plugin.go 270: Auto assigning IP ContainerID="2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" HandleID="k8s-pod-network.2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" Workload="localhost-k8s-calico--apiserver--7f75ddf75c--gv2zs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5060), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f75ddf75c-gv2zs", "timestamp":"2024-09-04 17:21:11.52177238 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.530 [INFO][5229] 
ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.530 [INFO][5229] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.530 [INFO][5229] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.531 [INFO][5229] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" host="localhost" Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.534 [INFO][5229] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.538 [INFO][5229] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.539 [INFO][5229] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.541 [INFO][5229] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.541 [INFO][5229] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" host="localhost" Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.542 [INFO][5229] ipam.go 1685: Creating new handle: k8s-pod-network.2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9 Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.550 [INFO][5229] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" host="localhost" Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.562 [INFO][5229] ipam.go 1216: 
Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" host="localhost" Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.562 [INFO][5229] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" host="localhost" Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.562 [INFO][5229] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:11.700209 containerd[1453]: 2024-09-04 17:21:11.562 [INFO][5229] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" HandleID="k8s-pod-network.2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" Workload="localhost-k8s-calico--apiserver--7f75ddf75c--gv2zs-eth0" Sep 4 17:21:11.700868 containerd[1453]: 2024-09-04 17:21:11.565 [INFO][5216] k8s.go 386: Populated endpoint ContainerID="2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" Namespace="calico-apiserver" Pod="calico-apiserver-7f75ddf75c-gv2zs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f75ddf75c--gv2zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f75ddf75c--gv2zs-eth0", GenerateName:"calico-apiserver-7f75ddf75c-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa94ba73-11fc-4f77-ab59-83ae6d16f701", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f75ddf75c", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f75ddf75c-gv2zs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd813a7eb5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:11.700868 containerd[1453]: 2024-09-04 17:21:11.565 [INFO][5216] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" Namespace="calico-apiserver" Pod="calico-apiserver-7f75ddf75c-gv2zs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f75ddf75c--gv2zs-eth0" Sep 4 17:21:11.700868 containerd[1453]: 2024-09-04 17:21:11.565 [INFO][5216] dataplane_linux.go 68: Setting the host side veth name to calidd813a7eb5c ContainerID="2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" Namespace="calico-apiserver" Pod="calico-apiserver-7f75ddf75c-gv2zs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f75ddf75c--gv2zs-eth0" Sep 4 17:21:11.700868 containerd[1453]: 2024-09-04 17:21:11.567 [INFO][5216] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" Namespace="calico-apiserver" Pod="calico-apiserver-7f75ddf75c-gv2zs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f75ddf75c--gv2zs-eth0" Sep 4 17:21:11.700868 containerd[1453]: 2024-09-04 17:21:11.567 [INFO][5216] k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" Namespace="calico-apiserver" Pod="calico-apiserver-7f75ddf75c-gv2zs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f75ddf75c--gv2zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f75ddf75c--gv2zs-eth0", GenerateName:"calico-apiserver-7f75ddf75c-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa94ba73-11fc-4f77-ab59-83ae6d16f701", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f75ddf75c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9", Pod:"calico-apiserver-7f75ddf75c-gv2zs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd813a7eb5c", MAC:"fe:a7:84:c1:73:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:11.700868 containerd[1453]: 2024-09-04 17:21:11.696 [INFO][5216] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9" 
Namespace="calico-apiserver" Pod="calico-apiserver-7f75ddf75c-gv2zs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f75ddf75c--gv2zs-eth0" Sep 4 17:21:11.770453 containerd[1453]: time="2024-09-04T17:21:11.769576512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:21:11.770453 containerd[1453]: time="2024-09-04T17:21:11.770402461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:21:11.770453 containerd[1453]: time="2024-09-04T17:21:11.770426146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:21:11.770453 containerd[1453]: time="2024-09-04T17:21:11.770439482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:21:11.798715 systemd[1]: Started cri-containerd-2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9.scope - libcontainer container 2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9. 
Sep 4 17:21:11.811148 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:21:11.835061 containerd[1453]: time="2024-09-04T17:21:11.834954975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f75ddf75c-gv2zs,Uid:aa94ba73-11fc-4f77-ab59-83ae6d16f701,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9\"" Sep 4 17:21:11.836695 containerd[1453]: time="2024-09-04T17:21:11.836641077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:21:12.950763 systemd-networkd[1388]: calidd813a7eb5c: Gained IPv6LL Sep 4 17:21:14.017842 kubelet[2519]: E0904 17:21:14.017806 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:14.496110 systemd[1]: Started sshd@26-10.0.0.55:22-10.0.0.1:51394.service - OpenSSH per-connection server daemon (10.0.0.1:51394). Sep 4 17:21:14.552565 sshd[5299]: Accepted publickey for core from 10.0.0.1 port 51394 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:21:14.555423 sshd[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:21:14.560905 systemd-logind[1435]: New session 27 of user core. Sep 4 17:21:14.570644 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 17:21:14.712722 sshd[5299]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:14.717023 systemd[1]: sshd@26-10.0.0.55:22-10.0.0.1:51394.service: Deactivated successfully. Sep 4 17:21:14.719474 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 17:21:14.720432 systemd-logind[1435]: Session 27 logged out. Waiting for processes to exit. Sep 4 17:21:14.721513 systemd-logind[1435]: Removed session 27. 
Sep 4 17:21:14.744209 containerd[1453]: time="2024-09-04T17:21:14.744119399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:14.745100 containerd[1453]: time="2024-09-04T17:21:14.745048602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 17:21:14.746127 containerd[1453]: time="2024-09-04T17:21:14.746087283Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:14.748758 containerd[1453]: time="2024-09-04T17:21:14.748625620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:14.749637 containerd[1453]: time="2024-09-04T17:21:14.749573178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.912755797s" Sep 4 17:21:14.749637 containerd[1453]: time="2024-09-04T17:21:14.749626760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:21:14.751596 containerd[1453]: time="2024-09-04T17:21:14.751560961Z" level=info msg="CreateContainer within sandbox \"2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:21:14.765313 containerd[1453]: 
time="2024-09-04T17:21:14.765266480Z" level=info msg="CreateContainer within sandbox \"2b2c1a81324ffe993a0225a0b43cb16c9550dc775c80f8789233e99f95011ae9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3f8ee5ab004c13b00ba289bccb96eb06698c00ba72149b190bdb7b7050b3d84d\"" Sep 4 17:21:14.767342 containerd[1453]: time="2024-09-04T17:21:14.766166026Z" level=info msg="StartContainer for \"3f8ee5ab004c13b00ba289bccb96eb06698c00ba72149b190bdb7b7050b3d84d\"" Sep 4 17:21:14.802730 systemd[1]: Started cri-containerd-3f8ee5ab004c13b00ba289bccb96eb06698c00ba72149b190bdb7b7050b3d84d.scope - libcontainer container 3f8ee5ab004c13b00ba289bccb96eb06698c00ba72149b190bdb7b7050b3d84d. Sep 4 17:21:14.866288 containerd[1453]: time="2024-09-04T17:21:14.866227382Z" level=info msg="StartContainer for \"3f8ee5ab004c13b00ba289bccb96eb06698c00ba72149b190bdb7b7050b3d84d\" returns successfully" Sep 4 17:21:15.324061 kubelet[2519]: I0904 17:21:15.324014 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f75ddf75c-gv2zs" podStartSLOduration=2.41016597 podCreationTimestamp="2024-09-04 17:21:10 +0000 UTC" firstStartedPulling="2024-09-04 17:21:11.836072978 +0000 UTC m=+91.943335168" lastFinishedPulling="2024-09-04 17:21:14.749883898 +0000 UTC m=+94.857146088" observedRunningTime="2024-09-04 17:21:15.322793655 +0000 UTC m=+95.430055845" watchObservedRunningTime="2024-09-04 17:21:15.32397689 +0000 UTC m=+95.431239070" Sep 4 17:21:17.017619 kubelet[2519]: E0904 17:21:17.017581 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:17.018258 kubelet[2519]: E0904 17:21:17.017666 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:19.723097 
systemd[1]: Started sshd@27-10.0.0.55:22-10.0.0.1:49294.service - OpenSSH per-connection server daemon (10.0.0.1:49294). Sep 4 17:21:19.773190 sshd[5383]: Accepted publickey for core from 10.0.0.1 port 49294 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:21:19.775366 sshd[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:21:19.780565 systemd-logind[1435]: New session 28 of user core. Sep 4 17:21:19.786779 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 4 17:21:19.916715 sshd[5383]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:19.920639 systemd[1]: sshd@27-10.0.0.55:22-10.0.0.1:49294.service: Deactivated successfully. Sep 4 17:21:19.923450 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 17:21:19.924184 systemd-logind[1435]: Session 28 logged out. Waiting for processes to exit. Sep 4 17:21:19.925089 systemd-logind[1435]: Removed session 28. Sep 4 17:21:20.244185 update_engine[1436]: I0904 17:21:20.244132 1436 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:21:20.244686 update_engine[1436]: I0904 17:21:20.244470 1436 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:21:20.244726 update_engine[1436]: I0904 17:21:20.244706 1436 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 17:21:20.256278 update_engine[1436]: E0904 17:21:20.256249 1436 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:21:20.256329 update_engine[1436]: I0904 17:21:20.256299 1436 libcurl_http_fetcher.cc:283] No HTTP response, retry 3