Jun 25 18:31:36.956607 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024
Jun 25 18:31:36.956638 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:31:36.956664 kernel: BIOS-provided physical RAM map:
Jun 25 18:31:36.956695 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jun 25 18:31:36.956705 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jun 25 18:31:36.956715 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 25 18:31:36.956727 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Jun 25 18:31:36.956737 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Jun 25 18:31:36.956748 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 25 18:31:36.956762 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 25 18:31:36.956772 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jun 25 18:31:36.956782 kernel: NX (Execute Disable) protection: active
Jun 25 18:31:36.956792 kernel: APIC: Static calls initialized
Jun 25 18:31:36.956802 kernel: SMBIOS 2.8 present.
Jun 25 18:31:36.956815 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jun 25 18:31:36.956830 kernel: Hypervisor detected: KVM
Jun 25 18:31:36.956842 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 25 18:31:36.956853 kernel: kvm-clock: using sched offset of 2407225819 cycles
Jun 25 18:31:36.956864 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 25 18:31:36.956876 kernel: tsc: Detected 2794.750 MHz processor
Jun 25 18:31:36.956888 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 25 18:31:36.956899 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 25 18:31:36.956911 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Jun 25 18:31:36.956922 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 25 18:31:36.956937 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 25 18:31:36.956948 kernel: Using GB pages for direct mapping
Jun 25 18:31:36.956960 kernel: ACPI: Early table checksum verification disabled
Jun 25 18:31:36.956971 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Jun 25 18:31:36.956982 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:31:36.956994 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:31:36.957005 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:31:36.957016 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jun 25 18:31:36.957028 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:31:36.957042 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:31:36.957053 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:31:36.957065 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Jun 25 18:31:36.957076 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Jun 25 18:31:36.957087 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jun 25 18:31:36.957098 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Jun 25 18:31:36.957110 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Jun 25 18:31:36.957131 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Jun 25 18:31:36.957143 kernel: No NUMA configuration found
Jun 25 18:31:36.957155 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Jun 25 18:31:36.957167 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Jun 25 18:31:36.957179 kernel: Zone ranges:
Jun 25 18:31:36.957191 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 25 18:31:36.957203 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Jun 25 18:31:36.957218 kernel: Normal empty
Jun 25 18:31:36.957230 kernel: Movable zone start for each node
Jun 25 18:31:36.957242 kernel: Early memory node ranges
Jun 25 18:31:36.957254 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 25 18:31:36.957265 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Jun 25 18:31:36.957277 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Jun 25 18:31:36.957289 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 25 18:31:36.957301 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 25 18:31:36.957313 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Jun 25 18:31:36.957328 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 25 18:31:36.957340 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 25 18:31:36.957352 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 25 18:31:36.957363 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 25 18:31:36.957375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 25 18:31:36.957387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 25 18:31:36.957399 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 25 18:31:36.957411 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 25 18:31:36.957423 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 25 18:31:36.957434 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 25 18:31:36.957450 kernel: TSC deadline timer available
Jun 25 18:31:36.957461 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jun 25 18:31:36.957473 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 25 18:31:36.957485 kernel: kvm-guest: KVM setup pv remote TLB flush
Jun 25 18:31:36.957497 kernel: kvm-guest: setup PV sched yield
Jun 25 18:31:36.957509 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Jun 25 18:31:36.957521 kernel: Booting paravirtualized kernel on KVM
Jun 25 18:31:36.957533 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 25 18:31:36.957545 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jun 25 18:31:36.957560 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Jun 25 18:31:36.957572 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Jun 25 18:31:36.957583 kernel: pcpu-alloc: [0] 0 1 2 3
Jun 25 18:31:36.957595 kernel: kvm-guest: PV spinlocks enabled
Jun 25 18:31:36.957607 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 25 18:31:36.957620 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:31:36.957633 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 25 18:31:36.957653 kernel: random: crng init done
Jun 25 18:31:36.957695 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 25 18:31:36.957707 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 25 18:31:36.957719 kernel: Fallback order for Node 0: 0
Jun 25 18:31:36.957731 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Jun 25 18:31:36.957743 kernel: Policy zone: DMA32
Jun 25 18:31:36.957755 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 25 18:31:36.957767 kernel: Memory: 2428452K/2571756K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 143044K reserved, 0K cma-reserved)
Jun 25 18:31:36.957780 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun 25 18:31:36.957791 kernel: ftrace: allocating 37650 entries in 148 pages
Jun 25 18:31:36.957807 kernel: ftrace: allocated 148 pages with 3 groups
Jun 25 18:31:36.957819 kernel: Dynamic Preempt: voluntary
Jun 25 18:31:36.957831 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 25 18:31:36.957844 kernel: rcu: RCU event tracing is enabled.
Jun 25 18:31:36.957856 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jun 25 18:31:36.957868 kernel: Trampoline variant of Tasks RCU enabled.
Jun 25 18:31:36.957881 kernel: Rude variant of Tasks RCU enabled.
Jun 25 18:31:36.957892 kernel: Tracing variant of Tasks RCU enabled.
Jun 25 18:31:36.957905 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 25 18:31:36.957921 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun 25 18:31:36.957933 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jun 25 18:31:36.957945 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 25 18:31:36.957957 kernel: Console: colour VGA+ 80x25
Jun 25 18:31:36.957968 kernel: printk: console [ttyS0] enabled
Jun 25 18:31:36.957980 kernel: ACPI: Core revision 20230628
Jun 25 18:31:36.957992 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jun 25 18:31:36.958004 kernel: APIC: Switch to symmetric I/O mode setup
Jun 25 18:31:36.958016 kernel: x2apic enabled
Jun 25 18:31:36.958031 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 25 18:31:36.958043 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jun 25 18:31:36.958055 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jun 25 18:31:36.958067 kernel: kvm-guest: setup PV IPIs
Jun 25 18:31:36.958079 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 25 18:31:36.958091 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jun 25 18:31:36.958103 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jun 25 18:31:36.958116 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 25 18:31:36.958142 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jun 25 18:31:36.958154 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jun 25 18:31:36.958167 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 25 18:31:36.958180 kernel: Spectre V2 : Mitigation: Retpolines
Jun 25 18:31:36.958195 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jun 25 18:31:36.958208 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jun 25 18:31:36.958220 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jun 25 18:31:36.958233 kernel: RETBleed: Mitigation: untrained return thunk
Jun 25 18:31:36.958246 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 25 18:31:36.958262 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 25 18:31:36.958274 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jun 25 18:31:36.958288 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jun 25 18:31:36.958300 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jun 25 18:31:36.958313 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 25 18:31:36.958326 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 25 18:31:36.958338 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 25 18:31:36.958351 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 25 18:31:36.958366 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jun 25 18:31:36.958379 kernel: Freeing SMP alternatives memory: 32K
Jun 25 18:31:36.958391 kernel: pid_max: default: 32768 minimum: 301
Jun 25 18:31:36.958404 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jun 25 18:31:36.958416 kernel: SELinux: Initializing.
Jun 25 18:31:36.958429 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 25 18:31:36.958441 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 25 18:31:36.958454 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jun 25 18:31:36.958467 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:31:36.958483 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:31:36.958495 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:31:36.958508 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jun 25 18:31:36.958520 kernel: ... version:                0
Jun 25 18:31:36.958532 kernel: ... bit width:              48
Jun 25 18:31:36.958545 kernel: ... generic registers:      6
Jun 25 18:31:36.958557 kernel: ... value mask:             0000ffffffffffff
Jun 25 18:31:36.958570 kernel: ... max period:             00007fffffffffff
Jun 25 18:31:36.958582 kernel: ... fixed-purpose events:   0
Jun 25 18:31:36.958598 kernel: ... event mask:             000000000000003f
Jun 25 18:31:36.958610 kernel: signal: max sigframe size: 1776
Jun 25 18:31:36.958622 kernel: rcu: Hierarchical SRCU implementation.
Jun 25 18:31:36.958635 kernel: rcu: Max phase no-delay instances is 400.
Jun 25 18:31:36.958657 kernel: smp: Bringing up secondary CPUs ...
Jun 25 18:31:36.958684 kernel: smpboot: x86: Booting SMP configuration:
Jun 25 18:31:36.958697 kernel: .... node #0, CPUs: #1 #2 #3
Jun 25 18:31:36.958710 kernel: smp: Brought up 1 node, 4 CPUs
Jun 25 18:31:36.958722 kernel: smpboot: Max logical packages: 1
Jun 25 18:31:36.958739 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jun 25 18:31:36.958751 kernel: devtmpfs: initialized
Jun 25 18:31:36.958764 kernel: x86/mm: Memory block size: 128MB
Jun 25 18:31:36.958777 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 25 18:31:36.958790 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun 25 18:31:36.958802 kernel: pinctrl core: initialized pinctrl subsystem
Jun 25 18:31:36.958815 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 25 18:31:36.958827 kernel: audit: initializing netlink subsys (disabled)
Jun 25 18:31:36.958840 kernel: audit: type=2000 audit(1719340296.262:1): state=initialized audit_enabled=0 res=1
Jun 25 18:31:36.958855 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 25 18:31:36.958868 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 25 18:31:36.958880 kernel: cpuidle: using governor menu
Jun 25 18:31:36.958893 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 25 18:31:36.958906 kernel: dca service started, version 1.12.1
Jun 25 18:31:36.958919 kernel: PCI: Using configuration type 1 for base access
Jun 25 18:31:36.958931 kernel: PCI: Using configuration type 1 for extended access
Jun 25 18:31:36.958944 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 25 18:31:36.958956 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 25 18:31:36.958972 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 25 18:31:36.958985 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 25 18:31:36.958997 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 25 18:31:36.959010 kernel: ACPI: Added _OSI(Module Device)
Jun 25 18:31:36.959022 kernel: ACPI: Added _OSI(Processor Device)
Jun 25 18:31:36.959035 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 25 18:31:36.959047 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 25 18:31:36.959060 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 25 18:31:36.959072 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jun 25 18:31:36.959088 kernel: ACPI: Interpreter enabled
Jun 25 18:31:36.959100 kernel: ACPI: PM: (supports S0 S3 S5)
Jun 25 18:31:36.959113 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 25 18:31:36.959126 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 25 18:31:36.959138 kernel: PCI: Using E820 reservations for host bridge windows
Jun 25 18:31:36.959151 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jun 25 18:31:36.959163 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 25 18:31:36.959420 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 25 18:31:36.959444 kernel: acpiphp: Slot [3] registered
Jun 25 18:31:36.959457 kernel: acpiphp: Slot [4] registered
Jun 25 18:31:36.959469 kernel: acpiphp: Slot [5] registered
Jun 25 18:31:36.959481 kernel: acpiphp: Slot [6] registered
Jun 25 18:31:36.959494 kernel: acpiphp: Slot [7] registered
Jun 25 18:31:36.959506 kernel: acpiphp: Slot [8] registered
Jun 25 18:31:36.959518 kernel: acpiphp: Slot [9] registered
Jun 25 18:31:36.959530 kernel: acpiphp: Slot [10] registered
Jun 25 18:31:36.959543 kernel: acpiphp: Slot [11] registered
Jun 25 18:31:36.959555 kernel: acpiphp: Slot [12] registered
Jun 25 18:31:36.959570 kernel: acpiphp: Slot [13] registered
Jun 25 18:31:36.959583 kernel: acpiphp: Slot [14] registered
Jun 25 18:31:36.959595 kernel: acpiphp: Slot [15] registered
Jun 25 18:31:36.959607 kernel: acpiphp: Slot [16] registered
Jun 25 18:31:36.959620 kernel: acpiphp: Slot [17] registered
Jun 25 18:31:36.959632 kernel: acpiphp: Slot [18] registered
Jun 25 18:31:36.959654 kernel: acpiphp: Slot [19] registered
Jun 25 18:31:36.959681 kernel: acpiphp: Slot [20] registered
Jun 25 18:31:36.959693 kernel: acpiphp: Slot [21] registered
Jun 25 18:31:36.959710 kernel: acpiphp: Slot [22] registered
Jun 25 18:31:36.959722 kernel: acpiphp: Slot [23] registered
Jun 25 18:31:36.959735 kernel: acpiphp: Slot [24] registered
Jun 25 18:31:36.959747 kernel: acpiphp: Slot [25] registered
Jun 25 18:31:36.959759 kernel: acpiphp: Slot [26] registered
Jun 25 18:31:36.959771 kernel: acpiphp: Slot [27] registered
Jun 25 18:31:36.959784 kernel: acpiphp: Slot [28] registered
Jun 25 18:31:36.959796 kernel: acpiphp: Slot [29] registered
Jun 25 18:31:36.959808 kernel: acpiphp: Slot [30] registered
Jun 25 18:31:36.959820 kernel: acpiphp: Slot [31] registered
Jun 25 18:31:36.959849 kernel: PCI host bridge to bus 0000:00
Jun 25 18:31:36.960166 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 25 18:31:36.960332 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 25 18:31:36.960489 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 25 18:31:36.960651 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Jun 25 18:31:36.960836 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jun 25 18:31:36.960999 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 25 18:31:36.961276 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jun 25 18:31:36.961519 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jun 25 18:31:36.961849 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jun 25 18:31:36.962029 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Jun 25 18:31:36.962203 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jun 25 18:31:36.962378 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jun 25 18:31:36.962561 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jun 25 18:31:36.962765 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jun 25 18:31:36.962960 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jun 25 18:31:36.963225 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jun 25 18:31:36.963402 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jun 25 18:31:36.963587 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Jun 25 18:31:36.963803 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jun 25 18:31:36.963978 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jun 25 18:31:36.964194 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jun 25 18:31:36.964373 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 25 18:31:36.964559 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Jun 25 18:31:36.964767 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Jun 25 18:31:36.964943 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jun 25 18:31:36.965125 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jun 25 18:31:36.965317 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jun 25 18:31:36.965492 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jun 25 18:31:36.965704 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jun 25 18:31:36.965885 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jun 25 18:31:36.966079 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Jun 25 18:31:36.966257 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Jun 25 18:31:36.966438 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jun 25 18:31:36.966614 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jun 25 18:31:36.966855 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jun 25 18:31:36.966873 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 25 18:31:36.966886 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 25 18:31:36.966899 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 25 18:31:36.966912 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 25 18:31:36.966924 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jun 25 18:31:36.966937 kernel: iommu: Default domain type: Translated
Jun 25 18:31:36.966955 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 25 18:31:36.966967 kernel: PCI: Using ACPI for IRQ routing
Jun 25 18:31:36.966980 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 25 18:31:36.966993 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jun 25 18:31:36.967005 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Jun 25 18:31:36.967173 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jun 25 18:31:36.967347 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jun 25 18:31:36.967515 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 25 18:31:36.967537 kernel: vgaarb: loaded
Jun 25 18:31:36.967549 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 25 18:31:36.967562 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jun 25 18:31:36.967575 kernel: clocksource: Switched to clocksource kvm-clock
Jun 25 18:31:36.967588 kernel: VFS: Disk quotas dquot_6.6.0
Jun 25 18:31:36.967600 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 25 18:31:36.967613 kernel: pnp: PnP ACPI init
Jun 25 18:31:36.967847 kernel: pnp 00:02: [dma 2]
Jun 25 18:31:36.967872 kernel: pnp: PnP ACPI: found 6 devices
Jun 25 18:31:36.967885 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 25 18:31:36.967898 kernel: NET: Registered PF_INET protocol family
Jun 25 18:31:36.967911 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 25 18:31:36.967924 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 25 18:31:36.967937 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 25 18:31:36.967949 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 25 18:31:36.967962 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 25 18:31:36.967975 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 25 18:31:36.967991 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 18:31:36.968004 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 18:31:36.968017 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 25 18:31:36.968030 kernel: NET: Registered PF_XDP protocol family
Jun 25 18:31:36.968190 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 25 18:31:36.968349 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 25 18:31:36.968506 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 25 18:31:36.968783 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Jun 25 18:31:36.968973 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jun 25 18:31:36.969155 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jun 25 18:31:36.969331 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jun 25 18:31:36.969348 kernel: PCI: CLS 0 bytes, default 64
Jun 25 18:31:36.969361 kernel: Initialise system trusted keyrings
Jun 25 18:31:36.969374 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 25 18:31:36.969386 kernel: Key type asymmetric registered
Jun 25 18:31:36.969399 kernel: Asymmetric key parser 'x509' registered
Jun 25 18:31:36.969411 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jun 25 18:31:36.969429 kernel: io scheduler mq-deadline registered
Jun 25 18:31:36.969442 kernel: io scheduler kyber registered
Jun 25 18:31:36.969454 kernel: io scheduler bfq registered
Jun 25 18:31:36.969467 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 25 18:31:36.969481 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jun 25 18:31:36.969494 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jun 25 18:31:36.969506 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jun 25 18:31:36.969519 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 25 18:31:36.969532 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 25 18:31:36.969548 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 25 18:31:36.969561 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 25 18:31:36.969574 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 25 18:31:36.969586 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 25 18:31:36.969791 kernel: rtc_cmos 00:05: RTC can wake from S4
Jun 25 18:31:36.969953 kernel: rtc_cmos 00:05: registered as rtc0
Jun 25 18:31:36.970112 kernel: rtc_cmos 00:05: setting system clock to 2024-06-25T18:31:36 UTC (1719340296)
Jun 25 18:31:36.970270 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jun 25 18:31:36.970292 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jun 25 18:31:36.970305 kernel: NET: Registered PF_INET6 protocol family
Jun 25 18:31:36.970318 kernel: Segment Routing with IPv6
Jun 25 18:31:36.970330 kernel: In-situ OAM (IOAM) with IPv6
Jun 25 18:31:36.970343 kernel: NET: Registered PF_PACKET protocol family
Jun 25 18:31:36.970355 kernel: Key type dns_resolver registered
Jun 25 18:31:36.970368 kernel: IPI shorthand broadcast: enabled
Jun 25 18:31:36.970380 kernel: sched_clock: Marking stable (840002570, 101841382)->(996950632, -55106680)
Jun 25 18:31:36.970393 kernel: registered taskstats version 1
Jun 25 18:31:36.970409 kernel: Loading compiled-in X.509 certificates
Jun 25 18:31:36.970422 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90'
Jun 25 18:31:36.970434 kernel: Key type .fscrypt registered
Jun 25 18:31:36.970447 kernel: Key type fscrypt-provisioning registered
Jun 25 18:31:36.970460 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 25 18:31:36.970472 kernel: ima: Allocated hash algorithm: sha1
Jun 25 18:31:36.970485 kernel: ima: No architecture policies found
Jun 25 18:31:36.970498 kernel: clk: Disabling unused clocks
Jun 25 18:31:36.970511 kernel: Freeing unused kernel image (initmem) memory: 49384K
Jun 25 18:31:36.970526 kernel: Write protecting the kernel read-only data: 36864k
Jun 25 18:31:36.970539 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K
Jun 25 18:31:36.970553 kernel: Run /init as init process
Jun 25 18:31:36.970565 kernel:   with arguments:
Jun 25 18:31:36.970578 kernel:     /init
Jun 25 18:31:36.970590 kernel:   with environment:
Jun 25 18:31:36.970603 kernel:     HOME=/
Jun 25 18:31:36.970638 kernel:     TERM=linux
Jun 25 18:31:36.970663 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 25 18:31:36.970734 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 18:31:36.970751 systemd[1]: Detected virtualization kvm.
Jun 25 18:31:36.970766 systemd[1]: Detected architecture x86-64.
Jun 25 18:31:36.970795 systemd[1]: Running in initrd.
Jun 25 18:31:36.970808 systemd[1]: No hostname configured, using default hostname.
Jun 25 18:31:36.970822 systemd[1]: Hostname set to .
Jun 25 18:31:36.970840 systemd[1]: Initializing machine ID from VM UUID.
Jun 25 18:31:36.970854 systemd[1]: Queued start job for default target initrd.target.
Jun 25 18:31:36.970868 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:31:36.970882 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:31:36.970897 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 25 18:31:36.970911 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 18:31:36.970925 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 25 18:31:36.970940 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 25 18:31:36.970960 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 25 18:31:36.970974 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 25 18:31:36.970988 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:31:36.971002 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:31:36.971016 systemd[1]: Reached target paths.target - Path Units.
Jun 25 18:31:36.971030 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 18:31:36.971044 systemd[1]: Reached target swap.target - Swaps.
Jun 25 18:31:36.971061 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 18:31:36.971075 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 18:31:36.971089 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 18:31:36.971103 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 25 18:31:36.971117 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 25 18:31:36.971131 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:31:36.971145 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:31:36.971159 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:31:36.971173 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 18:31:36.971190 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 25 18:31:36.971204 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 18:31:36.971219 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 25 18:31:36.971232 systemd[1]: Starting systemd-fsck-usr.service...
Jun 25 18:31:36.971246 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 18:31:36.971263 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 18:31:36.971277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:31:36.971291 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 25 18:31:36.971305 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:31:36.971319 systemd[1]: Finished systemd-fsck-usr.service.
Jun 25 18:31:36.971334 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 25 18:31:36.971378 systemd-journald[193]: Collecting audit messages is disabled. Jun 25 18:31:36.971410 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:31:36.971424 systemd-journald[193]: Journal started Jun 25 18:31:36.971456 systemd-journald[193]: Runtime Journal (/run/log/journal/041d384a3bb54dccb53d34a72feace31) is 6.0M, max 48.4M, 42.3M free. Jun 25 18:31:36.961760 systemd-modules-load[194]: Inserted module 'overlay' Jun 25 18:31:37.001427 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:31:37.001455 kernel: Bridge firewalling registered Jun 25 18:31:36.995442 systemd-modules-load[194]: Inserted module 'br_netfilter' Jun 25 18:31:37.003696 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:31:37.003862 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:31:37.020996 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:31:37.024126 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:31:37.028029 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:31:37.030830 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:31:37.032696 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:31:37.037294 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:31:37.040288 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:31:37.051314 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:31:37.053161 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 25 18:31:37.066533 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:31:37.078846 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:31:37.093246 dracut-cmdline[229]: dracut-dracut-053 Jun 25 18:31:37.097473 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:31:37.098538 systemd-resolved[221]: Positive Trust Anchors: Jun 25 18:31:37.098555 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:31:37.098597 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:31:37.102076 systemd-resolved[221]: Defaulting to hostname 'linux'. Jun 25 18:31:37.103464 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:31:37.105140 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:31:37.206714 kernel: SCSI subsystem initialized Jun 25 18:31:37.218702 kernel: Loading iSCSI transport class v2.0-870. 
Jun 25 18:31:37.232706 kernel: iscsi: registered transport (tcp) Jun 25 18:31:37.258704 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:31:37.258771 kernel: QLogic iSCSI HBA Driver Jun 25 18:31:37.308245 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:31:37.316834 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:31:37.345866 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:31:37.345948 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:31:37.345965 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:31:37.396706 kernel: raid6: avx2x4 gen() 20928 MB/s Jun 25 18:31:37.413703 kernel: raid6: avx2x2 gen() 20610 MB/s Jun 25 18:31:37.431161 kernel: raid6: avx2x1 gen() 17329 MB/s Jun 25 18:31:37.431251 kernel: raid6: using algorithm avx2x4 gen() 20928 MB/s Jun 25 18:31:37.449056 kernel: raid6: .... xor() 5596 MB/s, rmw enabled Jun 25 18:31:37.449159 kernel: raid6: using avx2x2 recovery algorithm Jun 25 18:31:37.480702 kernel: xor: automatically using best checksumming function avx Jun 25 18:31:37.686726 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:31:37.703107 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:31:37.714930 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:31:37.732371 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jun 25 18:31:37.738035 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:31:37.745855 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:31:37.766179 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Jun 25 18:31:37.804798 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jun 25 18:31:37.816934 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:31:37.907192 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:31:37.919432 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:31:37.930401 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:31:37.933776 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:31:37.935082 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:31:37.936325 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:31:37.943690 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jun 25 18:31:37.971251 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 25 18:31:37.971444 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:31:37.971472 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 18:31:37.971486 kernel: GPT:9289727 != 19775487 Jun 25 18:31:37.971500 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 18:31:37.971514 kernel: GPT:9289727 != 19775487 Jun 25 18:31:37.971527 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 18:31:37.971541 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:31:37.945839 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:31:37.977865 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:31:37.979687 kernel: libata version 3.00 loaded. Jun 25 18:31:37.980961 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 25 18:31:37.989365 kernel: scsi host0: ata_piix Jun 25 18:31:37.989577 kernel: AVX2 version of gcm_enc/dec engaged. 
Jun 25 18:31:37.989593 kernel: AES CTR mode by8 optimization enabled Jun 25 18:31:37.989607 kernel: scsi host1: ata_piix Jun 25 18:31:37.989835 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jun 25 18:31:37.989851 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jun 25 18:31:38.005711 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (468) Jun 25 18:31:38.008791 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (466) Jun 25 18:31:38.019567 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 18:31:38.027391 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 18:31:38.036233 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:31:38.045903 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 18:31:38.049900 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 18:31:38.071012 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:31:38.075424 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:31:38.075512 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:31:38.084857 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:31:38.085136 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:31:38.085217 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:31:38.085605 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 25 18:31:38.087217 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:31:38.145712 kernel: ata2: found unknown device (class 0) Jun 25 18:31:38.145752 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jun 25 18:31:38.161724 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 25 18:31:38.167392 disk-uuid[537]: Primary Header is updated. Jun 25 18:31:38.167392 disk-uuid[537]: Secondary Entries is updated. Jun 25 18:31:38.167392 disk-uuid[537]: Secondary Header is updated. Jun 25 18:31:38.205989 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:31:38.206019 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:31:38.206030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:31:38.204008 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:31:38.235966 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:31:38.251864 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jun 25 18:31:38.264509 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:31:38.264530 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jun 25 18:31:38.252334 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:31:39.178692 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:31:39.178769 disk-uuid[539]: The operation has completed successfully. Jun 25 18:31:39.211334 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:31:39.211485 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:31:39.232932 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:31:39.236404 sh[579]: Success Jun 25 18:31:39.250695 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jun 25 18:31:39.288955 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Jun 25 18:31:39.302035 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:31:39.304308 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:31:39.320328 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:31:39.320404 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:31:39.320422 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:31:39.321521 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:31:39.322390 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:31:39.328516 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:31:39.329885 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:31:39.343913 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:31:39.346272 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:31:39.357088 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:31:39.357117 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:31:39.357129 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:31:39.360704 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:31:39.372503 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:31:39.375261 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:31:39.386717 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:31:39.395001 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jun 25 18:31:39.553555 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:31:39.563073 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:31:39.572968 ignition[671]: Ignition 2.19.0 Jun 25 18:31:39.572981 ignition[671]: Stage: fetch-offline Jun 25 18:31:39.573023 ignition[671]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:39.573033 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:31:39.573118 ignition[671]: parsed url from cmdline: "" Jun 25 18:31:39.573123 ignition[671]: no config URL provided Jun 25 18:31:39.573128 ignition[671]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:31:39.573137 ignition[671]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:31:39.573166 ignition[671]: op(1): [started] loading QEMU firmware config module Jun 25 18:31:39.573172 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 25 18:31:39.587283 ignition[671]: op(1): [finished] loading QEMU firmware config module Jun 25 18:31:39.596167 systemd-networkd[766]: lo: Link UP Jun 25 18:31:39.596180 systemd-networkd[766]: lo: Gained carrier Jun 25 18:31:39.598458 systemd-networkd[766]: Enumeration completed Jun 25 18:31:39.598959 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:31:39.598964 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:31:39.600564 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:31:39.603633 systemd-networkd[766]: eth0: Link UP Jun 25 18:31:39.603638 systemd-networkd[766]: eth0: Gained carrier Jun 25 18:31:39.603648 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 25 18:31:39.606539 systemd[1]: Reached target network.target - Network. Jun 25 18:31:39.640344 ignition[671]: parsing config with SHA512: 08887eecf57e867c07a62b69cfc83b1cec5b5d0e63cddb84f26f2a2396daa33aaefbd78466cd3a56adf3423ca82289e51a0cd9484d3f58d9535b032e65a51a0b Jun 25 18:31:39.646563 unknown[671]: fetched base config from "system" Jun 25 18:31:39.646597 unknown[671]: fetched user config from "qemu" Jun 25 18:31:39.649225 ignition[671]: fetch-offline: fetch-offline passed Jun 25 18:31:39.649337 ignition[671]: Ignition finished successfully Jun 25 18:31:39.654013 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:31:39.654301 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 18:31:39.656868 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 18:31:39.661990 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:31:39.677124 ignition[773]: Ignition 2.19.0 Jun 25 18:31:39.677140 ignition[773]: Stage: kargs Jun 25 18:31:39.677376 ignition[773]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:39.677392 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:31:39.678507 ignition[773]: kargs: kargs passed Jun 25 18:31:39.678566 ignition[773]: Ignition finished successfully Jun 25 18:31:39.683307 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:31:39.687265 systemd-resolved[221]: Detected conflict on linux IN A 10.0.0.13 Jun 25 18:31:39.687286 systemd-resolved[221]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Jun 25 18:31:39.696889 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 25 18:31:39.713515 ignition[783]: Ignition 2.19.0 Jun 25 18:31:39.713530 ignition[783]: Stage: disks Jun 25 18:31:39.713772 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:39.713789 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:31:39.714655 ignition[783]: disks: disks passed Jun 25 18:31:39.714726 ignition[783]: Ignition finished successfully Jun 25 18:31:39.721476 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:31:39.721810 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:31:39.725192 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:31:39.725455 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:31:39.726071 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:31:39.733384 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:31:39.744910 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:31:39.759984 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 18:31:39.767336 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:31:39.778809 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:31:39.906701 kernel: EXT4-fs (vda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:31:39.907103 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:31:39.907758 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:31:39.919764 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:31:39.921946 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:31:39.923869 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jun 25 18:31:39.946590 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802) Jun 25 18:31:39.946652 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:31:39.946681 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:31:39.946699 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:31:39.923913 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:31:39.952114 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:31:39.923942 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:31:39.933890 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:31:39.947866 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:31:39.954974 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:31:39.992610 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:31:40.008518 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:31:40.012942 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:31:40.017470 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:31:40.101453 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:31:40.110825 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:31:40.114951 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:31:40.121693 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:31:40.146738 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 25 18:31:40.150950 ignition[916]: INFO : Ignition 2.19.0 Jun 25 18:31:40.150950 ignition[916]: INFO : Stage: mount Jun 25 18:31:40.153217 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:40.153217 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:31:40.153217 ignition[916]: INFO : mount: mount passed Jun 25 18:31:40.153217 ignition[916]: INFO : Ignition finished successfully Jun 25 18:31:40.156275 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:31:40.169802 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:31:40.319214 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:31:40.338890 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:31:40.347753 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931) Jun 25 18:31:40.347778 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:31:40.347790 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:31:40.349264 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:31:40.352725 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:31:40.354110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 18:31:40.377737 ignition[948]: INFO : Ignition 2.19.0 Jun 25 18:31:40.377737 ignition[948]: INFO : Stage: files Jun 25 18:31:40.379745 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:40.379745 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:31:40.379745 ignition[948]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:31:40.379745 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:31:40.379745 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:31:40.386422 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:31:40.388015 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:31:40.389986 unknown[948]: wrote ssh authorized keys file for user: core Jun 25 18:31:40.391262 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:31:40.393633 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:31:40.395649 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:31:40.479371 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:31:40.556995 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:31:40.559141 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:31:40.561108 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jun 25 18:31:40.561108 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:31:40.561108 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:31:40.561108 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:31:40.561108 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:31:40.561108 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:31:40.561108 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:31:40.561108 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:31:40.561108 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:31:40.561108 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:31:40.561108 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:31:40.561108 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jun 25 18:31:40.561108 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jun 25 18:31:40.928860 systemd-networkd[766]: eth0: Gained IPv6LL Jun 25 18:31:41.022610 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 18:31:41.288745 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:31:41.288745 ignition[948]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 18:31:41.293245 ignition[948]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:31:41.295955 ignition[948]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:31:41.295955 ignition[948]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 18:31:41.295955 ignition[948]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jun 25 18:31:41.301220 ignition[948]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 18:31:41.303534 ignition[948]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 18:31:41.303534 ignition[948]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jun 25 18:31:41.303534 ignition[948]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 18:31:41.330186 ignition[948]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 18:31:41.337190 ignition[948]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jun 25 18:31:41.338922 ignition[948]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 18:31:41.338922 ignition[948]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:31:41.341929 ignition[948]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:31:41.343501 ignition[948]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:31:41.345339 ignition[948]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:31:41.347051 ignition[948]: INFO : files: files passed Jun 25 18:31:41.347830 ignition[948]: INFO : Ignition finished successfully Jun 25 18:31:41.350941 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:31:41.362830 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:31:41.365284 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:31:41.368466 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:31:41.368651 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:31:41.381789 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory Jun 25 18:31:41.385982 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:31:41.385982 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:31:41.390324 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:31:41.393848 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 18:31:41.394180 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:31:41.404906 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:31:41.441991 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:31:41.442152 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:31:41.444691 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:31:41.446839 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:31:41.448923 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:31:41.449842 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:31:41.470319 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:31:41.479883 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:31:41.490388 systemd[1]: Stopped target network.target - Network. Jun 25 18:31:41.491487 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:31:41.493486 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:31:41.496028 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:31:41.498111 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:31:41.498262 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:31:41.500768 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:31:41.502439 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:31:41.504652 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:31:41.506845 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jun 25 18:31:41.508893 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 25 18:31:41.511152 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 25 18:31:41.513306 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 25 18:31:41.515655 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 25 18:31:41.517700 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 25 18:31:41.519895 systemd[1]: Stopped target swap.target - Swaps.
Jun 25 18:31:41.521783 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 25 18:31:41.521914 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 18:31:41.524428 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:31:41.525977 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:31:41.528064 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 25 18:31:41.528224 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:31:41.530321 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 25 18:31:41.530455 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 25 18:31:41.532861 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 25 18:31:41.532976 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 25 18:31:41.534851 systemd[1]: Stopped target paths.target - Path Units.
Jun 25 18:31:41.536666 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 25 18:31:41.540795 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:31:41.542713 systemd[1]: Stopped target slices.target - Slice Units.
Jun 25 18:31:41.544746 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 25 18:31:41.546695 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 25 18:31:41.546798 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 18:31:41.548813 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 25 18:31:41.548912 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 18:31:41.551320 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 25 18:31:41.551438 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 18:31:41.553772 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 25 18:31:41.553883 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 25 18:31:41.567987 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 25 18:31:41.570381 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 25 18:31:41.570580 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:31:41.574114 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 25 18:31:41.576286 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 25 18:31:41.577733 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 25 18:31:41.579471 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 25 18:31:41.579807 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:31:41.582204 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 25 18:31:41.582504 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 18:31:41.582911 systemd-networkd[766]: eth0: DHCPv6 lease lost
Jun 25 18:31:41.589838 ignition[1002]: INFO : Ignition 2.19.0
Jun 25 18:31:41.589838 ignition[1002]: INFO : Stage: umount
Jun 25 18:31:41.590441 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 25 18:31:41.595080 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:31:41.595080 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:31:41.595080 ignition[1002]: INFO : umount: umount passed
Jun 25 18:31:41.595080 ignition[1002]: INFO : Ignition finished successfully
Jun 25 18:31:41.590636 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 25 18:31:41.595176 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 25 18:31:41.595359 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 25 18:31:41.598116 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 25 18:31:41.598261 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 25 18:31:41.602846 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 25 18:31:41.602974 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 25 18:31:41.605421 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 25 18:31:41.605491 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:31:41.613079 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 25 18:31:41.613156 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 25 18:31:41.615353 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 25 18:31:41.615417 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 25 18:31:41.617839 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 25 18:31:41.617902 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 25 18:31:41.620028 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 25 18:31:41.620090 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 25 18:31:41.629875 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 25 18:31:41.631209 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 25 18:31:41.631290 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 25 18:31:41.633870 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 25 18:31:41.633937 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:31:41.636461 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 25 18:31:41.636525 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:31:41.637904 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 25 18:31:41.637965 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:31:41.640780 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:31:41.644688 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 25 18:31:41.655321 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 25 18:31:41.656381 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 25 18:31:41.662637 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 25 18:31:41.663834 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:31:41.666761 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 25 18:31:41.666831 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:31:41.669859 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 25 18:31:41.669908 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:31:41.672035 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 25 18:31:41.673035 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 18:31:41.676625 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 25 18:31:41.677822 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 25 18:31:41.680444 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 25 18:31:41.680511 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:31:41.714992 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 25 18:31:41.716229 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 25 18:31:41.717497 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:31:41.720440 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 18:31:41.722117 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:31:41.727305 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 25 18:31:41.728772 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 25 18:31:41.754150 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 25 18:31:41.755468 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 25 18:31:41.758185 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 25 18:31:41.760661 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 25 18:31:41.761894 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 25 18:31:41.777043 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 25 18:31:41.788431 systemd[1]: Switching root.
Jun 25 18:31:41.824316 systemd-journald[193]: Journal stopped
Jun 25 18:31:42.970283 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jun 25 18:31:42.970367 kernel: SELinux: policy capability network_peer_controls=1
Jun 25 18:31:42.970388 kernel: SELinux: policy capability open_perms=1
Jun 25 18:31:42.970409 kernel: SELinux: policy capability extended_socket_class=1
Jun 25 18:31:42.970426 kernel: SELinux: policy capability always_check_network=0
Jun 25 18:31:42.970452 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 25 18:31:42.970469 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 25 18:31:42.970482 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 25 18:31:42.970510 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 25 18:31:42.970527 kernel: audit: type=1403 audit(1719340302.116:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 25 18:31:42.970549 systemd[1]: Successfully loaded SELinux policy in 43.883ms.
Jun 25 18:31:42.970586 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.548ms.
Jun 25 18:31:42.970605 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 18:31:42.970622 systemd[1]: Detected virtualization kvm.
Jun 25 18:31:42.970638 systemd[1]: Detected architecture x86-64.
Jun 25 18:31:42.970654 systemd[1]: Detected first boot.
Jun 25 18:31:42.970686 systemd[1]: Initializing machine ID from VM UUID.
Jun 25 18:31:42.970704 zram_generator::config[1046]: No configuration found.
Jun 25 18:31:42.970728 systemd[1]: Populated /etc with preset unit settings.
Jun 25 18:31:42.970744 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 25 18:31:42.970761 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 25 18:31:42.970778 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 25 18:31:42.970821 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 25 18:31:42.970848 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 25 18:31:42.970865 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 25 18:31:42.970882 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 25 18:31:42.970902 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 25 18:31:42.970924 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 25 18:31:42.970941 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 25 18:31:42.970959 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 25 18:31:42.970976 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:31:42.970993 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:31:42.971010 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 25 18:31:42.971027 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 25 18:31:42.971043 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 25 18:31:42.971064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 18:31:42.971080 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 25 18:31:42.971097 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:31:42.971115 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 25 18:31:42.971132 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 25 18:31:42.971146 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 25 18:31:42.971163 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 25 18:31:42.971181 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:31:42.971209 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 25 18:31:42.971226 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 18:31:42.971244 systemd[1]: Reached target swap.target - Swaps.
Jun 25 18:31:42.971261 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 25 18:31:42.971281 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 25 18:31:42.971299 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:31:42.971316 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:31:42.971332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:31:42.971350 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 25 18:31:42.971370 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 25 18:31:42.971387 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 25 18:31:42.971404 systemd[1]: Mounting media.mount - External Media Directory...
Jun 25 18:31:42.971420 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:31:42.971437 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 25 18:31:42.971454 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 25 18:31:42.971470 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 25 18:31:42.971487 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 25 18:31:42.971516 systemd[1]: Reached target machines.target - Containers.
Jun 25 18:31:42.971540 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 25 18:31:42.971559 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:31:42.971577 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 18:31:42.971595 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 25 18:31:42.971613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:31:42.971630 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 18:31:42.971648 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:31:42.971939 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 25 18:31:42.971974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:31:42.972008 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 25 18:31:42.972028 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 25 18:31:42.972075 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 25 18:31:42.972099 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 25 18:31:42.972123 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 25 18:31:42.972155 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 18:31:42.972171 kernel: loop: module loaded
Jun 25 18:31:42.972195 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 18:31:42.972215 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 25 18:31:42.972231 kernel: fuse: init (API version 7.39)
Jun 25 18:31:42.972248 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 25 18:31:42.972265 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 25 18:31:42.972281 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 25 18:31:42.972297 systemd[1]: Stopped verity-setup.service.
Jun 25 18:31:42.972315 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:31:42.972332 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 25 18:31:42.972349 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 25 18:31:42.972384 systemd[1]: Mounted media.mount - External Media Directory.
Jun 25 18:31:42.972466 systemd-journald[1115]: Collecting audit messages is disabled.
Jun 25 18:31:42.972519 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 25 18:31:42.972541 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 25 18:31:42.972557 systemd-journald[1115]: Journal started
Jun 25 18:31:42.972587 systemd-journald[1115]: Runtime Journal (/run/log/journal/041d384a3bb54dccb53d34a72feace31) is 6.0M, max 48.4M, 42.3M free.
Jun 25 18:31:42.693762 systemd[1]: Queued start job for default target multi-user.target.
Jun 25 18:31:42.713794 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jun 25 18:31:42.714305 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 25 18:31:42.974760 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 18:31:42.976912 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 25 18:31:42.978524 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:31:42.980657 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 25 18:31:42.980921 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 25 18:31:42.982967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:31:42.988575 kernel: ACPI: bus type drm_connector registered
Jun 25 18:31:42.983180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:31:42.985365 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:31:42.985636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:31:42.988412 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 25 18:31:42.988731 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 25 18:31:42.991119 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 25 18:31:42.993105 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 25 18:31:42.993310 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 25 18:31:42.995025 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:31:42.995219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:31:42.996814 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:31:42.998564 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 25 18:31:43.000410 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 25 18:31:43.021883 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 25 18:31:43.032813 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 25 18:31:43.035758 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 25 18:31:43.037196 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 25 18:31:43.037233 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 25 18:31:43.039614 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jun 25 18:31:43.042528 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 25 18:31:43.049551 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 25 18:31:43.051387 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:31:43.056895 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 25 18:31:43.060368 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 25 18:31:43.062475 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 18:31:43.066482 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 25 18:31:43.068240 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 18:31:43.075270 systemd-journald[1115]: Time spent on flushing to /var/log/journal/041d384a3bb54dccb53d34a72feace31 is 30.419ms for 945 entries.
Jun 25 18:31:43.075270 systemd-journald[1115]: System Journal (/var/log/journal/041d384a3bb54dccb53d34a72feace31) is 8.0M, max 195.6M, 187.6M free.
Jun 25 18:31:43.171118 systemd-journald[1115]: Received client request to flush runtime journal.
Jun 25 18:31:43.171183 kernel: loop0: detected capacity change from 0 to 139760
Jun 25 18:31:43.171215 kernel: block loop0: the capability attribute has been deprecated.
Jun 25 18:31:43.071903 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 18:31:43.078532 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 25 18:31:43.088239 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 25 18:31:43.092193 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 25 18:31:43.093855 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 25 18:31:43.095875 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:31:43.102564 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 25 18:31:43.125957 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jun 25 18:31:43.130945 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 25 18:31:43.152799 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 25 18:31:43.160913 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jun 25 18:31:43.162883 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:31:43.178290 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 25 18:31:43.182699 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 25 18:31:43.186883 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jun 25 18:31:43.188599 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 25 18:31:43.199911 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 18:31:43.203430 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 25 18:31:43.204250 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jun 25 18:31:43.216810 kernel: loop1: detected capacity change from 0 to 80568
Jun 25 18:31:43.222439 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Jun 25 18:31:43.222459 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Jun 25 18:31:43.230972 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:31:43.272699 kernel: loop2: detected capacity change from 0 to 210664
Jun 25 18:31:43.313609 kernel: loop3: detected capacity change from 0 to 139760
Jun 25 18:31:43.350698 kernel: loop4: detected capacity change from 0 to 80568
Jun 25 18:31:43.358705 kernel: loop5: detected capacity change from 0 to 210664
Jun 25 18:31:43.364869 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jun 25 18:31:43.365636 (sd-merge)[1185]: Merged extensions into '/usr'.
Jun 25 18:31:43.370431 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 25 18:31:43.370448 systemd[1]: Reloading...
Jun 25 18:31:43.483479 zram_generator::config[1209]: No configuration found.
Jun 25 18:31:43.590356 ldconfig[1154]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 25 18:31:43.625813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:31:43.689660 systemd[1]: Reloading finished in 318 ms.
Jun 25 18:31:43.730292 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 25 18:31:43.732212 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 25 18:31:43.748957 systemd[1]: Starting ensure-sysext.service...
Jun 25 18:31:43.752035 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 18:31:43.761300 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)...
Jun 25 18:31:43.761320 systemd[1]: Reloading...
Jun 25 18:31:43.780729 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 25 18:31:43.781105 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 25 18:31:43.782111 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 25 18:31:43.782418 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jun 25 18:31:43.782506 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jun 25 18:31:43.786444 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Jun 25 18:31:43.786460 systemd-tmpfiles[1247]: Skipping /boot
Jun 25 18:31:43.799451 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Jun 25 18:31:43.799474 systemd-tmpfiles[1247]: Skipping /boot
Jun 25 18:31:43.821733 zram_generator::config[1275]: No configuration found.
Jun 25 18:31:43.953911 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:31:44.023622 systemd[1]: Reloading finished in 261 ms.
Jun 25 18:31:44.050253 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 25 18:31:44.066545 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:31:44.078501 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 25 18:31:44.081596 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 25 18:31:44.084494 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 25 18:31:44.089730 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 18:31:44.094551 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:31:44.100022 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 25 18:31:44.104356 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:31:44.104575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:31:44.108688 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:31:44.123133 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:31:44.127374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:31:44.129004 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:31:44.132970 systemd-udevd[1317]: Using default interface naming scheme 'v255'.
Jun 25 18:31:44.136022 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 25 18:31:44.137706 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:31:44.139432 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:31:44.139777 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:31:44.142131 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 25 18:31:44.144403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:31:44.144629 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:31:44.146627 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:31:44.146836 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:31:44.155389 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 18:31:44.156145 augenrules[1339]: No rules
Jun 25 18:31:44.155759 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 18:31:44.162074 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 25 18:31:44.164601 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 25 18:31:44.166725 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:31:44.173453 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 25 18:31:44.179509 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 25 18:31:44.191288 systemd[1]: Finished ensure-sysext.service.
Jun 25 18:31:44.194981 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 25 18:31:44.199263 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:31:44.199442 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:31:44.206852 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:31:44.209894 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 18:31:44.212999 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:31:44.216869 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:31:44.218804 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:31:44.228901 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 25 18:31:44.234051 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 25 18:31:44.236830 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:31:44.237567 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 25 18:31:44.240028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:31:44.240257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:31:44.242161 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 25 18:31:44.242334 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 25 18:31:44.244347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:31:44.244535 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:31:44.246171 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:31:44.246340 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:31:44.256462 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 18:31:44.256521 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 18:31:44.256545 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 25 18:31:44.260914 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1349)
Jun 25 18:31:44.272715 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1347)
Jun 25 18:31:44.293051 systemd-resolved[1315]: Positive Trust Anchors:
Jun 25 18:31:44.293489 systemd-resolved[1315]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 18:31:44.293588 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 25 18:31:44.298333 systemd-resolved[1315]: Defaulting to hostname 'linux'.
Jun 25 18:31:44.300995 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 18:31:44.312545 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:31:44.337038 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 25 18:31:44.392905 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jun 25 18:31:44.395841 systemd-networkd[1378]: lo: Link UP
Jun 25 18:31:44.395852 systemd-networkd[1378]: lo: Gained carrier
Jun 25 18:31:44.397486 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 25 18:31:44.398294 systemd-networkd[1378]: Enumeration completed
Jun 25 18:31:44.398885 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 25 18:31:44.400212 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 25 18:31:44.400288 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:31:44.400294 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 25 18:31:44.401366 systemd[1]: Reached target network.target - Network.
Jun 25 18:31:44.402133 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:31:44.402180 systemd-networkd[1378]: eth0: Link UP
Jun 25 18:31:44.402185 systemd-networkd[1378]: eth0: Gained carrier
Jun 25 18:31:44.402198 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:31:44.403723 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jun 25 18:31:44.407110 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 25 18:31:44.412765 kernel: ACPI: button: Power Button [PWRF]
Jun 25 18:31:44.418855 systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 25 18:31:44.419159 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 25 18:31:44.420025 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection.
Jun 25 18:31:44.420895 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 25 18:31:45.057031 systemd[1]: Reached target time-set.target - System Time Set.
Jun 25 18:31:45.057056 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jun 25 18:31:45.057106 systemd-timesyncd[1379]: Initial clock synchronization to Tue 2024-06-25 18:31:45.056952 UTC.
Jun 25 18:31:45.057133 systemd-resolved[1315]: Clock change detected. Flushing caches.
Jun 25 18:31:45.066531 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jun 25 18:31:45.102287 kernel: mousedev: PS/2 mouse device common for all mice
Jun 25 18:31:45.118519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:31:45.203792 kernel: kvm_amd: TSC scaling supported
Jun 25 18:31:45.203865 kernel: kvm_amd: Nested Virtualization enabled
Jun 25 18:31:45.203879 kernel: kvm_amd: Nested Paging enabled
Jun 25 18:31:45.203914 kernel: kvm_amd: LBR virtualization supported
Jun 25 18:31:45.205386 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jun 25 18:31:45.205411 kernel: kvm_amd: Virtual GIF supported
Jun 25 18:31:45.232621 kernel: EDAC MC: Ver: 3.0.0
Jun 25 18:31:45.270331 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jun 25 18:31:45.272252 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:31:45.290806 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jun 25 18:31:45.302204 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 25 18:31:45.334160 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jun 25 18:31:45.336095 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:31:45.337497 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 25 18:31:45.339003 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 25 18:31:45.341152 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 25 18:31:45.343025 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 25 18:31:45.344516 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 25 18:31:45.346111 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 25 18:31:45.347644 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 25 18:31:45.347673 systemd[1]: Reached target paths.target - Path Units.
Jun 25 18:31:45.348796 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 18:31:45.350967 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 25 18:31:45.354054 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 25 18:31:45.361273 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 25 18:31:45.363988 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jun 25 18:31:45.365876 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 25 18:31:45.367368 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 18:31:45.368550 systemd[1]: Reached target basic.target - Basic System.
Jun 25 18:31:45.369696 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 25 18:31:45.369728 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 25 18:31:45.371321 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 25 18:31:45.373934 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 25 18:31:45.380398 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 25 18:31:45.379681 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 25 18:31:45.383579 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 25 18:31:45.385096 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 25 18:31:45.387557 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 25 18:31:45.389534 jq[1417]: false
Jun 25 18:31:45.391943 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 25 18:31:45.394701 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 25 18:31:45.402411 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 25 18:31:45.408516 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 25 18:31:45.411783 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 25 18:31:45.412365 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 25 18:31:45.415412 systemd[1]: Starting update-engine.service - Update Engine...
Jun 25 18:31:45.419397 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 25 18:31:45.423421 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 25 18:31:45.423733 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 25 18:31:45.425968 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jun 25 18:31:45.426158 dbus-daemon[1416]: [system] SELinux support is enabled
Jun 25 18:31:45.427730 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 25 18:31:45.431610 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 25 18:31:45.431861 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 25 18:31:45.437080 extend-filesystems[1418]: Found loop3
Jun 25 18:31:45.437080 extend-filesystems[1418]: Found loop4
Jun 25 18:31:45.437080 extend-filesystems[1418]: Found loop5
Jun 25 18:31:45.437080 extend-filesystems[1418]: Found sr0
Jun 25 18:31:45.437080 extend-filesystems[1418]: Found vda
Jun 25 18:31:45.437080 extend-filesystems[1418]: Found vda1
Jun 25 18:31:45.437080 extend-filesystems[1418]: Found vda2
Jun 25 18:31:45.437080 extend-filesystems[1418]: Found vda3
Jun 25 18:31:45.437080 extend-filesystems[1418]: Found usr
Jun 25 18:31:45.437080 extend-filesystems[1418]: Found vda4
Jun 25 18:31:45.437080 extend-filesystems[1418]: Found vda6
Jun 25 18:31:45.437080 extend-filesystems[1418]: Found vda7
Jun 25 18:31:45.437080 extend-filesystems[1418]: Found vda9
Jun 25 18:31:45.437080 extend-filesystems[1418]: Checking size of /dev/vda9
Jun 25 18:31:45.435113 systemd[1]: motdgen.service: Deactivated successfully.
Jun 25 18:31:45.485111 extend-filesystems[1418]: Resized partition /dev/vda9
Jun 25 18:31:45.492321 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jun 25 18:31:45.492364 update_engine[1426]: I0625 18:31:45.468816 1426 main.cc:92] Flatcar Update Engine starting
Jun 25 18:31:45.492364 update_engine[1426]: I0625 18:31:45.471662 1426 update_check_scheduler.cc:74] Next update check in 7m12s
Jun 25 18:31:45.435694 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 25 18:31:45.497026 extend-filesystems[1451]: resize2fs 1.47.0 (5-Feb-2023)
Jun 25 18:31:45.445751 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 25 18:31:45.580196 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1377)
Jun 25 18:31:45.580334 jq[1431]: true
Jun 25 18:31:45.450160 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 25 18:31:45.585278 tar[1433]: linux-amd64/helm
Jun 25 18:31:45.450203 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 25 18:31:45.452809 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 25 18:31:45.452833 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 25 18:31:45.593463 jq[1446]: true
Jun 25 18:31:45.471396 systemd[1]: Started update-engine.service - Update Engine.
Jun 25 18:31:45.487476 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 25 18:31:45.624257 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jun 25 18:31:45.626069 systemd-logind[1424]: Watching system buttons on /dev/input/event1 (Power Button)
Jun 25 18:31:45.626094 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 25 18:31:45.627052 systemd-logind[1424]: New seat seat0.
Jun 25 18:31:45.631711 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 25 18:31:45.642141 locksmithd[1450]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 25 18:31:45.652325 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 25 18:31:45.652983 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jun 25 18:31:45.652983 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1
Jun 25 18:31:45.652983 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jun 25 18:31:45.669140 extend-filesystems[1418]: Resized filesystem in /dev/vda9
Jun 25 18:31:45.670750 bash[1467]: Updated "/home/core/.ssh/authorized_keys"
Jun 25 18:31:45.654393 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 25 18:31:45.654622 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 25 18:31:45.666982 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 25 18:31:45.674722 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jun 25 18:31:45.708778 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 25 18:31:45.725601 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 25 18:31:45.735601 systemd[1]: issuegen.service: Deactivated successfully.
Jun 25 18:31:45.735914 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 25 18:31:45.745631 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 25 18:31:45.755324 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 25 18:31:45.773504 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:41918.service - OpenSSH per-connection server daemon (10.0.0.1:41918).
Jun 25 18:31:45.803441 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 25 18:31:45.811740 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 25 18:31:45.815939 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 25 18:31:45.818240 systemd[1]: Reached target getty.target - Login Prompts.
Jun 25 18:31:45.852416 sshd[1498]: Accepted publickey for core from 10.0.0.1 port 41918 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:31:45.853927 sshd[1498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:31:45.865727 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 25 18:31:45.885690 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 25 18:31:45.890024 systemd-logind[1424]: New session 1 of user core.
Jun 25 18:31:45.917990 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 25 18:31:45.976644 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 25 18:31:45.981642 (systemd)[1505]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:31:46.066356 containerd[1439]: time="2024-06-25T18:31:46.066199645Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Jun 25 18:31:46.168742 containerd[1439]: time="2024-06-25T18:31:46.168314520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 25 18:31:46.168742 containerd[1439]: time="2024-06-25T18:31:46.168398227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:31:46.170472 containerd[1439]: time="2024-06-25T18:31:46.170411271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:31:46.170472 containerd[1439]: time="2024-06-25T18:31:46.170464661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:31:46.170782 containerd[1439]: time="2024-06-25T18:31:46.170758192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:31:46.170782 containerd[1439]: time="2024-06-25T18:31:46.170778720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 25 18:31:46.170934 containerd[1439]: time="2024-06-25T18:31:46.170901901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 25 18:31:46.172088 containerd[1439]: time="2024-06-25T18:31:46.171125430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:31:46.172088 containerd[1439]: time="2024-06-25T18:31:46.171143504Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 25 18:31:46.172088 containerd[1439]: time="2024-06-25T18:31:46.171255955Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:31:46.172088 containerd[1439]: time="2024-06-25T18:31:46.171560506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 25 18:31:46.172088 containerd[1439]: time="2024-06-25T18:31:46.171581055Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 25 18:31:46.172088 containerd[1439]: time="2024-06-25T18:31:46.171594510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:31:46.172088 containerd[1439]: time="2024-06-25T18:31:46.171747677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:31:46.172088 containerd[1439]: time="2024-06-25T18:31:46.171767384Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 25 18:31:46.172088 containerd[1439]: time="2024-06-25T18:31:46.171839880Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 25 18:31:46.172088 containerd[1439]: time="2024-06-25T18:31:46.171861791Z" level=info msg="metadata content store policy set" policy=shared
Jun 25 18:31:46.183909 systemd[1505]: Queued start job for default target default.target.
Jun 25 18:31:46.198896 systemd[1505]: Created slice app.slice - User Application Slice.
Jun 25 18:31:46.198934 systemd[1505]: Reached target paths.target - Paths.
Jun 25 18:31:46.198949 systemd[1505]: Reached target timers.target - Timers.
Jun 25 18:31:46.200767 systemd[1505]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 25 18:31:46.220534 systemd[1505]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 25 18:31:46.220724 systemd[1505]: Reached target sockets.target - Sockets.
Jun 25 18:31:46.220753 systemd[1505]: Reached target basic.target - Basic System.
Jun 25 18:31:46.220806 systemd[1505]: Reached target default.target - Main User Target.
Jun 25 18:31:46.220848 systemd[1505]: Startup finished in 226ms.
Jun 25 18:31:46.221151 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 25 18:31:46.239616 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 25 18:31:46.280842 containerd[1439]: time="2024-06-25T18:31:46.280695604Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 25 18:31:46.280842 containerd[1439]: time="2024-06-25T18:31:46.280812514Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 25 18:31:46.280842 containerd[1439]: time="2024-06-25T18:31:46.280831760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 25 18:31:46.281024 containerd[1439]: time="2024-06-25T18:31:46.280892514Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 25 18:31:46.281024 containerd[1439]: time="2024-06-25T18:31:46.280907472Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 25 18:31:46.281024 containerd[1439]: time="2024-06-25T18:31:46.280920526Z" level=info msg="NRI interface is disabled by configuration."
Jun 25 18:31:46.281024 containerd[1439]: time="2024-06-25T18:31:46.280949300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 25 18:31:46.281295 containerd[1439]: time="2024-06-25T18:31:46.281267607Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 25 18:31:46.281295 containerd[1439]: time="2024-06-25T18:31:46.281289658Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 25 18:31:46.281339 containerd[1439]: time="2024-06-25T18:31:46.281324824Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 25 18:31:46.281368 containerd[1439]: time="2024-06-25T18:31:46.281339983Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 25 18:31:46.281368 containerd[1439]: time="2024-06-25T18:31:46.281360130Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 25 18:31:46.281405 containerd[1439]: time="2024-06-25T18:31:46.281386931Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 25 18:31:46.281405 containerd[1439]: time="2024-06-25T18:31:46.281401658Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 25 18:31:46.281440 containerd[1439]: time="2024-06-25T18:31:46.281414843Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 25 18:31:46.281440 containerd[1439]: time="2024-06-25T18:31:46.281430843Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 25 18:31:46.281482 containerd[1439]: time="2024-06-25T18:31:46.281445210Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 25 18:31:46.281482 containerd[1439]: time="2024-06-25T18:31:46.281464456Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 25 18:31:46.281526 containerd[1439]: time="2024-06-25T18:31:46.281482790Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 25 18:31:46.281705 containerd[1439]: time="2024-06-25T18:31:46.281669791Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 25 18:31:46.282334 containerd[1439]: time="2024-06-25T18:31:46.282304371Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 25 18:31:46.282379 containerd[1439]: time="2024-06-25T18:31:46.282336882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282379 containerd[1439]: time="2024-06-25T18:31:46.282356879Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 25 18:31:46.282429 containerd[1439]: time="2024-06-25T18:31:46.282381836Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 25 18:31:46.282502 containerd[1439]: time="2024-06-25T18:31:46.282482565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282527 containerd[1439]: time="2024-06-25T18:31:46.282503033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282527 containerd[1439]: time="2024-06-25T18:31:46.282516008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282527 containerd[1439]: time="2024-06-25T18:31:46.282528150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282595 containerd[1439]: time="2024-06-25T18:31:46.282541976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282595 containerd[1439]: time="2024-06-25T18:31:46.282558487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282595 containerd[1439]: time="2024-06-25T18:31:46.282569889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282595 containerd[1439]: time="2024-06-25T18:31:46.282581430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282595 containerd[1439]: time="2024-06-25T18:31:46.282594224Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 25 18:31:46.282791 containerd[1439]: time="2024-06-25T18:31:46.282773130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282822 containerd[1439]: time="2024-06-25T18:31:46.282793097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282822 containerd[1439]: time="2024-06-25T18:31:46.282805781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282822 containerd[1439]: time="2024-06-25T18:31:46.282819427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282885 containerd[1439]: time="2024-06-25T18:31:46.282834304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282885 containerd[1439]: time="2024-06-25T18:31:46.282848361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282885 containerd[1439]: time="2024-06-25T18:31:46.282860624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.282885 containerd[1439]: time="2024-06-25T18:31:46.282871133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jun 25 18:31:46.283428 containerd[1439]: time="2024-06-25T18:31:46.283373576Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jun 25 18:31:46.283577 containerd[1439]: time="2024-06-25T18:31:46.283435161Z" level=info msg="Connect containerd service"
Jun 25 18:31:46.283577 containerd[1439]: time="2024-06-25T18:31:46.283472190Z" level=info msg="using legacy CRI server"
Jun 25 18:31:46.283577 containerd[1439]: time="2024-06-25T18:31:46.283478793Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 25 18:31:46.283644 containerd[1439]: time="2024-06-25T18:31:46.283597045Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jun 25 18:31:46.284587 containerd[1439]: time="2024-06-25T18:31:46.284543118Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 25 18:31:46.284685 containerd[1439]: time="2024-06-25T18:31:46.284660509Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 25 18:31:46.284720 containerd[1439]: time="2024-06-25T18:31:46.284691797Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 25 18:31:46.284720 containerd[1439]: time="2024-06-25T18:31:46.284703880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 25 18:31:46.284777 containerd[1439]: time="2024-06-25T18:31:46.284715963Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 25 18:31:46.284870 containerd[1439]: time="2024-06-25T18:31:46.284775494Z" level=info msg="Start subscribing containerd event"
Jun 25 18:31:46.284926 containerd[1439]: time="2024-06-25T18:31:46.284909135Z" level=info msg="Start recovering state"
Jun 25 18:31:46.285118 containerd[1439]: time="2024-06-25T18:31:46.285086056Z" level=info msg="Start event monitor"
Jun 25 18:31:46.285178 containerd[1439]: time="2024-06-25T18:31:46.285143895Z" level=info msg="Start snapshots syncer"
Jun 25 18:31:46.285259 containerd[1439]: time="2024-06-25T18:31:46.285189761Z" level=info msg="Start cni network conf syncer for default"
Jun 25 18:31:46.285301 containerd[1439]: time="2024-06-25T18:31:46.285223664Z" level=info msg="Start streaming server"
Jun 25 18:31:46.285301 containerd[1439]: time="2024-06-25T18:31:46.285273348Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 25 18:31:46.285392 containerd[1439]: time="2024-06-25T18:31:46.285349060Z" level=info msg=serving...
address=/run/containerd/containerd.sock Jun 25 18:31:46.285798 containerd[1439]: time="2024-06-25T18:31:46.285773846Z" level=info msg="containerd successfully booted in 0.222928s" Jun 25 18:31:46.285928 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 18:31:46.317986 tar[1433]: linux-amd64/LICENSE Jun 25 18:31:46.318163 tar[1433]: linux-amd64/README.md Jun 25 18:31:46.334203 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:44990.service - OpenSSH per-connection server daemon (10.0.0.1:44990). Jun 25 18:31:46.338724 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:31:46.376737 sshd[1522]: Accepted publickey for core from 10.0.0.1 port 44990 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:31:46.378833 sshd[1522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:46.383919 systemd-logind[1424]: New session 2 of user core. Jun 25 18:31:46.393465 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:31:46.451616 sshd[1522]: pam_unix(sshd:session): session closed for user core Jun 25 18:31:46.463951 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:44990.service: Deactivated successfully. Jun 25 18:31:46.466282 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 18:31:46.468084 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit. Jun 25 18:31:46.477864 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:44998.service - OpenSSH per-connection server daemon (10.0.0.1:44998). Jun 25 18:31:46.480766 systemd-logind[1424]: Removed session 2. Jun 25 18:31:46.509978 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 44998 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:31:46.512215 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:46.518897 systemd-logind[1424]: New session 3 of user core. 
Jun 25 18:31:46.532640 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 25 18:31:46.594601 sshd[1530]: pam_unix(sshd:session): session closed for user core
Jun 25 18:31:46.599248 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:44998.service: Deactivated successfully.
Jun 25 18:31:46.601431 systemd[1]: session-3.scope: Deactivated successfully.
Jun 25 18:31:46.602100 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit.
Jun 25 18:31:46.603085 systemd-logind[1424]: Removed session 3.
Jun 25 18:31:47.002477 systemd-networkd[1378]: eth0: Gained IPv6LL
Jun 25 18:31:47.005792 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 25 18:31:47.007647 systemd[1]: Reached target network-online.target - Network is Online.
Jun 25 18:31:47.018518 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jun 25 18:31:47.021438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:31:47.023692 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 25 18:31:47.044208 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jun 25 18:31:47.044749 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jun 25 18:31:47.046415 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 25 18:31:47.048112 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 25 18:31:48.270508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:31:48.272257 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 25 18:31:48.273633 systemd[1]: Startup finished in 989ms (kernel) + 5.395s (initrd) + 5.566s (userspace) = 11.951s.
Jun 25 18:31:48.285933 (kubelet)[1558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 18:31:48.788051 kubelet[1558]: E0625 18:31:48.787897 1558 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 18:31:48.793616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 18:31:48.793871 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 18:31:48.794322 systemd[1]: kubelet.service: Consumed 1.626s CPU time.
Jun 25 18:31:56.605725 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:33384.service - OpenSSH per-connection server daemon (10.0.0.1:33384).
Jun 25 18:31:56.638205 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 33384 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:31:56.639823 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:31:56.644055 systemd-logind[1424]: New session 4 of user core.
Jun 25 18:31:56.654478 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 25 18:31:56.709821 sshd[1572]: pam_unix(sshd:session): session closed for user core
Jun 25 18:31:56.730310 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:33384.service: Deactivated successfully.
Jun 25 18:31:56.733047 systemd[1]: session-4.scope: Deactivated successfully.
Jun 25 18:31:56.734889 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit.
Jun 25 18:31:56.744766 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:33388.service - OpenSSH per-connection server daemon (10.0.0.1:33388).
Jun 25 18:31:56.746019 systemd-logind[1424]: Removed session 4.
Jun 25 18:31:56.775531 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 33388 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:31:56.777077 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:31:56.781472 systemd-logind[1424]: New session 5 of user core.
Jun 25 18:31:56.791362 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 25 18:31:56.843225 sshd[1579]: pam_unix(sshd:session): session closed for user core
Jun 25 18:31:56.855756 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:33388.service: Deactivated successfully.
Jun 25 18:31:56.857907 systemd[1]: session-5.scope: Deactivated successfully.
Jun 25 18:31:56.860260 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit.
Jun 25 18:31:56.876636 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:33396.service - OpenSSH per-connection server daemon (10.0.0.1:33396).
Jun 25 18:31:56.877888 systemd-logind[1424]: Removed session 5.
Jun 25 18:31:56.906571 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 33396 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:31:56.908218 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:31:56.912630 systemd-logind[1424]: New session 6 of user core.
Jun 25 18:31:56.927451 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 25 18:31:56.983649 sshd[1587]: pam_unix(sshd:session): session closed for user core
Jun 25 18:31:56.997995 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:33396.service: Deactivated successfully.
Jun 25 18:31:57.000555 systemd[1]: session-6.scope: Deactivated successfully.
Jun 25 18:31:57.002375 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit.
Jun 25 18:31:57.012747 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:33412.service - OpenSSH per-connection server daemon (10.0.0.1:33412).
Jun 25 18:31:57.013984 systemd-logind[1424]: Removed session 6.
Jun 25 18:31:57.047191 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 33412 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:31:57.049109 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:31:57.053632 systemd-logind[1424]: New session 7 of user core.
Jun 25 18:31:57.063538 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 25 18:31:57.127489 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 25 18:31:57.127876 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:31:57.149181 sudo[1597]: pam_unix(sudo:session): session closed for user root
Jun 25 18:31:57.151974 sshd[1594]: pam_unix(sshd:session): session closed for user core
Jun 25 18:31:57.167151 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:33412.service: Deactivated successfully.
Jun 25 18:31:57.169474 systemd[1]: session-7.scope: Deactivated successfully.
Jun 25 18:31:57.171829 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit.
Jun 25 18:31:57.181724 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:33420.service - OpenSSH per-connection server daemon (10.0.0.1:33420).
Jun 25 18:31:57.182991 systemd-logind[1424]: Removed session 7.
Jun 25 18:31:57.214466 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 33420 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:31:57.216458 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:31:57.220950 systemd-logind[1424]: New session 8 of user core.
Jun 25 18:31:57.232632 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 25 18:31:57.293981 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 25 18:31:57.294345 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:31:57.302175 sudo[1607]: pam_unix(sudo:session): session closed for user root
Jun 25 18:31:57.309552 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jun 25 18:31:57.309892 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:31:57.331713 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jun 25 18:31:57.333959 auditctl[1610]: No rules
Jun 25 18:31:57.335473 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 25 18:31:57.335800 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jun 25 18:31:57.337874 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 25 18:31:57.383046 augenrules[1628]: No rules
Jun 25 18:31:57.385763 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 25 18:31:57.387659 sudo[1606]: pam_unix(sudo:session): session closed for user root
Jun 25 18:31:57.390124 sshd[1602]: pam_unix(sshd:session): session closed for user core
Jun 25 18:31:57.407848 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:33420.service: Deactivated successfully.
Jun 25 18:31:57.410038 systemd[1]: session-8.scope: Deactivated successfully.
Jun 25 18:31:57.411881 systemd-logind[1424]: Session 8 logged out. Waiting for processes to exit.
Jun 25 18:31:57.421560 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:33430.service - OpenSSH per-connection server daemon (10.0.0.1:33430).
Jun 25 18:31:57.422610 systemd-logind[1424]: Removed session 8.
Jun 25 18:31:57.452185 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 33430 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:31:57.453938 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:31:57.458372 systemd-logind[1424]: New session 9 of user core.
Jun 25 18:31:57.470417 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 25 18:31:57.526554 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 25 18:31:57.526888 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:31:57.642573 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 25 18:31:57.642680 (dockerd)[1650]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 25 18:31:58.115518 dockerd[1650]: time="2024-06-25T18:31:58.115313440Z" level=info msg="Starting up"
Jun 25 18:31:58.183547 dockerd[1650]: time="2024-06-25T18:31:58.183484881Z" level=info msg="Loading containers: start."
Jun 25 18:31:58.467440 kernel: Initializing XFRM netlink socket
Jun 25 18:31:58.578988 systemd-networkd[1378]: docker0: Link UP
Jun 25 18:31:58.597192 dockerd[1650]: time="2024-06-25T18:31:58.597133973Z" level=info msg="Loading containers: done."
Jun 25 18:31:58.659590 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1384587074-merged.mount: Deactivated successfully.
Jun 25 18:31:58.746830 dockerd[1650]: time="2024-06-25T18:31:58.746747809Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 25 18:31:58.747063 dockerd[1650]: time="2024-06-25T18:31:58.747037061Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jun 25 18:31:58.747277 dockerd[1650]: time="2024-06-25T18:31:58.747246344Z" level=info msg="Daemon has completed initialization"
Jun 25 18:31:58.880750 dockerd[1650]: time="2024-06-25T18:31:58.880489765Z" level=info msg="API listen on /run/docker.sock"
Jun 25 18:31:58.880900 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 25 18:31:58.882368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 25 18:31:58.889510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:31:59.096752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:31:59.104203 (kubelet)[1790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 18:31:59.168525 kubelet[1790]: E0625 18:31:59.168343 1790 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 18:31:59.177196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 18:31:59.177473 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 18:31:59.757067 containerd[1439]: time="2024-06-25T18:31:59.757012154Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\""
Jun 25 18:32:01.203134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1687644250.mount: Deactivated successfully.
Jun 25 18:32:03.226273 containerd[1439]: time="2024-06-25T18:32:03.226183790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:03.228100 containerd[1439]: time="2024-06-25T18:32:03.228042215Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801"
Jun 25 18:32:03.230098 containerd[1439]: time="2024-06-25T18:32:03.230058616Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:03.234708 containerd[1439]: time="2024-06-25T18:32:03.234642150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:03.235712 containerd[1439]: time="2024-06-25T18:32:03.235670848Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 3.478602529s"
Jun 25 18:32:03.235712 containerd[1439]: time="2024-06-25T18:32:03.235711174Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\""
Jun 25 18:32:03.271026 containerd[1439]: time="2024-06-25T18:32:03.270966147Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\""
Jun 25 18:32:05.492093 containerd[1439]: time="2024-06-25T18:32:05.491919163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:05.493468 containerd[1439]: time="2024-06-25T18:32:05.493380213Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674"
Jun 25 18:32:05.494806 containerd[1439]: time="2024-06-25T18:32:05.494734632Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:05.498316 containerd[1439]: time="2024-06-25T18:32:05.498200371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:05.499758 containerd[1439]: time="2024-06-25T18:32:05.499701846Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 2.228684833s"
Jun 25 18:32:05.499843 containerd[1439]: time="2024-06-25T18:32:05.499767659Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\""
Jun 25 18:32:05.531464 containerd[1439]: time="2024-06-25T18:32:05.531393698Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\""
Jun 25 18:32:06.823444 containerd[1439]: time="2024-06-25T18:32:06.823371116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:06.824314 containerd[1439]: time="2024-06-25T18:32:06.824239023Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120"
Jun 25 18:32:06.825469 containerd[1439]: time="2024-06-25T18:32:06.825439263Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:06.828428 containerd[1439]: time="2024-06-25T18:32:06.828373024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:06.829698 containerd[1439]: time="2024-06-25T18:32:06.829655098Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 1.298215363s"
Jun 25 18:32:06.829698 containerd[1439]: time="2024-06-25T18:32:06.829696045Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\""
Jun 25 18:32:06.857211 containerd[1439]: time="2024-06-25T18:32:06.857163296Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\""
Jun 25 18:32:08.460582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2356453288.mount: Deactivated successfully.
Jun 25 18:32:09.427807 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 25 18:32:09.437456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:32:09.647154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:32:09.651920 (kubelet)[1907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 18:32:09.783051 kubelet[1907]: E0625 18:32:09.782890 1907 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 18:32:09.787545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 18:32:09.787763 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 18:32:10.265120 containerd[1439]: time="2024-06-25T18:32:10.264943753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:10.266499 containerd[1439]: time="2024-06-25T18:32:10.265950060Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438"
Jun 25 18:32:10.270514 containerd[1439]: time="2024-06-25T18:32:10.270463403Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:10.272559 containerd[1439]: time="2024-06-25T18:32:10.272504730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:10.273581 containerd[1439]: time="2024-06-25T18:32:10.273437098Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 3.41622476s"
Jun 25 18:32:10.273581 containerd[1439]: time="2024-06-25T18:32:10.273475671Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\""
Jun 25 18:32:10.304091 containerd[1439]: time="2024-06-25T18:32:10.304034098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jun 25 18:32:10.886550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327615184.mount: Deactivated successfully.
Jun 25 18:32:12.468678 containerd[1439]: time="2024-06-25T18:32:12.468616649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:12.469587 containerd[1439]: time="2024-06-25T18:32:12.469543407Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jun 25 18:32:12.470887 containerd[1439]: time="2024-06-25T18:32:12.470853443Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:12.473829 containerd[1439]: time="2024-06-25T18:32:12.473767678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:12.475062 containerd[1439]: time="2024-06-25T18:32:12.475017822Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.170930263s"
Jun 25 18:32:12.475062 containerd[1439]: time="2024-06-25T18:32:12.475057336Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jun 25 18:32:12.508587 containerd[1439]: time="2024-06-25T18:32:12.508543953Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jun 25 18:32:13.128450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount954878294.mount: Deactivated successfully.
Jun 25 18:32:13.135084 containerd[1439]: time="2024-06-25T18:32:13.135025015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:13.137841 containerd[1439]: time="2024-06-25T18:32:13.137797063Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jun 25 18:32:13.139223 containerd[1439]: time="2024-06-25T18:32:13.139183672Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:13.142013 containerd[1439]: time="2024-06-25T18:32:13.141967883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:13.142745 containerd[1439]: time="2024-06-25T18:32:13.142697190Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 633.951048ms"
Jun 25 18:32:13.142745 containerd[1439]: time="2024-06-25T18:32:13.142738868Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jun 25 18:32:13.167750 containerd[1439]: time="2024-06-25T18:32:13.167709809Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jun 25 18:32:14.621584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1047330059.mount: Deactivated successfully.
Jun 25 18:32:17.078957 containerd[1439]: time="2024-06-25T18:32:17.078853783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:17.080269 containerd[1439]: time="2024-06-25T18:32:17.080210266Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Jun 25 18:32:17.081561 containerd[1439]: time="2024-06-25T18:32:17.081532125Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:17.084615 containerd[1439]: time="2024-06-25T18:32:17.084585069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:17.085835 containerd[1439]: time="2024-06-25T18:32:17.085772155Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.918018884s"
Jun 25 18:32:17.085835 containerd[1439]: time="2024-06-25T18:32:17.085827569Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jun 25 18:32:20.037999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 25 18:32:20.051577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:32:20.165678 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 25 18:32:20.165795 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 25 18:32:20.166074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:32:20.176487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:32:20.195388 systemd[1]: Reloading requested from client PID 2114 ('systemctl') (unit session-9.scope)...
Jun 25 18:32:20.195403 systemd[1]: Reloading...
Jun 25 18:32:20.270262 zram_generator::config[2152]: No configuration found.
Jun 25 18:32:20.529019 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:32:20.608559 systemd[1]: Reloading finished in 412 ms.
Jun 25 18:32:20.663469 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 25 18:32:20.663565 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 25 18:32:20.663825 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:32:20.666493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:32:20.814196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:32:20.820000 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 25 18:32:20.860490 kubelet[2200]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 25 18:32:20.860490 kubelet[2200]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jun 25 18:32:20.860490 kubelet[2200]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 25 18:32:20.862306 kubelet[2200]: I0625 18:32:20.862246 2200 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 25 18:32:21.135259 kubelet[2200]: I0625 18:32:21.135121 2200 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jun 25 18:32:21.135259 kubelet[2200]: I0625 18:32:21.135153 2200 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 25 18:32:21.135413 kubelet[2200]: I0625 18:32:21.135395 2200 server.go:927] "Client rotation is on, will bootstrap in background"
Jun 25 18:32:21.153362 kubelet[2200]: I0625 18:32:21.153299 2200 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 25 18:32:21.154743 kubelet[2200]: E0625 18:32:21.154717 2200 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:21.170388 kubelet[2200]: I0625 18:32:21.170358 2200 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 25 18:32:21.172104 kubelet[2200]: I0625 18:32:21.172058 2200 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 25 18:32:21.172367 kubelet[2200]: I0625 18:32:21.172100 2200 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jun 25 18:32:21.173078 kubelet[2200]: I0625 18:32:21.173055 2200 topology_manager.go:138] "Creating topology manager with none policy"
Jun 25 18:32:21.173078 kubelet[2200]: I0625 18:32:21.173075 2200 container_manager_linux.go:301] "Creating device plugin manager"
Jun 25 18:32:21.173239 kubelet[2200]: I0625 18:32:21.173209 2200 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 18:32:21.174177 kubelet[2200]: I0625 18:32:21.174158 2200 kubelet.go:400] "Attempting to sync node with API server"
Jun 25 18:32:21.174177 kubelet[2200]: I0625 18:32:21.174174 2200 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 25 18:32:21.174238 kubelet[2200]: I0625 18:32:21.174197 2200 kubelet.go:312] "Adding apiserver pod source"
Jun 25 18:32:21.174238 kubelet[2200]: I0625 18:32:21.174217 2200 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 25 18:32:21.176600 kubelet[2200]: W0625 18:32:21.176493 2200 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:21.176600 kubelet[2200]: W0625 18:32:21.176527 2200 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:21.176600 kubelet[2200]: E0625 18:32:21.176567 2200 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:21.176600 kubelet[2200]: E0625 18:32:21.176579 2200 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:21.179165 kubelet[2200]: I0625 18:32:21.179148 2200 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Jun 25 18:32:21.180706 kubelet[2200]: I0625 18:32:21.180687 2200 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jun 25 18:32:21.180750 kubelet[2200]: W0625 18:32:21.180745 2200 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 25 18:32:21.181398 kubelet[2200]: I0625 18:32:21.181378 2200 server.go:1264] "Started kubelet"
Jun 25 18:32:21.181483 kubelet[2200]: I0625 18:32:21.181433 2200 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jun 25 18:32:21.182361 kubelet[2200]: I0625 18:32:21.182343 2200 server.go:455] "Adding debug handlers to kubelet server"
Jun 25 18:32:21.185401 kubelet[2200]: I0625 18:32:21.183624 2200 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 25 18:32:21.185401 kubelet[2200]: I0625 18:32:21.184417 2200 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 25 18:32:21.185401 kubelet[2200]: I0625 18:32:21.184637 2200 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 25 18:32:21.190711 kubelet[2200]: I0625 18:32:21.190141 2200 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jun 25 18:32:21.192584 kubelet[2200]: E0625 18:32:21.192548 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms"
Jun 25 18:32:21.192892 kubelet[2200]: I0625 18:32:21.192749 2200 reconciler.go:26] "Reconciler: start to sync state"
Jun 25 18:32:21.194609 kubelet[2200]: I0625 18:32:21.193042 2200 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jun 25 18:32:21.194609 kubelet[2200]: W0625 18:32:21.193494 2200 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:21.194609 kubelet[2200]: E0625 18:32:21.193545 2200 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:21.194609 kubelet[2200]: I0625 18:32:21.193994 2200 factory.go:221] Registration of the systemd container factory successfully
Jun 25 18:32:21.194609 kubelet[2200]: I0625 18:32:21.194091 2200 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 25 18:32:21.194609 kubelet[2200]: E0625 18:32:21.194332 2200 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17dc52ee4c1d19f6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-06-25 18:32:21.181356534 +0000 UTC m=+0.357187190,LastTimestamp:2024-06-25 18:32:21.181356534 +0000 UTC m=+0.357187190,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jun 25 18:32:21.195686 kubelet[2200]: I0625 18:32:21.195667 2200 factory.go:221] Registration of the containerd container factory successfully
Jun 25 18:32:21.195965 kubelet[2200]: E0625 18:32:21.195699 2200 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 25 18:32:21.208197 kubelet[2200]: I0625 18:32:21.208141 2200 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 25 18:32:21.209475 kubelet[2200]: I0625 18:32:21.209448 2200 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 25 18:32:21.209524 kubelet[2200]: I0625 18:32:21.209488 2200 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 25 18:32:21.209524 kubelet[2200]: I0625 18:32:21.209509 2200 kubelet.go:2337] "Starting kubelet main sync loop"
Jun 25 18:32:21.209569 kubelet[2200]: E0625 18:32:21.209555 2200 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 25 18:32:21.210016 kubelet[2200]: W0625 18:32:21.209974 2200 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:21.210016 kubelet[2200]: E0625 18:32:21.210015 2200 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:21.212532 kubelet[2200]: I0625 18:32:21.212499 2200 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 25 18:32:21.212532 kubelet[2200]: I0625 18:32:21.212519 2200 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jun 25 18:32:21.212615 kubelet[2200]: I0625 18:32:21.212536 2200 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 18:32:21.292415 kubelet[2200]: I0625 18:32:21.292352 2200 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jun 25 18:32:21.292788 kubelet[2200]: E0625 18:32:21.292753 2200 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost"
Jun 25 18:32:21.310130 kubelet[2200]: E0625 18:32:21.310099 2200 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 25 18:32:21.393516 kubelet[2200]: E0625 18:32:21.393344 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms"
Jun 25 18:32:21.406498 kubelet[2200]: I0625 18:32:21.406442 2200 policy_none.go:49] "None policy: Start"
Jun 25 18:32:21.407393 kubelet[2200]: I0625 18:32:21.407352 2200 memory_manager.go:170] "Starting memorymanager" policy="None"
Jun 25 18:32:21.407393 kubelet[2200]: I0625 18:32:21.407387 2200 state_mem.go:35] "Initializing new in-memory state store"
Jun 25 18:32:21.413917 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jun 25 18:32:21.429654 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 25 18:32:21.433084 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 25 18:32:21.444248 kubelet[2200]: I0625 18:32:21.444199 2200 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 25 18:32:21.444525 kubelet[2200]: I0625 18:32:21.444478 2200 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 25 18:32:21.444655 kubelet[2200]: I0625 18:32:21.444613 2200 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 25 18:32:21.446115 kubelet[2200]: E0625 18:32:21.446076 2200 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jun 25 18:32:21.494633 kubelet[2200]: I0625 18:32:21.494601 2200 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jun 25 18:32:21.494946 kubelet[2200]: E0625 18:32:21.494908 2200 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost"
Jun 25 18:32:21.511252 kubelet[2200]: I0625 18:32:21.511151 2200 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jun 25 18:32:21.512835 kubelet[2200]: I0625 18:32:21.512811 2200 topology_manager.go:215] "Topology Admit Handler" podUID="343a0065ed02c177442615ebcd128a6c" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jun 25 18:32:21.513870 kubelet[2200]: I0625 18:32:21.513844 2200 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jun 25 18:32:21.519939 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice - libcontainer container kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice.
Jun 25 18:32:21.532855 systemd[1]: Created slice kubepods-burstable-pod343a0065ed02c177442615ebcd128a6c.slice - libcontainer container kubepods-burstable-pod343a0065ed02c177442615ebcd128a6c.slice.
Jun 25 18:32:21.536252 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice - libcontainer container kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice.
Jun 25 18:32:21.694630 kubelet[2200]: I0625 18:32:21.694434 2200 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost"
Jun 25 18:32:21.694630 kubelet[2200]: I0625 18:32:21.694492 2200 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/343a0065ed02c177442615ebcd128a6c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"343a0065ed02c177442615ebcd128a6c\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:32:21.694630 kubelet[2200]: I0625 18:32:21.694528 2200 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/343a0065ed02c177442615ebcd128a6c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"343a0065ed02c177442615ebcd128a6c\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:32:21.694630 kubelet[2200]: I0625 18:32:21.694552 2200 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/343a0065ed02c177442615ebcd128a6c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"343a0065ed02c177442615ebcd128a6c\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:32:21.694630 kubelet[2200]: I0625 18:32:21.694602 2200 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:32:21.694868 kubelet[2200]: I0625 18:32:21.694675 2200 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:32:21.694868 kubelet[2200]: I0625 18:32:21.694713 2200 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:32:21.694868 kubelet[2200]: I0625 18:32:21.694743 2200 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:32:21.694868 kubelet[2200]: I0625 18:32:21.694789 2200 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:32:21.794044 kubelet[2200]: E0625 18:32:21.793975 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms"
Jun 25 18:32:21.831285 kubelet[2200]: E0625 18:32:21.831253 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:32:21.831958 containerd[1439]: time="2024-06-25T18:32:21.831911694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}"
Jun 25 18:32:21.835250 kubelet[2200]: E0625 18:32:21.835205 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:32:21.835859 containerd[1439]: time="2024-06-25T18:32:21.835816506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:343a0065ed02c177442615ebcd128a6c,Namespace:kube-system,Attempt:0,}"
Jun 25 18:32:21.839056 kubelet[2200]: E0625 18:32:21.839032 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:32:21.839383 containerd[1439]: time="2024-06-25T18:32:21.839349378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}"
Jun 25 18:32:21.899569 kubelet[2200]: I0625 18:32:21.899527 2200 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jun 25 18:32:21.899979 kubelet[2200]: E0625 18:32:21.899859 2200 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost"
Jun 25 18:32:22.306839 kubelet[2200]: W0625 18:32:22.306763 2200 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:22.306839 kubelet[2200]: E0625 18:32:22.306840 2200 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:22.526778 kubelet[2200]: W0625 18:32:22.526709 2200 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:22.526778 kubelet[2200]: E0625 18:32:22.526782 2200 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:22.564339 kubelet[2200]: W0625 18:32:22.564182 2200 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:22.564339 kubelet[2200]: E0625 18:32:22.564286 2200 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:22.594612 kubelet[2200]: E0625 18:32:22.594548 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="1.6s"
Jun 25 18:32:22.598078 kubelet[2200]: W0625 18:32:22.597992 2200 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:22.598149 kubelet[2200]: E0625 18:32:22.598083 2200 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused
Jun 25 18:32:22.701186 kubelet[2200]: I0625 18:32:22.701149 2200 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jun 25 18:32:22.701539 kubelet[2200]: E0625 18:32:22.701506 2200 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost"
Jun 25 18:32:22.780943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount483901496.mount: Deactivated successfully.
Jun 25 18:32:22.790059 containerd[1439]: time="2024-06-25T18:32:22.790004992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:32:22.790947 containerd[1439]: time="2024-06-25T18:32:22.790915380Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:32:22.791882 containerd[1439]: time="2024-06-25T18:32:22.791840807Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 25 18:32:22.792922 containerd[1439]: time="2024-06-25T18:32:22.792857056Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:32:22.794119 containerd[1439]: time="2024-06-25T18:32:22.793772544Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 25 18:32:22.794774 containerd[1439]: time="2024-06-25T18:32:22.794721996Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:32:22.795658 containerd[1439]: time="2024-06-25T18:32:22.795595544Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jun 25 18:32:22.797528 containerd[1439]: time="2024-06-25T18:32:22.797485060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:32:22.799153 containerd[1439]: time="2024-06-25T18:32:22.799116836Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 959.701421ms"
Jun 25 18:32:22.803731 containerd[1439]: time="2024-06-25T18:32:22.803688322Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 967.759881ms"
Jun 25 18:32:22.804459 containerd[1439]: time="2024-06-25T18:32:22.804437532Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 972.406941ms"
Jun 25 18:32:22.982013 containerd[1439]: time="2024-06-25T18:32:22.981923572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:32:22.982013 containerd[1439]: time="2024-06-25T18:32:22.981982484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:32:22.982474 containerd[1439]: time="2024-06-25T18:32:22.982010418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:32:22.982474 containerd[1439]: time="2024-06-25T18:32:22.982032419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:32:22.984947 containerd[1439]: time="2024-06-25T18:32:22.984873573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:32:22.984947 containerd[1439]: time="2024-06-25T18:32:22.984924250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:32:22.985214 containerd[1439]: time="2024-06-25T18:32:22.984945490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:32:22.985214 containerd[1439]: time="2024-06-25T18:32:22.984967552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:32:22.987325 containerd[1439]: time="2024-06-25T18:32:22.985722153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:32:22.987325 containerd[1439]: time="2024-06-25T18:32:22.985799290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:32:22.987325 containerd[1439]: time="2024-06-25T18:32:22.985864415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:32:22.987325 containerd[1439]: time="2024-06-25T18:32:22.985879152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:32:23.012448 systemd[1]: Started cri-containerd-1d1e4849e22e487a97ce576ec594dfcc0897fee5d9e26fcbb2a1dc4e1062568a.scope - libcontainer container 1d1e4849e22e487a97ce576ec594dfcc0897fee5d9e26fcbb2a1dc4e1062568a.
Jun 25 18:32:23.014432 systemd[1]: Started cri-containerd-d6e5392834699cf8343591ce62f6e879ecb7d9a035d6591ea414c1ced4473f27.scope - libcontainer container d6e5392834699cf8343591ce62f6e879ecb7d9a035d6591ea414c1ced4473f27. Jun 25 18:32:23.016520 systemd[1]: Started cri-containerd-f15e46a1091075b5016e23ef61d2e0e54b017b171aabc1898a3da199a07067a0.scope - libcontainer container f15e46a1091075b5016e23ef61d2e0e54b017b171aabc1898a3da199a07067a0. Jun 25 18:32:23.055617 containerd[1439]: time="2024-06-25T18:32:23.055541502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:343a0065ed02c177442615ebcd128a6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d1e4849e22e487a97ce576ec594dfcc0897fee5d9e26fcbb2a1dc4e1062568a\"" Jun 25 18:32:23.057603 kubelet[2200]: E0625 18:32:23.057117 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:23.063655 containerd[1439]: time="2024-06-25T18:32:23.063610448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6e5392834699cf8343591ce62f6e879ecb7d9a035d6591ea414c1ced4473f27\"" Jun 25 18:32:23.064501 containerd[1439]: time="2024-06-25T18:32:23.064464777Z" level=info msg="CreateContainer within sandbox \"1d1e4849e22e487a97ce576ec594dfcc0897fee5d9e26fcbb2a1dc4e1062568a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:32:23.064906 kubelet[2200]: E0625 18:32:23.064879 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:23.065360 containerd[1439]: time="2024-06-25T18:32:23.065341068Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f15e46a1091075b5016e23ef61d2e0e54b017b171aabc1898a3da199a07067a0\"" Jun 25 18:32:23.066791 containerd[1439]: time="2024-06-25T18:32:23.066751066Z" level=info msg="CreateContainer within sandbox \"d6e5392834699cf8343591ce62f6e879ecb7d9a035d6591ea414c1ced4473f27\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:32:23.066977 kubelet[2200]: E0625 18:32:23.066957 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:23.069538 containerd[1439]: time="2024-06-25T18:32:23.069498656Z" level=info msg="CreateContainer within sandbox \"f15e46a1091075b5016e23ef61d2e0e54b017b171aabc1898a3da199a07067a0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:32:23.136863 containerd[1439]: time="2024-06-25T18:32:23.136792659Z" level=info msg="CreateContainer within sandbox \"f15e46a1091075b5016e23ef61d2e0e54b017b171aabc1898a3da199a07067a0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d03c559138366810cedf8b86cedb984687b91db390e5b73d235d006991eeb2b0\"" Jun 25 18:32:23.137567 containerd[1439]: time="2024-06-25T18:32:23.137533692Z" level=info msg="StartContainer for \"d03c559138366810cedf8b86cedb984687b91db390e5b73d235d006991eeb2b0\"" Jun 25 18:32:23.138575 containerd[1439]: time="2024-06-25T18:32:23.138532987Z" level=info msg="CreateContainer within sandbox \"d6e5392834699cf8343591ce62f6e879ecb7d9a035d6591ea414c1ced4473f27\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"72a11fdd0b1bf5fa986080e32ad68b4f90a3675b68899063eab308e40cd4ab03\"" Jun 25 18:32:23.138904 containerd[1439]: time="2024-06-25T18:32:23.138869809Z" level=info msg="StartContainer for 
\"72a11fdd0b1bf5fa986080e32ad68b4f90a3675b68899063eab308e40cd4ab03\"" Jun 25 18:32:23.139531 containerd[1439]: time="2024-06-25T18:32:23.139420089Z" level=info msg="CreateContainer within sandbox \"1d1e4849e22e487a97ce576ec594dfcc0897fee5d9e26fcbb2a1dc4e1062568a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1dfc9947fe16bd82fd8eb35484ab47dcafe802ef248d5916dc2ec00ee24a90db\"" Jun 25 18:32:23.139971 containerd[1439]: time="2024-06-25T18:32:23.139921135Z" level=info msg="StartContainer for \"1dfc9947fe16bd82fd8eb35484ab47dcafe802ef248d5916dc2ec00ee24a90db\"" Jun 25 18:32:23.168066 systemd[1]: Started cri-containerd-d03c559138366810cedf8b86cedb984687b91db390e5b73d235d006991eeb2b0.scope - libcontainer container d03c559138366810cedf8b86cedb984687b91db390e5b73d235d006991eeb2b0. Jun 25 18:32:23.168447 kubelet[2200]: E0625 18:32:23.168397 2200 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.13:6443: connect: connection refused Jun 25 18:32:23.180381 systemd[1]: Started cri-containerd-1dfc9947fe16bd82fd8eb35484ab47dcafe802ef248d5916dc2ec00ee24a90db.scope - libcontainer container 1dfc9947fe16bd82fd8eb35484ab47dcafe802ef248d5916dc2ec00ee24a90db. Jun 25 18:32:23.181633 systemd[1]: Started cri-containerd-72a11fdd0b1bf5fa986080e32ad68b4f90a3675b68899063eab308e40cd4ab03.scope - libcontainer container 72a11fdd0b1bf5fa986080e32ad68b4f90a3675b68899063eab308e40cd4ab03. 
Jun 25 18:32:23.310562 containerd[1439]: time="2024-06-25T18:32:23.310371242Z" level=info msg="StartContainer for \"72a11fdd0b1bf5fa986080e32ad68b4f90a3675b68899063eab308e40cd4ab03\" returns successfully" Jun 25 18:32:23.310562 containerd[1439]: time="2024-06-25T18:32:23.310485440Z" level=info msg="StartContainer for \"d03c559138366810cedf8b86cedb984687b91db390e5b73d235d006991eeb2b0\" returns successfully" Jun 25 18:32:23.310562 containerd[1439]: time="2024-06-25T18:32:23.310514415Z" level=info msg="StartContainer for \"1dfc9947fe16bd82fd8eb35484ab47dcafe802ef248d5916dc2ec00ee24a90db\" returns successfully" Jun 25 18:32:24.221893 kubelet[2200]: E0625 18:32:24.221835 2200 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 18:32:24.303910 kubelet[2200]: I0625 18:32:24.303809 2200 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:32:24.318272 kubelet[2200]: I0625 18:32:24.318207 2200 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 18:32:24.319885 kubelet[2200]: E0625 18:32:24.319863 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:24.324712 kubelet[2200]: E0625 18:32:24.324654 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:24.326990 kubelet[2200]: E0625 18:32:24.326961 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:24.327297 kubelet[2200]: E0625 18:32:24.327274 2200 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 
18:32:24.427921 kubelet[2200]: E0625 18:32:24.427858 2200 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:32:24.528891 kubelet[2200]: E0625 18:32:24.528729 2200 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:32:24.629607 kubelet[2200]: E0625 18:32:24.629538 2200 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:32:24.730696 kubelet[2200]: E0625 18:32:24.730649 2200 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:32:24.831493 kubelet[2200]: E0625 18:32:24.831361 2200 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:32:24.931868 kubelet[2200]: E0625 18:32:24.931833 2200 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:32:25.032504 kubelet[2200]: E0625 18:32:25.032458 2200 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:32:25.133120 kubelet[2200]: E0625 18:32:25.133072 2200 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:32:25.233485 kubelet[2200]: E0625 18:32:25.233443 2200 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:32:25.329173 kubelet[2200]: E0625 18:32:25.329098 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:25.330115 kubelet[2200]: E0625 18:32:25.329949 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:25.330224 kubelet[2200]: E0625 18:32:25.330186 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:25.333915 kubelet[2200]: E0625 18:32:25.333852 2200 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:32:26.177759 kubelet[2200]: I0625 18:32:26.177681 2200 apiserver.go:52] "Watching apiserver" Jun 25 18:32:26.193596 kubelet[2200]: I0625 18:32:26.193553 2200 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 18:32:26.253800 systemd[1]: Reloading requested from client PID 2480 ('systemctl') (unit session-9.scope)... Jun 25 18:32:26.253819 systemd[1]: Reloading... Jun 25 18:32:26.342328 kubelet[2200]: E0625 18:32:26.342018 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:26.344462 kubelet[2200]: E0625 18:32:26.344394 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:26.352269 zram_generator::config[2520]: No configuration found. Jun 25 18:32:26.464220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:32:26.561624 systemd[1]: Reloading finished in 307 ms. Jun 25 18:32:26.611111 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:32:26.618430 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:32:26.618764 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 18:32:26.651535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:32:26.816280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:32:26.822078 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:32:26.946793 kubelet[2562]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:32:26.946793 kubelet[2562]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:32:26.946793 kubelet[2562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:32:26.947211 kubelet[2562]: I0625 18:32:26.946866 2562 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:32:26.952426 kubelet[2562]: I0625 18:32:26.952385 2562 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 18:32:26.952426 kubelet[2562]: I0625 18:32:26.952419 2562 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:32:26.952656 kubelet[2562]: I0625 18:32:26.952641 2562 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 18:32:26.953945 kubelet[2562]: I0625 18:32:26.953921 2562 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 25 18:32:26.955143 kubelet[2562]: I0625 18:32:26.955107 2562 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:32:26.963922 kubelet[2562]: I0625 18:32:26.963886 2562 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:32:26.964149 kubelet[2562]: I0625 18:32:26.964115 2562 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:32:26.964340 kubelet[2562]: I0625 18:32:26.964146 2562 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManager
ReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:32:26.964422 kubelet[2562]: I0625 18:32:26.964364 2562 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:32:26.964422 kubelet[2562]: I0625 18:32:26.964373 2562 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:32:26.964467 kubelet[2562]: I0625 18:32:26.964423 2562 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:32:26.964529 kubelet[2562]: I0625 18:32:26.964518 2562 kubelet.go:400] "Attempting to sync node with API server" Jun 25 18:32:26.964555 kubelet[2562]: I0625 18:32:26.964530 2562 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:32:26.964555 kubelet[2562]: I0625 18:32:26.964551 2562 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:32:26.964595 kubelet[2562]: I0625 18:32:26.964569 2562 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:32:26.965807 kubelet[2562]: I0625 18:32:26.965701 2562 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:32:26.965980 kubelet[2562]: I0625 18:32:26.965932 2562 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:32:26.966369 kubelet[2562]: I0625 18:32:26.966344 2562 server.go:1264] "Started kubelet" Jun 25 18:32:26.966588 kubelet[2562]: I0625 18:32:26.966553 2562 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:32:26.969259 kubelet[2562]: I0625 18:32:26.966611 2562 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:32:26.969259 kubelet[2562]: I0625 18:32:26.966857 2562 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 
18:32:26.969259 kubelet[2562]: I0625 18:32:26.967554 2562 server.go:455] "Adding debug handlers to kubelet server" Jun 25 18:32:26.969965 kubelet[2562]: I0625 18:32:26.969940 2562 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:32:26.970249 kubelet[2562]: E0625 18:32:26.970200 2562 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:32:26.974943 kubelet[2562]: I0625 18:32:26.974913 2562 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:32:26.975291 kubelet[2562]: I0625 18:32:26.975269 2562 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 18:32:26.975455 kubelet[2562]: I0625 18:32:26.975435 2562 reconciler.go:26] "Reconciler: start to sync state" Jun 25 18:32:26.976539 kubelet[2562]: I0625 18:32:26.976507 2562 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:32:26.982865 kubelet[2562]: I0625 18:32:26.982829 2562 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:32:26.982865 kubelet[2562]: I0625 18:32:26.982854 2562 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:32:26.983938 kubelet[2562]: I0625 18:32:26.983151 2562 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:32:26.984718 kubelet[2562]: I0625 18:32:26.984698 2562 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:32:26.984781 kubelet[2562]: I0625 18:32:26.984731 2562 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:32:26.984781 kubelet[2562]: I0625 18:32:26.984752 2562 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 18:32:26.984845 kubelet[2562]: E0625 18:32:26.984797 2562 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:32:27.019041 kubelet[2562]: I0625 18:32:27.019003 2562 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:32:27.019041 kubelet[2562]: I0625 18:32:27.019023 2562 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:32:27.019041 kubelet[2562]: I0625 18:32:27.019041 2562 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:32:27.019285 kubelet[2562]: I0625 18:32:27.019183 2562 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:32:27.019285 kubelet[2562]: I0625 18:32:27.019193 2562 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:32:27.019285 kubelet[2562]: I0625 18:32:27.019216 2562 policy_none.go:49] "None policy: Start" Jun 25 18:32:27.019857 kubelet[2562]: I0625 18:32:27.019807 2562 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:32:27.019857 kubelet[2562]: I0625 18:32:27.019833 2562 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:32:27.019959 kubelet[2562]: I0625 18:32:27.019943 2562 state_mem.go:75] "Updated machine memory state" Jun 25 18:32:27.024326 kubelet[2562]: I0625 18:32:27.024196 2562 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:32:27.024492 kubelet[2562]: I0625 18:32:27.024387 2562 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 18:32:27.024801 kubelet[2562]: I0625 18:32:27.024778 2562 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:32:27.085518 kubelet[2562]: I0625 18:32:27.085332 2562 topology_manager.go:215] "Topology Admit Handler" podUID="343a0065ed02c177442615ebcd128a6c" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 18:32:27.085518 kubelet[2562]: I0625 18:32:27.085437 2562 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:32:27.085518 kubelet[2562]: I0625 18:32:27.085482 2562 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:32:27.092305 kubelet[2562]: E0625 18:32:27.092249 2562 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 18:32:27.094574 kubelet[2562]: E0625 18:32:27.094542 2562 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 25 18:32:27.130950 kubelet[2562]: I0625 18:32:27.130903 2562 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:32:27.137140 kubelet[2562]: I0625 18:32:27.137093 2562 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jun 25 18:32:27.137329 kubelet[2562]: I0625 18:32:27.137210 2562 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 18:32:27.276842 kubelet[2562]: I0625 18:32:27.276799 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:32:27.276842 
kubelet[2562]: I0625 18:32:27.276842 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:32:27.277005 kubelet[2562]: I0625 18:32:27.276872 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:32:27.277005 kubelet[2562]: I0625 18:32:27.276893 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:32:27.277005 kubelet[2562]: I0625 18:32:27.276913 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/343a0065ed02c177442615ebcd128a6c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"343a0065ed02c177442615ebcd128a6c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:32:27.277005 kubelet[2562]: I0625 18:32:27.276935 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/343a0065ed02c177442615ebcd128a6c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"343a0065ed02c177442615ebcd128a6c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:32:27.277005 kubelet[2562]: I0625 18:32:27.276955 2562 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/343a0065ed02c177442615ebcd128a6c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"343a0065ed02c177442615ebcd128a6c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:32:27.277119 kubelet[2562]: I0625 18:32:27.276974 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:32:27.277119 kubelet[2562]: I0625 18:32:27.276992 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:32:27.393738 kubelet[2562]: E0625 18:32:27.393700 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:27.394026 kubelet[2562]: E0625 18:32:27.393806 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:27.395199 kubelet[2562]: E0625 18:32:27.395161 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:27.968424 kubelet[2562]: I0625 18:32:27.968355 2562 apiserver.go:52] "Watching apiserver" Jun 
25 18:32:28.007129 kubelet[2562]: E0625 18:32:28.007074 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:28.009616 kubelet[2562]: E0625 18:32:28.008772 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:28.023516 kubelet[2562]: E0625 18:32:28.023452 2562 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 18:32:28.023947 kubelet[2562]: E0625 18:32:28.023915 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:28.049204 kubelet[2562]: I0625 18:32:28.049148 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.049129179 podStartE2EDuration="1.049129179s" podCreationTimestamp="2024-06-25 18:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:32:28.048667292 +0000 UTC m=+1.145100017" watchObservedRunningTime="2024-06-25 18:32:28.049129179 +0000 UTC m=+1.145561904" Jun 25 18:32:28.049587 kubelet[2562]: I0625 18:32:28.049488 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.049483642 podStartE2EDuration="2.049483642s" podCreationTimestamp="2024-06-25 18:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:32:28.038474967 +0000 UTC m=+1.134907692" 
watchObservedRunningTime="2024-06-25 18:32:28.049483642 +0000 UTC m=+1.145916367"
Jun 25 18:32:28.075639 kubelet[2562]: I0625 18:32:28.075552 2562 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jun 25 18:32:29.013386 kubelet[2562]: E0625 18:32:29.011927 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:32:29.018347 kubelet[2562]: E0625 18:32:29.015446 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:32:30.326165 update_engine[1426]: I0625 18:32:30.326097 1426 update_attempter.cc:509] Updating boot flags...
Jun 25 18:32:30.365266 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2638)
Jun 25 18:32:30.411263 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2636)
Jun 25 18:32:30.449261 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2636)
Jun 25 18:32:31.649370 sudo[1639]: pam_unix(sudo:session): session closed for user root
Jun 25 18:32:31.651330 sshd[1636]: pam_unix(sshd:session): session closed for user core
Jun 25 18:32:31.655778 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:33430.service: Deactivated successfully.
Jun 25 18:32:31.657588 systemd[1]: session-9.scope: Deactivated successfully.
Jun 25 18:32:31.657793 systemd[1]: session-9.scope: Consumed 5.407s CPU time, 147.5M memory peak, 0B memory swap peak.
Jun 25 18:32:31.658289 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit.
Jun 25 18:32:31.659587 systemd-logind[1424]: Removed session 9.
Jun 25 18:32:33.539509 kubelet[2562]: E0625 18:32:33.539468 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:32:33.552197 kubelet[2562]: I0625 18:32:33.552120 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.552081026 podStartE2EDuration="7.552081026s" podCreationTimestamp="2024-06-25 18:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:32:28.061357729 +0000 UTC m=+1.157790464" watchObservedRunningTime="2024-06-25 18:32:33.552081026 +0000 UTC m=+6.648513761"
Jun 25 18:32:34.021931 kubelet[2562]: E0625 18:32:34.021879 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:32:36.795736 kubelet[2562]: E0625 18:32:36.795599 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:32:37.027355 kubelet[2562]: E0625 18:32:37.027118 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:32:38.510563 kubelet[2562]: E0625 18:32:38.510520 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:32:42.673793 kubelet[2562]: I0625 18:32:42.673717 2562 topology_manager.go:215] "Topology Admit Handler" podUID="5aac1fcd-5188-4e04-b4df-3cd55f2e4235" podNamespace="kube-system" podName="kube-proxy-nlff2"
Jun 25 18:32:42.686623 systemd[1]: Created slice kubepods-besteffort-pod5aac1fcd_5188_4e04_b4df_3cd55f2e4235.slice - libcontainer container kubepods-besteffort-pod5aac1fcd_5188_4e04_b4df_3cd55f2e4235.slice.
Jun 25 18:32:42.743525 kubelet[2562]: I0625 18:32:42.743486 2562 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 25 18:32:42.744574 containerd[1439]: time="2024-06-25T18:32:42.744331500Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 25 18:32:42.745897 kubelet[2562]: I0625 18:32:42.745215 2562 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 25 18:32:42.812794 kubelet[2562]: I0625 18:32:42.812735 2562 topology_manager.go:215] "Topology Admit Handler" podUID="89f74f6c-9e43-4996-b4ed-7cafb64af2c6" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-pl85d"
Jun 25 18:32:42.825461 systemd[1]: Created slice kubepods-besteffort-pod89f74f6c_9e43_4996_b4ed_7cafb64af2c6.slice - libcontainer container kubepods-besteffort-pod89f74f6c_9e43_4996_b4ed_7cafb64af2c6.slice.
Jun 25 18:32:42.859716 kubelet[2562]: I0625 18:32:42.859658 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5aac1fcd-5188-4e04-b4df-3cd55f2e4235-kube-proxy\") pod \"kube-proxy-nlff2\" (UID: \"5aac1fcd-5188-4e04-b4df-3cd55f2e4235\") " pod="kube-system/kube-proxy-nlff2"
Jun 25 18:32:42.859716 kubelet[2562]: I0625 18:32:42.859718 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5aac1fcd-5188-4e04-b4df-3cd55f2e4235-xtables-lock\") pod \"kube-proxy-nlff2\" (UID: \"5aac1fcd-5188-4e04-b4df-3cd55f2e4235\") " pod="kube-system/kube-proxy-nlff2"
Jun 25 18:32:42.859907 kubelet[2562]: I0625 18:32:42.859745 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5aac1fcd-5188-4e04-b4df-3cd55f2e4235-lib-modules\") pod \"kube-proxy-nlff2\" (UID: \"5aac1fcd-5188-4e04-b4df-3cd55f2e4235\") " pod="kube-system/kube-proxy-nlff2"
Jun 25 18:32:42.859907 kubelet[2562]: I0625 18:32:42.859765 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgqzw\" (UniqueName: \"kubernetes.io/projected/5aac1fcd-5188-4e04-b4df-3cd55f2e4235-kube-api-access-wgqzw\") pod \"kube-proxy-nlff2\" (UID: \"5aac1fcd-5188-4e04-b4df-3cd55f2e4235\") " pod="kube-system/kube-proxy-nlff2"
Jun 25 18:32:42.960729 kubelet[2562]: I0625 18:32:42.960490 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/89f74f6c-9e43-4996-b4ed-7cafb64af2c6-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-pl85d\" (UID: \"89f74f6c-9e43-4996-b4ed-7cafb64af2c6\") " pod="tigera-operator/tigera-operator-76ff79f7fd-pl85d"
Jun 25 18:32:42.960729 kubelet[2562]: I0625 18:32:42.960551 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnk2r\" (UniqueName: \"kubernetes.io/projected/89f74f6c-9e43-4996-b4ed-7cafb64af2c6-kube-api-access-bnk2r\") pod \"tigera-operator-76ff79f7fd-pl85d\" (UID: \"89f74f6c-9e43-4996-b4ed-7cafb64af2c6\") " pod="tigera-operator/tigera-operator-76ff79f7fd-pl85d"
Jun 25 18:32:43.000285 kubelet[2562]: E0625 18:32:43.000212 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:32:43.000779 containerd[1439]: time="2024-06-25T18:32:43.000735227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nlff2,Uid:5aac1fcd-5188-4e04-b4df-3cd55f2e4235,Namespace:kube-system,Attempt:0,}"
Jun 25 18:32:43.030495 containerd[1439]: time="2024-06-25T18:32:43.030345705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:32:43.030495 containerd[1439]: time="2024-06-25T18:32:43.030455803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:32:43.030785 containerd[1439]: time="2024-06-25T18:32:43.030483164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:32:43.030785 containerd[1439]: time="2024-06-25T18:32:43.030502079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:32:43.056461 systemd[1]: Started cri-containerd-fca821c8e33610f780f5cb002b77e591db63bb05c6d1014cc617d5ce963f35a7.scope - libcontainer container fca821c8e33610f780f5cb002b77e591db63bb05c6d1014cc617d5ce963f35a7.
Jun 25 18:32:43.083178 containerd[1439]: time="2024-06-25T18:32:43.083098438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nlff2,Uid:5aac1fcd-5188-4e04-b4df-3cd55f2e4235,Namespace:kube-system,Attempt:0,} returns sandbox id \"fca821c8e33610f780f5cb002b77e591db63bb05c6d1014cc617d5ce963f35a7\""
Jun 25 18:32:43.083996 kubelet[2562]: E0625 18:32:43.083969 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:32:43.086398 containerd[1439]: time="2024-06-25T18:32:43.086329424Z" level=info msg="CreateContainer within sandbox \"fca821c8e33610f780f5cb002b77e591db63bb05c6d1014cc617d5ce963f35a7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 25 18:32:43.106924 containerd[1439]: time="2024-06-25T18:32:43.106858586Z" level=info msg="CreateContainer within sandbox \"fca821c8e33610f780f5cb002b77e591db63bb05c6d1014cc617d5ce963f35a7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"91500a8f1deb14d04ecb856b4f089ecd5562331d98002f22a989bbeec028942a\""
Jun 25 18:32:43.107589 containerd[1439]: time="2024-06-25T18:32:43.107557502Z" level=info msg="StartContainer for \"91500a8f1deb14d04ecb856b4f089ecd5562331d98002f22a989bbeec028942a\""
Jun 25 18:32:43.128963 containerd[1439]: time="2024-06-25T18:32:43.128894656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-pl85d,Uid:89f74f6c-9e43-4996-b4ed-7cafb64af2c6,Namespace:tigera-operator,Attempt:0,}"
Jun 25 18:32:43.138513 systemd[1]: Started cri-containerd-91500a8f1deb14d04ecb856b4f089ecd5562331d98002f22a989bbeec028942a.scope - libcontainer container 91500a8f1deb14d04ecb856b4f089ecd5562331d98002f22a989bbeec028942a.
Jun 25 18:32:43.166166 containerd[1439]: time="2024-06-25T18:32:43.164611196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:32:43.166166 containerd[1439]: time="2024-06-25T18:32:43.164721143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:32:43.166166 containerd[1439]: time="2024-06-25T18:32:43.164778181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:32:43.166166 containerd[1439]: time="2024-06-25T18:32:43.164798048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:32:43.180738 containerd[1439]: time="2024-06-25T18:32:43.180698971Z" level=info msg="StartContainer for \"91500a8f1deb14d04ecb856b4f089ecd5562331d98002f22a989bbeec028942a\" returns successfully"
Jun 25 18:32:43.193446 systemd[1]: Started cri-containerd-80f69348f87ea71292ccd56f50684d40d640f538dfac64532f9643969ef47b17.scope - libcontainer container 80f69348f87ea71292ccd56f50684d40d640f538dfac64532f9643969ef47b17.
Jun 25 18:32:43.238706 containerd[1439]: time="2024-06-25T18:32:43.238508489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-pl85d,Uid:89f74f6c-9e43-4996-b4ed-7cafb64af2c6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"80f69348f87ea71292ccd56f50684d40d640f538dfac64532f9643969ef47b17\""
Jun 25 18:32:43.240384 containerd[1439]: time="2024-06-25T18:32:43.240208311Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jun 25 18:32:44.041265 kubelet[2562]: E0625 18:32:44.041215 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:32:44.051430 kubelet[2562]: I0625 18:32:44.051364 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nlff2" podStartSLOduration=2.051342527 podStartE2EDuration="2.051342527s" podCreationTimestamp="2024-06-25 18:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:32:44.051194699 +0000 UTC m=+17.147627414" watchObservedRunningTime="2024-06-25 18:32:44.051342527 +0000 UTC m=+17.147775252"
Jun 25 18:32:44.622855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2461875896.mount: Deactivated successfully.
Jun 25 18:32:44.926189 containerd[1439]: time="2024-06-25T18:32:44.926110101Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:44.927386 containerd[1439]: time="2024-06-25T18:32:44.927343834Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076048"
Jun 25 18:32:44.933631 containerd[1439]: time="2024-06-25T18:32:44.933570391Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:44.963144 containerd[1439]: time="2024-06-25T18:32:44.963066597Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:32:44.964108 containerd[1439]: time="2024-06-25T18:32:44.964037886Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 1.723764203s"
Jun 25 18:32:44.964108 containerd[1439]: time="2024-06-25T18:32:44.964083332Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Jun 25 18:32:44.966603 containerd[1439]: time="2024-06-25T18:32:44.966558914Z" level=info msg="CreateContainer within sandbox \"80f69348f87ea71292ccd56f50684d40d640f538dfac64532f9643969ef47b17\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jun 25 18:32:45.082815 containerd[1439]: time="2024-06-25T18:32:45.082746265Z" level=info msg="CreateContainer within sandbox \"80f69348f87ea71292ccd56f50684d40d640f538dfac64532f9643969ef47b17\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d23caa6521cb05cf84d143f0922f4df92a856b9f61f50f79048fc12fb66c810d\""
Jun 25 18:32:45.083329 containerd[1439]: time="2024-06-25T18:32:45.083297513Z" level=info msg="StartContainer for \"d23caa6521cb05cf84d143f0922f4df92a856b9f61f50f79048fc12fb66c810d\""
Jun 25 18:32:45.121374 systemd[1]: Started cri-containerd-d23caa6521cb05cf84d143f0922f4df92a856b9f61f50f79048fc12fb66c810d.scope - libcontainer container d23caa6521cb05cf84d143f0922f4df92a856b9f61f50f79048fc12fb66c810d.
Jun 25 18:32:45.151897 containerd[1439]: time="2024-06-25T18:32:45.151850125Z" level=info msg="StartContainer for \"d23caa6521cb05cf84d143f0922f4df92a856b9f61f50f79048fc12fb66c810d\" returns successfully"
Jun 25 18:32:46.997004 kubelet[2562]: I0625 18:32:46.996845 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-pl85d" podStartSLOduration=3.271443875 podStartE2EDuration="4.996820524s" podCreationTimestamp="2024-06-25 18:32:42 +0000 UTC" firstStartedPulling="2024-06-25 18:32:43.239817966 +0000 UTC m=+16.336250691" lastFinishedPulling="2024-06-25 18:32:44.965194615 +0000 UTC m=+18.061627340" observedRunningTime="2024-06-25 18:32:46.052554377 +0000 UTC m=+19.148987102" watchObservedRunningTime="2024-06-25 18:32:46.996820524 +0000 UTC m=+20.093253249"
Jun 25 18:32:48.145275 kubelet[2562]: I0625 18:32:48.145194 2562 topology_manager.go:215] "Topology Admit Handler" podUID="e30c93a1-06e8-4349-add3-e0acdd429cea" podNamespace="calico-system" podName="calico-typha-6ff6b46-tg9gq"
Jun 25 18:32:48.162657 systemd[1]: Created slice kubepods-besteffort-pode30c93a1_06e8_4349_add3_e0acdd429cea.slice - libcontainer container kubepods-besteffort-pode30c93a1_06e8_4349_add3_e0acdd429cea.slice.
Jun 25 18:32:48.194828 kubelet[2562]: I0625 18:32:48.192723 2562 topology_manager.go:215] "Topology Admit Handler" podUID="2f293453-7e71-4949-97a4-494b0e084ee2" podNamespace="calico-system" podName="calico-node-rwd7p"
Jun 25 18:32:48.207574 systemd[1]: Created slice kubepods-besteffort-pod2f293453_7e71_4949_97a4_494b0e084ee2.slice - libcontainer container kubepods-besteffort-pod2f293453_7e71_4949_97a4_494b0e084ee2.slice.
Jun 25 18:32:48.292568 kubelet[2562]: I0625 18:32:48.292142 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e30c93a1-06e8-4349-add3-e0acdd429cea-typha-certs\") pod \"calico-typha-6ff6b46-tg9gq\" (UID: \"e30c93a1-06e8-4349-add3-e0acdd429cea\") " pod="calico-system/calico-typha-6ff6b46-tg9gq"
Jun 25 18:32:48.292568 kubelet[2562]: I0625 18:32:48.292212 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2f293453-7e71-4949-97a4-494b0e084ee2-flexvol-driver-host\") pod \"calico-node-rwd7p\" (UID: \"2f293453-7e71-4949-97a4-494b0e084ee2\") " pod="calico-system/calico-node-rwd7p"
Jun 25 18:32:48.292568 kubelet[2562]: I0625 18:32:48.292274 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7ndg\" (UniqueName: \"kubernetes.io/projected/e30c93a1-06e8-4349-add3-e0acdd429cea-kube-api-access-p7ndg\") pod \"calico-typha-6ff6b46-tg9gq\" (UID: \"e30c93a1-06e8-4349-add3-e0acdd429cea\") " pod="calico-system/calico-typha-6ff6b46-tg9gq"
Jun 25 18:32:48.292568 kubelet[2562]: I0625 18:32:48.292295 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2f293453-7e71-4949-97a4-494b0e084ee2-cni-bin-dir\") pod \"calico-node-rwd7p\" (UID: \"2f293453-7e71-4949-97a4-494b0e084ee2\") " pod="calico-system/calico-node-rwd7p"
Jun 25 18:32:48.292568 kubelet[2562]: I0625 18:32:48.292318 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2f293453-7e71-4949-97a4-494b0e084ee2-cni-net-dir\") pod \"calico-node-rwd7p\" (UID: \"2f293453-7e71-4949-97a4-494b0e084ee2\") " pod="calico-system/calico-node-rwd7p"
Jun 25 18:32:48.292944 kubelet[2562]: I0625 18:32:48.292337 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2f293453-7e71-4949-97a4-494b0e084ee2-cni-log-dir\") pod \"calico-node-rwd7p\" (UID: \"2f293453-7e71-4949-97a4-494b0e084ee2\") " pod="calico-system/calico-node-rwd7p"
Jun 25 18:32:48.292944 kubelet[2562]: I0625 18:32:48.292368 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2f293453-7e71-4949-97a4-494b0e084ee2-node-certs\") pod \"calico-node-rwd7p\" (UID: \"2f293453-7e71-4949-97a4-494b0e084ee2\") " pod="calico-system/calico-node-rwd7p"
Jun 25 18:32:48.292944 kubelet[2562]: I0625 18:32:48.292389 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f293453-7e71-4949-97a4-494b0e084ee2-lib-modules\") pod \"calico-node-rwd7p\" (UID: \"2f293453-7e71-4949-97a4-494b0e084ee2\") " pod="calico-system/calico-node-rwd7p"
Jun 25 18:32:48.292944 kubelet[2562]: I0625 18:32:48.292410 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2f293453-7e71-4949-97a4-494b0e084ee2-policysync\") pod \"calico-node-rwd7p\" (UID: \"2f293453-7e71-4949-97a4-494b0e084ee2\") " pod="calico-system/calico-node-rwd7p"
Jun 25 18:32:48.292944 kubelet[2562]: I0625 18:32:48.292427 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f293453-7e71-4949-97a4-494b0e084ee2-var-lib-calico\") pod \"calico-node-rwd7p\" (UID: \"2f293453-7e71-4949-97a4-494b0e084ee2\") " pod="calico-system/calico-node-rwd7p"
Jun 25 18:32:48.293149 kubelet[2562]: I0625 18:32:48.292459 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f293453-7e71-4949-97a4-494b0e084ee2-xtables-lock\") pod \"calico-node-rwd7p\" (UID: \"2f293453-7e71-4949-97a4-494b0e084ee2\") " pod="calico-system/calico-node-rwd7p"
Jun 25 18:32:48.293149 kubelet[2562]: I0625 18:32:48.292490 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e30c93a1-06e8-4349-add3-e0acdd429cea-tigera-ca-bundle\") pod \"calico-typha-6ff6b46-tg9gq\" (UID: \"e30c93a1-06e8-4349-add3-e0acdd429cea\") " pod="calico-system/calico-typha-6ff6b46-tg9gq"
Jun 25 18:32:48.293149 kubelet[2562]: I0625 18:32:48.292510 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f293453-7e71-4949-97a4-494b0e084ee2-tigera-ca-bundle\") pod \"calico-node-rwd7p\" (UID: \"2f293453-7e71-4949-97a4-494b0e084ee2\") " pod="calico-system/calico-node-rwd7p"
Jun 25 18:32:48.293149 kubelet[2562]: I0625 18:32:48.292529 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2f293453-7e71-4949-97a4-494b0e084ee2-var-run-calico\") pod \"calico-node-rwd7p\" (UID: \"2f293453-7e71-4949-97a4-494b0e084ee2\") " pod="calico-system/calico-node-rwd7p"
Jun 25 18:32:48.293149 kubelet[2562]: I0625 18:32:48.292615 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chtw2\" (UniqueName: \"kubernetes.io/projected/2f293453-7e71-4949-97a4-494b0e084ee2-kube-api-access-chtw2\") pod \"calico-node-rwd7p\" (UID: \"2f293453-7e71-4949-97a4-494b0e084ee2\") " pod="calico-system/calico-node-rwd7p"
Jun 25 18:32:48.311747 kubelet[2562]: I0625 18:32:48.311677 2562 topology_manager.go:215] "Topology Admit Handler" podUID="1a67d707-aaf3-4ccc-84ad-f6f0070d2909" podNamespace="calico-system" podName="csi-node-driver-shh4z"
Jun 25 18:32:48.312139 kubelet[2562]: E0625 18:32:48.312080 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shh4z" podUID="1a67d707-aaf3-4ccc-84ad-f6f0070d2909"
Jun 25 18:32:48.394677 kubelet[2562]: I0625 18:32:48.394605 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1a67d707-aaf3-4ccc-84ad-f6f0070d2909-socket-dir\") pod \"csi-node-driver-shh4z\" (UID: \"1a67d707-aaf3-4ccc-84ad-f6f0070d2909\") " pod="calico-system/csi-node-driver-shh4z"
Jun 25 18:32:48.394893 kubelet[2562]: I0625 18:32:48.394723 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsc67\" (UniqueName: \"kubernetes.io/projected/1a67d707-aaf3-4ccc-84ad-f6f0070d2909-kube-api-access-qsc67\") pod \"csi-node-driver-shh4z\" (UID: \"1a67d707-aaf3-4ccc-84ad-f6f0070d2909\") " pod="calico-system/csi-node-driver-shh4z"
Jun 25 18:32:48.394893 kubelet[2562]: I0625 18:32:48.394790 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1a67d707-aaf3-4ccc-84ad-f6f0070d2909-varrun\") pod \"csi-node-driver-shh4z\" (UID: \"1a67d707-aaf3-4ccc-84ad-f6f0070d2909\") " pod="calico-system/csi-node-driver-shh4z"
Jun 25 18:32:48.394893 kubelet[2562]: I0625 18:32:48.394812 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1a67d707-aaf3-4ccc-84ad-f6f0070d2909-registration-dir\") pod \"csi-node-driver-shh4z\" (UID: \"1a67d707-aaf3-4ccc-84ad-f6f0070d2909\") " pod="calico-system/csi-node-driver-shh4z"
Jun 25 18:32:48.394996 kubelet[2562]: I0625 18:32:48.394911 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a67d707-aaf3-4ccc-84ad-f6f0070d2909-kubelet-dir\") pod \"csi-node-driver-shh4z\" (UID: \"1a67d707-aaf3-4ccc-84ad-f6f0070d2909\") " pod="calico-system/csi-node-driver-shh4z"
Jun 25 18:32:48.400144 kubelet[2562]: E0625 18:32:48.399895 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.400144 kubelet[2562]: W0625 18:32:48.399921 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.400144 kubelet[2562]: E0625 18:32:48.399944 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.401263 kubelet[2562]: E0625 18:32:48.401042 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.401263 kubelet[2562]: W0625 18:32:48.401052 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.401263 kubelet[2562]: E0625 18:32:48.401064 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.409497 kubelet[2562]: E0625 18:32:48.409464 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.409780 kubelet[2562]: W0625 18:32:48.409671 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.409780 kubelet[2562]: E0625 18:32:48.409732 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.410252 kubelet[2562]: E0625 18:32:48.410168 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.410252 kubelet[2562]: W0625 18:32:48.410187 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.410252 kubelet[2562]: E0625 18:32:48.410200 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.423161 kubelet[2562]: E0625 18:32:48.423109 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.423161 kubelet[2562]: W0625 18:32:48.423139 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.423161 kubelet[2562]: E0625 18:32:48.423161 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.424450 kubelet[2562]: E0625 18:32:48.423521 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.424450 kubelet[2562]: W0625 18:32:48.423530 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.424450 kubelet[2562]: E0625 18:32:48.423539 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.468786 kubelet[2562]: E0625 18:32:48.468513 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:32:48.469561 containerd[1439]: time="2024-06-25T18:32:48.469222349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ff6b46-tg9gq,Uid:e30c93a1-06e8-4349-add3-e0acdd429cea,Namespace:calico-system,Attempt:0,}"
Jun 25 18:32:48.495706 kubelet[2562]: E0625 18:32:48.495637 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.495706 kubelet[2562]: W0625 18:32:48.495668 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.495706 kubelet[2562]: E0625 18:32:48.495693 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.496135 kubelet[2562]: E0625 18:32:48.496083 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.496135 kubelet[2562]: W0625 18:32:48.496108 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.496135 kubelet[2562]: E0625 18:32:48.496125 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.496491 kubelet[2562]: E0625 18:32:48.496444 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.496491 kubelet[2562]: W0625 18:32:48.496474 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.496660 kubelet[2562]: E0625 18:32:48.496519 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.496913 kubelet[2562]: E0625 18:32:48.496894 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.496913 kubelet[2562]: W0625 18:32:48.496911 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.496988 kubelet[2562]: E0625 18:32:48.496933 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.497482 kubelet[2562]: E0625 18:32:48.497462 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.497482 kubelet[2562]: W0625 18:32:48.497476 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.497576 kubelet[2562]: E0625 18:32:48.497534 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.497866 kubelet[2562]: E0625 18:32:48.497834 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.497866 kubelet[2562]: W0625 18:32:48.497849 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.498031 kubelet[2562]: E0625 18:32:48.497958 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.498460 kubelet[2562]: E0625 18:32:48.498370 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.498460 kubelet[2562]: W0625 18:32:48.498385 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.498460 kubelet[2562]: E0625 18:32:48.498427 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.498776 kubelet[2562]: E0625 18:32:48.498737 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.498776 kubelet[2562]: W0625 18:32:48.498750 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.498900 kubelet[2562]: E0625 18:32:48.498791 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.501334 kubelet[2562]: E0625 18:32:48.501298 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.501334 kubelet[2562]: W0625 18:32:48.501314 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.501448 kubelet[2562]: E0625 18:32:48.501429 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.502514 kubelet[2562]: E0625 18:32:48.502491 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.502514 kubelet[2562]: W0625 18:32:48.502504 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.502639 kubelet[2562]: E0625 18:32:48.502607 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.502782 kubelet[2562]: E0625 18:32:48.502747 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.502782 kubelet[2562]: W0625 18:32:48.502759 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.502865 kubelet[2562]: E0625 18:32:48.502813 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.503034 kubelet[2562]: E0625 18:32:48.503012 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.503113 kubelet[2562]: W0625 18:32:48.503032 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.503223 kubelet[2562]: E0625 18:32:48.503193 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:32:48.503448 kubelet[2562]: E0625 18:32:48.503409 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:32:48.503448 kubelet[2562]: W0625 18:32:48.503431 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:32:48.503811 kubelet[2562]: E0625 18:32:48.503536 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jun 25 18:32:48.503811 kubelet[2562]: E0625 18:32:48.503666 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:48.503811 kubelet[2562]: W0625 18:32:48.503685 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:48.503811 kubelet[2562]: E0625 18:32:48.503753 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:48.503985 kubelet[2562]: E0625 18:32:48.503941 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:48.503985 kubelet[2562]: W0625 18:32:48.503952 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:48.504905 kubelet[2562]: E0625 18:32:48.504119 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:48.504905 kubelet[2562]: E0625 18:32:48.504365 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:48.504905 kubelet[2562]: W0625 18:32:48.504374 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:48.504905 kubelet[2562]: E0625 18:32:48.504413 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:48.504905 kubelet[2562]: E0625 18:32:48.504862 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:48.504905 kubelet[2562]: W0625 18:32:48.504873 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:48.505249 kubelet[2562]: E0625 18:32:48.505006 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:48.505394 kubelet[2562]: E0625 18:32:48.505362 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:48.505394 kubelet[2562]: W0625 18:32:48.505383 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:48.505523 kubelet[2562]: E0625 18:32:48.505461 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:48.505756 kubelet[2562]: E0625 18:32:48.505738 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:48.505756 kubelet[2562]: W0625 18:32:48.505752 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:48.505843 kubelet[2562]: E0625 18:32:48.505828 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:48.506062 kubelet[2562]: E0625 18:32:48.506043 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:48.506062 kubelet[2562]: W0625 18:32:48.506058 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:48.506164 kubelet[2562]: E0625 18:32:48.506142 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:48.506614 kubelet[2562]: E0625 18:32:48.506579 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:48.506614 kubelet[2562]: W0625 18:32:48.506610 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:48.506714 kubelet[2562]: E0625 18:32:48.506698 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:48.507513 kubelet[2562]: E0625 18:32:48.507487 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:48.507513 kubelet[2562]: W0625 18:32:48.507508 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:48.507695 kubelet[2562]: E0625 18:32:48.507662 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:48.508016 kubelet[2562]: E0625 18:32:48.507996 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:48.508016 kubelet[2562]: W0625 18:32:48.508014 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:48.509921 kubelet[2562]: E0625 18:32:48.509836 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:48.512341 kubelet[2562]: E0625 18:32:48.510602 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:48.512462 kubelet[2562]: W0625 18:32:48.512348 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:48.512509 kubelet[2562]: E0625 18:32:48.511064 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:48.512843 kubelet[2562]: E0625 18:32:48.512817 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:48.513170 kubelet[2562]: E0625 18:32:48.513117 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:48.513170 kubelet[2562]: W0625 18:32:48.513133 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:48.513170 kubelet[2562]: E0625 18:32:48.513145 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:48.513657 containerd[1439]: time="2024-06-25T18:32:48.513587806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rwd7p,Uid:2f293453-7e71-4949-97a4-494b0e084ee2,Namespace:calico-system,Attempt:0,}" Jun 25 18:32:48.521703 kubelet[2562]: E0625 18:32:48.521541 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:48.521703 kubelet[2562]: W0625 18:32:48.521573 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:48.521703 kubelet[2562]: E0625 18:32:48.521598 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:48.527432 containerd[1439]: time="2024-06-25T18:32:48.526738626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:32:48.527432 containerd[1439]: time="2024-06-25T18:32:48.526869152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:32:48.527432 containerd[1439]: time="2024-06-25T18:32:48.526892396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:32:48.527432 containerd[1439]: time="2024-06-25T18:32:48.526918104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:32:48.554525 systemd[1]: Started cri-containerd-cff1d4fc5f25e43b5c5c79f5ce0f954d5abf2e1d6ec3310e10fdebaad54b2ea8.scope - libcontainer container cff1d4fc5f25e43b5c5c79f5ce0f954d5abf2e1d6ec3310e10fdebaad54b2ea8. Jun 25 18:32:48.561223 containerd[1439]: time="2024-06-25T18:32:48.561074043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:32:48.561223 containerd[1439]: time="2024-06-25T18:32:48.561178318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:32:48.561601 containerd[1439]: time="2024-06-25T18:32:48.561203445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:32:48.561601 containerd[1439]: time="2024-06-25T18:32:48.561218403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:32:48.584506 systemd[1]: Started cri-containerd-a76811caf701841544f7ec50d9d3bd41205b27586c8438d449d6327d1fb1c105.scope - libcontainer container a76811caf701841544f7ec50d9d3bd41205b27586c8438d449d6327d1fb1c105. 
Jun 25 18:32:48.617475 containerd[1439]: time="2024-06-25T18:32:48.617293580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ff6b46-tg9gq,Uid:e30c93a1-06e8-4349-add3-e0acdd429cea,Namespace:calico-system,Attempt:0,} returns sandbox id \"cff1d4fc5f25e43b5c5c79f5ce0f954d5abf2e1d6ec3310e10fdebaad54b2ea8\"" Jun 25 18:32:48.620519 kubelet[2562]: E0625 18:32:48.620372 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:48.622767 containerd[1439]: time="2024-06-25T18:32:48.622673802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 18:32:48.630824 containerd[1439]: time="2024-06-25T18:32:48.630471872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rwd7p,Uid:2f293453-7e71-4949-97a4-494b0e084ee2,Namespace:calico-system,Attempt:0,} returns sandbox id \"a76811caf701841544f7ec50d9d3bd41205b27586c8438d449d6327d1fb1c105\"" Jun 25 18:32:48.631725 kubelet[2562]: E0625 18:32:48.631647 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:49.985246 kubelet[2562]: E0625 18:32:49.985147 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shh4z" podUID="1a67d707-aaf3-4ccc-84ad-f6f0070d2909" Jun 25 18:32:51.678301 containerd[1439]: time="2024-06-25T18:32:51.678194539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:32:51.679321 containerd[1439]: time="2024-06-25T18:32:51.679246527Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 18:32:51.680776 containerd[1439]: time="2024-06-25T18:32:51.680730127Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:32:51.683320 containerd[1439]: time="2024-06-25T18:32:51.683280022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:32:51.684032 containerd[1439]: time="2024-06-25T18:32:51.683979556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.059846399s" Jun 25 18:32:51.684155 containerd[1439]: time="2024-06-25T18:32:51.684028919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 18:32:51.686351 containerd[1439]: time="2024-06-25T18:32:51.686315919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 18:32:51.695894 containerd[1439]: time="2024-06-25T18:32:51.695813276Z" level=info msg="CreateContainer within sandbox \"cff1d4fc5f25e43b5c5c79f5ce0f954d5abf2e1d6ec3310e10fdebaad54b2ea8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:32:51.718920 containerd[1439]: time="2024-06-25T18:32:51.718856813Z" level=info msg="CreateContainer within sandbox \"cff1d4fc5f25e43b5c5c79f5ce0f954d5abf2e1d6ec3310e10fdebaad54b2ea8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"408179edb7472ef11e991a48341287228945c42d120705969e2eb7ce2954084d\"" Jun 25 18:32:51.719437 containerd[1439]: time="2024-06-25T18:32:51.719417116Z" level=info msg="StartContainer for \"408179edb7472ef11e991a48341287228945c42d120705969e2eb7ce2954084d\"" Jun 25 18:32:51.750556 systemd[1]: Started cri-containerd-408179edb7472ef11e991a48341287228945c42d120705969e2eb7ce2954084d.scope - libcontainer container 408179edb7472ef11e991a48341287228945c42d120705969e2eb7ce2954084d. Jun 25 18:32:51.802102 containerd[1439]: time="2024-06-25T18:32:51.802054172Z" level=info msg="StartContainer for \"408179edb7472ef11e991a48341287228945c42d120705969e2eb7ce2954084d\" returns successfully" Jun 25 18:32:51.986113 kubelet[2562]: E0625 18:32:51.985928 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shh4z" podUID="1a67d707-aaf3-4ccc-84ad-f6f0070d2909" Jun 25 18:32:52.062837 kubelet[2562]: E0625 18:32:52.062520 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:52.109475 kubelet[2562]: I0625 18:32:52.109406 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6ff6b46-tg9gq" podStartSLOduration=1.045995741 podStartE2EDuration="4.109385786s" podCreationTimestamp="2024-06-25 18:32:48 +0000 UTC" firstStartedPulling="2024-06-25 18:32:48.621874668 +0000 UTC m=+21.718307393" lastFinishedPulling="2024-06-25 18:32:51.685264683 +0000 UTC m=+24.781697438" observedRunningTime="2024-06-25 18:32:52.107269037 +0000 UTC m=+25.203701762" watchObservedRunningTime="2024-06-25 18:32:52.109385786 +0000 UTC m=+25.205818511" Jun 25 18:32:52.123759 kubelet[2562]: E0625 18:32:52.122221 2562 driver-call.go:262] 
Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.123759 kubelet[2562]: W0625 18:32:52.122251 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.123759 kubelet[2562]: E0625 18:32:52.122269 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.123759 kubelet[2562]: E0625 18:32:52.122490 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.123759 kubelet[2562]: W0625 18:32:52.122499 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.123759 kubelet[2562]: E0625 18:32:52.122509 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.123759 kubelet[2562]: E0625 18:32:52.122681 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.123759 kubelet[2562]: W0625 18:32:52.122688 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.123759 kubelet[2562]: E0625 18:32:52.122695 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.123759 kubelet[2562]: E0625 18:32:52.122930 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.124416 kubelet[2562]: W0625 18:32:52.122940 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.124416 kubelet[2562]: E0625 18:32:52.122951 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.124416 kubelet[2562]: E0625 18:32:52.123326 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.124416 kubelet[2562]: W0625 18:32:52.123339 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.124416 kubelet[2562]: E0625 18:32:52.123350 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.124416 kubelet[2562]: E0625 18:32:52.123562 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.124416 kubelet[2562]: W0625 18:32:52.123570 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.124416 kubelet[2562]: E0625 18:32:52.123579 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.124416 kubelet[2562]: E0625 18:32:52.123800 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.124416 kubelet[2562]: W0625 18:32:52.123809 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.124875 kubelet[2562]: E0625 18:32:52.123817 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.124875 kubelet[2562]: E0625 18:32:52.124056 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.124875 kubelet[2562]: W0625 18:32:52.124064 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.124875 kubelet[2562]: E0625 18:32:52.124073 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.124875 kubelet[2562]: E0625 18:32:52.124414 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.124875 kubelet[2562]: W0625 18:32:52.124423 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.124875 kubelet[2562]: E0625 18:32:52.124431 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.124875 kubelet[2562]: E0625 18:32:52.124628 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.124875 kubelet[2562]: W0625 18:32:52.124638 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.124875 kubelet[2562]: E0625 18:32:52.124648 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.125241 kubelet[2562]: E0625 18:32:52.124834 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.125241 kubelet[2562]: W0625 18:32:52.124851 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.125241 kubelet[2562]: E0625 18:32:52.124861 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.125241 kubelet[2562]: E0625 18:32:52.125097 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.125241 kubelet[2562]: W0625 18:32:52.125105 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.125241 kubelet[2562]: E0625 18:32:52.125112 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.125447 kubelet[2562]: E0625 18:32:52.125303 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.125447 kubelet[2562]: W0625 18:32:52.125310 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.125447 kubelet[2562]: E0625 18:32:52.125317 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.125534 kubelet[2562]: E0625 18:32:52.125521 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.125534 kubelet[2562]: W0625 18:32:52.125528 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.125608 kubelet[2562]: E0625 18:32:52.125537 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.125728 kubelet[2562]: E0625 18:32:52.125716 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.125728 kubelet[2562]: W0625 18:32:52.125725 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.125807 kubelet[2562]: E0625 18:32:52.125732 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.222387 kubelet[2562]: E0625 18:32:52.222341 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.222387 kubelet[2562]: W0625 18:32:52.222366 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.222387 kubelet[2562]: E0625 18:32:52.222388 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.222687 kubelet[2562]: E0625 18:32:52.222664 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.222687 kubelet[2562]: W0625 18:32:52.222680 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.222736 kubelet[2562]: E0625 18:32:52.222700 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.223092 kubelet[2562]: E0625 18:32:52.223075 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.223092 kubelet[2562]: W0625 18:32:52.223090 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.223159 kubelet[2562]: E0625 18:32:52.223106 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.223456 kubelet[2562]: E0625 18:32:52.223422 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.223504 kubelet[2562]: W0625 18:32:52.223452 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.223504 kubelet[2562]: E0625 18:32:52.223487 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.223756 kubelet[2562]: E0625 18:32:52.223729 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.223756 kubelet[2562]: W0625 18:32:52.223753 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.223816 kubelet[2562]: E0625 18:32:52.223769 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.224104 kubelet[2562]: E0625 18:32:52.224063 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.224104 kubelet[2562]: W0625 18:32:52.224093 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.224613 kubelet[2562]: E0625 18:32:52.224306 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.224613 kubelet[2562]: E0625 18:32:52.224400 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.224613 kubelet[2562]: W0625 18:32:52.224425 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.224613 kubelet[2562]: E0625 18:32:52.224482 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.224768 kubelet[2562]: E0625 18:32:52.224668 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.224768 kubelet[2562]: W0625 18:32:52.224678 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.224768 kubelet[2562]: E0625 18:32:52.224706 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.225455 kubelet[2562]: E0625 18:32:52.224899 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.225455 kubelet[2562]: W0625 18:32:52.224912 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.225455 kubelet[2562]: E0625 18:32:52.224934 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.225455 kubelet[2562]: E0625 18:32:52.225217 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.225455 kubelet[2562]: W0625 18:32:52.225241 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.225455 kubelet[2562]: E0625 18:32:52.225262 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.225455 kubelet[2562]: E0625 18:32:52.225452 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.225455 kubelet[2562]: W0625 18:32:52.225459 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.225731 kubelet[2562]: E0625 18:32:52.225467 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.225731 kubelet[2562]: E0625 18:32:52.225658 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.225731 kubelet[2562]: W0625 18:32:52.225665 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.225731 kubelet[2562]: E0625 18:32:52.225677 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.226159 kubelet[2562]: E0625 18:32:52.225997 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.226159 kubelet[2562]: W0625 18:32:52.226018 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.226159 kubelet[2562]: E0625 18:32:52.226042 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.226307 kubelet[2562]: E0625 18:32:52.226287 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.226307 kubelet[2562]: W0625 18:32:52.226302 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.226375 kubelet[2562]: E0625 18:32:52.226312 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.226549 kubelet[2562]: E0625 18:32:52.226530 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.226549 kubelet[2562]: W0625 18:32:52.226546 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.226627 kubelet[2562]: E0625 18:32:52.226562 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.226788 kubelet[2562]: E0625 18:32:52.226773 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.226788 kubelet[2562]: W0625 18:32:52.226782 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.226862 kubelet[2562]: E0625 18:32:52.226796 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:52.227043 kubelet[2562]: E0625 18:32:52.227014 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.227043 kubelet[2562]: W0625 18:32:52.227040 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.227114 kubelet[2562]: E0625 18:32:52.227060 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:52.227286 kubelet[2562]: E0625 18:32:52.227270 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:52.227286 kubelet[2562]: W0625 18:32:52.227282 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:52.227385 kubelet[2562]: E0625 18:32:52.227292 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:53.063473 containerd[1439]: time="2024-06-25T18:32:53.063421767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:32:53.064920 kubelet[2562]: E0625 18:32:53.064885 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:53.065491 containerd[1439]: time="2024-06-25T18:32:53.065442906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 18:32:53.066902 containerd[1439]: time="2024-06-25T18:32:53.066870388Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:32:53.069558 containerd[1439]: time="2024-06-25T18:32:53.069485624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:32:53.070303 containerd[1439]: time="2024-06-25T18:32:53.070261623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.383905959s" Jun 25 18:32:53.070303 containerd[1439]: time="2024-06-25T18:32:53.070295727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference 
\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 18:32:53.072596 containerd[1439]: time="2024-06-25T18:32:53.072550274Z" level=info msg="CreateContainer within sandbox \"a76811caf701841544f7ec50d9d3bd41205b27586c8438d449d6327d1fb1c105\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:32:53.098278 containerd[1439]: time="2024-06-25T18:32:53.096961097Z" level=info msg="CreateContainer within sandbox \"a76811caf701841544f7ec50d9d3bd41205b27586c8438d449d6327d1fb1c105\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ec44441cc78a9681568869d04bda6a06a18c70877df59bee7d5f7e74d5aa49ca\"" Jun 25 18:32:53.098278 containerd[1439]: time="2024-06-25T18:32:53.097928104Z" level=info msg="StartContainer for \"ec44441cc78a9681568869d04bda6a06a18c70877df59bee7d5f7e74d5aa49ca\"" Jun 25 18:32:53.134330 kubelet[2562]: E0625 18:32:53.134282 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.134330 kubelet[2562]: W0625 18:32:53.134307 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.134330 kubelet[2562]: E0625 18:32:53.134329 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:53.134636 kubelet[2562]: E0625 18:32:53.134614 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.134636 kubelet[2562]: W0625 18:32:53.134636 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.134720 kubelet[2562]: E0625 18:32:53.134646 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:53.134886 kubelet[2562]: E0625 18:32:53.134862 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.134886 kubelet[2562]: W0625 18:32:53.134874 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.134886 kubelet[2562]: E0625 18:32:53.134882 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:53.135869 kubelet[2562]: E0625 18:32:53.135816 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.135869 kubelet[2562]: W0625 18:32:53.135858 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.135975 kubelet[2562]: E0625 18:32:53.135893 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:53.136570 systemd[1]: Started cri-containerd-ec44441cc78a9681568869d04bda6a06a18c70877df59bee7d5f7e74d5aa49ca.scope - libcontainer container ec44441cc78a9681568869d04bda6a06a18c70877df59bee7d5f7e74d5aa49ca. Jun 25 18:32:53.137576 kubelet[2562]: E0625 18:32:53.136946 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.137576 kubelet[2562]: W0625 18:32:53.136960 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.137576 kubelet[2562]: E0625 18:32:53.136977 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:53.137576 kubelet[2562]: E0625 18:32:53.137415 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.137576 kubelet[2562]: W0625 18:32:53.137425 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.137576 kubelet[2562]: E0625 18:32:53.137435 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:53.138390 kubelet[2562]: E0625 18:32:53.138177 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.138390 kubelet[2562]: W0625 18:32:53.138207 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.138390 kubelet[2562]: E0625 18:32:53.138254 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:53.138628 kubelet[2562]: E0625 18:32:53.138594 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.138628 kubelet[2562]: W0625 18:32:53.138606 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.138628 kubelet[2562]: E0625 18:32:53.138617 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:53.138893 kubelet[2562]: E0625 18:32:53.138875 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.138893 kubelet[2562]: W0625 18:32:53.138888 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.138981 kubelet[2562]: E0625 18:32:53.138918 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:53.139309 kubelet[2562]: E0625 18:32:53.139279 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.139309 kubelet[2562]: W0625 18:32:53.139305 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.139383 kubelet[2562]: E0625 18:32:53.139317 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:53.142489 kubelet[2562]: E0625 18:32:53.139677 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.142489 kubelet[2562]: W0625 18:32:53.139698 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.142489 kubelet[2562]: E0625 18:32:53.139708 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:53.142489 kubelet[2562]: E0625 18:32:53.139918 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.142489 kubelet[2562]: W0625 18:32:53.139926 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.142489 kubelet[2562]: E0625 18:32:53.139943 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:53.142489 kubelet[2562]: E0625 18:32:53.140177 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.142489 kubelet[2562]: W0625 18:32:53.140185 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.142489 kubelet[2562]: E0625 18:32:53.140204 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:32:53.142489 kubelet[2562]: E0625 18:32:53.140455 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.142937 kubelet[2562]: W0625 18:32:53.140467 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.142937 kubelet[2562]: E0625 18:32:53.140476 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:53.142937 kubelet[2562]: E0625 18:32:53.142344 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:32:53.142937 kubelet[2562]: W0625 18:32:53.142355 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:32:53.142937 kubelet[2562]: E0625 18:32:53.142365 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:32:53.177827 containerd[1439]: time="2024-06-25T18:32:53.177761328Z" level=info msg="StartContainer for \"ec44441cc78a9681568869d04bda6a06a18c70877df59bee7d5f7e74d5aa49ca\" returns successfully" Jun 25 18:32:53.192508 systemd[1]: cri-containerd-ec44441cc78a9681568869d04bda6a06a18c70877df59bee7d5f7e74d5aa49ca.scope: Deactivated successfully. Jun 25 18:32:53.221569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec44441cc78a9681568869d04bda6a06a18c70877df59bee7d5f7e74d5aa49ca-rootfs.mount: Deactivated successfully. 
Jun 25 18:32:53.842061 containerd[1439]: time="2024-06-25T18:32:53.841951788Z" level=info msg="shim disconnected" id=ec44441cc78a9681568869d04bda6a06a18c70877df59bee7d5f7e74d5aa49ca namespace=k8s.io Jun 25 18:32:53.842061 containerd[1439]: time="2024-06-25T18:32:53.842049581Z" level=warning msg="cleaning up after shim disconnected" id=ec44441cc78a9681568869d04bda6a06a18c70877df59bee7d5f7e74d5aa49ca namespace=k8s.io Jun 25 18:32:53.842061 containerd[1439]: time="2024-06-25T18:32:53.842060782Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:32:53.985882 kubelet[2562]: E0625 18:32:53.985757 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shh4z" podUID="1a67d707-aaf3-4ccc-84ad-f6f0070d2909" Jun 25 18:32:54.068941 kubelet[2562]: E0625 18:32:54.068902 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:54.069483 kubelet[2562]: E0625 18:32:54.069034 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:54.070056 containerd[1439]: time="2024-06-25T18:32:54.069997549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 18:32:55.985501 kubelet[2562]: E0625 18:32:55.985415 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shh4z" podUID="1a67d707-aaf3-4ccc-84ad-f6f0070d2909" Jun 25 18:32:57.691426 containerd[1439]: 
time="2024-06-25T18:32:57.691295368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:32:57.694551 containerd[1439]: time="2024-06-25T18:32:57.694450224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 18:32:57.696802 containerd[1439]: time="2024-06-25T18:32:57.696738012Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:32:57.728815 containerd[1439]: time="2024-06-25T18:32:57.728531096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:32:57.729308 containerd[1439]: time="2024-06-25T18:32:57.729279161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 3.659235225s" Jun 25 18:32:57.729381 containerd[1439]: time="2024-06-25T18:32:57.729312825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 18:32:57.731985 containerd[1439]: time="2024-06-25T18:32:57.731930021Z" level=info msg="CreateContainer within sandbox \"a76811caf701841544f7ec50d9d3bd41205b27586c8438d449d6327d1fb1c105\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 18:32:57.802464 containerd[1439]: time="2024-06-25T18:32:57.802402006Z" level=info msg="CreateContainer within sandbox 
\"a76811caf701841544f7ec50d9d3bd41205b27586c8438d449d6327d1fb1c105\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"06d21f5346b7e1a6d85b6066a4ddd878640599a4a9353fc5456e1cf2710c00d9\"" Jun 25 18:32:57.803157 containerd[1439]: time="2024-06-25T18:32:57.803114594Z" level=info msg="StartContainer for \"06d21f5346b7e1a6d85b6066a4ddd878640599a4a9353fc5456e1cf2710c00d9\"" Jun 25 18:32:57.838798 systemd[1]: run-containerd-runc-k8s.io-06d21f5346b7e1a6d85b6066a4ddd878640599a4a9353fc5456e1cf2710c00d9-runc.0QHVIT.mount: Deactivated successfully. Jun 25 18:32:57.856449 systemd[1]: Started cri-containerd-06d21f5346b7e1a6d85b6066a4ddd878640599a4a9353fc5456e1cf2710c00d9.scope - libcontainer container 06d21f5346b7e1a6d85b6066a4ddd878640599a4a9353fc5456e1cf2710c00d9. Jun 25 18:32:57.985758 kubelet[2562]: E0625 18:32:57.985511 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shh4z" podUID="1a67d707-aaf3-4ccc-84ad-f6f0070d2909" Jun 25 18:32:57.993682 containerd[1439]: time="2024-06-25T18:32:57.993608795Z" level=info msg="StartContainer for \"06d21f5346b7e1a6d85b6066a4ddd878640599a4a9353fc5456e1cf2710c00d9\" returns successfully" Jun 25 18:32:58.084458 kubelet[2562]: E0625 18:32:58.084403 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:59.085599 kubelet[2562]: E0625 18:32:59.085566 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:59.724480 containerd[1439]: time="2024-06-25T18:32:59.724431843Z" level=error msg="failed to reload cni configuration after receiving fs 
change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:32:59.727713 systemd[1]: cri-containerd-06d21f5346b7e1a6d85b6066a4ddd878640599a4a9353fc5456e1cf2710c00d9.scope: Deactivated successfully. Jun 25 18:32:59.753536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06d21f5346b7e1a6d85b6066a4ddd878640599a4a9353fc5456e1cf2710c00d9-rootfs.mount: Deactivated successfully. Jun 25 18:32:59.781432 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:47224.service - OpenSSH per-connection server daemon (10.0.0.1:47224). Jun 25 18:32:59.794717 kubelet[2562]: I0625 18:32:59.790402 2562 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 18:32:59.937910 kubelet[2562]: I0625 18:32:59.937795 2562 topology_manager.go:215] "Topology Admit Handler" podUID="4e6e4186-c2c5-4329-ba3d-8490ac16505e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8scvs" Jun 25 18:32:59.938168 kubelet[2562]: I0625 18:32:59.938118 2562 topology_manager.go:215] "Topology Admit Handler" podUID="78990d16-9643-4cb5-9ece-c707bc193a17" podNamespace="calico-system" podName="calico-kube-controllers-56599b7db9-s2zt2" Jun 25 18:32:59.938249 kubelet[2562]: I0625 18:32:59.938223 2562 topology_manager.go:215] "Topology Admit Handler" podUID="2bf9fcee-35db-45bb-a446-ba52c47672a7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mmzhm" Jun 25 18:32:59.949630 systemd[1]: Created slice kubepods-burstable-pod4e6e4186_c2c5_4329_ba3d_8490ac16505e.slice - libcontainer container kubepods-burstable-pod4e6e4186_c2c5_4329_ba3d_8490ac16505e.slice. 
Jun 25 18:32:59.955901 sshd[3311]: Accepted publickey for core from 10.0.0.1 port 47224 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:32:59.957968 sshd[3311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:32:59.961334 systemd[1]: Created slice kubepods-burstable-pod2bf9fcee_35db_45bb_a446_ba52c47672a7.slice - libcontainer container kubepods-burstable-pod2bf9fcee_35db_45bb_a446_ba52c47672a7.slice.
Jun 25 18:32:59.969634 systemd[1]: Created slice kubepods-besteffort-pod78990d16_9643_4cb5_9ece_c707bc193a17.slice - libcontainer container kubepods-besteffort-pod78990d16_9643_4cb5_9ece_c707bc193a17.slice.
Jun 25 18:32:59.971563 systemd-logind[1424]: New session 10 of user core.
Jun 25 18:32:59.979508 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 25 18:32:59.991083 systemd[1]: Created slice kubepods-besteffort-pod1a67d707_aaf3_4ccc_84ad_f6f0070d2909.slice - libcontainer container kubepods-besteffort-pod1a67d707_aaf3_4ccc_84ad_f6f0070d2909.slice.
Jun 25 18:33:00.086517 kubelet[2562]: I0625 18:33:00.086462 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5brm4\" (UniqueName: \"kubernetes.io/projected/2bf9fcee-35db-45bb-a446-ba52c47672a7-kube-api-access-5brm4\") pod \"coredns-7db6d8ff4d-mmzhm\" (UID: \"2bf9fcee-35db-45bb-a446-ba52c47672a7\") " pod="kube-system/coredns-7db6d8ff4d-mmzhm"
Jun 25 18:33:00.086517 kubelet[2562]: I0625 18:33:00.086512 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-557fv\" (UniqueName: \"kubernetes.io/projected/4e6e4186-c2c5-4329-ba3d-8490ac16505e-kube-api-access-557fv\") pod \"coredns-7db6d8ff4d-8scvs\" (UID: \"4e6e4186-c2c5-4329-ba3d-8490ac16505e\") " pod="kube-system/coredns-7db6d8ff4d-8scvs"
Jun 25 18:33:00.087115 kubelet[2562]: I0625 18:33:00.086575 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e6e4186-c2c5-4329-ba3d-8490ac16505e-config-volume\") pod \"coredns-7db6d8ff4d-8scvs\" (UID: \"4e6e4186-c2c5-4329-ba3d-8490ac16505e\") " pod="kube-system/coredns-7db6d8ff4d-8scvs"
Jun 25 18:33:00.087115 kubelet[2562]: I0625 18:33:00.086607 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78990d16-9643-4cb5-9ece-c707bc193a17-tigera-ca-bundle\") pod \"calico-kube-controllers-56599b7db9-s2zt2\" (UID: \"78990d16-9643-4cb5-9ece-c707bc193a17\") " pod="calico-system/calico-kube-controllers-56599b7db9-s2zt2"
Jun 25 18:33:00.087115 kubelet[2562]: I0625 18:33:00.086638 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8q6x\" (UniqueName: \"kubernetes.io/projected/78990d16-9643-4cb5-9ece-c707bc193a17-kube-api-access-z8q6x\") pod \"calico-kube-controllers-56599b7db9-s2zt2\" (UID: \"78990d16-9643-4cb5-9ece-c707bc193a17\") " pod="calico-system/calico-kube-controllers-56599b7db9-s2zt2"
Jun 25 18:33:00.087115 kubelet[2562]: I0625 18:33:00.086680 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bf9fcee-35db-45bb-a446-ba52c47672a7-config-volume\") pod \"coredns-7db6d8ff4d-mmzhm\" (UID: \"2bf9fcee-35db-45bb-a446-ba52c47672a7\") " pod="kube-system/coredns-7db6d8ff4d-mmzhm"
Jun 25 18:33:00.254364 containerd[1439]: time="2024-06-25T18:33:00.254197994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-shh4z,Uid:1a67d707-aaf3-4ccc-84ad-f6f0070d2909,Namespace:calico-system,Attempt:0,}"
Jun 25 18:33:00.339372 containerd[1439]: time="2024-06-25T18:33:00.339290840Z" level=info msg="shim disconnected" id=06d21f5346b7e1a6d85b6066a4ddd878640599a4a9353fc5456e1cf2710c00d9 namespace=k8s.io
Jun 25 18:33:00.339372 containerd[1439]: time="2024-06-25T18:33:00.339363638Z" level=warning msg="cleaning up after shim disconnected" id=06d21f5346b7e1a6d85b6066a4ddd878640599a4a9353fc5456e1cf2710c00d9 namespace=k8s.io
Jun 25 18:33:00.339372 containerd[1439]: time="2024-06-25T18:33:00.339376201Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:33:00.386546 sshd[3311]: pam_unix(sshd:session): session closed for user core
Jun 25 18:33:00.391510 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:47224.service: Deactivated successfully.
Jun 25 18:33:00.394204 systemd[1]: session-10.scope: Deactivated successfully.
Jun 25 18:33:00.394976 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit.
Jun 25 18:33:00.396135 systemd-logind[1424]: Removed session 10.
Jun 25 18:33:00.558100 kubelet[2562]: E0625 18:33:00.557950 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:33:00.559908 containerd[1439]: time="2024-06-25T18:33:00.559743692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8scvs,Uid:4e6e4186-c2c5-4329-ba3d-8490ac16505e,Namespace:kube-system,Attempt:0,}"
Jun 25 18:33:00.567159 kubelet[2562]: E0625 18:33:00.567110 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:33:00.568016 containerd[1439]: time="2024-06-25T18:33:00.567971163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mmzhm,Uid:2bf9fcee-35db-45bb-a446-ba52c47672a7,Namespace:kube-system,Attempt:0,}"
Jun 25 18:33:00.574585 containerd[1439]: time="2024-06-25T18:33:00.574512387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56599b7db9-s2zt2,Uid:78990d16-9643-4cb5-9ece-c707bc193a17,Namespace:calico-system,Attempt:0,}"
Jun 25 18:33:00.608317 containerd[1439]: time="2024-06-25T18:33:00.608192966Z" level=error msg="Failed to destroy network for sandbox \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.608951 containerd[1439]: time="2024-06-25T18:33:00.608877602Z" level=error msg="encountered an error cleaning up failed sandbox \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.609012 containerd[1439]: time="2024-06-25T18:33:00.608964695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-shh4z,Uid:1a67d707-aaf3-4ccc-84ad-f6f0070d2909,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.611442 kubelet[2562]: E0625 18:33:00.609686 2562 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.611442 kubelet[2562]: E0625 18:33:00.609770 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-shh4z"
Jun 25 18:33:00.611442 kubelet[2562]: E0625 18:33:00.609793 2562 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-shh4z"
Jun 25 18:33:00.611629 kubelet[2562]: E0625 18:33:00.609843 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-shh4z_calico-system(1a67d707-aaf3-4ccc-84ad-f6f0070d2909)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-shh4z_calico-system(1a67d707-aaf3-4ccc-84ad-f6f0070d2909)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-shh4z" podUID="1a67d707-aaf3-4ccc-84ad-f6f0070d2909"
Jun 25 18:33:00.667120 containerd[1439]: time="2024-06-25T18:33:00.667050676Z" level=error msg="Failed to destroy network for sandbox \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.667637 containerd[1439]: time="2024-06-25T18:33:00.667598354Z" level=error msg="encountered an error cleaning up failed sandbox \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.667718 containerd[1439]: time="2024-06-25T18:33:00.667686310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8scvs,Uid:4e6e4186-c2c5-4329-ba3d-8490ac16505e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.668130 kubelet[2562]: E0625 18:33:00.668032 2562 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.668130 kubelet[2562]: E0625 18:33:00.668119 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8scvs"
Jun 25 18:33:00.668320 kubelet[2562]: E0625 18:33:00.668149 2562 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8scvs"
Jun 25 18:33:00.668320 kubelet[2562]: E0625 18:33:00.668214 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8scvs_kube-system(4e6e4186-c2c5-4329-ba3d-8490ac16505e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8scvs_kube-system(4e6e4186-c2c5-4329-ba3d-8490ac16505e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8scvs" podUID="4e6e4186-c2c5-4329-ba3d-8490ac16505e"
Jun 25 18:33:00.684657 containerd[1439]: time="2024-06-25T18:33:00.684579604Z" level=error msg="Failed to destroy network for sandbox \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.685165 containerd[1439]: time="2024-06-25T18:33:00.685111633Z" level=error msg="encountered an error cleaning up failed sandbox \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.685245 containerd[1439]: time="2024-06-25T18:33:00.685191864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mmzhm,Uid:2bf9fcee-35db-45bb-a446-ba52c47672a7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.685547 kubelet[2562]: E0625 18:33:00.685469 2562 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.685547 kubelet[2562]: E0625 18:33:00.685543 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-mmzhm"
Jun 25 18:33:00.685657 kubelet[2562]: E0625 18:33:00.685566 2562 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-mmzhm"
Jun 25 18:33:00.685657 kubelet[2562]: E0625 18:33:00.685609 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-mmzhm_kube-system(2bf9fcee-35db-45bb-a446-ba52c47672a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-mmzhm_kube-system(2bf9fcee-35db-45bb-a446-ba52c47672a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-mmzhm" podUID="2bf9fcee-35db-45bb-a446-ba52c47672a7"
Jun 25 18:33:00.693689 containerd[1439]: time="2024-06-25T18:33:00.693607397Z" level=error msg="Failed to destroy network for sandbox \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.694208 containerd[1439]: time="2024-06-25T18:33:00.694168761Z" level=error msg="encountered an error cleaning up failed sandbox \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.694277 containerd[1439]: time="2024-06-25T18:33:00.694247850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56599b7db9-s2zt2,Uid:78990d16-9643-4cb5-9ece-c707bc193a17,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.694919 kubelet[2562]: E0625 18:33:00.694540 2562 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:00.694919 kubelet[2562]: E0625 18:33:00.694611 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56599b7db9-s2zt2"
Jun 25 18:33:00.694919 kubelet[2562]: E0625 18:33:00.694633 2562 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56599b7db9-s2zt2"
Jun 25 18:33:00.695047 kubelet[2562]: E0625 18:33:00.694685 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56599b7db9-s2zt2_calico-system(78990d16-9643-4cb5-9ece-c707bc193a17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56599b7db9-s2zt2_calico-system(78990d16-9643-4cb5-9ece-c707bc193a17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56599b7db9-s2zt2" podUID="78990d16-9643-4cb5-9ece-c707bc193a17"
Jun 25 18:33:00.755089 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f-shm.mount: Deactivated successfully.
Jun 25 18:33:01.092581 kubelet[2562]: E0625 18:33:01.092528 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:33:01.093639 containerd[1439]: time="2024-06-25T18:33:01.093600524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\""
Jun 25 18:33:01.094219 kubelet[2562]: I0625 18:33:01.094194 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8"
Jun 25 18:33:01.096384 containerd[1439]: time="2024-06-25T18:33:01.095450928Z" level=info msg="StopPodSandbox for \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\""
Jun 25 18:33:01.096451 containerd[1439]: time="2024-06-25T18:33:01.096339106Z" level=info msg="Ensure that sandbox adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8 in task-service has been cleanup successfully"
Jun 25 18:33:01.098065 kubelet[2562]: I0625 18:33:01.097924 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50"
Jun 25 18:33:01.098670 containerd[1439]: time="2024-06-25T18:33:01.098630800Z" level=info msg="StopPodSandbox for \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\""
Jun 25 18:33:01.098946 containerd[1439]: time="2024-06-25T18:33:01.098916575Z" level=info msg="Ensure that sandbox 839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50 in task-service has been cleanup successfully"
Jun 25 18:33:01.102273 kubelet[2562]: I0625 18:33:01.100644 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586"
Jun 25 18:33:01.102391 containerd[1439]: time="2024-06-25T18:33:01.101070831Z" level=info msg="StopPodSandbox for \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\""
Jun 25 18:33:01.102391 containerd[1439]: time="2024-06-25T18:33:01.101346639Z" level=info msg="Ensure that sandbox 79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586 in task-service has been cleanup successfully"
Jun 25 18:33:01.111267 kubelet[2562]: I0625 18:33:01.110372 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f"
Jun 25 18:33:01.112319 containerd[1439]: time="2024-06-25T18:33:01.112007667Z" level=info msg="StopPodSandbox for \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\""
Jun 25 18:33:01.112599 containerd[1439]: time="2024-06-25T18:33:01.112545808Z" level=info msg="Ensure that sandbox 5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f in task-service has been cleanup successfully"
Jun 25 18:33:01.162439 containerd[1439]: time="2024-06-25T18:33:01.162363803Z" level=error msg="StopPodSandbox for \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\" failed" error="failed to destroy network for sandbox \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:01.162733 kubelet[2562]: E0625 18:33:01.162685 2562 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8"
Jun 25 18:33:01.162855 kubelet[2562]: E0625 18:33:01.162756 2562 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8"}
Jun 25 18:33:01.162855 kubelet[2562]: E0625 18:33:01.162820 2562 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78990d16-9643-4cb5-9ece-c707bc193a17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 25 18:33:01.162983 kubelet[2562]: E0625 18:33:01.162855 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78990d16-9643-4cb5-9ece-c707bc193a17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56599b7db9-s2zt2" podUID="78990d16-9643-4cb5-9ece-c707bc193a17"
Jun 25 18:33:01.164445 containerd[1439]: time="2024-06-25T18:33:01.164367746Z" level=error msg="StopPodSandbox for \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\" failed" error="failed to destroy network for sandbox \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:01.164794 kubelet[2562]: E0625 18:33:01.164683 2562 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50"
Jun 25 18:33:01.164794 kubelet[2562]: E0625 18:33:01.164713 2562 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50"}
Jun 25 18:33:01.164794 kubelet[2562]: E0625 18:33:01.164734 2562 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2bf9fcee-35db-45bb-a446-ba52c47672a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 25 18:33:01.164794 kubelet[2562]: E0625 18:33:01.164758 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2bf9fcee-35db-45bb-a446-ba52c47672a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-mmzhm" podUID="2bf9fcee-35db-45bb-a446-ba52c47672a7"
Jun 25 18:33:01.165000 containerd[1439]: time="2024-06-25T18:33:01.164735056Z" level=error msg="StopPodSandbox for \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\" failed" error="failed to destroy network for sandbox \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:01.165035 kubelet[2562]: E0625 18:33:01.164928 2562 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f"
Jun 25 18:33:01.165035 kubelet[2562]: E0625 18:33:01.164989 2562 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f"}
Jun 25 18:33:01.165035 kubelet[2562]: E0625 18:33:01.165025 2562 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a67d707-aaf3-4ccc-84ad-f6f0070d2909\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 25 18:33:01.165155 kubelet[2562]: E0625 18:33:01.165051 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a67d707-aaf3-4ccc-84ad-f6f0070d2909\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-shh4z" podUID="1a67d707-aaf3-4ccc-84ad-f6f0070d2909"
Jun 25 18:33:01.167947 containerd[1439]: time="2024-06-25T18:33:01.167915637Z" level=error msg="StopPodSandbox for \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\" failed" error="failed to destroy network for sandbox \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 18:33:01.168133 kubelet[2562]: E0625 18:33:01.168108 2562 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586"
Jun 25 18:33:01.168170 kubelet[2562]: E0625 18:33:01.168136 2562 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586"}
Jun 25 18:33:01.168170 kubelet[2562]: E0625 18:33:01.168159 2562 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e6e4186-c2c5-4329-ba3d-8490ac16505e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 25 18:33:01.168248 kubelet[2562]: E0625 18:33:01.168177 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e6e4186-c2c5-4329-ba3d-8490ac16505e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8scvs" podUID="4e6e4186-c2c5-4329-ba3d-8490ac16505e"
Jun 25 18:33:05.398508 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:47234.service - OpenSSH per-connection server daemon (10.0.0.1:47234).
Jun 25 18:33:05.992997 sshd[3590]: Accepted publickey for core from 10.0.0.1 port 47234 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:33:05.995000 sshd[3590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:33:06.092952 systemd-logind[1424]: New session 11 of user core.
Jun 25 18:33:06.100556 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 25 18:33:06.262507 sshd[3590]: pam_unix(sshd:session): session closed for user core
Jun 25 18:33:06.268670 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:47234.service: Deactivated successfully.
Jun 25 18:33:06.271665 systemd[1]: session-11.scope: Deactivated successfully.
Jun 25 18:33:06.272874 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit.
Jun 25 18:33:06.275089 systemd-logind[1424]: Removed session 11.
Jun 25 18:33:07.812149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3891842787.mount: Deactivated successfully.
Jun 25 18:33:08.795412 containerd[1439]: time="2024-06-25T18:33:08.795323194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:33:08.849065 containerd[1439]: time="2024-06-25T18:33:08.848932123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750"
Jun 25 18:33:08.887736 containerd[1439]: time="2024-06-25T18:33:08.887611164Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:33:09.280160 containerd[1439]: time="2024-06-25T18:33:09.280092730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:33:09.281081 containerd[1439]: time="2024-06-25T18:33:09.280987269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 8.187321122s"
Jun 25 18:33:09.281081 containerd[1439]: time="2024-06-25T18:33:09.281029358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\""
Jun 25 18:33:09.291960 containerd[1439]: time="2024-06-25T18:33:09.290779075Z" level=info msg="CreateContainer within sandbox \"a76811caf701841544f7ec50d9d3bd41205b27586c8438d449d6327d1fb1c105\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jun 25 18:33:10.111686 containerd[1439]: time="2024-06-25T18:33:10.111578090Z" level=info msg="CreateContainer within sandbox \"a76811caf701841544f7ec50d9d3bd41205b27586c8438d449d6327d1fb1c105\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2f2bc72987a0de50b91143068297f0e3e104773b6aa5b502ea44e687194d6f06\""
Jun 25 18:33:10.112341 containerd[1439]: time="2024-06-25T18:33:10.112296869Z" level=info msg="StartContainer for \"2f2bc72987a0de50b91143068297f0e3e104773b6aa5b502ea44e687194d6f06\""
Jun 25 18:33:10.186431 systemd[1]: Started cri-containerd-2f2bc72987a0de50b91143068297f0e3e104773b6aa5b502ea44e687194d6f06.scope - libcontainer container 2f2bc72987a0de50b91143068297f0e3e104773b6aa5b502ea44e687194d6f06.
Jun 25 18:33:10.223585 containerd[1439]: time="2024-06-25T18:33:10.223497611Z" level=info msg="StartContainer for \"2f2bc72987a0de50b91143068297f0e3e104773b6aa5b502ea44e687194d6f06\" returns successfully"
Jun 25 18:33:10.305721 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jun 25 18:33:10.305914 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jun 25 18:33:11.135221 kubelet[2562]: E0625 18:33:11.135184 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:33:11.276399 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:35486.service - OpenSSH per-connection server daemon (10.0.0.1:35486).
Jun 25 18:33:11.315267 sshd[3700]: Accepted publickey for core from 10.0.0.1 port 35486 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:33:11.317542 sshd[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:33:11.322572 systemd-logind[1424]: New session 12 of user core.
Jun 25 18:33:11.339546 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 25 18:33:11.463827 sshd[3700]: pam_unix(sshd:session): session closed for user core
Jun 25 18:33:11.468491 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:35486.service: Deactivated successfully.
Jun 25 18:33:11.470662 systemd[1]: session-12.scope: Deactivated successfully.
Jun 25 18:33:11.471490 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit.
Jun 25 18:33:11.472636 systemd-logind[1424]: Removed session 12.
Jun 25 18:33:11.987594 containerd[1439]: time="2024-06-25T18:33:11.987114956Z" level=info msg="StopPodSandbox for \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\""
Jun 25 18:33:11.988158 containerd[1439]: time="2024-06-25T18:33:11.988130893Z" level=info msg="StopPodSandbox for \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\""
Jun 25 18:33:11.989316 containerd[1439]: time="2024-06-25T18:33:11.989090824Z" level=info msg="StopPodSandbox for \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\""
Jun 25 18:33:12.127315 kubelet[2562]: I0625 18:33:12.127221 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rwd7p" podStartSLOduration=3.477844455 podStartE2EDuration="24.12719726s" podCreationTimestamp="2024-06-25 18:32:48 +0000 UTC" firstStartedPulling="2024-06-25 18:32:48.632462157 +0000 UTC m=+21.728894882" lastFinishedPulling="2024-06-25 18:33:09.281814962 +0000 UTC m=+42.378247687" observedRunningTime="2024-06-25 18:33:11.157312978 +0000 UTC m=+44.253745703" watchObservedRunningTime="2024-06-25 18:33:12.12719726 +0000 UTC
m=+45.223629985" Jun 25 18:33:12.138507 kubelet[2562]: E0625 18:33:12.138462 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:12.181686 systemd[1]: run-containerd-runc-k8s.io-2f2bc72987a0de50b91143068297f0e3e104773b6aa5b502ea44e687194d6f06-runc.j4ONkN.mount: Deactivated successfully. Jun 25 18:33:12.442898 containerd[1439]: 2024-06-25 18:33:12.142 [INFO][3794] k8s.go 608: Cleaning up netns ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Jun 25 18:33:12.442898 containerd[1439]: 2024-06-25 18:33:12.143 [INFO][3794] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" iface="eth0" netns="/var/run/netns/cni-da80bd9f-a253-d6d0-bbc4-a91a1a145127" Jun 25 18:33:12.442898 containerd[1439]: 2024-06-25 18:33:12.143 [INFO][3794] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" iface="eth0" netns="/var/run/netns/cni-da80bd9f-a253-d6d0-bbc4-a91a1a145127" Jun 25 18:33:12.442898 containerd[1439]: 2024-06-25 18:33:12.144 [INFO][3794] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" iface="eth0" netns="/var/run/netns/cni-da80bd9f-a253-d6d0-bbc4-a91a1a145127" Jun 25 18:33:12.442898 containerd[1439]: 2024-06-25 18:33:12.144 [INFO][3794] k8s.go 615: Releasing IP address(es) ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Jun 25 18:33:12.442898 containerd[1439]: 2024-06-25 18:33:12.144 [INFO][3794] utils.go 188: Calico CNI releasing IP address ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Jun 25 18:33:12.442898 containerd[1439]: 2024-06-25 18:33:12.424 [INFO][3879] ipam_plugin.go 411: Releasing address using handleID ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" HandleID="k8s-pod-network.5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Workload="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:12.442898 containerd[1439]: 2024-06-25 18:33:12.425 [INFO][3879] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:12.442898 containerd[1439]: 2024-06-25 18:33:12.426 [INFO][3879] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:12.442898 containerd[1439]: 2024-06-25 18:33:12.435 [WARNING][3879] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" HandleID="k8s-pod-network.5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Workload="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:12.442898 containerd[1439]: 2024-06-25 18:33:12.435 [INFO][3879] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" HandleID="k8s-pod-network.5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Workload="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:12.442898 containerd[1439]: 2024-06-25 18:33:12.436 [INFO][3879] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:33:12.442898 containerd[1439]: 2024-06-25 18:33:12.439 [INFO][3794] k8s.go 621: Teardown processing complete. ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Jun 25 18:33:12.445667 containerd[1439]: time="2024-06-25T18:33:12.443207775Z" level=info msg="TearDown network for sandbox \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\" successfully" Jun 25 18:33:12.445667 containerd[1439]: time="2024-06-25T18:33:12.443280000Z" level=info msg="StopPodSandbox for \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\" returns successfully" Jun 25 18:33:12.445667 containerd[1439]: time="2024-06-25T18:33:12.445435976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-shh4z,Uid:1a67d707-aaf3-4ccc-84ad-f6f0070d2909,Namespace:calico-system,Attempt:1,}" Jun 25 18:33:12.448012 systemd[1]: run-netns-cni\x2dda80bd9f\x2da253\x2dd6d0\x2dbbc4\x2da91a1a145127.mount: Deactivated successfully. 
Jun 25 18:33:12.454764 containerd[1439]: 2024-06-25 18:33:12.142 [INFO][3818] k8s.go 608: Cleaning up netns ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Jun 25 18:33:12.454764 containerd[1439]: 2024-06-25 18:33:12.142 [INFO][3818] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" iface="eth0" netns="/var/run/netns/cni-8fba8c46-85ca-aba7-582a-47e9b52bf5b9" Jun 25 18:33:12.454764 containerd[1439]: 2024-06-25 18:33:12.143 [INFO][3818] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" iface="eth0" netns="/var/run/netns/cni-8fba8c46-85ca-aba7-582a-47e9b52bf5b9" Jun 25 18:33:12.454764 containerd[1439]: 2024-06-25 18:33:12.144 [INFO][3818] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" iface="eth0" netns="/var/run/netns/cni-8fba8c46-85ca-aba7-582a-47e9b52bf5b9" Jun 25 18:33:12.454764 containerd[1439]: 2024-06-25 18:33:12.144 [INFO][3818] k8s.go 615: Releasing IP address(es) ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Jun 25 18:33:12.454764 containerd[1439]: 2024-06-25 18:33:12.144 [INFO][3818] utils.go 188: Calico CNI releasing IP address ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Jun 25 18:33:12.454764 containerd[1439]: 2024-06-25 18:33:12.424 [INFO][3880] ipam_plugin.go 411: Releasing address using handleID ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" HandleID="k8s-pod-network.adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Workload="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:12.454764 containerd[1439]: 2024-06-25 18:33:12.425 [INFO][3880] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 18:33:12.454764 containerd[1439]: 2024-06-25 18:33:12.436 [INFO][3880] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:12.454764 containerd[1439]: 2024-06-25 18:33:12.442 [WARNING][3880] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" HandleID="k8s-pod-network.adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Workload="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:12.454764 containerd[1439]: 2024-06-25 18:33:12.443 [INFO][3880] ipam_plugin.go 439: Releasing address using workloadID ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" HandleID="k8s-pod-network.adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Workload="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:12.454764 containerd[1439]: 2024-06-25 18:33:12.445 [INFO][3880] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:33:12.454764 containerd[1439]: 2024-06-25 18:33:12.452 [INFO][3818] k8s.go 621: Teardown processing complete. ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Jun 25 18:33:12.455832 containerd[1439]: time="2024-06-25T18:33:12.455420921Z" level=info msg="TearDown network for sandbox \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\" successfully" Jun 25 18:33:12.455832 containerd[1439]: time="2024-06-25T18:33:12.455456477Z" level=info msg="StopPodSandbox for \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\" returns successfully" Jun 25 18:33:12.459779 systemd[1]: run-netns-cni\x2d8fba8c46\x2d85ca\x2daba7\x2d582a\x2d47e9b52bf5b9.mount: Deactivated successfully. 
Jun 25 18:33:12.459917 containerd[1439]: time="2024-06-25T18:33:12.459865390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56599b7db9-s2zt2,Uid:78990d16-9643-4cb5-9ece-c707bc193a17,Namespace:calico-system,Attempt:1,}" Jun 25 18:33:12.470320 containerd[1439]: 2024-06-25 18:33:12.128 [INFO][3807] k8s.go 608: Cleaning up netns ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Jun 25 18:33:12.470320 containerd[1439]: 2024-06-25 18:33:12.128 [INFO][3807] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" iface="eth0" netns="/var/run/netns/cni-fc992e49-2cb5-bc94-7462-1994493f7ba0" Jun 25 18:33:12.470320 containerd[1439]: 2024-06-25 18:33:12.128 [INFO][3807] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" iface="eth0" netns="/var/run/netns/cni-fc992e49-2cb5-bc94-7462-1994493f7ba0" Jun 25 18:33:12.470320 containerd[1439]: 2024-06-25 18:33:12.131 [INFO][3807] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" iface="eth0" netns="/var/run/netns/cni-fc992e49-2cb5-bc94-7462-1994493f7ba0" Jun 25 18:33:12.470320 containerd[1439]: 2024-06-25 18:33:12.131 [INFO][3807] k8s.go 615: Releasing IP address(es) ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Jun 25 18:33:12.470320 containerd[1439]: 2024-06-25 18:33:12.131 [INFO][3807] utils.go 188: Calico CNI releasing IP address ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Jun 25 18:33:12.470320 containerd[1439]: 2024-06-25 18:33:12.424 [INFO][3877] ipam_plugin.go 411: Releasing address using handleID ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" HandleID="k8s-pod-network.79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Workload="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:12.470320 containerd[1439]: 2024-06-25 18:33:12.428 [INFO][3877] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:12.470320 containerd[1439]: 2024-06-25 18:33:12.446 [INFO][3877] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:12.470320 containerd[1439]: 2024-06-25 18:33:12.456 [WARNING][3877] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" HandleID="k8s-pod-network.79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Workload="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:12.470320 containerd[1439]: 2024-06-25 18:33:12.457 [INFO][3877] ipam_plugin.go 439: Releasing address using workloadID ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" HandleID="k8s-pod-network.79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Workload="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:12.470320 containerd[1439]: 2024-06-25 18:33:12.461 [INFO][3877] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:33:12.470320 containerd[1439]: 2024-06-25 18:33:12.467 [INFO][3807] k8s.go 621: Teardown processing complete. ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Jun 25 18:33:12.470754 containerd[1439]: time="2024-06-25T18:33:12.470564105Z" level=info msg="TearDown network for sandbox \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\" successfully" Jun 25 18:33:12.470754 containerd[1439]: time="2024-06-25T18:33:12.470603038Z" level=info msg="StopPodSandbox for \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\" returns successfully" Jun 25 18:33:12.471076 kubelet[2562]: E0625 18:33:12.471050 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:12.472307 containerd[1439]: time="2024-06-25T18:33:12.471618383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8scvs,Uid:4e6e4186-c2c5-4329-ba3d-8490ac16505e,Namespace:kube-system,Attempt:1,}" Jun 25 18:33:12.473884 systemd[1]: run-netns-cni\x2dfc992e49\x2d2cb5\x2dbc94\x2d7462\x2d1994493f7ba0.mount: Deactivated successfully. 
Jun 25 18:33:12.541904 systemd-networkd[1378]: vxlan.calico: Link UP Jun 25 18:33:12.541914 systemd-networkd[1378]: vxlan.calico: Gained carrier Jun 25 18:33:12.740197 systemd-networkd[1378]: calie6ac95dfb74: Link UP Jun 25 18:33:12.742830 systemd-networkd[1378]: calie6ac95dfb74: Gained carrier Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.638 [INFO][4028] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0 coredns-7db6d8ff4d- kube-system 4e6e4186-c2c5-4329-ba3d-8490ac16505e 825 0 2024-06-25 18:32:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-8scvs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie6ac95dfb74 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8scvs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8scvs-" Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.638 [INFO][4028] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8scvs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.693 [INFO][4070] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" HandleID="k8s-pod-network.95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" Workload="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.704 [INFO][4070] ipam_plugin.go 264: Auto assigning IP 
ContainerID="95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" HandleID="k8s-pod-network.95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" Workload="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000314330), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-8scvs", "timestamp":"2024-06-25 18:33:12.693956045 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.704 [INFO][4070] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.704 [INFO][4070] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.704 [INFO][4070] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.706 [INFO][4070] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" host="localhost" Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.714 [INFO][4070] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.718 [INFO][4070] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.720 [INFO][4070] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.722 [INFO][4070] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:33:12.757133 
containerd[1439]: 2024-06-25 18:33:12.722 [INFO][4070] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" host="localhost" Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.724 [INFO][4070] ipam.go 1685: Creating new handle: k8s-pod-network.95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.727 [INFO][4070] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" host="localhost" Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.733 [INFO][4070] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" host="localhost" Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.733 [INFO][4070] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" host="localhost" Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.733 [INFO][4070] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:33:12.757133 containerd[1439]: 2024-06-25 18:33:12.733 [INFO][4070] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" HandleID="k8s-pod-network.95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" Workload="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:12.758053 containerd[1439]: 2024-06-25 18:33:12.736 [INFO][4028] k8s.go 386: Populated endpoint ContainerID="95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8scvs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4e6e4186-c2c5-4329-ba3d-8490ac16505e", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-8scvs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6ac95dfb74", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:12.758053 containerd[1439]: 2024-06-25 18:33:12.736 [INFO][4028] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8scvs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:12.758053 containerd[1439]: 2024-06-25 18:33:12.736 [INFO][4028] dataplane_linux.go 68: Setting the host side veth name to calie6ac95dfb74 ContainerID="95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8scvs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:12.758053 containerd[1439]: 2024-06-25 18:33:12.743 [INFO][4028] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8scvs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:12.758053 containerd[1439]: 2024-06-25 18:33:12.743 [INFO][4028] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8scvs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0", 
GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4e6e4186-c2c5-4329-ba3d-8490ac16505e", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd", Pod:"coredns-7db6d8ff4d-8scvs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6ac95dfb74", MAC:"0a:f1:1a:3f:6d:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:12.758053 containerd[1439]: 2024-06-25 18:33:12.750 [INFO][4028] k8s.go 500: Wrote updated endpoint to datastore ContainerID="95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8scvs" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:12.775927 systemd-networkd[1378]: 
cali4038b5f7714: Link UP Jun 25 18:33:12.776552 systemd-networkd[1378]: cali4038b5f7714: Gained carrier Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.634 [INFO][4005] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--shh4z-eth0 csi-node-driver- calico-system 1a67d707-aaf3-4ccc-84ad-f6f0070d2909 827 0 2024-06-25 18:32:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-shh4z eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali4038b5f7714 [] []}} ContainerID="afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" Namespace="calico-system" Pod="csi-node-driver-shh4z" WorkloadEndpoint="localhost-k8s-csi--node--driver--shh4z-" Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.634 [INFO][4005] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" Namespace="calico-system" Pod="csi-node-driver-shh4z" WorkloadEndpoint="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.692 [INFO][4068] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" HandleID="k8s-pod-network.afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" Workload="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.703 [INFO][4068] ipam_plugin.go 264: Auto assigning IP ContainerID="afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" 
HandleID="k8s-pod-network.afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" Workload="localhost-k8s-csi--node--driver--shh4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000118c60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-shh4z", "timestamp":"2024-06-25 18:33:12.692937193 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.704 [INFO][4068] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.733 [INFO][4068] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.733 [INFO][4068] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.735 [INFO][4068] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" host="localhost" Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.739 [INFO][4068] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.746 [INFO][4068] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.748 [INFO][4068] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.755 [INFO][4068] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.755 [INFO][4068] ipam.go 1180: Attempting to assign 
1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" host="localhost" Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.757 [INFO][4068] ipam.go 1685: Creating new handle: k8s-pod-network.afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859 Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.760 [INFO][4068] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" host="localhost" Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.769 [INFO][4068] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" host="localhost" Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.769 [INFO][4068] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" host="localhost" Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.769 [INFO][4068] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:33:12.790475 containerd[1439]: 2024-06-25 18:33:12.769 [INFO][4068] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" HandleID="k8s-pod-network.afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" Workload="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:12.791027 containerd[1439]: 2024-06-25 18:33:12.773 [INFO][4005] k8s.go 386: Populated endpoint ContainerID="afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" Namespace="calico-system" Pod="csi-node-driver-shh4z" WorkloadEndpoint="localhost-k8s-csi--node--driver--shh4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--shh4z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a67d707-aaf3-4ccc-84ad-f6f0070d2909", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-shh4z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, 
InterfaceName:"cali4038b5f7714", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:12.791027 containerd[1439]: 2024-06-25 18:33:12.773 [INFO][4005] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" Namespace="calico-system" Pod="csi-node-driver-shh4z" WorkloadEndpoint="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:12.791027 containerd[1439]: 2024-06-25 18:33:12.773 [INFO][4005] dataplane_linux.go 68: Setting the host side veth name to cali4038b5f7714 ContainerID="afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" Namespace="calico-system" Pod="csi-node-driver-shh4z" WorkloadEndpoint="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:12.791027 containerd[1439]: 2024-06-25 18:33:12.776 [INFO][4005] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" Namespace="calico-system" Pod="csi-node-driver-shh4z" WorkloadEndpoint="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:12.791027 containerd[1439]: 2024-06-25 18:33:12.777 [INFO][4005] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" Namespace="calico-system" Pod="csi-node-driver-shh4z" WorkloadEndpoint="localhost-k8s-csi--node--driver--shh4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--shh4z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a67d707-aaf3-4ccc-84ad-f6f0070d2909", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859", Pod:"csi-node-driver-shh4z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4038b5f7714", MAC:"4e:02:6e:33:99:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:12.791027 containerd[1439]: 2024-06-25 18:33:12.785 [INFO][4005] k8s.go 500: Wrote updated endpoint to datastore ContainerID="afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859" Namespace="calico-system" Pod="csi-node-driver-shh4z" WorkloadEndpoint="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:12.805685 containerd[1439]: time="2024-06-25T18:33:12.805213061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:33:12.805685 containerd[1439]: time="2024-06-25T18:33:12.805293281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:12.805685 containerd[1439]: time="2024-06-25T18:33:12.805312026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:33:12.805685 containerd[1439]: time="2024-06-25T18:33:12.805324680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:12.821452 systemd-networkd[1378]: cali6f24501af8a: Link UP Jun 25 18:33:12.821916 systemd-networkd[1378]: cali6f24501af8a: Gained carrier Jun 25 18:33:12.827217 containerd[1439]: time="2024-06-25T18:33:12.826959315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:33:12.827460 containerd[1439]: time="2024-06-25T18:33:12.827422784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:12.828001 containerd[1439]: time="2024-06-25T18:33:12.827963429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:33:12.829417 containerd[1439]: time="2024-06-25T18:33:12.829283646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.661 [INFO][4016] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0 calico-kube-controllers-56599b7db9- calico-system 78990d16-9643-4cb5-9ece-c707bc193a17 826 0 2024-06-25 18:32:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56599b7db9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-56599b7db9-s2zt2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6f24501af8a [] []}} ContainerID="ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" Namespace="calico-system" Pod="calico-kube-controllers-56599b7db9-s2zt2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-" Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.661 [INFO][4016] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" Namespace="calico-system" Pod="calico-kube-controllers-56599b7db9-s2zt2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.706 [INFO][4079] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" HandleID="k8s-pod-network.ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" Workload="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.715 [INFO][4079] ipam_plugin.go 264: Auto assigning IP 
ContainerID="ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" HandleID="k8s-pod-network.ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" Workload="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ff850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-56599b7db9-s2zt2", "timestamp":"2024-06-25 18:33:12.706702091 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.715 [INFO][4079] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.769 [INFO][4079] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.769 [INFO][4079] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.776 [INFO][4079] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" host="localhost" Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.782 [INFO][4079] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.789 [INFO][4079] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.793 [INFO][4079] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.796 [INFO][4079] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.796 [INFO][4079] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" host="localhost" Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.797 [INFO][4079] ipam.go 1685: Creating new handle: k8s-pod-network.ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.801 [INFO][4079] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" host="localhost" Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.805 [INFO][4079] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" host="localhost" Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.805 [INFO][4079] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" host="localhost" Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.806 [INFO][4079] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:33:12.842737 containerd[1439]: 2024-06-25 18:33:12.806 [INFO][4079] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" HandleID="k8s-pod-network.ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" Workload="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:12.843303 containerd[1439]: 2024-06-25 18:33:12.814 [INFO][4016] k8s.go 386: Populated endpoint ContainerID="ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" Namespace="calico-system" Pod="calico-kube-controllers-56599b7db9-s2zt2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0", GenerateName:"calico-kube-controllers-56599b7db9-", Namespace:"calico-system", SelfLink:"", UID:"78990d16-9643-4cb5-9ece-c707bc193a17", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56599b7db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-56599b7db9-s2zt2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f24501af8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:12.843303 containerd[1439]: 2024-06-25 18:33:12.814 [INFO][4016] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" Namespace="calico-system" Pod="calico-kube-controllers-56599b7db9-s2zt2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:12.843303 containerd[1439]: 2024-06-25 18:33:12.815 [INFO][4016] dataplane_linux.go 68: Setting the host side veth name to cali6f24501af8a ContainerID="ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" Namespace="calico-system" Pod="calico-kube-controllers-56599b7db9-s2zt2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:12.843303 containerd[1439]: 2024-06-25 18:33:12.822 [INFO][4016] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" Namespace="calico-system" Pod="calico-kube-controllers-56599b7db9-s2zt2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:12.843303 containerd[1439]: 2024-06-25 18:33:12.824 [INFO][4016] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" Namespace="calico-system" Pod="calico-kube-controllers-56599b7db9-s2zt2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0", 
GenerateName:"calico-kube-controllers-56599b7db9-", Namespace:"calico-system", SelfLink:"", UID:"78990d16-9643-4cb5-9ece-c707bc193a17", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56599b7db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f", Pod:"calico-kube-controllers-56599b7db9-s2zt2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f24501af8a", MAC:"fe:e6:7d:d6:b5:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:12.843303 containerd[1439]: 2024-06-25 18:33:12.837 [INFO][4016] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f" Namespace="calico-system" Pod="calico-kube-controllers-56599b7db9-s2zt2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:12.844470 systemd[1]: Started cri-containerd-95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd.scope - libcontainer container 95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd. 
Jun 25 18:33:12.863539 systemd[1]: Started cri-containerd-afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859.scope - libcontainer container afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859. Jun 25 18:33:12.869945 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:33:12.874069 containerd[1439]: time="2024-06-25T18:33:12.873662607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:33:12.874069 containerd[1439]: time="2024-06-25T18:33:12.873762804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:12.874069 containerd[1439]: time="2024-06-25T18:33:12.873796277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:33:12.874069 containerd[1439]: time="2024-06-25T18:33:12.873814261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:12.887872 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:33:12.903425 systemd[1]: Started cri-containerd-ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f.scope - libcontainer container ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f. 
Jun 25 18:33:12.906889 containerd[1439]: time="2024-06-25T18:33:12.906731535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8scvs,Uid:4e6e4186-c2c5-4329-ba3d-8490ac16505e,Namespace:kube-system,Attempt:1,} returns sandbox id \"95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd\"" Jun 25 18:33:12.907814 kubelet[2562]: E0625 18:33:12.907757 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:12.911883 containerd[1439]: time="2024-06-25T18:33:12.911823569Z" level=info msg="CreateContainer within sandbox \"95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:33:12.923252 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:33:12.925205 containerd[1439]: time="2024-06-25T18:33:12.924996627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-shh4z,Uid:1a67d707-aaf3-4ccc-84ad-f6f0070d2909,Namespace:calico-system,Attempt:1,} returns sandbox id \"afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859\"" Jun 25 18:33:12.927478 containerd[1439]: time="2024-06-25T18:33:12.927437035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 18:33:12.955394 containerd[1439]: time="2024-06-25T18:33:12.955207432Z" level=info msg="CreateContainer within sandbox \"95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6d72610118ed19cbdf3811a309531459413be92020da0aa3d4f12b9b13b8553\"" Jun 25 18:33:12.956038 containerd[1439]: time="2024-06-25T18:33:12.955995531Z" level=info msg="StartContainer for \"a6d72610118ed19cbdf3811a309531459413be92020da0aa3d4f12b9b13b8553\"" Jun 25 18:33:12.965474 containerd[1439]: 
time="2024-06-25T18:33:12.965340696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56599b7db9-s2zt2,Uid:78990d16-9643-4cb5-9ece-c707bc193a17,Namespace:calico-system,Attempt:1,} returns sandbox id \"ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f\"" Jun 25 18:33:13.002480 systemd[1]: Started cri-containerd-a6d72610118ed19cbdf3811a309531459413be92020da0aa3d4f12b9b13b8553.scope - libcontainer container a6d72610118ed19cbdf3811a309531459413be92020da0aa3d4f12b9b13b8553. Jun 25 18:33:13.151691 containerd[1439]: time="2024-06-25T18:33:13.151640826Z" level=info msg="StartContainer for \"a6d72610118ed19cbdf3811a309531459413be92020da0aa3d4f12b9b13b8553\" returns successfully" Jun 25 18:33:13.155819 kubelet[2562]: E0625 18:33:13.155784 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:13.197644 kubelet[2562]: I0625 18:33:13.197553 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8scvs" podStartSLOduration=31.197533723 podStartE2EDuration="31.197533723s" podCreationTimestamp="2024-06-25 18:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:33:13.197422304 +0000 UTC m=+46.293855019" watchObservedRunningTime="2024-06-25 18:33:13.197533723 +0000 UTC m=+46.293966448" Jun 25 18:33:14.160337 kubelet[2562]: E0625 18:33:14.160215 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:14.234395 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL Jun 25 18:33:14.426436 systemd-networkd[1378]: calie6ac95dfb74: Gained IPv6LL Jun 25 18:33:14.746461 systemd-networkd[1378]: cali4038b5f7714: Gained IPv6LL 
Jun 25 18:33:14.874459 systemd-networkd[1378]: cali6f24501af8a: Gained IPv6LL Jun 25 18:33:15.161902 kubelet[2562]: E0625 18:33:15.161781 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:15.988648 containerd[1439]: time="2024-06-25T18:33:15.988588759Z" level=info msg="StopPodSandbox for \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\"" Jun 25 18:33:16.032441 containerd[1439]: time="2024-06-25T18:33:16.032363394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:16.037272 containerd[1439]: time="2024-06-25T18:33:16.036992779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 18:33:16.038180 containerd[1439]: time="2024-06-25T18:33:16.038089125Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:16.041593 containerd[1439]: time="2024-06-25T18:33:16.041545860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:16.042094 containerd[1439]: time="2024-06-25T18:33:16.042050066Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 3.114299082s" Jun 25 18:33:16.042152 containerd[1439]: time="2024-06-25T18:33:16.042091173Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 18:33:16.044016 containerd[1439]: time="2024-06-25T18:33:16.043457075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 18:33:16.052329 containerd[1439]: time="2024-06-25T18:33:16.052265079Z" level=info msg="CreateContainer within sandbox \"afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 18:33:16.099258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257375644.mount: Deactivated successfully. Jun 25 18:33:16.102592 containerd[1439]: 2024-06-25 18:33:16.058 [INFO][4323] k8s.go 608: Cleaning up netns ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Jun 25 18:33:16.102592 containerd[1439]: 2024-06-25 18:33:16.058 [INFO][4323] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" iface="eth0" netns="/var/run/netns/cni-4636fc52-f5d4-6ac8-774d-1853534b7947" Jun 25 18:33:16.102592 containerd[1439]: 2024-06-25 18:33:16.058 [INFO][4323] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" iface="eth0" netns="/var/run/netns/cni-4636fc52-f5d4-6ac8-774d-1853534b7947" Jun 25 18:33:16.102592 containerd[1439]: 2024-06-25 18:33:16.058 [INFO][4323] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" iface="eth0" netns="/var/run/netns/cni-4636fc52-f5d4-6ac8-774d-1853534b7947" Jun 25 18:33:16.102592 containerd[1439]: 2024-06-25 18:33:16.058 [INFO][4323] k8s.go 615: Releasing IP address(es) ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Jun 25 18:33:16.102592 containerd[1439]: 2024-06-25 18:33:16.058 [INFO][4323] utils.go 188: Calico CNI releasing IP address ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Jun 25 18:33:16.102592 containerd[1439]: 2024-06-25 18:33:16.084 [INFO][4331] ipam_plugin.go 411: Releasing address using handleID ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" HandleID="k8s-pod-network.839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Workload="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:16.102592 containerd[1439]: 2024-06-25 18:33:16.085 [INFO][4331] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:16.102592 containerd[1439]: 2024-06-25 18:33:16.085 [INFO][4331] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:16.102592 containerd[1439]: 2024-06-25 18:33:16.091 [WARNING][4331] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" HandleID="k8s-pod-network.839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Workload="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:16.102592 containerd[1439]: 2024-06-25 18:33:16.091 [INFO][4331] ipam_plugin.go 439: Releasing address using workloadID ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" HandleID="k8s-pod-network.839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Workload="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:16.102592 containerd[1439]: 2024-06-25 18:33:16.094 [INFO][4331] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:33:16.102592 containerd[1439]: 2024-06-25 18:33:16.099 [INFO][4323] k8s.go 621: Teardown processing complete. ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Jun 25 18:33:16.103142 containerd[1439]: time="2024-06-25T18:33:16.102739371Z" level=info msg="TearDown network for sandbox \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\" successfully" Jun 25 18:33:16.103142 containerd[1439]: time="2024-06-25T18:33:16.102776701Z" level=info msg="StopPodSandbox for \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\" returns successfully" Jun 25 18:33:16.103215 kubelet[2562]: E0625 18:33:16.103178 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:16.105192 containerd[1439]: time="2024-06-25T18:33:16.104758890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mmzhm,Uid:2bf9fcee-35db-45bb-a446-ba52c47672a7,Namespace:kube-system,Attempt:1,}" Jun 25 18:33:16.106693 systemd[1]: run-netns-cni\x2d4636fc52\x2df5d4\x2d6ac8\x2d774d\x2d1853534b7947.mount: Deactivated successfully. 
Jun 25 18:33:16.108685 containerd[1439]: time="2024-06-25T18:33:16.108556034Z" level=info msg="CreateContainer within sandbox \"afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c495dbf3ade0f90ece6169a48846fe99722419c027a0268caf3f88ab1f384a53\"" Jun 25 18:33:16.109721 containerd[1439]: time="2024-06-25T18:33:16.109688548Z" level=info msg="StartContainer for \"c495dbf3ade0f90ece6169a48846fe99722419c027a0268caf3f88ab1f384a53\"" Jun 25 18:33:16.155520 systemd[1]: Started cri-containerd-c495dbf3ade0f90ece6169a48846fe99722419c027a0268caf3f88ab1f384a53.scope - libcontainer container c495dbf3ade0f90ece6169a48846fe99722419c027a0268caf3f88ab1f384a53. Jun 25 18:33:16.165977 kubelet[2562]: E0625 18:33:16.165942 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:16.238365 containerd[1439]: time="2024-06-25T18:33:16.238308374Z" level=info msg="StartContainer for \"c495dbf3ade0f90ece6169a48846fe99722419c027a0268caf3f88ab1f384a53\" returns successfully" Jun 25 18:33:16.393333 systemd-networkd[1378]: cali72dfb255e2d: Link UP Jun 25 18:33:16.393601 systemd-networkd[1378]: cali72dfb255e2d: Gained carrier Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.252 [INFO][4351] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0 coredns-7db6d8ff4d- kube-system 2bf9fcee-35db-45bb-a446-ba52c47672a7 882 0 2024-06-25 18:32:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-mmzhm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali72dfb255e2d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } 
{metrics TCP 9153 0 }] []}} ContainerID="43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mmzhm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mmzhm-" Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.253 [INFO][4351] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mmzhm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.329 [INFO][4387] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" HandleID="k8s-pod-network.43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" Workload="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.363 [INFO][4387] ipam_plugin.go 264: Auto assigning IP ContainerID="43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" HandleID="k8s-pod-network.43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" Workload="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002de200), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-mmzhm", "timestamp":"2024-06-25 18:33:16.329767665 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.363 [INFO][4387] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.363 [INFO][4387] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.363 [INFO][4387] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.365 [INFO][4387] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" host="localhost" Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.369 [INFO][4387] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.373 [INFO][4387] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.375 [INFO][4387] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.378 [INFO][4387] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.378 [INFO][4387] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" host="localhost" Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.379 [INFO][4387] ipam.go 1685: Creating new handle: k8s-pod-network.43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147 Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.382 [INFO][4387] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" host="localhost" Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.387 [INFO][4387] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" 
host="localhost" Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.388 [INFO][4387] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" host="localhost" Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.388 [INFO][4387] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:33:16.404900 containerd[1439]: 2024-06-25 18:33:16.388 [INFO][4387] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" HandleID="k8s-pod-network.43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" Workload="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:16.428696 containerd[1439]: 2024-06-25 18:33:16.390 [INFO][4351] k8s.go 386: Populated endpoint ContainerID="43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mmzhm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2bf9fcee-35db-45bb-a446-ba52c47672a7", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-mmzhm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72dfb255e2d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:16.428696 containerd[1439]: 2024-06-25 18:33:16.391 [INFO][4351] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mmzhm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:16.428696 containerd[1439]: 2024-06-25 18:33:16.391 [INFO][4351] dataplane_linux.go 68: Setting the host side veth name to cali72dfb255e2d ContainerID="43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mmzhm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:16.428696 containerd[1439]: 2024-06-25 18:33:16.393 [INFO][4351] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mmzhm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:16.428696 containerd[1439]: 2024-06-25 18:33:16.393 [INFO][4351] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mmzhm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2bf9fcee-35db-45bb-a446-ba52c47672a7", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147", Pod:"coredns-7db6d8ff4d-mmzhm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72dfb255e2d", MAC:"12:2e:45:5d:0c:bc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:16.428696 containerd[1439]: 2024-06-25 18:33:16.401 [INFO][4351] k8s.go 500: Wrote updated endpoint to datastore ContainerID="43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mmzhm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:16.477061 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:58580.service - OpenSSH per-connection server daemon (10.0.0.1:58580). Jun 25 18:33:16.514382 containerd[1439]: time="2024-06-25T18:33:16.513739252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:33:16.514382 containerd[1439]: time="2024-06-25T18:33:16.514341663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:16.514382 containerd[1439]: time="2024-06-25T18:33:16.514361370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:33:16.514382 containerd[1439]: time="2024-06-25T18:33:16.514373593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:16.515818 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 58580 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:33:16.517859 sshd[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:16.526448 systemd-logind[1424]: New session 13 of user core. Jun 25 18:33:16.535423 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:33:16.539471 systemd[1]: Started cri-containerd-43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147.scope - libcontainer container 43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147. 
Jun 25 18:33:16.554903 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:33:16.579522 containerd[1439]: time="2024-06-25T18:33:16.579439830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mmzhm,Uid:2bf9fcee-35db-45bb-a446-ba52c47672a7,Namespace:kube-system,Attempt:1,} returns sandbox id \"43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147\"" Jun 25 18:33:16.580943 kubelet[2562]: E0625 18:33:16.580346 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:16.583710 containerd[1439]: time="2024-06-25T18:33:16.583679804Z" level=info msg="CreateContainer within sandbox \"43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:33:16.723933 sshd[4411]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:16.734037 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:58580.service: Deactivated successfully. Jun 25 18:33:16.736491 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:33:16.737327 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:33:16.750643 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:58592.service - OpenSSH per-connection server daemon (10.0.0.1:58592). Jun 25 18:33:16.751380 systemd-logind[1424]: Removed session 13. Jun 25 18:33:16.780357 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 58592 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:33:16.782255 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:16.786613 systemd-logind[1424]: New session 14 of user core. Jun 25 18:33:16.798454 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 18:33:16.903427 containerd[1439]: time="2024-06-25T18:33:16.903348815Z" level=info msg="CreateContainer within sandbox \"43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70c2aeecbf1d9da60e04c9005a19b1b8ecfa7a20e4a76d6d27f7d406d79ccede\"" Jun 25 18:33:16.905296 containerd[1439]: time="2024-06-25T18:33:16.904208017Z" level=info msg="StartContainer for \"70c2aeecbf1d9da60e04c9005a19b1b8ecfa7a20e4a76d6d27f7d406d79ccede\"" Jun 25 18:33:16.937531 systemd[1]: Started cri-containerd-70c2aeecbf1d9da60e04c9005a19b1b8ecfa7a20e4a76d6d27f7d406d79ccede.scope - libcontainer container 70c2aeecbf1d9da60e04c9005a19b1b8ecfa7a20e4a76d6d27f7d406d79ccede. Jun 25 18:33:16.988952 sshd[4467]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:17.000428 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:58592.service: Deactivated successfully. Jun 25 18:33:17.002525 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:33:17.003262 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:33:17.014642 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:58604.service - OpenSSH per-connection server daemon (10.0.0.1:58604). Jun 25 18:33:17.015393 systemd-logind[1424]: Removed session 14. Jun 25 18:33:17.043700 containerd[1439]: time="2024-06-25T18:33:17.043631909Z" level=info msg="StartContainer for \"70c2aeecbf1d9da60e04c9005a19b1b8ecfa7a20e4a76d6d27f7d406d79ccede\" returns successfully" Jun 25 18:33:17.055374 sshd[4516]: Accepted publickey for core from 10.0.0.1 port 58604 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:33:17.057413 sshd[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:17.064613 systemd-logind[1424]: New session 15 of user core. Jun 25 18:33:17.071442 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 18:33:17.174476 kubelet[2562]: E0625 18:33:17.174421 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:17.187553 kubelet[2562]: I0625 18:33:17.187465 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mmzhm" podStartSLOduration=35.187163811 podStartE2EDuration="35.187163811s" podCreationTimestamp="2024-06-25 18:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:33:17.186398927 +0000 UTC m=+50.282831672" watchObservedRunningTime="2024-06-25 18:33:17.187163811 +0000 UTC m=+50.283596536" Jun 25 18:33:17.208119 sshd[4516]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:17.214176 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:58604.service: Deactivated successfully. Jun 25 18:33:17.216554 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:33:17.218563 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:33:17.219703 systemd-logind[1424]: Removed session 15. Jun 25 18:33:17.562536 systemd-networkd[1378]: cali72dfb255e2d: Gained IPv6LL Jun 25 18:33:18.174807 kubelet[2562]: E0625 18:33:18.174769 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:19.021103 systemd[1]: run-containerd-runc-k8s.io-2f2bc72987a0de50b91143068297f0e3e104773b6aa5b502ea44e687194d6f06-runc.WSk1ot.mount: Deactivated successfully. 
Jun 25 18:33:19.177308 kubelet[2562]: E0625 18:33:19.177270 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:19.192193 kubelet[2562]: E0625 18:33:19.192155 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:19.197523 containerd[1439]: time="2024-06-25T18:33:19.197361234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:19.199522 containerd[1439]: time="2024-06-25T18:33:19.199396863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 18:33:19.201667 containerd[1439]: time="2024-06-25T18:33:19.201596229Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:19.207275 containerd[1439]: time="2024-06-25T18:33:19.206434164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:19.207275 containerd[1439]: time="2024-06-25T18:33:19.207006707Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.163520177s" Jun 25 18:33:19.207275 containerd[1439]: 
time="2024-06-25T18:33:19.207052303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 18:33:19.210618 containerd[1439]: time="2024-06-25T18:33:19.210578928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 18:33:19.223644 containerd[1439]: time="2024-06-25T18:33:19.223503473Z" level=info msg="CreateContainer within sandbox \"ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 18:33:19.250197 containerd[1439]: time="2024-06-25T18:33:19.250055861Z" level=info msg="CreateContainer within sandbox \"ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bbe848dbf6c1ffebf8375f1e471bb24e69a7f564d05083da5b394c644ca0289b\"" Jun 25 18:33:19.250676 containerd[1439]: time="2024-06-25T18:33:19.250657601Z" level=info msg="StartContainer for \"bbe848dbf6c1ffebf8375f1e471bb24e69a7f564d05083da5b394c644ca0289b\"" Jun 25 18:33:19.286410 systemd[1]: Started cri-containerd-bbe848dbf6c1ffebf8375f1e471bb24e69a7f564d05083da5b394c644ca0289b.scope - libcontainer container bbe848dbf6c1ffebf8375f1e471bb24e69a7f564d05083da5b394c644ca0289b. 
Jun 25 18:33:19.335668 containerd[1439]: time="2024-06-25T18:33:19.334690975Z" level=info msg="StartContainer for \"bbe848dbf6c1ffebf8375f1e471bb24e69a7f564d05083da5b394c644ca0289b\" returns successfully" Jun 25 18:33:20.242185 kubelet[2562]: I0625 18:33:20.242119 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-56599b7db9-s2zt2" podStartSLOduration=25.999315161 podStartE2EDuration="32.242099219s" podCreationTimestamp="2024-06-25 18:32:48 +0000 UTC" firstStartedPulling="2024-06-25 18:33:12.96672335 +0000 UTC m=+46.063156075" lastFinishedPulling="2024-06-25 18:33:19.209507408 +0000 UTC m=+52.305940133" observedRunningTime="2024-06-25 18:33:20.193972342 +0000 UTC m=+53.290405067" watchObservedRunningTime="2024-06-25 18:33:20.242099219 +0000 UTC m=+53.338531944" Jun 25 18:33:20.926325 containerd[1439]: time="2024-06-25T18:33:20.926214974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:20.927305 containerd[1439]: time="2024-06-25T18:33:20.927250516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 18:33:20.928777 containerd[1439]: time="2024-06-25T18:33:20.928698753Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:20.931917 containerd[1439]: time="2024-06-25T18:33:20.931854092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:20.940438 containerd[1439]: time="2024-06-25T18:33:20.940360338Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" 
with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.729730384s" Jun 25 18:33:20.940438 containerd[1439]: time="2024-06-25T18:33:20.940435830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 18:33:20.943248 containerd[1439]: time="2024-06-25T18:33:20.943086522Z" level=info msg="CreateContainer within sandbox \"afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 18:33:20.967389 containerd[1439]: time="2024-06-25T18:33:20.967310571Z" level=info msg="CreateContainer within sandbox \"afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"924c17d25b6054014af00b43233a4c1afa467185a5c6453a52f4fadf6aed6400\"" Jun 25 18:33:20.967990 containerd[1439]: time="2024-06-25T18:33:20.967948768Z" level=info msg="StartContainer for \"924c17d25b6054014af00b43233a4c1afa467185a5c6453a52f4fadf6aed6400\"" Jun 25 18:33:21.009582 systemd[1]: Started cri-containerd-924c17d25b6054014af00b43233a4c1afa467185a5c6453a52f4fadf6aed6400.scope - libcontainer container 924c17d25b6054014af00b43233a4c1afa467185a5c6453a52f4fadf6aed6400. 
Jun 25 18:33:21.053809 containerd[1439]: time="2024-06-25T18:33:21.053754330Z" level=info msg="StartContainer for \"924c17d25b6054014af00b43233a4c1afa467185a5c6453a52f4fadf6aed6400\" returns successfully" Jun 25 18:33:21.215654 kubelet[2562]: I0625 18:33:21.215422 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-shh4z" podStartSLOduration=25.201237359 podStartE2EDuration="33.215400608s" podCreationTimestamp="2024-06-25 18:32:48 +0000 UTC" firstStartedPulling="2024-06-25 18:33:12.927102728 +0000 UTC m=+46.023535453" lastFinishedPulling="2024-06-25 18:33:20.941265977 +0000 UTC m=+54.037698702" observedRunningTime="2024-06-25 18:33:21.203275986 +0000 UTC m=+54.299708711" watchObservedRunningTime="2024-06-25 18:33:21.215400608 +0000 UTC m=+54.311833333" Jun 25 18:33:22.059551 kubelet[2562]: I0625 18:33:22.059511 2562 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 18:33:22.059551 kubelet[2562]: I0625 18:33:22.059551 2562 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 18:33:22.221485 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:58608.service - OpenSSH per-connection server daemon (10.0.0.1:58608). Jun 25 18:33:22.279432 sshd[4681]: Accepted publickey for core from 10.0.0.1 port 58608 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:33:22.281804 sshd[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:22.288354 systemd-logind[1424]: New session 16 of user core. Jun 25 18:33:22.296637 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 25 18:33:22.436903 sshd[4681]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:22.442139 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:58608.service: Deactivated successfully. Jun 25 18:33:22.444895 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:33:22.445884 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:33:22.446888 systemd-logind[1424]: Removed session 16. Jun 25 18:33:26.972254 containerd[1439]: time="2024-06-25T18:33:26.972201742Z" level=info msg="StopPodSandbox for \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\"" Jun 25 18:33:27.077483 containerd[1439]: 2024-06-25 18:33:27.029 [WARNING][4723] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--shh4z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a67d707-aaf3-4ccc-84ad-f6f0070d2909", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859", Pod:"csi-node-driver-shh4z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4038b5f7714", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:27.077483 containerd[1439]: 2024-06-25 18:33:27.029 [INFO][4723] k8s.go 608: Cleaning up netns ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Jun 25 18:33:27.077483 containerd[1439]: 2024-06-25 18:33:27.029 [INFO][4723] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" iface="eth0" netns="" Jun 25 18:33:27.077483 containerd[1439]: 2024-06-25 18:33:27.029 [INFO][4723] k8s.go 615: Releasing IP address(es) ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Jun 25 18:33:27.077483 containerd[1439]: 2024-06-25 18:33:27.029 [INFO][4723] utils.go 188: Calico CNI releasing IP address ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Jun 25 18:33:27.077483 containerd[1439]: 2024-06-25 18:33:27.066 [INFO][4733] ipam_plugin.go 411: Releasing address using handleID ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" HandleID="k8s-pod-network.5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Workload="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:27.077483 containerd[1439]: 2024-06-25 18:33:27.066 [INFO][4733] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:27.077483 containerd[1439]: 2024-06-25 18:33:27.066 [INFO][4733] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:33:27.077483 containerd[1439]: 2024-06-25 18:33:27.071 [WARNING][4733] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" HandleID="k8s-pod-network.5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Workload="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:27.077483 containerd[1439]: 2024-06-25 18:33:27.071 [INFO][4733] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" HandleID="k8s-pod-network.5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Workload="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:27.077483 containerd[1439]: 2024-06-25 18:33:27.072 [INFO][4733] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:33:27.077483 containerd[1439]: 2024-06-25 18:33:27.074 [INFO][4723] k8s.go 621: Teardown processing complete. ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Jun 25 18:33:27.078023 containerd[1439]: time="2024-06-25T18:33:27.077552262Z" level=info msg="TearDown network for sandbox \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\" successfully" Jun 25 18:33:27.078023 containerd[1439]: time="2024-06-25T18:33:27.077588482Z" level=info msg="StopPodSandbox for \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\" returns successfully" Jun 25 18:33:27.084155 containerd[1439]: time="2024-06-25T18:33:27.084097609Z" level=info msg="RemovePodSandbox for \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\"" Jun 25 18:33:27.087141 containerd[1439]: time="2024-06-25T18:33:27.087116215Z" level=info msg="Forcibly stopping sandbox \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\"" Jun 25 18:33:27.160471 containerd[1439]: 2024-06-25 18:33:27.128 [WARNING][4755] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--shh4z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a67d707-aaf3-4ccc-84ad-f6f0070d2909", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"afd026290fd82858335324c66e20af864480185f9c59153eff5e536a8acca859", Pod:"csi-node-driver-shh4z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4038b5f7714", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:27.160471 containerd[1439]: 2024-06-25 18:33:27.129 [INFO][4755] k8s.go 608: Cleaning up netns ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Jun 25 18:33:27.160471 containerd[1439]: 2024-06-25 18:33:27.129 [INFO][4755] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" iface="eth0" netns="" Jun 25 18:33:27.160471 containerd[1439]: 2024-06-25 18:33:27.129 [INFO][4755] k8s.go 615: Releasing IP address(es) ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Jun 25 18:33:27.160471 containerd[1439]: 2024-06-25 18:33:27.129 [INFO][4755] utils.go 188: Calico CNI releasing IP address ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Jun 25 18:33:27.160471 containerd[1439]: 2024-06-25 18:33:27.148 [INFO][4762] ipam_plugin.go 411: Releasing address using handleID ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" HandleID="k8s-pod-network.5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Workload="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:27.160471 containerd[1439]: 2024-06-25 18:33:27.149 [INFO][4762] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:27.160471 containerd[1439]: 2024-06-25 18:33:27.149 [INFO][4762] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:27.160471 containerd[1439]: 2024-06-25 18:33:27.154 [WARNING][4762] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" HandleID="k8s-pod-network.5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Workload="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:27.160471 containerd[1439]: 2024-06-25 18:33:27.154 [INFO][4762] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" HandleID="k8s-pod-network.5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Workload="localhost-k8s-csi--node--driver--shh4z-eth0" Jun 25 18:33:27.160471 containerd[1439]: 2024-06-25 18:33:27.155 [INFO][4762] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:33:27.160471 containerd[1439]: 2024-06-25 18:33:27.158 [INFO][4755] k8s.go 621: Teardown processing complete. ContainerID="5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f" Jun 25 18:33:27.161124 containerd[1439]: time="2024-06-25T18:33:27.160508246Z" level=info msg="TearDown network for sandbox \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\" successfully" Jun 25 18:33:27.234094 containerd[1439]: time="2024-06-25T18:33:27.233952487Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:33:27.234094 containerd[1439]: time="2024-06-25T18:33:27.234024455Z" level=info msg="RemovePodSandbox \"5d54ac865ece143a01e29c201f5ecf27135193e33af81118784e8b540f99dd0f\" returns successfully" Jun 25 18:33:27.234871 containerd[1439]: time="2024-06-25T18:33:27.234612527Z" level=info msg="StopPodSandbox for \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\"" Jun 25 18:33:27.310086 containerd[1439]: 2024-06-25 18:33:27.269 [WARNING][4784] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4e6e4186-c2c5-4329-ba3d-8490ac16505e", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd", Pod:"coredns-7db6d8ff4d-8scvs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6ac95dfb74", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:27.310086 containerd[1439]: 2024-06-25 18:33:27.269 [INFO][4784] k8s.go 608: Cleaning up netns 
ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Jun 25 18:33:27.310086 containerd[1439]: 2024-06-25 18:33:27.269 [INFO][4784] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" iface="eth0" netns="" Jun 25 18:33:27.310086 containerd[1439]: 2024-06-25 18:33:27.269 [INFO][4784] k8s.go 615: Releasing IP address(es) ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Jun 25 18:33:27.310086 containerd[1439]: 2024-06-25 18:33:27.270 [INFO][4784] utils.go 188: Calico CNI releasing IP address ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Jun 25 18:33:27.310086 containerd[1439]: 2024-06-25 18:33:27.296 [INFO][4792] ipam_plugin.go 411: Releasing address using handleID ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" HandleID="k8s-pod-network.79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Workload="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:27.310086 containerd[1439]: 2024-06-25 18:33:27.296 [INFO][4792] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:27.310086 containerd[1439]: 2024-06-25 18:33:27.296 [INFO][4792] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:27.310086 containerd[1439]: 2024-06-25 18:33:27.302 [WARNING][4792] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" HandleID="k8s-pod-network.79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Workload="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:27.310086 containerd[1439]: 2024-06-25 18:33:27.302 [INFO][4792] ipam_plugin.go 439: Releasing address using workloadID ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" HandleID="k8s-pod-network.79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Workload="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:27.310086 containerd[1439]: 2024-06-25 18:33:27.304 [INFO][4792] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:33:27.310086 containerd[1439]: 2024-06-25 18:33:27.307 [INFO][4784] k8s.go 621: Teardown processing complete. ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Jun 25 18:33:27.310674 containerd[1439]: time="2024-06-25T18:33:27.310152448Z" level=info msg="TearDown network for sandbox \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\" successfully" Jun 25 18:33:27.310674 containerd[1439]: time="2024-06-25T18:33:27.310181313Z" level=info msg="StopPodSandbox for \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\" returns successfully" Jun 25 18:33:27.310846 containerd[1439]: time="2024-06-25T18:33:27.310785064Z" level=info msg="RemovePodSandbox for \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\"" Jun 25 18:33:27.310846 containerd[1439]: time="2024-06-25T18:33:27.310834089Z" level=info msg="Forcibly stopping sandbox \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\"" Jun 25 18:33:27.407197 containerd[1439]: 2024-06-25 18:33:27.359 [WARNING][4815] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4e6e4186-c2c5-4329-ba3d-8490ac16505e", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95443b6fb1558fba8e15ccb3d1f17b3f175a70533f0e2728954fd9b790e9f0cd", Pod:"coredns-7db6d8ff4d-8scvs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6ac95dfb74", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:27.407197 containerd[1439]: 2024-06-25 18:33:27.359 [INFO][4815] k8s.go 608: Cleaning up netns 
ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Jun 25 18:33:27.407197 containerd[1439]: 2024-06-25 18:33:27.359 [INFO][4815] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" iface="eth0" netns="" Jun 25 18:33:27.407197 containerd[1439]: 2024-06-25 18:33:27.359 [INFO][4815] k8s.go 615: Releasing IP address(es) ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Jun 25 18:33:27.407197 containerd[1439]: 2024-06-25 18:33:27.359 [INFO][4815] utils.go 188: Calico CNI releasing IP address ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Jun 25 18:33:27.407197 containerd[1439]: 2024-06-25 18:33:27.392 [INFO][4823] ipam_plugin.go 411: Releasing address using handleID ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" HandleID="k8s-pod-network.79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Workload="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:27.407197 containerd[1439]: 2024-06-25 18:33:27.392 [INFO][4823] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:27.407197 containerd[1439]: 2024-06-25 18:33:27.392 [INFO][4823] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:27.407197 containerd[1439]: 2024-06-25 18:33:27.398 [WARNING][4823] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" HandleID="k8s-pod-network.79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Workload="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:27.407197 containerd[1439]: 2024-06-25 18:33:27.398 [INFO][4823] ipam_plugin.go 439: Releasing address using workloadID ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" HandleID="k8s-pod-network.79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Workload="localhost-k8s-coredns--7db6d8ff4d--8scvs-eth0" Jun 25 18:33:27.407197 containerd[1439]: 2024-06-25 18:33:27.400 [INFO][4823] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:33:27.407197 containerd[1439]: 2024-06-25 18:33:27.403 [INFO][4815] k8s.go 621: Teardown processing complete. ContainerID="79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586" Jun 25 18:33:27.407828 containerd[1439]: time="2024-06-25T18:33:27.407286538Z" level=info msg="TearDown network for sandbox \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\" successfully" Jun 25 18:33:27.412462 containerd[1439]: time="2024-06-25T18:33:27.412394731Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:33:27.412607 containerd[1439]: time="2024-06-25T18:33:27.412492309Z" level=info msg="RemovePodSandbox \"79fb7b30a33fdddb3634627ce27c773d796fcba59a6796833fbc0ad620bb4586\" returns successfully" Jun 25 18:33:27.413062 containerd[1439]: time="2024-06-25T18:33:27.413032819Z" level=info msg="StopPodSandbox for \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\"" Jun 25 18:33:27.456695 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:48592.service - OpenSSH per-connection server daemon (10.0.0.1:48592). 
Jun 25 18:33:27.489963 containerd[1439]: 2024-06-25 18:33:27.451 [WARNING][4846] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2bf9fcee-35db-45bb-a446-ba52c47672a7", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147", Pod:"coredns-7db6d8ff4d-mmzhm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72dfb255e2d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, 
HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:27.489963 containerd[1439]: 2024-06-25 18:33:27.451 [INFO][4846] k8s.go 608: Cleaning up netns ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Jun 25 18:33:27.489963 containerd[1439]: 2024-06-25 18:33:27.452 [INFO][4846] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" iface="eth0" netns="" Jun 25 18:33:27.489963 containerd[1439]: 2024-06-25 18:33:27.452 [INFO][4846] k8s.go 615: Releasing IP address(es) ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Jun 25 18:33:27.489963 containerd[1439]: 2024-06-25 18:33:27.452 [INFO][4846] utils.go 188: Calico CNI releasing IP address ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Jun 25 18:33:27.489963 containerd[1439]: 2024-06-25 18:33:27.476 [INFO][4856] ipam_plugin.go 411: Releasing address using handleID ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" HandleID="k8s-pod-network.839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Workload="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:27.489963 containerd[1439]: 2024-06-25 18:33:27.476 [INFO][4856] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:27.489963 containerd[1439]: 2024-06-25 18:33:27.476 [INFO][4856] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:27.489963 containerd[1439]: 2024-06-25 18:33:27.482 [WARNING][4856] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" HandleID="k8s-pod-network.839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Workload="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:27.489963 containerd[1439]: 2024-06-25 18:33:27.482 [INFO][4856] ipam_plugin.go 439: Releasing address using workloadID ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" HandleID="k8s-pod-network.839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Workload="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:27.489963 containerd[1439]: 2024-06-25 18:33:27.483 [INFO][4856] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:33:27.489963 containerd[1439]: 2024-06-25 18:33:27.486 [INFO][4846] k8s.go 621: Teardown processing complete. ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Jun 25 18:33:27.489963 containerd[1439]: time="2024-06-25T18:33:27.489930161Z" level=info msg="TearDown network for sandbox \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\" successfully" Jun 25 18:33:27.490485 containerd[1439]: time="2024-06-25T18:33:27.489966851Z" level=info msg="StopPodSandbox for \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\" returns successfully" Jun 25 18:33:27.490703 containerd[1439]: time="2024-06-25T18:33:27.490680143Z" level=info msg="RemovePodSandbox for \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\"" Jun 25 18:33:27.490754 containerd[1439]: time="2024-06-25T18:33:27.490720581Z" level=info msg="Forcibly stopping sandbox \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\"" Jun 25 18:33:27.496396 sshd[4855]: Accepted publickey for core from 10.0.0.1 port 48592 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:33:27.498380 sshd[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:27.503256 
systemd-logind[1424]: New session 17 of user core. Jun 25 18:33:27.510525 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:33:27.572639 containerd[1439]: 2024-06-25 18:33:27.533 [WARNING][4880] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2bf9fcee-35db-45bb-a446-ba52c47672a7", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43353582bb677d900feef37596ac0a1675624b8d8aa224761d24c7fe2e90e147", Pod:"coredns-7db6d8ff4d-mmzhm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72dfb255e2d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:27.572639 containerd[1439]: 2024-06-25 18:33:27.533 [INFO][4880] k8s.go 608: Cleaning up netns ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Jun 25 18:33:27.572639 containerd[1439]: 2024-06-25 18:33:27.533 [INFO][4880] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" iface="eth0" netns="" Jun 25 18:33:27.572639 containerd[1439]: 2024-06-25 18:33:27.533 [INFO][4880] k8s.go 615: Releasing IP address(es) ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Jun 25 18:33:27.572639 containerd[1439]: 2024-06-25 18:33:27.533 [INFO][4880] utils.go 188: Calico CNI releasing IP address ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Jun 25 18:33:27.572639 containerd[1439]: 2024-06-25 18:33:27.556 [INFO][4888] ipam_plugin.go 411: Releasing address using handleID ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" HandleID="k8s-pod-network.839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Workload="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:27.572639 containerd[1439]: 2024-06-25 18:33:27.556 [INFO][4888] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:27.572639 containerd[1439]: 2024-06-25 18:33:27.556 [INFO][4888] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:27.572639 containerd[1439]: 2024-06-25 18:33:27.564 [WARNING][4888] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" HandleID="k8s-pod-network.839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Workload="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:27.572639 containerd[1439]: 2024-06-25 18:33:27.564 [INFO][4888] ipam_plugin.go 439: Releasing address using workloadID ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" HandleID="k8s-pod-network.839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Workload="localhost-k8s-coredns--7db6d8ff4d--mmzhm-eth0" Jun 25 18:33:27.572639 containerd[1439]: 2024-06-25 18:33:27.565 [INFO][4888] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:33:27.572639 containerd[1439]: 2024-06-25 18:33:27.568 [INFO][4880] k8s.go 621: Teardown processing complete. ContainerID="839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50" Jun 25 18:33:27.573149 containerd[1439]: time="2024-06-25T18:33:27.572682823Z" level=info msg="TearDown network for sandbox \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\" successfully" Jun 25 18:33:27.634353 sshd[4855]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:27.638599 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:48592.service: Deactivated successfully. Jun 25 18:33:27.641182 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:33:27.641900 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:33:27.642781 systemd-logind[1424]: Removed session 17. Jun 25 18:33:27.709859 containerd[1439]: time="2024-06-25T18:33:27.709778374Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:33:27.710059 containerd[1439]: time="2024-06-25T18:33:27.709889978Z" level=info msg="RemovePodSandbox \"839471bac3a63901d0d7ee1a1a5d75ba0f7003681c490bdf22387b82e6ecaf50\" returns successfully" Jun 25 18:33:27.710473 containerd[1439]: time="2024-06-25T18:33:27.710433683Z" level=info msg="StopPodSandbox for \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\"" Jun 25 18:33:27.783010 containerd[1439]: 2024-06-25 18:33:27.741 [WARNING][4921] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0", GenerateName:"calico-kube-controllers-56599b7db9-", Namespace:"calico-system", SelfLink:"", UID:"78990d16-9643-4cb5-9ece-c707bc193a17", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56599b7db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f", Pod:"calico-kube-controllers-56599b7db9-s2zt2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f24501af8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:27.783010 containerd[1439]: 2024-06-25 18:33:27.742 [INFO][4921] k8s.go 608: Cleaning up netns ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Jun 25 18:33:27.783010 containerd[1439]: 2024-06-25 18:33:27.742 [INFO][4921] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" iface="eth0" netns="" Jun 25 18:33:27.783010 containerd[1439]: 2024-06-25 18:33:27.742 [INFO][4921] k8s.go 615: Releasing IP address(es) ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Jun 25 18:33:27.783010 containerd[1439]: 2024-06-25 18:33:27.742 [INFO][4921] utils.go 188: Calico CNI releasing IP address ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Jun 25 18:33:27.783010 containerd[1439]: 2024-06-25 18:33:27.769 [INFO][4928] ipam_plugin.go 411: Releasing address using handleID ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" HandleID="k8s-pod-network.adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Workload="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:27.783010 containerd[1439]: 2024-06-25 18:33:27.769 [INFO][4928] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:27.783010 containerd[1439]: 2024-06-25 18:33:27.769 [INFO][4928] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:27.783010 containerd[1439]: 2024-06-25 18:33:27.775 [WARNING][4928] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" HandleID="k8s-pod-network.adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Workload="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:27.783010 containerd[1439]: 2024-06-25 18:33:27.775 [INFO][4928] ipam_plugin.go 439: Releasing address using workloadID ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" HandleID="k8s-pod-network.adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Workload="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:27.783010 containerd[1439]: 2024-06-25 18:33:27.777 [INFO][4928] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:33:27.783010 containerd[1439]: 2024-06-25 18:33:27.780 [INFO][4921] k8s.go 621: Teardown processing complete. ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Jun 25 18:33:27.783528 containerd[1439]: time="2024-06-25T18:33:27.782999766Z" level=info msg="TearDown network for sandbox \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\" successfully" Jun 25 18:33:27.783528 containerd[1439]: time="2024-06-25T18:33:27.783035615Z" level=info msg="StopPodSandbox for \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\" returns successfully" Jun 25 18:33:27.783839 containerd[1439]: time="2024-06-25T18:33:27.783807870Z" level=info msg="RemovePodSandbox for \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\"" Jun 25 18:33:27.783936 containerd[1439]: time="2024-06-25T18:33:27.783854339Z" level=info msg="Forcibly stopping sandbox \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\"" Jun 25 18:33:27.863510 containerd[1439]: 2024-06-25 18:33:27.824 [WARNING][4950] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0", GenerateName:"calico-kube-controllers-56599b7db9-", Namespace:"calico-system", SelfLink:"", UID:"78990d16-9643-4cb5-9ece-c707bc193a17", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56599b7db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae6c4ecee82536646ea634b63c568578da722f7c017ad07fb248a98b28667d0f", Pod:"calico-kube-controllers-56599b7db9-s2zt2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f24501af8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:27.863510 containerd[1439]: 2024-06-25 18:33:27.825 [INFO][4950] k8s.go 608: Cleaning up netns ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Jun 25 18:33:27.863510 containerd[1439]: 2024-06-25 18:33:27.825 [INFO][4950] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" iface="eth0" netns="" Jun 25 18:33:27.863510 containerd[1439]: 2024-06-25 18:33:27.825 [INFO][4950] k8s.go 615: Releasing IP address(es) ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Jun 25 18:33:27.863510 containerd[1439]: 2024-06-25 18:33:27.825 [INFO][4950] utils.go 188: Calico CNI releasing IP address ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Jun 25 18:33:27.863510 containerd[1439]: 2024-06-25 18:33:27.849 [INFO][4957] ipam_plugin.go 411: Releasing address using handleID ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" HandleID="k8s-pod-network.adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Workload="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:27.863510 containerd[1439]: 2024-06-25 18:33:27.850 [INFO][4957] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:33:27.863510 containerd[1439]: 2024-06-25 18:33:27.850 [INFO][4957] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:27.863510 containerd[1439]: 2024-06-25 18:33:27.855 [WARNING][4957] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" HandleID="k8s-pod-network.adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Workload="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:27.863510 containerd[1439]: 2024-06-25 18:33:27.855 [INFO][4957] ipam_plugin.go 439: Releasing address using workloadID ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" HandleID="k8s-pod-network.adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Workload="localhost-k8s-calico--kube--controllers--56599b7db9--s2zt2-eth0" Jun 25 18:33:27.863510 containerd[1439]: 2024-06-25 18:33:27.857 [INFO][4957] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:33:27.863510 containerd[1439]: 2024-06-25 18:33:27.860 [INFO][4950] k8s.go 621: Teardown processing complete. ContainerID="adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8" Jun 25 18:33:27.863991 containerd[1439]: time="2024-06-25T18:33:27.863574430Z" level=info msg="TearDown network for sandbox \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\" successfully" Jun 25 18:33:27.868816 containerd[1439]: time="2024-06-25T18:33:27.868748391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:33:27.868966 containerd[1439]: time="2024-06-25T18:33:27.868841680Z" level=info msg="RemovePodSandbox \"adc0ceb84e3c8ffff6bb0c401b676ed24078d68a841d1035c423dc8a68c934e8\" returns successfully" Jun 25 18:33:32.662605 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:48602.service - OpenSSH per-connection server daemon (10.0.0.1:48602). 
Jun 25 18:33:32.694253 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 48602 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:33:32.695761 sshd[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:32.699569 systemd-logind[1424]: New session 18 of user core. Jun 25 18:33:32.710375 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:33:32.821284 sshd[5006]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:32.825728 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:48602.service: Deactivated successfully. Jun 25 18:33:32.827659 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:33:32.828390 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:33:32.829535 systemd-logind[1424]: Removed session 18. Jun 25 18:33:35.986259 kubelet[2562]: E0625 18:33:35.986184 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:37.833847 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:47292.service - OpenSSH per-connection server daemon (10.0.0.1:47292). Jun 25 18:33:37.867117 sshd[5033]: Accepted publickey for core from 10.0.0.1 port 47292 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:33:37.868731 sshd[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:37.872896 systemd-logind[1424]: New session 19 of user core. Jun 25 18:33:37.882369 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 18:33:37.986580 kubelet[2562]: E0625 18:33:37.986266 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:37.986480 sshd[5033]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:37.998188 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:47292.service: Deactivated successfully. Jun 25 18:33:38.000075 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:33:38.001760 systemd-logind[1424]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:33:38.006698 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:47308.service - OpenSSH per-connection server daemon (10.0.0.1:47308). Jun 25 18:33:38.007841 systemd-logind[1424]: Removed session 19. Jun 25 18:33:38.037451 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 47308 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:33:38.038863 sshd[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:38.042698 systemd-logind[1424]: New session 20 of user core. Jun 25 18:33:38.050351 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:33:38.574370 sshd[5047]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:38.584718 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:47308.service: Deactivated successfully. Jun 25 18:33:38.586896 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:33:38.588924 systemd-logind[1424]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:33:38.599698 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:47314.service - OpenSSH per-connection server daemon (10.0.0.1:47314). Jun 25 18:33:38.601120 systemd-logind[1424]: Removed session 20. 
Jun 25 18:33:38.635224 sshd[5059]: Accepted publickey for core from 10.0.0.1 port 47314 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:33:38.637140 sshd[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:38.641558 systemd-logind[1424]: New session 21 of user core. Jun 25 18:33:38.651416 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:33:40.416031 sshd[5059]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:40.425809 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:47314.service: Deactivated successfully. Jun 25 18:33:40.428697 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:33:40.430050 systemd-logind[1424]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:33:40.438612 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:47322.service - OpenSSH per-connection server daemon (10.0.0.1:47322). Jun 25 18:33:40.439873 systemd-logind[1424]: Removed session 21. Jun 25 18:33:40.469605 sshd[5082]: Accepted publickey for core from 10.0.0.1 port 47322 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:33:40.471405 sshd[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:40.476084 systemd-logind[1424]: New session 22 of user core. Jun 25 18:33:40.481484 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:33:40.742632 sshd[5082]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:40.756645 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:47322.service: Deactivated successfully. Jun 25 18:33:40.758670 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:33:40.760174 systemd-logind[1424]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:33:40.765711 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:47326.service - OpenSSH per-connection server daemon (10.0.0.1:47326). Jun 25 18:33:40.767305 systemd-logind[1424]: Removed session 22. 
Jun 25 18:33:40.800896 sshd[5094]: Accepted publickey for core from 10.0.0.1 port 47326 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:33:40.802739 sshd[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:40.807869 systemd-logind[1424]: New session 23 of user core. Jun 25 18:33:40.816453 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:33:40.937961 sshd[5094]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:40.942143 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:47326.service: Deactivated successfully. Jun 25 18:33:40.944952 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:33:40.945598 systemd-logind[1424]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:33:40.946625 systemd-logind[1424]: Removed session 23. Jun 25 18:33:43.048799 kubelet[2562]: I0625 18:33:43.048546 2562 topology_manager.go:215] "Topology Admit Handler" podUID="5c58dcda-c782-4be3-98b5-d1434e0930c3" podNamespace="calico-apiserver" podName="calico-apiserver-5df8b9d49f-vvnzq" Jun 25 18:33:43.058151 systemd[1]: Created slice kubepods-besteffort-pod5c58dcda_c782_4be3_98b5_d1434e0930c3.slice - libcontainer container kubepods-besteffort-pod5c58dcda_c782_4be3_98b5_d1434e0930c3.slice. 
Jun 25 18:33:43.178351 kubelet[2562]: I0625 18:33:43.178202 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5c58dcda-c782-4be3-98b5-d1434e0930c3-calico-apiserver-certs\") pod \"calico-apiserver-5df8b9d49f-vvnzq\" (UID: \"5c58dcda-c782-4be3-98b5-d1434e0930c3\") " pod="calico-apiserver/calico-apiserver-5df8b9d49f-vvnzq" Jun 25 18:33:43.178351 kubelet[2562]: I0625 18:33:43.178263 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9kvq\" (UniqueName: \"kubernetes.io/projected/5c58dcda-c782-4be3-98b5-d1434e0930c3-kube-api-access-d9kvq\") pod \"calico-apiserver-5df8b9d49f-vvnzq\" (UID: \"5c58dcda-c782-4be3-98b5-d1434e0930c3\") " pod="calico-apiserver/calico-apiserver-5df8b9d49f-vvnzq" Jun 25 18:33:43.363354 containerd[1439]: time="2024-06-25T18:33:43.363047267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df8b9d49f-vvnzq,Uid:5c58dcda-c782-4be3-98b5-d1434e0930c3,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:33:43.478921 systemd-networkd[1378]: calie3e794074ea: Link UP Jun 25 18:33:43.479710 systemd-networkd[1378]: calie3e794074ea: Gained carrier Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.408 [INFO][5118] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5df8b9d49f--vvnzq-eth0 calico-apiserver-5df8b9d49f- calico-apiserver 5c58dcda-c782-4be3-98b5-d1434e0930c3 1129 0 2024-06-25 18:33:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5df8b9d49f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5df8b9d49f-vvnzq eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calie3e794074ea [] []}} ContainerID="5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" Namespace="calico-apiserver" Pod="calico-apiserver-5df8b9d49f-vvnzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df8b9d49f--vvnzq-" Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.408 [INFO][5118] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" Namespace="calico-apiserver" Pod="calico-apiserver-5df8b9d49f-vvnzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df8b9d49f--vvnzq-eth0" Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.440 [INFO][5131] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" HandleID="k8s-pod-network.5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" Workload="localhost-k8s-calico--apiserver--5df8b9d49f--vvnzq-eth0" Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.451 [INFO][5131] ipam_plugin.go 264: Auto assigning IP ContainerID="5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" HandleID="k8s-pod-network.5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" Workload="localhost-k8s-calico--apiserver--5df8b9d49f--vvnzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00070de80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5df8b9d49f-vvnzq", "timestamp":"2024-06-25 18:33:43.440947355 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.452 [INFO][5131] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.452 [INFO][5131] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.452 [INFO][5131] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.453 [INFO][5131] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" host="localhost" Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.456 [INFO][5131] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.461 [INFO][5131] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.463 [INFO][5131] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.465 [INFO][5131] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.465 [INFO][5131] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" host="localhost" Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.466 [INFO][5131] ipam.go 1685: Creating new handle: k8s-pod-network.5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.469 [INFO][5131] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" host="localhost" Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.473 [INFO][5131] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] 
block=192.168.88.128/26 handle="k8s-pod-network.5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" host="localhost" Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.473 [INFO][5131] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" host="localhost" Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.473 [INFO][5131] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:33:43.495107 containerd[1439]: 2024-06-25 18:33:43.473 [INFO][5131] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" HandleID="k8s-pod-network.5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" Workload="localhost-k8s-calico--apiserver--5df8b9d49f--vvnzq-eth0" Jun 25 18:33:43.495896 containerd[1439]: 2024-06-25 18:33:43.476 [INFO][5118] k8s.go 386: Populated endpoint ContainerID="5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" Namespace="calico-apiserver" Pod="calico-apiserver-5df8b9d49f-vvnzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df8b9d49f--vvnzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df8b9d49f--vvnzq-eth0", GenerateName:"calico-apiserver-5df8b9d49f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c58dcda-c782-4be3-98b5-d1434e0930c3", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df8b9d49f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5df8b9d49f-vvnzq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3e794074ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:43.495896 containerd[1439]: 2024-06-25 18:33:43.476 [INFO][5118] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" Namespace="calico-apiserver" Pod="calico-apiserver-5df8b9d49f-vvnzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df8b9d49f--vvnzq-eth0" Jun 25 18:33:43.495896 containerd[1439]: 2024-06-25 18:33:43.476 [INFO][5118] dataplane_linux.go 68: Setting the host side veth name to calie3e794074ea ContainerID="5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" Namespace="calico-apiserver" Pod="calico-apiserver-5df8b9d49f-vvnzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df8b9d49f--vvnzq-eth0" Jun 25 18:33:43.495896 containerd[1439]: 2024-06-25 18:33:43.479 [INFO][5118] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" Namespace="calico-apiserver" Pod="calico-apiserver-5df8b9d49f-vvnzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df8b9d49f--vvnzq-eth0" Jun 25 18:33:43.495896 containerd[1439]: 2024-06-25 18:33:43.480 [INFO][5118] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" Namespace="calico-apiserver" Pod="calico-apiserver-5df8b9d49f-vvnzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df8b9d49f--vvnzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df8b9d49f--vvnzq-eth0", GenerateName:"calico-apiserver-5df8b9d49f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c58dcda-c782-4be3-98b5-d1434e0930c3", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df8b9d49f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d", Pod:"calico-apiserver-5df8b9d49f-vvnzq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3e794074ea", MAC:"ee:29:aa:61:53:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:33:43.495896 containerd[1439]: 2024-06-25 18:33:43.485 [INFO][5118] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d" Namespace="calico-apiserver" 
Pod="calico-apiserver-5df8b9d49f-vvnzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df8b9d49f--vvnzq-eth0" Jun 25 18:33:43.527073 containerd[1439]: time="2024-06-25T18:33:43.526964067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:33:43.527073 containerd[1439]: time="2024-06-25T18:33:43.527024943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:43.527073 containerd[1439]: time="2024-06-25T18:33:43.527043649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:33:43.527073 containerd[1439]: time="2024-06-25T18:33:43.527056143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:43.559406 systemd[1]: Started cri-containerd-5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d.scope - libcontainer container 5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d. 
Jun 25 18:33:43.573339 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:33:43.598822 containerd[1439]: time="2024-06-25T18:33:43.598782242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df8b9d49f-vvnzq,Uid:5c58dcda-c782-4be3-98b5-d1434e0930c3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d\"" Jun 25 18:33:43.601132 containerd[1439]: time="2024-06-25T18:33:43.600905329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:33:44.954456 systemd-networkd[1378]: calie3e794074ea: Gained IPv6LL Jun 25 18:33:45.961707 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:47330.service - OpenSSH per-connection server daemon (10.0.0.1:47330). Jun 25 18:33:45.995581 sshd[5206]: Accepted publickey for core from 10.0.0.1 port 47330 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:33:45.996103 sshd[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:46.002708 systemd-logind[1424]: New session 24 of user core. Jun 25 18:33:46.009524 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 18:33:46.133492 sshd[5206]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:46.138565 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:47330.service: Deactivated successfully. Jun 25 18:33:46.140759 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:33:46.141692 systemd-logind[1424]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:33:46.142806 systemd-logind[1424]: Removed session 24. 
Jun 25 18:33:46.459148 containerd[1439]: time="2024-06-25T18:33:46.459064997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:46.468662 containerd[1439]: time="2024-06-25T18:33:46.468579764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 18:33:46.478509 containerd[1439]: time="2024-06-25T18:33:46.478414841Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:46.484239 containerd[1439]: time="2024-06-25T18:33:46.484176001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:46.484911 containerd[1439]: time="2024-06-25T18:33:46.484873308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 2.883939565s" Jun 25 18:33:46.484958 containerd[1439]: time="2024-06-25T18:33:46.484911942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 18:33:46.488493 containerd[1439]: time="2024-06-25T18:33:46.488412818Z" level=info msg="CreateContainer within sandbox \"5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:33:46.553267 containerd[1439]: 
time="2024-06-25T18:33:46.553160204Z" level=info msg="CreateContainer within sandbox \"5145eab71a5fd89d21fa4041ea84a2b31afc647a33aa7d70a8bb3b29c971658d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2dea335b46ad61cceb4ee7a7e95ce30d10f8aba4aeb64f8d3d82748773cf282d\"" Jun 25 18:33:46.553959 containerd[1439]: time="2024-06-25T18:33:46.553911674Z" level=info msg="StartContainer for \"2dea335b46ad61cceb4ee7a7e95ce30d10f8aba4aeb64f8d3d82748773cf282d\"" Jun 25 18:33:46.594485 systemd[1]: Started cri-containerd-2dea335b46ad61cceb4ee7a7e95ce30d10f8aba4aeb64f8d3d82748773cf282d.scope - libcontainer container 2dea335b46ad61cceb4ee7a7e95ce30d10f8aba4aeb64f8d3d82748773cf282d. Jun 25 18:33:46.643082 containerd[1439]: time="2024-06-25T18:33:46.643021279Z" level=info msg="StartContainer for \"2dea335b46ad61cceb4ee7a7e95ce30d10f8aba4aeb64f8d3d82748773cf282d\" returns successfully" Jun 25 18:33:47.281595 kubelet[2562]: I0625 18:33:47.281303 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5df8b9d49f-vvnzq" podStartSLOduration=1.395912203 podStartE2EDuration="4.281279358s" podCreationTimestamp="2024-06-25 18:33:43 +0000 UTC" firstStartedPulling="2024-06-25 18:33:43.600523932 +0000 UTC m=+76.696956657" lastFinishedPulling="2024-06-25 18:33:46.485891087 +0000 UTC m=+79.582323812" observedRunningTime="2024-06-25 18:33:47.279880846 +0000 UTC m=+80.376313571" watchObservedRunningTime="2024-06-25 18:33:47.281279358 +0000 UTC m=+80.377712083" Jun 25 18:33:51.145953 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:38082.service - OpenSSH per-connection server daemon (10.0.0.1:38082). 
Jun 25 18:33:51.184933 sshd[5296]: Accepted publickey for core from 10.0.0.1 port 38082 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:33:51.186601 sshd[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:51.190886 systemd-logind[1424]: New session 25 of user core. Jun 25 18:33:51.199370 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:33:51.317906 sshd[5296]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:51.322018 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:38082.service: Deactivated successfully. Jun 25 18:33:51.324260 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:33:51.324885 systemd-logind[1424]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:33:51.325813 systemd-logind[1424]: Removed session 25. Jun 25 18:33:52.985635 kubelet[2562]: E0625 18:33:52.985601 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:33:56.332187 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:39982.service - OpenSSH per-connection server daemon (10.0.0.1:39982). Jun 25 18:33:56.368081 sshd[5325]: Accepted publickey for core from 10.0.0.1 port 39982 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:33:56.369740 sshd[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:56.374408 systemd-logind[1424]: New session 26 of user core. Jun 25 18:33:56.381523 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 18:33:56.497252 sshd[5325]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:56.501717 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:39982.service: Deactivated successfully. Jun 25 18:33:56.503868 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 18:33:56.504475 systemd-logind[1424]: Session 26 logged out. 
Waiting for processes to exit. Jun 25 18:33:56.505646 systemd-logind[1424]: Removed session 26.