Jun 25 18:47:01.891264 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 25 18:47:01.891293 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:47:01.891308 kernel: BIOS-provided physical RAM map: Jun 25 18:47:01.891317 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 18:47:01.891325 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 18:47:01.891334 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 18:47:01.891344 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Jun 25 18:47:01.891353 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Jun 25 18:47:01.891362 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 25 18:47:01.891373 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 18:47:01.891382 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jun 25 18:47:01.891391 kernel: NX (Execute Disable) protection: active Jun 25 18:47:01.891399 kernel: APIC: Static calls initialized Jun 25 18:47:01.891408 kernel: SMBIOS 2.8 present. Jun 25 18:47:01.891420 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jun 25 18:47:01.891432 kernel: Hypervisor detected: KVM Jun 25 18:47:01.891441 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 18:47:01.891451 kernel: kvm-clock: using sched offset of 2213046853 cycles Jun 25 18:47:01.891462 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 18:47:01.891472 kernel: tsc: Detected 2794.750 MHz processor Jun 25 18:47:01.891482 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 18:47:01.891493 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 18:47:01.891502 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Jun 25 18:47:01.891513 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 25 18:47:01.891525 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 18:47:01.891535 kernel: Using GB pages for direct mapping Jun 25 18:47:01.891545 kernel: ACPI: Early table checksum verification disabled Jun 25 18:47:01.891555 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Jun 25 18:47:01.891565 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:47:01.891575 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:47:01.891585 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:47:01.891594 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jun 25 18:47:01.891605 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:47:01.891617 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:47:01.891627 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 
18:47:01.891637 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Jun 25 18:47:01.891647 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78] Jun 25 18:47:01.891667 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jun 25 18:47:01.891677 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Jun 25 18:47:01.891687 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Jun 25 18:47:01.891705 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Jun 25 18:47:01.891715 kernel: No NUMA configuration found Jun 25 18:47:01.891725 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Jun 25 18:47:01.891735 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Jun 25 18:47:01.891746 kernel: Zone ranges: Jun 25 18:47:01.891756 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 18:47:01.891766 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Jun 25 18:47:01.891779 kernel: Normal empty Jun 25 18:47:01.891790 kernel: Movable zone start for each node Jun 25 18:47:01.891800 kernel: Early memory node ranges Jun 25 18:47:01.891810 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 18:47:01.891820 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Jun 25 18:47:01.891844 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Jun 25 18:47:01.891854 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 18:47:01.891864 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 18:47:01.891874 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Jun 25 18:47:01.891888 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 25 18:47:01.891898 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 18:47:01.891908 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 25 18:47:01.891918 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 25 18:47:01.891928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 18:47:01.891939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 18:47:01.891949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 18:47:01.891960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 18:47:01.891970 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 18:47:01.891984 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 25 18:47:01.891995 kernel: TSC deadline timer available Jun 25 18:47:01.892006 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jun 25 18:47:01.892016 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 25 18:47:01.892026 kernel: kvm-guest: KVM setup pv remote TLB flush Jun 25 18:47:01.892036 kernel: kvm-guest: setup PV sched yield Jun 25 18:47:01.892046 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Jun 25 18:47:01.892057 kernel: Booting paravirtualized kernel on KVM Jun 25 18:47:01.892067 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 18:47:01.892078 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jun 25 18:47:01.892091 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Jun 25 18:47:01.892101 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Jun 25 18:47:01.892111 kernel: pcpu-alloc: [0] 0 1 2 3 Jun 25 18:47:01.892121 
kernel: kvm-guest: PV spinlocks enabled Jun 25 18:47:01.892132 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 18:47:01.892143 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:47:01.892154 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 18:47:01.892165 kernel: random: crng init done Jun 25 18:47:01.892178 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:47:01.892188 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 18:47:01.892198 kernel: Fallback order for Node 0: 0 Jun 25 18:47:01.892209 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Jun 25 18:47:01.892219 kernel: Policy zone: DMA32 Jun 25 18:47:01.892229 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:47:01.892240 kernel: Memory: 2428452K/2571756K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 143044K reserved, 0K cma-reserved) Jun 25 18:47:01.892250 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jun 25 18:47:01.892261 kernel: ftrace: allocating 37650 entries in 148 pages Jun 25 18:47:01.892274 kernel: ftrace: allocated 148 pages with 3 groups Jun 25 18:47:01.892284 kernel: Dynamic Preempt: voluntary Jun 25 18:47:01.892294 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:47:01.892306 kernel: rcu: RCU event tracing is enabled. Jun 25 18:47:01.892316 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jun 25 18:47:01.892327 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:47:01.892337 kernel: Rude variant of Tasks RCU enabled. Jun 25 18:47:01.892348 kernel: Tracing variant of Tasks RCU enabled. Jun 25 18:47:01.892358 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:47:01.892371 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jun 25 18:47:01.892382 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jun 25 18:47:01.892392 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:47:01.892402 kernel: Console: colour VGA+ 80x25 Jun 25 18:47:01.892412 kernel: printk: console [ttyS0] enabled Jun 25 18:47:01.892423 kernel: ACPI: Core revision 20230628 Jun 25 18:47:01.892433 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 25 18:47:01.892444 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 18:47:01.892454 kernel: x2apic enabled Jun 25 18:47:01.892467 kernel: APIC: Switched APIC routing to: physical x2apic Jun 25 18:47:01.892477 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jun 25 18:47:01.892488 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jun 25 18:47:01.892498 kernel: kvm-guest: setup PV IPIs Jun 25 18:47:01.892509 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 25 18:47:01.892519 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jun 25 18:47:01.892529 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jun 25 18:47:01.892540 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 25 18:47:01.892562 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jun 25 18:47:01.892573 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jun 25 18:47:01.892584 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 18:47:01.892595 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 18:47:01.892608 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 18:47:01.892619 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 18:47:01.892630 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jun 25 18:47:01.892641 kernel: RETBleed: Mitigation: untrained return thunk Jun 25 18:47:01.892652 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 25 18:47:01.892674 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 25 18:47:01.892685 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jun 25 18:47:01.892697 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jun 25 18:47:01.892708 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jun 25 18:47:01.892719 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 18:47:01.892730 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 18:47:01.892741 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 18:47:01.892751 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 18:47:01.892765 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jun 25 18:47:01.892776 kernel: Freeing SMP alternatives memory: 32K Jun 25 18:47:01.892787 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:47:01.892798 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:47:01.892809 kernel: SELinux: Initializing. Jun 25 18:47:01.892820 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:47:01.892855 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:47:01.892866 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jun 25 18:47:01.892877 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:47:01.892892 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:47:01.892902 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:47:01.892914 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jun 25 18:47:01.892925 kernel: ... version: 0 Jun 25 18:47:01.892936 kernel: ... bit width: 48 Jun 25 18:47:01.892946 kernel: ... generic registers: 6 Jun 25 18:47:01.892957 kernel: ... value mask: 0000ffffffffffff Jun 25 18:47:01.892968 kernel: ... max period: 00007fffffffffff Jun 25 18:47:01.892979 kernel: ... fixed-purpose events: 0 Jun 25 18:47:01.892994 kernel: ... event mask: 000000000000003f Jun 25 18:47:01.893004 kernel: signal: max sigframe size: 1776 Jun 25 18:47:01.893015 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:47:01.893027 kernel: rcu: Max phase no-delay instances is 400. 
Jun 25 18:47:01.893037 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:47:01.893048 kernel: smpboot: x86: Booting SMP configuration: Jun 25 18:47:01.893059 kernel: .... node #0, CPUs: #1 #2 #3 Jun 25 18:47:01.893069 kernel: smp: Brought up 1 node, 4 CPUs Jun 25 18:47:01.893080 kernel: smpboot: Max logical packages: 1 Jun 25 18:47:01.893095 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jun 25 18:47:01.893106 kernel: devtmpfs: initialized Jun 25 18:47:01.893116 kernel: x86/mm: Memory block size: 128MB Jun 25 18:47:01.893127 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:47:01.893138 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jun 25 18:47:01.893149 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:47:01.893160 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:47:01.893170 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:47:01.893181 kernel: audit: type=2000 audit(1719341221.296:1): state=initialized audit_enabled=0 res=1 Jun 25 18:47:01.893195 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:47:01.893205 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 18:47:01.893216 kernel: cpuidle: using governor menu Jun 25 18:47:01.893226 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:47:01.893237 kernel: dca service started, version 1.12.1 Jun 25 18:47:01.893247 kernel: PCI: Using configuration type 1 for base access Jun 25 18:47:01.893258 kernel: PCI: Using configuration type 1 for extended access Jun 25 18:47:01.893268 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 18:47:01.893279 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:47:01.893292 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:47:01.893303 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:47:01.893313 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:47:01.893324 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:47:01.893334 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:47:01.893345 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:47:01.893355 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:47:01.893366 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:47:01.893376 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 25 18:47:01.893389 kernel: ACPI: Interpreter enabled Jun 25 18:47:01.893399 kernel: ACPI: PM: (supports S0 S3 S5) Jun 25 18:47:01.893410 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 18:47:01.893421 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 18:47:01.893431 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 18:47:01.893442 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 25 18:47:01.893452 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 18:47:01.893669 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 18:47:01.893690 kernel: acpiphp: Slot [3] registered Jun 25 18:47:01.893701 kernel: acpiphp: Slot [4] registered Jun 25 18:47:01.893712 kernel: acpiphp: Slot [5] registered Jun 25 18:47:01.893722 kernel: acpiphp: Slot [6] registered Jun 25 18:47:01.893733 kernel: acpiphp: Slot [7] registered Jun 25 
18:47:01.893743 kernel: acpiphp: Slot [8] registered Jun 25 18:47:01.893753 kernel: acpiphp: Slot [9] registered Jun 25 18:47:01.893764 kernel: acpiphp: Slot [10] registered Jun 25 18:47:01.893774 kernel: acpiphp: Slot [11] registered Jun 25 18:47:01.893788 kernel: acpiphp: Slot [12] registered Jun 25 18:47:01.893799 kernel: acpiphp: Slot [13] registered Jun 25 18:47:01.893810 kernel: acpiphp: Slot [14] registered Jun 25 18:47:01.893820 kernel: acpiphp: Slot [15] registered Jun 25 18:47:01.893962 kernel: acpiphp: Slot [16] registered Jun 25 18:47:01.893974 kernel: acpiphp: Slot [17] registered Jun 25 18:47:01.893985 kernel: acpiphp: Slot [18] registered Jun 25 18:47:01.893995 kernel: acpiphp: Slot [19] registered Jun 25 18:47:01.894006 kernel: acpiphp: Slot [20] registered Jun 25 18:47:01.894017 kernel: acpiphp: Slot [21] registered Jun 25 18:47:01.894032 kernel: acpiphp: Slot [22] registered Jun 25 18:47:01.894043 kernel: acpiphp: Slot [23] registered Jun 25 18:47:01.894053 kernel: acpiphp: Slot [24] registered Jun 25 18:47:01.894064 kernel: acpiphp: Slot [25] registered Jun 25 18:47:01.894074 kernel: acpiphp: Slot [26] registered Jun 25 18:47:01.894085 kernel: acpiphp: Slot [27] registered Jun 25 18:47:01.894096 kernel: acpiphp: Slot [28] registered Jun 25 18:47:01.894106 kernel: acpiphp: Slot [29] registered Jun 25 18:47:01.894117 kernel: acpiphp: Slot [30] registered Jun 25 18:47:01.894132 kernel: acpiphp: Slot [31] registered Jun 25 18:47:01.894143 kernel: PCI host bridge to bus 0000:00 Jun 25 18:47:01.894317 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 18:47:01.894466 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 18:47:01.894614 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 18:47:01.894772 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Jun 25 18:47:01.894935 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jun 25 18:47:01.895082 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 18:47:01.895271 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 18:47:01.895453 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 18:47:01.895627 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jun 25 18:47:01.895798 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jun 25 18:47:01.895973 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 25 18:47:01.896129 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 25 18:47:01.896290 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 25 18:47:01.896446 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 25 18:47:01.896615 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 25 18:47:01.896784 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 25 18:47:01.896954 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 25 18:47:01.897119 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jun 25 18:47:01.897279 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jun 25 18:47:01.897435 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jun 25 18:47:01.897592 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jun 25 18:47:01.897757 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 18:47:01.897947 
kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jun 25 18:47:01.898110 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Jun 25 18:47:01.898271 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jun 25 18:47:01.898435 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jun 25 18:47:01.898606 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jun 25 18:47:01.898778 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jun 25 18:47:01.898975 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jun 25 18:47:01.899127 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jun 25 18:47:01.899285 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jun 25 18:47:01.899437 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Jun 25 18:47:01.899596 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jun 25 18:47:01.899755 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jun 25 18:47:01.899921 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jun 25 18:47:01.899937 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 18:47:01.899949 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 18:47:01.899960 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 18:47:01.899971 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 18:47:01.899982 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 18:47:01.899997 kernel: iommu: Default domain type: Translated Jun 25 18:47:01.900008 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 18:47:01.900019 kernel: PCI: Using ACPI for IRQ routing Jun 25 18:47:01.900031 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 18:47:01.900042 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 18:47:01.900054 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Jun 25 18:47:01.900210 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 25 18:47:01.900362 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 25 18:47:01.900510 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 18:47:01.900529 kernel: vgaarb: loaded Jun 25 18:47:01.900540 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 25 18:47:01.900552 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 25 18:47:01.900563 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 18:47:01.900574 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:47:01.900586 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:47:01.900597 kernel: pnp: PnP ACPI init Jun 25 18:47:01.900765 kernel: pnp 00:02: [dma 2] Jun 25 18:47:01.900786 kernel: pnp: PnP ACPI: found 6 devices Jun 25 18:47:01.900797 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 18:47:01.900809 kernel: NET: Registered PF_INET protocol family Jun 25 18:47:01.900833 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:47:01.900845 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 18:47:01.900857 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:47:01.900868 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 18:47:01.900879 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 
bytes, linear) Jun 25 18:47:01.900890 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 18:47:01.900904 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:47:01.900916 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:47:01.900927 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:47:01.900938 kernel: NET: Registered PF_XDP protocol family Jun 25 18:47:01.901079 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 18:47:01.901215 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 18:47:01.901349 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 18:47:01.901485 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jun 25 18:47:01.901627 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jun 25 18:47:01.901772 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 25 18:47:01.901912 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 18:47:01.901924 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:47:01.901932 kernel: Initialise system trusted keyrings Jun 25 18:47:01.901940 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 18:47:01.901949 kernel: Key type asymmetric registered Jun 25 18:47:01.901957 kernel: Asymmetric key parser 'x509' registered Jun 25 18:47:01.901965 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 25 18:47:01.901977 kernel: io scheduler mq-deadline registered Jun 25 18:47:01.901985 kernel: io scheduler kyber registered Jun 25 18:47:01.901993 kernel: io scheduler bfq registered Jun 25 18:47:01.902001 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 18:47:01.902010 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 25 18:47:01.902018 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jun 25 18:47:01.902026 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 25 18:47:01.902034 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:47:01.902042 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 18:47:01.902053 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 18:47:01.902061 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 18:47:01.902069 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 18:47:01.902197 kernel: rtc_cmos 00:05: RTC can wake from S4 Jun 25 18:47:01.902210 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 18:47:01.902323 kernel: rtc_cmos 00:05: registered as rtc0 Jun 25 18:47:01.902438 kernel: rtc_cmos 00:05: setting system clock to 2024-06-25T18:47:01 UTC (1719341221) Jun 25 18:47:01.902552 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jun 25 18:47:01.902566 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jun 25 18:47:01.902574 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:47:01.902582 kernel: Segment Routing with IPv6 Jun 25 18:47:01.902590 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:47:01.902598 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:47:01.902606 kernel: Key type dns_resolver registered Jun 25 18:47:01.902614 kernel: IPI shorthand broadcast: enabled Jun 25 18:47:01.902622 kernel: sched_clock: Marking stable (701002269, 104721443)->(865808201, -60084489) Jun 25 18:47:01.902630 kernel: registered taskstats version 1 Jun 25 
18:47:01.902641 kernel: Loading compiled-in X.509 certificates Jun 25 18:47:01.902649 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 25 18:47:01.902664 kernel: Key type .fscrypt registered Jun 25 18:47:01.902673 kernel: Key type fscrypt-provisioning registered Jun 25 18:47:01.902682 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 18:47:01.902690 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:47:01.902698 kernel: ima: No architecture policies found Jun 25 18:47:01.902706 kernel: clk: Disabling unused clocks Jun 25 18:47:01.902716 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 25 18:47:01.902724 kernel: Write protecting the kernel read-only data: 36864k Jun 25 18:47:01.902733 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 25 18:47:01.902741 kernel: Run /init as init process Jun 25 18:47:01.902749 kernel: with arguments: Jun 25 18:47:01.902757 kernel: /init Jun 25 18:47:01.902765 kernel: with environment: Jun 25 18:47:01.902773 kernel: HOME=/ Jun 25 18:47:01.902797 kernel: TERM=linux Jun 25 18:47:01.902808 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:47:01.902833 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:47:01.902844 systemd[1]: Detected virtualization kvm. Jun 25 18:47:01.902853 systemd[1]: Detected architecture x86-64. Jun 25 18:47:01.902862 systemd[1]: Running in initrd. Jun 25 18:47:01.902870 systemd[1]: No hostname configured, using default hostname. Jun 25 18:47:01.902879 systemd[1]: Hostname set to <localhost>. Jun 25 18:47:01.902891 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:47:01.902899 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:47:01.902908 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:47:01.902917 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:47:01.902927 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:47:01.902936 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:47:01.902945 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:47:01.902954 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:47:01.902967 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:47:01.902977 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:47:01.902986 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:47:01.902995 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:47:01.903003 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:47:01.903012 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:47:01.903021 systemd[1]: Reached target swap.target - Swaps.
Jun 25 18:47:01.903032 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:47:01.903041 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:47:01.903050 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:47:01.903058 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:47:01.903067 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:47:01.903076 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:47:01.903086 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:47:01.903094 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:47:01.903103 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:47:01.903115 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:47:01.903123 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:47:01.903132 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:47:01.903141 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:47:01.903150 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:47:01.903163 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:47:01.903172 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:01.903181 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:47:01.903190 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:47:01.903199 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:47:01.903227 systemd-journald[191]: Collecting audit messages is disabled. Jun 25 18:47:01.903250 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:47:01.903259 systemd-journald[191]: Journal started Jun 25 18:47:01.903279 systemd-journald[191]: Runtime Journal (/run/log/journal/13b49c36f6f24e4cb4b9db674a19bdd9) is 6.0M, max 48.4M, 42.3M free. Jun 25 18:47:01.907890 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:47:01.924608 systemd-modules-load[194]: Inserted module 'overlay' Jun 25 18:47:01.941936 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:01.943381 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:47:01.954011 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:47:01.955346 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:47:01.958994 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:47:01.971376 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:47:01.974031 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:47:01.975227 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:47:01.985856 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jun 25 18:47:01.988361 systemd-modules-load[194]: Inserted module 'br_netfilter' Jun 25 18:47:01.989420 kernel: Bridge firewalling registered Jun 25 18:47:01.991254 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:47:01.993687 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:47:02.002245 dracut-cmdline[223]: dracut-dracut-053 Jun 25 18:47:02.004204 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:47:02.006601 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:47:02.021233 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:47:02.029005 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:47:02.069438 systemd-resolved[260]: Positive Trust Anchors: Jun 25 18:47:02.069456 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:47:02.069501 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:47:02.134651 systemd-resolved[260]: Defaulting to hostname 'linux'. Jun 25 18:47:02.137056 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:47:02.140903 kernel: SCSI subsystem initialized Jun 25 18:47:02.138416 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:47:02.148851 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:47:02.161856 kernel: iscsi: registered transport (tcp) Jun 25 18:47:02.187860 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:47:02.187913 kernel: QLogic iSCSI HBA Driver Jun 25 18:47:02.232395 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:47:02.243951 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:47:02.269910 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:47:02.269985 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:47:02.271005 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:47:02.321880 kernel: raid6: avx2x4 gen() 28268 MB/s Jun 25 18:47:02.338868 kernel: raid6: avx2x2 gen() 26093 MB/s Jun 25 18:47:02.356125 kernel: raid6: avx2x1 gen() 22958 MB/s Jun 25 18:47:02.356189 kernel: raid6: using algorithm avx2x4 gen() 28268 MB/s Jun 25 18:47:02.374016 kernel: raid6: .... 
xor() 6596 MB/s, rmw enabled Jun 25 18:47:02.374096 kernel: raid6: using avx2x2 recovery algorithm Jun 25 18:47:02.399861 kernel: xor: automatically using best checksumming function avx Jun 25 18:47:02.575857 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:47:02.587191 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:47:02.595053 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:47:02.608130 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jun 25 18:47:02.612737 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:47:02.625985 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:47:02.640094 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Jun 25 18:47:02.671131 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:47:02.684989 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:47:02.752310 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:47:02.760052 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:47:02.779931 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:47:02.783105 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:47:02.784659 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:47:02.795070 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jun 25 18:47:02.835799 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 25 18:47:02.836009 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:47:02.836027 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 18:47:02.836049 kernel: GPT:9289727 != 19775487 Jun 25 18:47:02.836064 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 18:47:02.836078 kernel: GPT:9289727 != 19775487 Jun 25 18:47:02.836096 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 18:47:02.836110 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:47:02.789479 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:47:02.807034 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:47:02.820506 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:47:02.829988 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:47:02.848695 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 18:47:02.848725 kernel: AES CTR mode by8 optimization enabled Jun 25 18:47:02.848744 kernel: libata version 3.00 loaded. Jun 25 18:47:02.840881 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:47:02.848745 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:47:02.852956 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:47:02.853156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jun 25 18:47:02.860264 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462) Jun 25 18:47:02.860303 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 25 18:47:02.869209 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (474) Jun 25 18:47:02.869241 kernel: scsi host0: ata_piix Jun 25 18:47:02.869454 kernel: scsi host1: ata_piix Jun 25 18:47:02.869654 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jun 25 18:47:02.869673 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jun 25 18:47:02.860525 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:02.870761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:02.894013 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 18:47:02.918847 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:02.930400 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 18:47:02.936079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:47:02.940742 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 18:47:02.941192 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 18:47:02.957022 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:47:02.959298 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:47:02.969618 disk-uuid[545]: Primary Header is updated. Jun 25 18:47:02.969618 disk-uuid[545]: Secondary Entries is updated. Jun 25 18:47:02.969618 disk-uuid[545]: Secondary Header is updated. Jun 25 18:47:02.976347 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:47:02.978838 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:47:02.980456 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:47:03.029041 kernel: ata2: found unknown device (class 0) Jun 25 18:47:03.030969 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jun 25 18:47:03.032919 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 25 18:47:03.078294 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jun 25 18:47:03.091003 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:47:03.091028 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jun 25 18:47:03.980848 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:47:03.980907 disk-uuid[546]: The operation has completed successfully. Jun 25 18:47:04.006591 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:47:04.006723 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:47:04.035025 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:47:04.038160 sh[581]: Success Jun 25 18:47:04.051845 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jun 25 18:47:04.084287 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:47:04.098323 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jun 25 18:47:04.101125 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:47:04.113178 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:47:04.113213 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:47:04.113225 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:47:04.114200 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:47:04.114950 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:47:04.120428 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:47:04.120881 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:47:04.129010 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:47:04.131834 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:47:04.141254 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:47:04.141295 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:47:04.141306 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:47:04.145854 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:47:04.154979 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:47:04.156706 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:47:04.166096 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:47:04.173982 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:47:04.229280 ignition[675]: Ignition 2.19.0 Jun 25 18:47:04.229292 ignition[675]: Stage: fetch-offline Jun 25 18:47:04.229335 ignition[675]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:47:04.229345 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:47:04.229448 ignition[675]: parsed url from cmdline: "" Jun 25 18:47:04.229452 ignition[675]: no config URL provided Jun 25 18:47:04.229458 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:47:04.229467 ignition[675]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:47:04.229493 ignition[675]: op(1): [started] loading QEMU firmware config module Jun 25 18:47:04.229499 ignition[675]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 25 18:47:04.237020 ignition[675]: op(1): [finished] loading QEMU firmware config module Jun 25 18:47:04.254631 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:47:04.267973 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:47:04.281323 ignition[675]: parsing config with SHA512: 0d5724c571e7ee18f9f8b65f932ff17e7bb98824f68ecd5fd45348e88d3f69a4ed2b5669d0cc4a6d16bf9079249127b972e3787d13796f48521f438c3fdbdd02 Jun 25 18:47:04.285341 unknown[675]: fetched base config from "system" Jun 25 18:47:04.285359 unknown[675]: fetched user config from "qemu" Jun 25 18:47:04.286509 ignition[675]: fetch-offline: fetch-offline passed Jun 25 18:47:04.286612 ignition[675]: Ignition finished successfully Jun 25 18:47:04.288320 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jun 25 18:47:04.298191 systemd-networkd[771]: lo: Link UP Jun 25 18:47:04.298201 systemd-networkd[771]: lo: Gained carrier Jun 25 18:47:04.301051 systemd-networkd[771]: Enumeration completed Jun 25 18:47:04.301132 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:47:04.301736 systemd[1]: Reached target network.target - Network. Jun 25 18:47:04.302182 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 18:47:04.306530 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:47:04.306534 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:47:04.307701 systemd-networkd[771]: eth0: Link UP Jun 25 18:47:04.307705 systemd-networkd[771]: eth0: Gained carrier Jun 25 18:47:04.307711 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:47:04.307958 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:47:04.319895 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.161/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 18:47:04.323182 ignition[774]: Ignition 2.19.0 Jun 25 18:47:04.323191 ignition[774]: Stage: kargs Jun 25 18:47:04.323346 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:47:04.323359 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:47:04.327152 ignition[774]: kargs: kargs passed Jun 25 18:47:04.327203 ignition[774]: Ignition finished successfully Jun 25 18:47:04.331669 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:47:04.340056 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:47:04.352455 ignition[784]: Ignition 2.19.0 Jun 25 18:47:04.352466 ignition[784]: Stage: disks Jun 25 18:47:04.352672 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:47:04.352684 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:47:04.353521 ignition[784]: disks: disks passed Jun 25 18:47:04.353575 ignition[784]: Ignition finished successfully Jun 25 18:47:04.359491 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:47:04.360048 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:47:04.362086 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:47:04.362406 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:47:04.362750 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:47:04.363255 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:47:04.384958 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:47:04.396298 systemd-resolved[260]: Detected conflict on linux IN A 10.0.0.161 Jun 25 18:47:04.396312 systemd-resolved[260]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Jun 25 18:47:04.398715 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 18:47:04.406104 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:47:04.415906 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jun 25 18:47:04.515871 kernel: EXT4-fs (vda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:47:04.516583 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:47:04.518850 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:47:04.533026 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:47:04.535941 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:47:04.538640 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 18:47:04.538692 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:47:04.538717 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:47:04.545848 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803) Jun 25 18:47:04.547504 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:47:04.552075 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:47:04.552094 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:47:04.552116 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:47:04.552127 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:47:04.554347 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:47:04.564986 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:47:04.600843 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:47:04.605178 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:47:04.610385 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:47:04.615526 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:47:04.701077 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:47:04.711961 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:47:04.715055 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:47:04.723853 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:47:04.742542 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:47:04.746467 ignition[917]: INFO : Ignition 2.19.0 Jun 25 18:47:04.746467 ignition[917]: INFO : Stage: mount Jun 25 18:47:04.748218 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:47:04.748218 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:47:04.748218 ignition[917]: INFO : mount: mount passed Jun 25 18:47:04.748218 ignition[917]: INFO : Ignition finished successfully Jun 25 18:47:04.754040 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:47:04.763969 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:47:05.112414 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:47:05.130023 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jun 25 18:47:05.139564 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931) Jun 25 18:47:05.139615 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:47:05.139627 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:47:05.141240 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:47:05.143874 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:47:05.145094 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:47:05.169812 ignition[948]: INFO : Ignition 2.19.0 Jun 25 18:47:05.169812 ignition[948]: INFO : Stage: files Jun 25 18:47:05.172220 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:47:05.172220 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:47:05.172220 ignition[948]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:47:05.172220 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:47:05.172220 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:47:05.180404 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:47:05.180404 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:47:05.180404 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:47:05.180404 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:47:05.180404 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:47:05.174706 unknown[948]: wrote ssh authorized keys file for user: core Jun 25 18:47:05.215317 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:47:05.286270 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:47:05.286270 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:47:05.290429 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:47:05.290429 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:47:05.290429 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:47:05.290429 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:47:05.290429 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:47:05.290429 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:47:05.290429 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:47:05.290429 ignition[948]: INFO : files: createFilesystemsFiles: 
createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:47:05.290429 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:47:05.290429 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:47:05.290429 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:47:05.290429 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:47:05.290429 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 18:47:05.648790 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 18:47:05.982053 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:47:05.982053 ignition[948]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 18:47:05.986216 ignition[948]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:47:05.988694 ignition[948]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:47:05.988694 ignition[948]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 18:47:05.988694 ignition[948]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jun 25 18:47:05.993420 ignition[948]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 18:47:05.995447 ignition[948]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 18:47:05.995447 ignition[948]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jun 25 18:47:05.995447 ignition[948]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 18:47:06.001057 systemd-networkd[771]: eth0: Gained IPv6LL Jun 25 18:47:06.023093 ignition[948]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 18:47:06.028760 ignition[948]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 18:47:06.030425 ignition[948]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 18:47:06.030425 ignition[948]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:47:06.033269 ignition[948]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:47:06.034695 ignition[948]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:47:06.036439 ignition[948]: INFO : files: createResultFile: 
createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:47:06.038114 ignition[948]: INFO : files: files passed Jun 25 18:47:06.038866 ignition[948]: INFO : Ignition finished successfully Jun 25 18:47:06.042239 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:47:06.054128 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:47:06.057141 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:47:06.061169 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:47:06.061294 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:47:06.067155 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory Jun 25 18:47:06.070037 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:47:06.070037 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:47:06.074772 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:47:06.078170 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:47:06.079905 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:47:06.092973 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:47:06.126722 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:47:06.126876 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:47:06.127395 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:47:06.129853 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:47:06.130512 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:47:06.131322 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:47:06.156657 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:47:06.166080 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:47:06.175058 systemd[1]: Stopped target network.target - Network. Jun 25 18:47:06.175461 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:47:06.176127 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:47:06.180250 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:47:06.180763 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:47:06.180939 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:47:06.186407 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:47:06.187459 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:47:06.187868 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:47:06.188451 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:47:06.188859 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:47:06.189490 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
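The files stage above (the helm tarball, the home-directory manifests, update.conf, the kubernetes sysext link, and the prepare-helm.service/coreos-metadata.service presets) is driven entirely by the machine's Ignition config, which is not itself recorded in the log. A hedged sketch of a Butane config that would produce a files stage of this shape; the ssh key is a placeholder and all values are illustrative:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...   # placeholder, not the key actually provisioned
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
        - name: coreos-metadata.service
          enabled: false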
Jun 25 18:47:06.189898 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:47:06.190549 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:47:06.190927 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:47:06.207625 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:47:06.208432 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:47:06.208614 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:47:06.211188 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:47:06.211566 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:47:06.211877 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:47:06.217260 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:47:06.217860 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:47:06.217968 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:47:06.218696 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:47:06.218810 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:47:06.223864 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:47:06.225853 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:47:06.230933 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:47:06.232343 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:47:06.234641 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:47:06.236502 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:47:06.236615 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:47:06.238348 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:47:06.238434 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:47:06.240196 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:47:06.240315 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:47:06.242598 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:47:06.242702 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:47:06.255974 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:47:06.264411 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:47:06.264528 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:47:06.270851 ignition[1004]: INFO : Ignition 2.19.0 Jun 25 18:47:06.270851 ignition[1004]: INFO : Stage: umount Jun 25 18:47:06.270851 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:47:06.270851 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:47:06.270851 ignition[1004]: INFO : umount: umount passed Jun 25 18:47:06.270851 ignition[1004]: INFO : Ignition finished successfully Jun 25 18:47:06.267387 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:47:06.269051 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:47:06.271013 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jun 25 18:47:06.272079 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:47:06.272202 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:47:06.273999 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:47:06.274010 systemd-networkd[771]: eth0: DHCPv6 lease lost Jun 25 18:47:06.274102 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:47:06.278267 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:47:06.278384 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:47:06.280721 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:47:06.280855 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:47:06.284946 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:47:06.285056 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:47:06.289801 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:47:06.289923 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:47:06.292473 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:47:06.292543 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:47:06.294415 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:47:06.294467 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:47:06.296433 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:47:06.296481 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:47:06.299138 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:47:06.299185 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:47:06.301575 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:47:06.301637 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:47:06.311924 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:47:06.314091 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:47:06.314147 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:47:06.316747 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:47:06.316796 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:47:06.319313 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:47:06.319360 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:47:06.320693 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:47:06.320741 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:47:06.323295 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:47:06.326879 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:47:06.334998 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:47:06.335120 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:47:06.343279 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:47:06.343458 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jun 25 18:47:06.345097 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:47:06.345144 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:47:06.346935 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:47:06.346977 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:47:06.348915 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:47:06.348965 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:47:06.351124 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:47:06.351176 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:47:06.353391 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:47:06.353438 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:47:06.371080 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:47:06.373515 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:47:06.374713 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:47:06.377493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:47:06.378666 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:06.381349 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:47:06.382578 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:47:06.568181 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:47:06.568334 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:47:06.569423 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:47:06.571489 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:47:06.571569 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:47:06.581115 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:47:06.589088 systemd[1]: Switching root. Jun 25 18:47:06.633792 systemd-journald[191]: Journal stopped Jun 25 18:47:07.989632 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). Jun 25 18:47:07.989705 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:47:07.989728 kernel: SELinux: policy capability open_perms=1 Jun 25 18:47:07.989750 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:47:07.989771 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:47:07.989786 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:47:07.989801 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:47:07.989816 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:47:07.989844 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:47:07.989860 kernel: audit: type=1403 audit(1719341227.244:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:47:07.989886 systemd[1]: Successfully loaded SELinux policy in 41.611ms. Jun 25 18:47:07.989908 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.853ms. 
Jun 25 18:47:07.989926 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:47:07.989943 systemd[1]: Detected virtualization kvm. Jun 25 18:47:07.989959 systemd[1]: Detected architecture x86-64. Jun 25 18:47:07.989975 systemd[1]: Detected first boot. Jun 25 18:47:07.989992 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:47:07.990012 zram_generator::config[1047]: No configuration found. Jun 25 18:47:07.990030 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:47:07.990050 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 18:47:07.990066 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 18:47:07.990081 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 18:47:07.990098 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:47:07.990114 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:47:07.990131 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:47:07.990147 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:47:07.990163 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:47:07.990181 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:47:07.990200 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:47:07.990223 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:47:07.990240 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:47:07.990259 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:47:07.990275 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:47:07.990292 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:47:07.990308 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 18:47:07.990331 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:47:07.990351 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 25 18:47:07.990371 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:47:07.990387 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 18:47:07.990404 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 18:47:07.990421 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 18:47:07.990438 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:47:07.990455 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:47:07.990471 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:47:07.990492 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:47:07.990508 systemd[1]: Reached target swap.target - Swaps. 
Jun 25 18:47:07.990536 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:47:07.990553 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:47:07.990570 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:47:07.990587 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:47:07.990604 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:47:07.990621 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:47:07.990640 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:47:07.990657 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:47:07.990678 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:47:07.990696 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:47:07.990713 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:47:07.990730 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:47:07.990747 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:47:07.990764 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:47:07.990781 systemd[1]: Reached target machines.target - Containers. Jun 25 18:47:07.990798 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:47:07.990819 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:47:07.990878 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:47:07.990895 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:47:07.990911 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:47:07.990928 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:47:07.990945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:47:07.990962 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 18:47:07.990978 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:47:07.991001 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:47:07.991021 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 18:47:07.991038 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 18:47:07.991054 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 18:47:07.991071 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 18:47:07.991087 kernel: loop: module loaded Jun 25 18:47:07.991104 kernel: fuse: init (API version 7.39) Jun 25 18:47:07.991121 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:47:07.991137 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:47:07.991154 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
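The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units being started above are all instances of one systemd template, modprobe@.service, which substitutes the instance name into a modprobe call. Roughly:

    # Inspect the template; %i in its ExecStart expands to the instance name
    systemctl cat modprobe@.service
    # So starting the instance...
    systemctl start modprobe@fuse.service
    # ...has essentially the same effect as
    modprobe fuse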
Jun 25 18:47:07.991174 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:47:07.991191 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:47:07.991208 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 18:47:07.991245 systemd-journald[1113]: Collecting audit messages is disabled. Jun 25 18:47:07.991275 systemd[1]: Stopped verity-setup.service. Jun 25 18:47:07.991293 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:47:07.991309 systemd-journald[1113]: Journal started Jun 25 18:47:07.991341 systemd-journald[1113]: Runtime Journal (/run/log/journal/13b49c36f6f24e4cb4b9db674a19bdd9) is 6.0M, max 48.4M, 42.3M free. Jun 25 18:47:07.760707 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:47:07.779814 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 18:47:07.780298 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 18:47:07.994149 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:47:07.995227 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:47:07.996611 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:47:08.000247 kernel: ACPI: bus type drm_connector registered Jun 25 18:47:07.998492 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:47:08.000953 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 18:47:08.002168 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:47:08.003479 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:47:08.004716 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:47:08.013668 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:47:08.015401 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:47:08.015581 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:47:08.017268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:47:08.017432 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:47:08.019076 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:47:08.019243 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:47:08.020991 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:47:08.021158 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:47:08.023108 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:47:08.023271 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:47:08.024847 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:47:08.025009 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:47:08.026594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:47:08.028181 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:47:08.029939 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:47:08.043709 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jun 25 18:47:08.057953 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:47:08.060778 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:47:08.062204 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:47:08.062245 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:47:08.064813 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:47:08.067624 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:47:08.070262 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:47:08.071706 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:47:08.074599 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:47:08.078946 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:47:08.080580 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:47:08.081859 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:47:08.083880 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:47:08.086556 systemd-journald[1113]: Time spent on flushing to /var/log/journal/13b49c36f6f24e4cb4b9db674a19bdd9 is 34.030ms for 944 entries. Jun 25 18:47:08.086556 systemd-journald[1113]: System Journal (/var/log/journal/13b49c36f6f24e4cb4b9db674a19bdd9) is 8.0M, max 195.6M, 187.6M free. Jun 25 18:47:08.126550 systemd-journald[1113]: Received client request to flush runtime journal. Jun 25 18:47:08.088960 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:47:08.094035 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:47:08.099268 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:47:08.102612 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 18:47:08.104315 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:47:08.113100 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:47:08.115280 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:47:08.117153 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:47:08.127024 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:47:08.129946 kernel: loop0: detected capacity change from 0 to 139760 Jun 25 18:47:08.131844 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:47:08.137157 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:47:08.142909 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:47:08.145128 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:47:08.147692 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
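systemd-journal-flush.service, started above, asks journald to move the early-boot runtime journal out of /run into persistent storage, which is why journald reports both a runtime journal and a system journal with separate size limits. The same handoff can be inspected or triggered manually:

    ls /run/log/journal /var/log/journal   # runtime vs. persistent journal locations
    journalctl --disk-usage                # combined size of all journal files
    journalctl --flush                     # what the flush service requests of journald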
Jun 25 18:47:08.160637 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:47:08.161440 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:47:08.165942 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 18:47:08.167928 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:47:08.170689 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:47:08.178032 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:47:08.191888 kernel: loop1: detected capacity change from 0 to 209816 Jun 25 18:47:08.205019 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Jun 25 18:47:08.205041 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Jun 25 18:47:08.212995 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:47:08.218860 kernel: loop2: detected capacity change from 0 to 80568 Jun 25 18:47:08.260871 kernel: loop3: detected capacity change from 0 to 139760 Jun 25 18:47:08.276871 kernel: loop4: detected capacity change from 0 to 209816 Jun 25 18:47:08.284847 kernel: loop5: detected capacity change from 0 to 80568 Jun 25 18:47:08.291268 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 25 18:47:08.291905 (sd-merge)[1185]: Merged extensions into '/usr'. Jun 25 18:47:08.298021 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:47:08.298037 systemd[1]: Reloading... Jun 25 18:47:08.368856 zram_generator::config[1209]: No configuration found. Jun 25 18:47:08.408693 ldconfig[1155]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:47:08.499180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:47:08.555078 systemd[1]: Reloading finished in 256 ms. Jun 25 18:47:08.595660 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:47:08.597197 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:47:08.611108 systemd[1]: Starting ensure-sysext.service... Jun 25 18:47:08.613518 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:47:08.621790 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:47:08.621805 systemd[1]: Reloading... Jun 25 18:47:08.641035 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:47:08.642017 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:47:08.643439 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:47:08.644046 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jun 25 18:47:08.644209 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jun 25 18:47:08.648987 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. 
Jun 25 18:47:08.649091 systemd-tmpfiles[1247]: Skipping /boot Jun 25 18:47:08.661188 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:47:08.661203 systemd-tmpfiles[1247]: Skipping /boot Jun 25 18:47:08.684851 zram_generator::config[1274]: No configuration found. Jun 25 18:47:08.798185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:47:08.847990 systemd[1]: Reloading finished in 225 ms. Jun 25 18:47:08.868057 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:47:08.884244 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:47:08.892904 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:47:08.895450 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:47:08.897876 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 18:47:08.901143 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:47:08.905771 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:47:08.909069 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:47:08.914126 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:47:08.914292 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:47:08.915710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:47:08.921133 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:47:08.923695 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:47:08.926740 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:47:08.928890 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:47:08.930012 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:47:08.930990 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:47:08.931596 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:47:08.934888 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:47:08.935059 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:47:08.938288 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 18:47:08.940332 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:47:08.940695 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:47:08.946946 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:47:08.946995 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Jun 25 18:47:08.954423 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
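The (sd-merge) lines above show systemd-sysext overlaying the extension images 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' onto /usr, which is what makes the subsequent daemon reload pick up units such as docker.socket. The extension written by Ignition earlier is visible to the same machinery:

    ls -l /etc/extensions        # e.g. the kubernetes.raw symlink written during the files stage
    systemd-sysext status        # which extensions are currently merged, and where
    systemd-sysext refresh       # re-merge after adding or removing an extension image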
Jun 25 18:47:08.954648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:47:08.964128 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:47:08.970913 augenrules[1341]: No rules Jun 25 18:47:08.967280 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:47:08.969744 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:47:08.975152 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:47:08.976714 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:47:08.982181 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:47:08.983917 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:47:08.984795 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:47:08.987436 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:47:08.994053 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:47:08.996427 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:47:08.998331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:47:08.998542 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:47:09.000248 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:47:09.000425 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:47:09.002050 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:47:09.002224 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:47:09.004309 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:47:09.004523 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:47:09.024099 systemd[1]: Finished ensure-sysext.service. Jun 25 18:47:09.026872 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1364) Jun 25 18:47:09.035885 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1368) Jun 25 18:47:09.054117 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:47:09.055934 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:47:09.056007 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:47:09.059047 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 18:47:09.060970 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:47:09.061399 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:47:09.066953 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jun 25 18:47:09.078418 systemd-resolved[1314]: Positive Trust Anchors: Jun 25 18:47:09.078436 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:47:09.078468 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:47:09.083891 systemd-resolved[1314]: Defaulting to hostname 'linux'. Jun 25 18:47:09.085631 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:47:09.087081 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:47:09.090626 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:47:09.099996 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 18:47:09.120045 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jun 25 18:47:09.120241 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:47:09.142872 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 25 18:47:09.150857 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 25 18:47:09.157945 kernel: ACPI: button: Power Button [PWRF] Jun 25 18:47:09.159130 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:09.159420 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 18:47:09.167040 systemd-networkd[1382]: lo: Link UP Jun 25 18:47:09.167220 systemd-networkd[1382]: lo: Gained carrier Jun 25 18:47:09.169025 systemd-networkd[1382]: Enumeration completed Jun 25 18:47:09.174917 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:47:09.176650 systemd[1]: Reached target network.target - Network. Jun 25 18:47:09.178319 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:47:09.178327 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:47:09.180507 systemd-networkd[1382]: eth0: Link UP Jun 25 18:47:09.180557 systemd-networkd[1382]: eth0: Gained carrier Jun 25 18:47:09.180614 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:47:09.186039 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:47:09.188264 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 18:47:09.190430 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:47:09.192881 systemd-networkd[1382]: eth0: DHCPv4 address 10.0.0.161/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 18:47:09.194235 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Jun 25 18:47:09.738918 systemd-resolved[1314]: Clock change detected. 
Flushing caches. Jun 25 18:47:09.739009 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 25 18:47:09.739059 systemd-timesyncd[1385]: Initial clock synchronization to Tue 2024-06-25 18:47:09.738889 UTC. Jun 25 18:47:09.816408 kernel: kvm_amd: TSC scaling supported Jun 25 18:47:09.816618 kernel: kvm_amd: Nested Virtualization enabled Jun 25 18:47:09.816660 kernel: kvm_amd: Nested Paging enabled Jun 25 18:47:09.816695 kernel: kvm_amd: LBR virtualization supported Jun 25 18:47:09.816729 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jun 25 18:47:09.816802 kernel: kvm_amd: Virtual GIF supported Jun 25 18:47:09.835015 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:09.840394 kernel: EDAC MC: Ver: 3.0.0 Jun 25 18:47:09.875726 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:47:09.891790 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:47:09.900845 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:47:09.930242 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:47:09.932088 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:47:09.933346 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:47:09.934771 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:47:09.936211 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:47:09.937978 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:47:09.939240 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:47:09.940545 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:47:09.941851 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:47:09.941885 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:47:09.942950 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:47:09.944967 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:47:09.947925 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:47:09.957763 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:47:09.960384 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:47:09.962035 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:47:09.963263 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:47:09.964261 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:47:09.965266 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:47:09.965294 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:47:09.966424 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:47:09.969169 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:47:09.971957 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
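Earlier in this boot, eth0 was matched by zz-default.network and acquired 10.0.0.161/16 over DHCPv4, after which timesyncd contacted the gateway's NTP server and stepped the clock (the jump in journal timestamps that made systemd-resolved flush its caches). A hedged sketch of a catch-all .network file of that shape, plus the usual inspection commands; the stock Flatcar file may differ in detail:

    # /usr/lib/systemd/network/zz-default.network (approximate shape)
    # [Match]
    # Name=*
    #
    # [Network]
    # DHCP=yes
    networkctl status eth0          # shows the DHCPv4 address and carrier state
    timedatectl timesync-status     # shows the NTP server timesyncd is using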
Jun 25 18:47:09.974520 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:47:09.978550 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:47:09.979909 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:47:09.982391 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:47:09.985749 jq[1416]: false Jun 25 18:47:09.988628 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:47:09.994599 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:47:10.007502 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:47:10.009913 extend-filesystems[1417]: Found loop3 Jun 25 18:47:10.009913 extend-filesystems[1417]: Found loop4 Jun 25 18:47:10.012099 extend-filesystems[1417]: Found loop5 Jun 25 18:47:10.012099 extend-filesystems[1417]: Found sr0 Jun 25 18:47:10.012099 extend-filesystems[1417]: Found vda Jun 25 18:47:10.012099 extend-filesystems[1417]: Found vda1 Jun 25 18:47:10.012099 extend-filesystems[1417]: Found vda2 Jun 25 18:47:10.012099 extend-filesystems[1417]: Found vda3 Jun 25 18:47:10.012099 extend-filesystems[1417]: Found usr Jun 25 18:47:10.012099 extend-filesystems[1417]: Found vda4 Jun 25 18:47:10.012099 extend-filesystems[1417]: Found vda6 Jun 25 18:47:10.012099 extend-filesystems[1417]: Found vda7 Jun 25 18:47:10.012099 extend-filesystems[1417]: Found vda9 Jun 25 18:47:10.012099 extend-filesystems[1417]: Checking size of /dev/vda9 Jun 25 18:47:10.015804 dbus-daemon[1415]: [system] SELinux support is enabled Jun 25 18:47:10.041833 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1379) Jun 25 18:47:10.041863 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 25 18:47:10.041882 extend-filesystems[1417]: Resized partition /dev/vda9 Jun 25 18:47:10.013587 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 18:47:10.044518 extend-filesystems[1437]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 18:47:10.015924 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:47:10.016583 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 18:47:10.017722 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:47:10.021920 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:47:10.023933 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:47:10.034930 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:47:10.045975 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:47:10.046904 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:47:10.047873 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:47:10.048404 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:47:10.053336 jq[1435]: true Jun 25 18:47:10.062119 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
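extend-filesystems, above, walks the block devices (loop3 through vda9) and then grows the root filesystem in place; the EXT4-fs and resize2fs messages around this point record the online grow of /dev/vda9 from 553472 to 1864699 4k blocks. The manual equivalent, assuming the partition already spans the free space:

    lsblk /dev/vda        # the vda1..vda9 layout enumerated above
    resize2fs /dev/vda9   # online-grow the mounted ext4 root filesystem
    df -h /               # confirm the new size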
Jun 25 18:47:10.062444 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 18:47:10.087012 update_engine[1431]: I0625 18:47:10.086907 1431 main.cc:92] Flatcar Update Engine starting Jun 25 18:47:10.087879 jq[1442]: true Jun 25 18:47:10.090983 update_engine[1431]: I0625 18:47:10.090934 1431 update_check_scheduler.cc:74] Next update check in 10m36s Jun 25 18:47:10.095405 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 25 18:47:10.126711 tar[1441]: linux-amd64/helm Jun 25 18:47:10.099494 (ntainerd)[1445]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:47:10.135086 extend-filesystems[1437]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 18:47:10.135086 extend-filesystems[1437]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 18:47:10.135086 extend-filesystems[1437]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 25 18:47:10.115739 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:47:10.142765 extend-filesystems[1417]: Resized filesystem in /dev/vda9 Jun 25 18:47:10.118312 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:47:10.120818 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:47:10.120845 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 18:47:10.122488 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:47:10.122507 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:47:10.130405 systemd-logind[1429]: Watching system buttons on /dev/input/event2 (Power Button) Jun 25 18:47:10.130432 systemd-logind[1429]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 18:47:10.132123 systemd-logind[1429]: New seat seat0. Jun 25 18:47:10.133586 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:47:10.138338 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:47:10.141357 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:47:10.143562 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:47:10.181032 bash[1471]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:47:10.181874 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:47:10.185981 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 18:47:10.186894 locksmithd[1463]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:47:10.327382 sshd_keygen[1438]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:47:10.329504 containerd[1445]: time="2024-06-25T18:47:10.329424030Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:47:10.351555 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 18:47:10.353240 containerd[1445]: time="2024-06-25T18:47:10.353206462Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jun 25 18:47:10.353465 containerd[1445]: time="2024-06-25T18:47:10.353301029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:47:10.355008 containerd[1445]: time="2024-06-25T18:47:10.354957495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:47:10.355008 containerd[1445]: time="2024-06-25T18:47:10.355003411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:47:10.355701 containerd[1445]: time="2024-06-25T18:47:10.355263238Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:47:10.355701 containerd[1445]: time="2024-06-25T18:47:10.355283666Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:47:10.355701 containerd[1445]: time="2024-06-25T18:47:10.355391639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:47:10.355701 containerd[1445]: time="2024-06-25T18:47:10.355454607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:47:10.355701 containerd[1445]: time="2024-06-25T18:47:10.355466078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:47:10.355701 containerd[1445]: time="2024-06-25T18:47:10.355548152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:47:10.355836 containerd[1445]: time="2024-06-25T18:47:10.355771832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:47:10.355836 containerd[1445]: time="2024-06-25T18:47:10.355789144Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:47:10.355836 containerd[1445]: time="2024-06-25T18:47:10.355799644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:47:10.355936 containerd[1445]: time="2024-06-25T18:47:10.355915982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:47:10.355966 containerd[1445]: time="2024-06-25T18:47:10.355934647Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jun 25 18:47:10.356009 containerd[1445]: time="2024-06-25T18:47:10.355992736Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:47:10.356009 containerd[1445]: time="2024-06-25T18:47:10.356006532Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:47:10.359611 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:47:10.361716 systemd[1]: Started sshd@0-10.0.0.161:22-10.0.0.1:44290.service - OpenSSH per-connection server daemon (10.0.0.1:44290). Jun 25 18:47:10.366425 containerd[1445]: time="2024-06-25T18:47:10.365654892Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:47:10.366425 containerd[1445]: time="2024-06-25T18:47:10.365694737Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:47:10.366425 containerd[1445]: time="2024-06-25T18:47:10.365708082Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:47:10.366425 containerd[1445]: time="2024-06-25T18:47:10.365744911Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:47:10.366425 containerd[1445]: time="2024-06-25T18:47:10.365759499Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:47:10.366425 containerd[1445]: time="2024-06-25T18:47:10.365770559Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:47:10.366425 containerd[1445]: time="2024-06-25T18:47:10.365781710Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:47:10.366425 containerd[1445]: time="2024-06-25T18:47:10.365920921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:47:10.366425 containerd[1445]: time="2024-06-25T18:47:10.365936110Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:47:10.366425 containerd[1445]: time="2024-06-25T18:47:10.365948112Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:47:10.366425 containerd[1445]: time="2024-06-25T18:47:10.365964072Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:47:10.366425 containerd[1445]: time="2024-06-25T18:47:10.365979241Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:47:10.366425 containerd[1445]: time="2024-06-25T18:47:10.365997555Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:47:10.366425 containerd[1445]: time="2024-06-25T18:47:10.366010619Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 18:47:10.366706 containerd[1445]: time="2024-06-25T18:47:10.366033332Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:47:10.366706 containerd[1445]: time="2024-06-25T18:47:10.366047078Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jun 25 18:47:10.366706 containerd[1445]: time="2024-06-25T18:47:10.366059932Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:47:10.366706 containerd[1445]: time="2024-06-25T18:47:10.366072035Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:47:10.366706 containerd[1445]: time="2024-06-25T18:47:10.366083296Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:47:10.366706 containerd[1445]: time="2024-06-25T18:47:10.366188623Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:47:10.366691 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 18:47:10.366931 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:47:10.367026 containerd[1445]: time="2024-06-25T18:47:10.366999614Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:47:10.367094 containerd[1445]: time="2024-06-25T18:47:10.367080686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.367144 containerd[1445]: time="2024-06-25T18:47:10.367131752Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:47:10.367202 containerd[1445]: time="2024-06-25T18:47:10.367190782Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:47:10.367313 containerd[1445]: time="2024-06-25T18:47:10.367300207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.367363 containerd[1445]: time="2024-06-25T18:47:10.367352255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.367424 containerd[1445]: time="2024-06-25T18:47:10.367412167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.367469 containerd[1445]: time="2024-06-25T18:47:10.367458885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.367516 containerd[1445]: time="2024-06-25T18:47:10.367504801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.367562 containerd[1445]: time="2024-06-25T18:47:10.367551709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.367615 containerd[1445]: time="2024-06-25T18:47:10.367604288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.367662 containerd[1445]: time="2024-06-25T18:47:10.367651296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.367708 containerd[1445]: time="2024-06-25T18:47:10.367698073Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:47:10.368025 containerd[1445]: time="2024-06-25T18:47:10.367999539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jun 25 18:47:10.368080 containerd[1445]: time="2024-06-25T18:47:10.368068869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.368125 containerd[1445]: time="2024-06-25T18:47:10.368114474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.368169 containerd[1445]: time="2024-06-25T18:47:10.368159088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.368231 containerd[1445]: time="2024-06-25T18:47:10.368218710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.368278 containerd[1445]: time="2024-06-25T18:47:10.368268373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.368323 containerd[1445]: time="2024-06-25T18:47:10.368312846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.368367 containerd[1445]: time="2024-06-25T18:47:10.368355957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 18:47:10.368691 containerd[1445]: time="2024-06-25T18:47:10.368641092Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] 
ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:47:10.368860 containerd[1445]: time="2024-06-25T18:47:10.368846667Z" level=info msg="Connect containerd service" Jun 25 18:47:10.368920 containerd[1445]: time="2024-06-25T18:47:10.368910236Z" level=info msg="using legacy CRI server" Jun 25 18:47:10.368961 containerd[1445]: time="2024-06-25T18:47:10.368950892Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:47:10.369085 containerd[1445]: time="2024-06-25T18:47:10.369071929Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:47:10.369761 containerd[1445]: time="2024-06-25T18:47:10.369741114Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:47:10.369852 containerd[1445]: time="2024-06-25T18:47:10.369839358Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:47:10.369971 containerd[1445]: time="2024-06-25T18:47:10.369955907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:47:10.370025 containerd[1445]: time="2024-06-25T18:47:10.370006842Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:47:10.370073 containerd[1445]: time="2024-06-25T18:47:10.370061234Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:47:10.370858 containerd[1445]: time="2024-06-25T18:47:10.369928946Z" level=info msg="Start subscribing containerd event" Jun 25 18:47:10.370858 containerd[1445]: time="2024-06-25T18:47:10.370328966Z" level=info msg="Start recovering state" Jun 25 18:47:10.370957 containerd[1445]: time="2024-06-25T18:47:10.370941585Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:47:10.371129 containerd[1445]: time="2024-06-25T18:47:10.371106194Z" level=info msg="Start event monitor" Jun 25 18:47:10.371170 containerd[1445]: time="2024-06-25T18:47:10.371136420Z" level=info msg="Start snapshots syncer" Jun 25 18:47:10.371170 containerd[1445]: time="2024-06-25T18:47:10.371147241Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:47:10.371170 containerd[1445]: time="2024-06-25T18:47:10.371154644Z" level=info msg="Start streaming server" Jun 25 18:47:10.374385 containerd[1445]: time="2024-06-25T18:47:10.371308763Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:47:10.374385 containerd[1445]: time="2024-06-25T18:47:10.371389916Z" level=info msg="containerd successfully booted in 0.043105s" Jun 25 18:47:10.374438 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:47:10.375726 systemd[1]: Started containerd.service - containerd container runtime. 
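The CRI config dump above amounts to a containerd configuration in prose: runc as the default runtime with SystemdCgroup:true, the overlayfs snapshotter, and registry.k8s.io/pause:3.8 as the sandbox image. A minimal sketch of the equivalent config.toml fragment, assuming containerd 1.7's version-2 schema (Flatcar ships its own default file, so the path and any surrounding settings are assumptions, not what this host actually has on disk):

    # Illustrative only: the config.toml fragment implied by the CRI dump above.
    cat <<'EOF'
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
    EOF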
Jun 25 18:47:10.388928 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:47:10.397715 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:47:10.399849 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 18:47:10.401125 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:47:10.428434 sshd[1496]: Accepted publickey for core from 10.0.0.1 port 44290 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:47:10.430193 sshd[1496]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:10.438424 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:47:10.452629 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:47:10.456586 systemd-logind[1429]: New session 1 of user core. Jun 25 18:47:10.466723 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:47:10.487672 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:47:10.492821 (systemd)[1508]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:10.545350 tar[1441]: linux-amd64/LICENSE Jun 25 18:47:10.545350 tar[1441]: linux-amd64/README.md Jun 25 18:47:10.571300 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:47:10.610295 systemd[1508]: Queued start job for default target default.target. Jun 25 18:47:10.625837 systemd[1508]: Created slice app.slice - User Application Slice. Jun 25 18:47:10.625871 systemd[1508]: Reached target paths.target - Paths. Jun 25 18:47:10.625889 systemd[1508]: Reached target timers.target - Timers. Jun 25 18:47:10.627599 systemd[1508]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:47:10.640044 systemd[1508]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:47:10.640175 systemd[1508]: Reached target sockets.target - Sockets. Jun 25 18:47:10.640195 systemd[1508]: Reached target basic.target - Basic System. Jun 25 18:47:10.640231 systemd[1508]: Reached target default.target - Main User Target. Jun 25 18:47:10.640266 systemd[1508]: Startup finished in 139ms. Jun 25 18:47:10.640820 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:47:10.643455 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:47:10.706969 systemd[1]: Started sshd@1-10.0.0.161:22-10.0.0.1:44306.service - OpenSSH per-connection server daemon (10.0.0.1:44306). Jun 25 18:47:10.749600 sshd[1522]: Accepted publickey for core from 10.0.0.1 port 44306 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:47:10.751254 sshd[1522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:10.755364 systemd-logind[1429]: New session 2 of user core. Jun 25 18:47:10.764613 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:47:10.820595 sshd[1522]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:10.831969 systemd[1]: sshd@1-10.0.0.161:22-10.0.0.1:44306.service: Deactivated successfully. Jun 25 18:47:10.834059 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 18:47:10.835828 systemd-logind[1429]: Session 2 logged out. Waiting for processes to exit. Jun 25 18:47:10.852760 systemd[1]: Started sshd@2-10.0.0.161:22-10.0.0.1:44320.service - OpenSSH per-connection server daemon (10.0.0.1:44320). 
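The run above is systemd's standard per-user login plumbing for the first SSH session: user-500.slice, user-runtime-dir@500 (which creates /run/user/500), the user@500.service manager, and finally session-1.scope. A hedged way to inspect the same stack from a shell, with unit and slice names taken from the log:

    loginctl list-sessions                     # session 1 for user core
    systemctl status user@500.service          # the per-user manager started above
    systemd-cgls /user.slice/user-500.slice    # slice -> service/scope layout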
Jun 25 18:47:10.855478 systemd-logind[1429]: Removed session 2. Jun 25 18:47:10.885271 sshd[1529]: Accepted publickey for core from 10.0.0.1 port 44320 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:47:10.886743 sshd[1529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:10.891474 systemd-logind[1429]: New session 3 of user core. Jun 25 18:47:10.901560 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:47:10.959074 sshd[1529]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:10.962955 systemd[1]: sshd@2-10.0.0.161:22-10.0.0.1:44320.service: Deactivated successfully. Jun 25 18:47:10.964703 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 18:47:10.965288 systemd-logind[1429]: Session 3 logged out. Waiting for processes to exit. Jun 25 18:47:10.966269 systemd-logind[1429]: Removed session 3. Jun 25 18:47:11.150597 systemd-networkd[1382]: eth0: Gained IPv6LL Jun 25 18:47:11.154503 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:47:11.156313 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 18:47:11.173623 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 25 18:47:11.176020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:47:11.178098 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:47:11.197778 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 18:47:11.198035 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 25 18:47:11.199892 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 18:47:11.202023 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:47:11.824753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:47:11.826651 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:47:11.828571 systemd[1]: Startup finished in 838ms (kernel) + 5.543s (initrd) + 4.081s (userspace) = 10.463s. Jun 25 18:47:11.852855 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:47:12.367283 kubelet[1557]: E0625 18:47:12.367147 1557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:47:12.371721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:47:12.371933 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:47:12.372276 systemd[1]: kubelet.service: Consumed 1.018s CPU time. Jun 25 18:47:20.968837 systemd[1]: Started sshd@3-10.0.0.161:22-10.0.0.1:58608.service - OpenSSH per-connection server daemon (10.0.0.1:58608). Jun 25 18:47:21.004439 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 58608 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:47:21.005832 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:21.009287 systemd-logind[1429]: New session 4 of user core. 
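The kubelet failure above (exit status 1 at 18:47:12) is the usual pre-initialization state: /var/lib/kubelet/config.yaml does not exist until the node is set up, so the unit fails and systemd schedules restarts. Assuming kubeadm is the provisioning tool here (the log does not say), the check and the usual remedy look like:

    systemctl status kubelet.service
    ls -l /var/lib/kubelet/config.yaml    # absent until the node is initialized
    # kubeadm writes this file during either of:
    #   kubeadm init ...    (control-plane node)
    #   kubeadm join ...    (worker node)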
Jun 25 18:47:21.024530 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:47:21.077548 sshd[1572]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:21.091842 systemd[1]: sshd@3-10.0.0.161:22-10.0.0.1:58608.service: Deactivated successfully. Jun 25 18:47:21.093393 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:47:21.094652 systemd-logind[1429]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:47:21.095804 systemd[1]: Started sshd@4-10.0.0.161:22-10.0.0.1:58622.service - OpenSSH per-connection server daemon (10.0.0.1:58622). Jun 25 18:47:21.096562 systemd-logind[1429]: Removed session 4. Jun 25 18:47:21.131628 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 58622 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:47:21.133312 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:21.137278 systemd-logind[1429]: New session 5 of user core. Jun 25 18:47:21.148493 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:47:21.197007 sshd[1579]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:21.211031 systemd[1]: sshd@4-10.0.0.161:22-10.0.0.1:58622.service: Deactivated successfully. Jun 25 18:47:21.212650 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:47:21.214115 systemd-logind[1429]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:47:21.215349 systemd[1]: Started sshd@5-10.0.0.161:22-10.0.0.1:58630.service - OpenSSH per-connection server daemon (10.0.0.1:58630). Jun 25 18:47:21.216141 systemd-logind[1429]: Removed session 5. Jun 25 18:47:21.251073 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 58630 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:47:21.252335 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:21.255979 systemd-logind[1429]: New session 6 of user core. Jun 25 18:47:21.263488 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:47:21.316708 sshd[1586]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:21.336103 systemd[1]: sshd@5-10.0.0.161:22-10.0.0.1:58630.service: Deactivated successfully. Jun 25 18:47:21.337632 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:47:21.339037 systemd-logind[1429]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:47:21.340258 systemd[1]: Started sshd@6-10.0.0.161:22-10.0.0.1:58638.service - OpenSSH per-connection server daemon (10.0.0.1:58638). Jun 25 18:47:21.341088 systemd-logind[1429]: Removed session 6. Jun 25 18:47:21.376005 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 58638 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:47:21.377539 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:21.381273 systemd-logind[1429]: New session 7 of user core. Jun 25 18:47:21.391545 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jun 25 18:47:21.450072 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:47:21.450347 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:47:21.465025 sudo[1596]: pam_unix(sudo:session): session closed for user root Jun 25 18:47:21.466627 sshd[1593]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:21.480150 systemd[1]: sshd@6-10.0.0.161:22-10.0.0.1:58638.service: Deactivated successfully. Jun 25 18:47:21.481790 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:47:21.483324 systemd-logind[1429]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:47:21.484658 systemd[1]: Started sshd@7-10.0.0.161:22-10.0.0.1:58654.service - OpenSSH per-connection server daemon (10.0.0.1:58654). Jun 25 18:47:21.485450 systemd-logind[1429]: Removed session 7. Jun 25 18:47:21.536504 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 58654 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:47:21.538004 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:21.541599 systemd-logind[1429]: New session 8 of user core. Jun 25 18:47:21.552486 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:47:21.605411 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:47:21.605690 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:47:21.608758 sudo[1605]: pam_unix(sudo:session): session closed for user root Jun 25 18:47:21.614056 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:47:21.614324 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:47:21.638635 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:47:21.640267 auditctl[1608]: No rules Jun 25 18:47:21.641515 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:47:21.641816 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:47:21.643483 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:47:21.671039 augenrules[1626]: No rules Jun 25 18:47:21.672738 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:47:21.673883 sudo[1604]: pam_unix(sudo:session): session closed for user root Jun 25 18:47:21.675454 sshd[1601]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:21.690235 systemd[1]: sshd@7-10.0.0.161:22-10.0.0.1:58654.service: Deactivated successfully. Jun 25 18:47:21.691799 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:47:21.693262 systemd-logind[1429]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:47:21.694549 systemd[1]: Started sshd@8-10.0.0.161:22-10.0.0.1:58666.service - OpenSSH per-connection server daemon (10.0.0.1:58666). Jun 25 18:47:21.695277 systemd-logind[1429]: Removed session 8. Jun 25 18:47:21.734588 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 58666 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:47:21.736070 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:21.739641 systemd-logind[1429]: New session 9 of user core. Jun 25 18:47:21.755487 systemd[1]: Started session-9.scope - Session 9 of User core. 
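Sessions 7 and 8 above run three sudo commands that put SELinux into enforcing mode, delete the shipped audit rule files, and reload the audit-rules service, after which both auditctl and augenrules report "No rules". The same sequence as plain commands, paths exactly as logged:

    setenforce 1
    rm -rf /etc/audit/rules.d/80-selinux.rules \
           /etc/audit/rules.d/99-default.rules
    systemctl restart audit-rules    # re-runs augenrules; log shows "No rules"
    auditctl -l                      # confirm the now-empty rule set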
Jun 25 18:47:21.807008 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:47:21.807287 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:47:21.907571 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:47:21.907708 (dockerd)[1647]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:47:22.142739 dockerd[1647]: time="2024-06-25T18:47:22.142587758Z" level=info msg="Starting up" Jun 25 18:47:22.175422 systemd[1]: var-lib-docker-metacopy\x2dcheck791267209-merged.mount: Deactivated successfully. Jun 25 18:47:22.201388 dockerd[1647]: time="2024-06-25T18:47:22.201306456Z" level=info msg="Loading containers: start." Jun 25 18:47:22.325413 kernel: Initializing XFRM netlink socket Jun 25 18:47:22.412902 systemd-networkd[1382]: docker0: Link UP Jun 25 18:47:22.425646 dockerd[1647]: time="2024-06-25T18:47:22.425605236Z" level=info msg="Loading containers: done." Jun 25 18:47:22.471829 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1250486307-merged.mount: Deactivated successfully. Jun 25 18:47:22.472757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:47:22.475306 dockerd[1647]: time="2024-06-25T18:47:22.475272512Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:47:22.475477 dockerd[1647]: time="2024-06-25T18:47:22.475456517Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:47:22.475582 dockerd[1647]: time="2024-06-25T18:47:22.475564770Z" level=info msg="Daemon has completed initialization" Jun 25 18:47:22.483527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:47:22.507782 dockerd[1647]: time="2024-06-25T18:47:22.507714812Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:47:22.507960 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:47:22.627812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:47:22.632420 (kubelet)[1787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:47:22.679955 kubelet[1787]: E0625 18:47:22.679825 1787 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:47:22.687327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:47:22.687596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:47:23.135525 containerd[1445]: time="2024-06-25T18:47:23.135403247Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 18:47:24.020160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1426694594.mount: Deactivated successfully. 
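dockerd above settles on the overlay2 graph driver and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled. Hedged checks (whether and where the running kernel exposes its config varies by image):

    docker info --format '{{.Driver}}'    # expect: overlay2
    zgrep CONFIG_OVERLAY_FS_REDIRECT_DIR /proc/config.gz 2>/dev/null \
      || grep CONFIG_OVERLAY_FS_REDIRECT_DIR "/boot/config-$(uname -r)"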
Jun 25 18:47:25.043561 containerd[1445]: time="2024-06-25T18:47:25.043470673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:25.044262 containerd[1445]: time="2024-06-25T18:47:25.044205080Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jun 25 18:47:25.045559 containerd[1445]: time="2024-06-25T18:47:25.045526087Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:25.048268 containerd[1445]: time="2024-06-25T18:47:25.048235998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:25.049550 containerd[1445]: time="2024-06-25T18:47:25.049519996Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 1.91408032s" Jun 25 18:47:25.049550 containerd[1445]: time="2024-06-25T18:47:25.049548609Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 18:47:25.071595 containerd[1445]: time="2024-06-25T18:47:25.071508123Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 18:47:26.848550 containerd[1445]: time="2024-06-25T18:47:26.848479835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:26.849252 containerd[1445]: time="2024-06-25T18:47:26.849213240Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jun 25 18:47:26.850456 containerd[1445]: time="2024-06-25T18:47:26.850423549Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:26.853410 containerd[1445]: time="2024-06-25T18:47:26.853347442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:26.854475 containerd[1445]: time="2024-06-25T18:47:26.854421225Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 1.782863949s" Jun 25 18:47:26.854528 containerd[1445]: time="2024-06-25T18:47:26.854476338Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 
18:47:26.877849 containerd[1445]: time="2024-06-25T18:47:26.877796273Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 18:47:27.759246 containerd[1445]: time="2024-06-25T18:47:27.759188824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:27.759997 containerd[1445]: time="2024-06-25T18:47:27.759957325Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jun 25 18:47:27.761464 containerd[1445]: time="2024-06-25T18:47:27.761428654Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:27.764148 containerd[1445]: time="2024-06-25T18:47:27.764119800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:27.765396 containerd[1445]: time="2024-06-25T18:47:27.765355036Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 887.52036ms" Jun 25 18:47:27.765461 containerd[1445]: time="2024-06-25T18:47:27.765400811Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 18:47:27.788010 containerd[1445]: time="2024-06-25T18:47:27.787958687Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 18:47:28.785092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1253309250.mount: Deactivated successfully. 
Jun 25 18:47:29.412736 containerd[1445]: time="2024-06-25T18:47:29.412664961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:29.452561 containerd[1445]: time="2024-06-25T18:47:29.452483591Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jun 25 18:47:29.454198 containerd[1445]: time="2024-06-25T18:47:29.454162619Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:29.458313 containerd[1445]: time="2024-06-25T18:47:29.458280110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:29.458930 containerd[1445]: time="2024-06-25T18:47:29.458903819Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 1.670909014s" Jun 25 18:47:29.458969 containerd[1445]: time="2024-06-25T18:47:29.458932884Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 18:47:29.482070 containerd[1445]: time="2024-06-25T18:47:29.482029951Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:47:30.188004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1741541519.mount: Deactivated successfully. 
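Each PullImage/ImageCreate pair above is the containerd CRI plugin fetching a control-plane image into its k8s.io namespace. Two hedged ways to watch the same images land from a shell (ctr ships with containerd; crictl being installed on this image is an assumption):

    ctr -n k8s.io images ls | grep -E 'kube-|pause'
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images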
Jun 25 18:47:30.195165 containerd[1445]: time="2024-06-25T18:47:30.195107982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:30.195972 containerd[1445]: time="2024-06-25T18:47:30.195908042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 18:47:30.197270 containerd[1445]: time="2024-06-25T18:47:30.197223007Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:30.199514 containerd[1445]: time="2024-06-25T18:47:30.199467746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:30.200462 containerd[1445]: time="2024-06-25T18:47:30.200400846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 718.329518ms" Jun 25 18:47:30.200462 containerd[1445]: time="2024-06-25T18:47:30.200452473Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 18:47:30.226937 containerd[1445]: time="2024-06-25T18:47:30.226899110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 18:47:30.813552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781014120.mount: Deactivated successfully. Jun 25 18:47:32.913321 containerd[1445]: time="2024-06-25T18:47:32.913237153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:32.926620 containerd[1445]: time="2024-06-25T18:47:32.926510581Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 25 18:47:32.934828 containerd[1445]: time="2024-06-25T18:47:32.934776690Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:32.937862 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:47:32.945577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
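"Scheduled restart job, restart counter is at 2" above is systemd's Restart= policy re-launching the still-unconfigured kubelet between image pulls; the bookkeeping is queryable:

    systemctl show kubelet.service -p Restart,RestartUSec,NRestarts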
Jun 25 18:47:32.952709 containerd[1445]: time="2024-06-25T18:47:32.952651315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:32.954324 containerd[1445]: time="2024-06-25T18:47:32.954258027Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.727318291s" Jun 25 18:47:32.954324 containerd[1445]: time="2024-06-25T18:47:32.954319543Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 18:47:32.979062 containerd[1445]: time="2024-06-25T18:47:32.979012231Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 18:47:33.091811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:47:33.096525 (kubelet)[1973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:47:33.523000 kubelet[1973]: E0625 18:47:33.522934 1973 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:47:33.527719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:47:33.527936 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:47:35.170721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount974641053.mount: Deactivated successfully. 
Jun 25 18:47:35.403217 containerd[1445]: time="2024-06-25T18:47:35.403147364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:35.404020 containerd[1445]: time="2024-06-25T18:47:35.403952023Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Jun 25 18:47:35.405237 containerd[1445]: time="2024-06-25T18:47:35.405202728Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:35.407851 containerd[1445]: time="2024-06-25T18:47:35.407791211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:35.408754 containerd[1445]: time="2024-06-25T18:47:35.408707089Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 2.429656705s" Jun 25 18:47:35.408754 containerd[1445]: time="2024-06-25T18:47:35.408747955Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 18:47:38.557936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:47:38.568614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:47:38.587773 systemd[1]: Reloading requested from client PID 2067 ('systemctl') (unit session-9.scope)... Jun 25 18:47:38.587787 systemd[1]: Reloading... Jun 25 18:47:38.677429 zram_generator::config[2104]: No configuration found. Jun 25 18:47:39.225216 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:47:39.308232 systemd[1]: Reloading finished in 720 ms. Jun 25 18:47:39.360927 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 18:47:39.361046 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 18:47:39.361421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:47:39.363149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:47:39.516902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:47:39.522141 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:47:39.568971 kubelet[2153]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:47:39.568971 kubelet[2153]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
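This kubelet start (PID 2153) finally has a config file but still passes deprecated flags: --container-runtime-endpoint and --pod-infra-container-image above, plus --volume-plugin-dir immediately below, and the warnings say the first and last belong in the config file instead. An illustrative KubeletConfiguration fragment for the v1beta1 schema of the kubelet version logged below (the file name and the endpoint value are assumptions; the flexvolume path and systemd cgroup driver match what the kubelet itself logs below):

    cat <<'EOF' > /var/lib/kubelet/config.yaml.example
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    cgroupDriver: systemd
    EOF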
Jun 25 18:47:39.568971 kubelet[2153]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:47:39.569393 kubelet[2153]: I0625 18:47:39.569014 2153 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:47:39.801763 kubelet[2153]: I0625 18:47:39.801661 2153 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:47:39.801763 kubelet[2153]: I0625 18:47:39.801694 2153 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:47:39.801919 kubelet[2153]: I0625 18:47:39.801896 2153 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:47:39.815583 kubelet[2153]: I0625 18:47:39.815535 2153 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:47:39.816432 kubelet[2153]: E0625 18:47:39.816406 2153 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.161:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.161:6443: connect: connection refused Jun 25 18:47:39.827687 kubelet[2153]: I0625 18:47:39.827663 2153 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:47:39.827910 kubelet[2153]: I0625 18:47:39.827887 2153 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:47:39.828073 kubelet[2153]: I0625 18:47:39.828048 2153 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:47:39.828164 kubelet[2153]: I0625 18:47:39.828074 2153 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:47:39.828164 kubelet[2153]: I0625 18:47:39.828088 2153 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 
18:47:39.829084 kubelet[2153]: I0625 18:47:39.829059 2153 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 18:47:39.830978 kubelet[2153]: I0625 18:47:39.830956 2153 kubelet.go:393] "Attempting to sync node with API server"
Jun 25 18:47:39.830978 kubelet[2153]: I0625 18:47:39.830976 2153 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 25 18:47:39.831043 kubelet[2153]: I0625 18:47:39.831005 2153 kubelet.go:309] "Adding apiserver pod source"
Jun 25 18:47:39.831043 kubelet[2153]: I0625 18:47:39.831021 2153 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 25 18:47:39.831998 kubelet[2153]: I0625 18:47:39.831972 2153 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Jun 25 18:47:39.832955 kubelet[2153]: W0625 18:47:39.832841 2153 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.161:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:39.832955 kubelet[2153]: W0625 18:47:39.832866 2153 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:39.832955 kubelet[2153]: E0625 18:47:39.832916 2153 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.161:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:39.832955 kubelet[2153]: E0625 18:47:39.832922 2153 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:39.833522 kubelet[2153]: W0625 18:47:39.833498 2153 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 25 18:47:39.834025 kubelet[2153]: I0625 18:47:39.834008 2153 server.go:1232] "Started kubelet"
Jun 25 18:47:39.834553 kubelet[2153]: I0625 18:47:39.834296 2153 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jun 25 18:47:39.835143 kubelet[2153]: I0625 18:47:39.834601 2153 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 25 18:47:39.835143 kubelet[2153]: I0625 18:47:39.834646 2153 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jun 25 18:47:39.835143 kubelet[2153]: I0625 18:47:39.835018 2153 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 25 18:47:39.835432 kubelet[2153]: I0625 18:47:39.835414 2153 server.go:462] "Adding debug handlers to kubelet server"
Jun 25 18:47:39.836127 kubelet[2153]: E0625 18:47:39.835835 2153 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jun 25 18:47:39.836127 kubelet[2153]: E0625 18:47:39.835867 2153 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 25 18:47:39.836244 kubelet[2153]: E0625 18:47:39.836210 2153 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:47:39.836244 kubelet[2153]: I0625 18:47:39.836234 2153 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jun 25 18:47:39.836325 kubelet[2153]: I0625 18:47:39.836295 2153 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jun 25 18:47:39.836716 kubelet[2153]: I0625 18:47:39.836366 2153 reconciler_new.go:29] "Reconciler: start to sync state"
Jun 25 18:47:39.836716 kubelet[2153]: E0625 18:47:39.836290 2153 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17dc53c43013b26f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 47, 39, 833979503, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 47, 39, 833979503, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.161:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.161:6443: connect: connection refused'(may retry after sleeping)
Jun 25 18:47:39.836716 kubelet[2153]: E0625 18:47:39.836626 2153 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.161:6443: connect: connection refused" interval="200ms"
Jun 25 18:47:39.836716 kubelet[2153]: W0625 18:47:39.836618 2153 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:39.836869 kubelet[2153]: E0625 18:47:39.836654 2153 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:39.851680 kubelet[2153]: I0625 18:47:39.851642 2153 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 25 18:47:39.853159 kubelet[2153]: I0625 18:47:39.853133 2153 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 25 18:47:39.853197 kubelet[2153]: I0625 18:47:39.853164 2153 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 25 18:47:39.853197 kubelet[2153]: I0625 18:47:39.853182 2153 kubelet.go:2303] "Starting kubelet main sync loop"
Jun 25 18:47:39.853248 kubelet[2153]: E0625 18:47:39.853228 2153 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 25 18:47:39.854149 kubelet[2153]: W0625 18:47:39.854013 2153 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:39.854149 kubelet[2153]: E0625 18:47:39.854041 2153 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:39.865843 kubelet[2153]: I0625 18:47:39.865824 2153 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 25 18:47:39.865843 kubelet[2153]: I0625 18:47:39.865837 2153 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jun 25 18:47:39.865968 kubelet[2153]: I0625 18:47:39.865855 2153 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 18:47:39.937748 kubelet[2153]: I0625 18:47:39.937724 2153 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jun 25 18:47:39.938019 kubelet[2153]: E0625 18:47:39.937992 2153 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.161:6443/api/v1/nodes\": dial tcp 10.0.0.161:6443: connect: connection refused" node="localhost"
Jun 25 18:47:39.954132 kubelet[2153]: E0625 18:47:39.954099 2153 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 25 18:47:40.037674 kubelet[2153]: E0625 18:47:40.037633 2153 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.161:6443: connect: connection refused" interval="400ms"
Jun 25 18:47:40.140209 kubelet[2153]: I0625 18:47:40.140072 2153 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jun 25 18:47:40.140390 kubelet[2153]: E0625 18:47:40.140361 2153 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.161:6443/api/v1/nodes\": dial tcp 10.0.0.161:6443: connect: connection refused" node="localhost"
Jun 25 18:47:40.154521 kubelet[2153]: E0625 18:47:40.154485 2153 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 25 18:47:40.265868 kubelet[2153]: I0625 18:47:40.265803 2153 policy_none.go:49] "None policy: Start"
Jun 25 18:47:40.266641 kubelet[2153]: I0625 18:47:40.266612 2153 memory_manager.go:169] "Starting memorymanager" policy="None"
Jun 25 18:47:40.266641 kubelet[2153]: I0625 18:47:40.266649 2153 state_mem.go:35] "Initializing new in-memory state store"
Jun 25 18:47:40.276993 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jun 25 18:47:40.297505 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 25 18:47:40.300248 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 25 18:47:40.311267 kubelet[2153]: I0625 18:47:40.311226 2153 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 25 18:47:40.311996 kubelet[2153]: I0625 18:47:40.311595 2153 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 25 18:47:40.311996 kubelet[2153]: E0625 18:47:40.311962 2153 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jun 25 18:47:40.438694 kubelet[2153]: E0625 18:47:40.438553 2153 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.161:6443: connect: connection refused" interval="800ms"
Jun 25 18:47:40.542165 kubelet[2153]: I0625 18:47:40.542137 2153 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jun 25 18:47:40.542602 kubelet[2153]: E0625 18:47:40.542580 2153 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.161:6443/api/v1/nodes\": dial tcp 10.0.0.161:6443: connect: connection refused" node="localhost"
Jun 25 18:47:40.554675 kubelet[2153]: I0625 18:47:40.554644 2153 topology_manager.go:215] "Topology Admit Handler" podUID="2f0d1f4559fcac733f4f77c6c4aeda66" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jun 25 18:47:40.555906 kubelet[2153]: I0625 18:47:40.555876 2153 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jun 25 18:47:40.556786 kubelet[2153]: I0625 18:47:40.556751 2153 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jun 25 18:47:40.562057 systemd[1]: Created slice kubepods-burstable-pod2f0d1f4559fcac733f4f77c6c4aeda66.slice - libcontainer container kubepods-burstable-pod2f0d1f4559fcac733f4f77c6c4aeda66.slice.
Jun 25 18:47:40.571831 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice - libcontainer container kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice.
Jun 25 18:47:40.586311 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice - libcontainer container kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice.
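
The repeated "Failed to ensure lease exists, will retry" records show the node-lease controller backing off exponentially: interval="200ms", then "400ms", then "800ms" above, and "1.6s" shortly after. The object being ensured is a coordination.k8s.io/v1 Lease named after the node in the kube-node-lease namespace. A hedged client-go sketch of reading it (the kubeconfig path is again hypothetical):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // hypothetical path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The object behind the failing GET in the log:
        // /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost
        lease, err := client.CoordinationV1().Leases("kube-node-lease").Get(
            context.TODO(), "localhost", metav1.GetOptions{})
        if err != nil {
            fmt.Println("get lease:", err)
            return
        }
        // HolderIdentity may be nil on a lease that has never been renewed.
        if lease.Spec.HolderIdentity != nil {
            fmt.Println("lease held by:", *lease.Spec.HolderIdentity)
        }
    }
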
Jun 25 18:47:40.642117 kubelet[2153]: I0625 18:47:40.642081 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost"
Jun 25 18:47:40.642458 kubelet[2153]: I0625 18:47:40.642124 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f0d1f4559fcac733f4f77c6c4aeda66-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f0d1f4559fcac733f4f77c6c4aeda66\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:47:40.642458 kubelet[2153]: I0625 18:47:40.642159 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:47:40.642458 kubelet[2153]: I0625 18:47:40.642179 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:47:40.642458 kubelet[2153]: I0625 18:47:40.642233 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:47:40.642458 kubelet[2153]: I0625 18:47:40.642275 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f0d1f4559fcac733f4f77c6c4aeda66-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f0d1f4559fcac733f4f77c6c4aeda66\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:47:40.642575 kubelet[2153]: I0625 18:47:40.642328 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f0d1f4559fcac733f4f77c6c4aeda66-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2f0d1f4559fcac733f4f77c6c4aeda66\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:47:40.642575 kubelet[2153]: I0625 18:47:40.642353 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:47:40.642575 kubelet[2153]: I0625 18:47:40.642398 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:47:40.802117 kubelet[2153]: W0625 18:47:40.801978 2153 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:40.802117 kubelet[2153]: E0625 18:47:40.802034 2153 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:40.827575 kubelet[2153]: W0625 18:47:40.827514 2153 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.161:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:40.827575 kubelet[2153]: E0625 18:47:40.827572 2153 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.161:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:40.871440 kubelet[2153]: E0625 18:47:40.871405 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:40.871909 containerd[1445]: time="2024-06-25T18:47:40.871867483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2f0d1f4559fcac733f4f77c6c4aeda66,Namespace:kube-system,Attempt:0,}"
Jun 25 18:47:40.884083 kubelet[2153]: E0625 18:47:40.884058 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:40.884366 containerd[1445]: time="2024-06-25T18:47:40.884338536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}"
Jun 25 18:47:40.888660 kubelet[2153]: E0625 18:47:40.888646 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:40.892144 containerd[1445]: time="2024-06-25T18:47:40.892112803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}"
Jun 25 18:47:41.211109 kubelet[2153]: W0625 18:47:41.210988 2153 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:41.211109 kubelet[2153]: E0625 18:47:41.211039 2153 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:41.239579 kubelet[2153]: E0625 18:47:41.239550 2153 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.161:6443: connect: connection refused" interval="1.6s"
Jun 25 18:47:41.242950 kubelet[2153]: W0625 18:47:41.242894 2153 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:41.242982 kubelet[2153]: E0625 18:47:41.242954 2153 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:41.344758 kubelet[2153]: I0625 18:47:41.344721 2153 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jun 25 18:47:41.345058 kubelet[2153]: E0625 18:47:41.345032 2153 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.161:6443/api/v1/nodes\": dial tcp 10.0.0.161:6443: connect: connection refused" node="localhost"
Jun 25 18:47:41.434435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1741453328.mount: Deactivated successfully.
Jun 25 18:47:41.442165 containerd[1445]: time="2024-06-25T18:47:41.442119485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:47:41.442900 containerd[1445]: time="2024-06-25T18:47:41.442845306Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 25 18:47:41.443926 containerd[1445]: time="2024-06-25T18:47:41.443888532Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:47:41.444917 containerd[1445]: time="2024-06-25T18:47:41.444882927Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:47:41.445890 containerd[1445]: time="2024-06-25T18:47:41.445852865Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:47:41.446806 containerd[1445]: time="2024-06-25T18:47:41.446747232Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 25 18:47:41.447659 containerd[1445]: time="2024-06-25T18:47:41.447626120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jun 25 18:47:41.449365 containerd[1445]: time="2024-06-25T18:47:41.449325006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:47:41.451492 containerd[1445]: time="2024-06-25T18:47:41.451455230Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 579.494132ms"
Jun 25 18:47:41.452342 containerd[1445]: time="2024-06-25T18:47:41.452318178Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 567.90421ms"
Jun 25 18:47:41.454868 containerd[1445]: time="2024-06-25T18:47:41.454832061Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 562.630742ms"
Jun 25 18:47:41.617812 containerd[1445]: time="2024-06-25T18:47:41.617160377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:47:41.617812 containerd[1445]: time="2024-06-25T18:47:41.617210791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:47:41.617812 containerd[1445]: time="2024-06-25T18:47:41.617226731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:47:41.617812 containerd[1445]: time="2024-06-25T18:47:41.617237181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:47:41.617962 containerd[1445]: time="2024-06-25T18:47:41.617480597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:47:41.617962 containerd[1445]: time="2024-06-25T18:47:41.617545148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:47:41.617962 containerd[1445]: time="2024-06-25T18:47:41.617564374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:47:41.617962 containerd[1445]: time="2024-06-25T18:47:41.617577399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:47:41.618830 containerd[1445]: time="2024-06-25T18:47:41.618760617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:47:41.619041 containerd[1445]: time="2024-06-25T18:47:41.618886023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:47:41.619714 containerd[1445]: time="2024-06-25T18:47:41.619658511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:47:41.619815 containerd[1445]: time="2024-06-25T18:47:41.619727470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:47:41.641534 systemd[1]: Started cri-containerd-ea75bca628297990955ee74c015e613feccaaa4183c58f087db25a3847f0add8.scope - libcontainer container ea75bca628297990955ee74c015e613feccaaa4183c58f087db25a3847f0add8.
Jun 25 18:47:41.646728 systemd[1]: Started cri-containerd-b28fcfa2bf81c6fabadf010decb2bc37122755f04ad4ad23f93c7e4155efa8a1.scope - libcontainer container b28fcfa2bf81c6fabadf010decb2bc37122755f04ad4ad23f93c7e4155efa8a1.
Jun 25 18:47:41.648511 systemd[1]: Started cri-containerd-e7284bf6027767ffe29a0748adbc5ff98f53adc47cad34de6d3e1ce8d9098503.scope - libcontainer container e7284bf6027767ffe29a0748adbc5ff98f53adc47cad34de6d3e1ce8d9098503.
Jun 25 18:47:41.684265 containerd[1445]: time="2024-06-25T18:47:41.684224222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea75bca628297990955ee74c015e613feccaaa4183c58f087db25a3847f0add8\""
Jun 25 18:47:41.685623 kubelet[2153]: E0625 18:47:41.685598 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:41.690181 containerd[1445]: time="2024-06-25T18:47:41.689383746Z" level=info msg="CreateContainer within sandbox \"ea75bca628297990955ee74c015e613feccaaa4183c58f087db25a3847f0add8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jun 25 18:47:41.692520 containerd[1445]: time="2024-06-25T18:47:41.692490731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7284bf6027767ffe29a0748adbc5ff98f53adc47cad34de6d3e1ce8d9098503\""
Jun 25 18:47:41.693142 kubelet[2153]: E0625 18:47:41.693105 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:41.696264 containerd[1445]: time="2024-06-25T18:47:41.696150533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2f0d1f4559fcac733f4f77c6c4aeda66,Namespace:kube-system,Attempt:0,} returns sandbox id \"b28fcfa2bf81c6fabadf010decb2bc37122755f04ad4ad23f93c7e4155efa8a1\""
Jun 25 18:47:41.697449 kubelet[2153]: E0625 18:47:41.697214 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:41.698911 containerd[1445]: time="2024-06-25T18:47:41.698883458Z" level=info msg="CreateContainer within sandbox \"e7284bf6027767ffe29a0748adbc5ff98f53adc47cad34de6d3e1ce8d9098503\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jun 25 18:47:41.701853 containerd[1445]: time="2024-06-25T18:47:41.701828119Z" level=info msg="CreateContainer within sandbox \"b28fcfa2bf81c6fabadf010decb2bc37122755f04ad4ad23f93c7e4155efa8a1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jun 25 18:47:41.712575 containerd[1445]: time="2024-06-25T18:47:41.712528753Z" level=info msg="CreateContainer within sandbox \"ea75bca628297990955ee74c015e613feccaaa4183c58f087db25a3847f0add8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ab5e6a21b288276bc1ffb2379119c6fbdfe42c398f8b75f07fa876dd7b71b809\""
Jun 25 18:47:41.713515 containerd[1445]: time="2024-06-25T18:47:41.712985439Z" level=info msg="StartContainer for \"ab5e6a21b288276bc1ffb2379119c6fbdfe42c398f8b75f07fa876dd7b71b809\""
Jun 25 18:47:41.719172 containerd[1445]: time="2024-06-25T18:47:41.719135921Z" level=info msg="CreateContainer within sandbox \"e7284bf6027767ffe29a0748adbc5ff98f53adc47cad34de6d3e1ce8d9098503\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0686de815b1f1c965993d9b988ca7e9eeeb4ea2fc9b9abd5c67b04c147c48ee5\""
Jun 25 18:47:41.719633 containerd[1445]: time="2024-06-25T18:47:41.719601815Z" level=info msg="StartContainer for \"0686de815b1f1c965993d9b988ca7e9eeeb4ea2fc9b9abd5c67b04c147c48ee5\""
Jun 25 18:47:41.740508 systemd[1]: Started cri-containerd-ab5e6a21b288276bc1ffb2379119c6fbdfe42c398f8b75f07fa876dd7b71b809.scope - libcontainer container ab5e6a21b288276bc1ffb2379119c6fbdfe42c398f8b75f07fa876dd7b71b809.
Jun 25 18:47:41.744150 systemd[1]: Started cri-containerd-0686de815b1f1c965993d9b988ca7e9eeeb4ea2fc9b9abd5c67b04c147c48ee5.scope - libcontainer container 0686de815b1f1c965993d9b988ca7e9eeeb4ea2fc9b9abd5c67b04c147c48ee5.
Jun 25 18:47:41.779445 containerd[1445]: time="2024-06-25T18:47:41.779400417Z" level=info msg="StartContainer for \"ab5e6a21b288276bc1ffb2379119c6fbdfe42c398f8b75f07fa876dd7b71b809\" returns successfully"
Jun 25 18:47:41.784905 containerd[1445]: time="2024-06-25T18:47:41.784858351Z" level=info msg="StartContainer for \"0686de815b1f1c965993d9b988ca7e9eeeb4ea2fc9b9abd5c67b04c147c48ee5\" returns successfully"
Jun 25 18:47:41.785037 containerd[1445]: time="2024-06-25T18:47:41.784875363Z" level=info msg="CreateContainer within sandbox \"b28fcfa2bf81c6fabadf010decb2bc37122755f04ad4ad23f93c7e4155efa8a1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"111e636c2495618bfdc90cf38a2ea7196d8ecb245d11f8c1dce618ec151487e4\""
Jun 25 18:47:41.785620 containerd[1445]: time="2024-06-25T18:47:41.785576146Z" level=info msg="StartContainer for \"111e636c2495618bfdc90cf38a2ea7196d8ecb245d11f8c1dce618ec151487e4\""
Jun 25 18:47:41.816527 systemd[1]: Started cri-containerd-111e636c2495618bfdc90cf38a2ea7196d8ecb245d11f8c1dce618ec151487e4.scope - libcontainer container 111e636c2495618bfdc90cf38a2ea7196d8ecb245d11f8c1dce618ec151487e4.
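
The records above walk each static pod through the CRI sequence RunPodSandbox, CreateContainer, StartContainer, and every started task shows up to systemd as a cri-containerd-<id>.scope unit. Those CRI calls map only loosely onto containerd's Go client, but a rough sketch of the same lifecycle in the k8s.io namespace looks like this (container and snapshot IDs are made up; cleanup is omitted):

    package main

    import (
        "context"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        // CRI-managed images and containers live in containerd's "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // The pinned pause image pulled above backs every pod sandbox.
        image, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        // Roughly what CreateContainer sets up: container metadata, a snapshot,
        // and an OCI runtime spec derived from the image config.
        container, err := client.NewContainer(ctx, "demo-sandbox", // made-up ID
            containerd.WithImage(image),
            containerd.WithNewSnapshot("demo-sandbox-snap", image), // made-up snapshot ID
            containerd.WithNewSpec(oci.WithImageConfig(image)),
        )
        if err != nil {
            panic(err)
        }
        // ...and StartContainer creates and starts the task, i.e. the runc shim
        // that systemd reports as a cri-containerd-<id>.scope unit.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            panic(err)
        }
        if err := task.Start(ctx); err != nil {
            panic(err)
        }
        // Cleanup (task.Kill/Delete, container.Delete) omitted in this sketch.
    }
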
Jun 25 18:47:41.834589 kubelet[2153]: E0625 18:47:41.834550 2153 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.161:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.161:6443: connect: connection refused
Jun 25 18:47:41.860987 kubelet[2153]: E0625 18:47:41.860945 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:41.864951 containerd[1445]: time="2024-06-25T18:47:41.864915848Z" level=info msg="StartContainer for \"111e636c2495618bfdc90cf38a2ea7196d8ecb245d11f8c1dce618ec151487e4\" returns successfully"
Jun 25 18:47:41.868762 kubelet[2153]: E0625 18:47:41.868694 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:42.850882 kubelet[2153]: E0625 18:47:42.850837 2153 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jun 25 18:47:42.873976 kubelet[2153]: E0625 18:47:42.873941 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:42.946420 kubelet[2153]: I0625 18:47:42.946388 2153 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jun 25 18:47:42.952298 kubelet[2153]: I0625 18:47:42.952250 2153 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Jun 25 18:47:42.957887 kubelet[2153]: E0625 18:47:42.957855 2153 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:47:42.973538 kubelet[2153]: E0625 18:47:42.973519 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:43.058752 kubelet[2153]: E0625 18:47:43.058714 2153 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:47:43.159355 kubelet[2153]: E0625 18:47:43.159234 2153 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:47:43.833951 kubelet[2153]: I0625 18:47:43.833901 2153 apiserver.go:52] "Watching apiserver"
Jun 25 18:47:43.836788 kubelet[2153]: I0625 18:47:43.836757 2153 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jun 25 18:47:43.917277 kubelet[2153]: E0625 18:47:43.915779 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:44.875539 kubelet[2153]: E0625 18:47:44.875490 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:45.091057 systemd[1]: Reloading requested from client PID 2430 ('systemctl') (unit session-9.scope)...
Jun 25 18:47:45.091074 systemd[1]: Reloading...
Jun 25 18:47:45.174418 zram_generator::config[2470]: No configuration found.
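
The certificate_manager error above is the kubelet's client-certificate bootstrap: it generates a key, wraps a PEM-encoded CSR in a certificates.k8s.io/v1 CertificateSigningRequest for the kubernetes.io/kube-apiserver-client-kubelet signer, and POSTs it to the API server, and that POST is the request being refused. A simplified sketch of the same request shape, not the kubelet's actual code, with a hypothetical bootstrap kubeconfig path:

    package main

    import (
        "context"
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"

        certv1 "k8s.io/api/certificates/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Generate a key and a PEM-encoded CSR carrying the node identity that
        // the kube-apiserver-client-kubelet signer expects.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        der, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
            Subject: pkix.Name{
                CommonName:   "system:node:localhost",
                Organization: []string{"system:nodes"},
            },
        }, key)
        if err != nil {
            panic(err)
        }
        csrPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})

        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/bootstrap-kubelet.conf") // hypothetical path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        csr := &certv1.CertificateSigningRequest{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "node-csr-"},
            Spec: certv1.CertificateSigningRequestSpec{
                Request:    csrPEM,
                SignerName: certv1.KubeAPIServerClientKubeletSignerName,
                Usages:     []certv1.KeyUsage{certv1.UsageDigitalSignature, certv1.UsageClientAuth},
            },
        }
        // This POST to /apis/certificates.k8s.io/v1/certificatesigningrequests
        // is the call being refused in the log.
        created, err := client.CertificatesV1().CertificateSigningRequests().Create(
            context.TODO(), csr, metav1.CreateOptions{})
        if err != nil {
            fmt.Println("create CSR:", err)
            return
        }
        fmt.Println("created", created.Name)
    }
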
Jun 25 18:47:45.495619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:47:45.609673 systemd[1]: Reloading finished in 518 ms.
Jun 25 18:47:45.661211 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:47:45.681185 systemd[1]: kubelet.service: Deactivated successfully.
Jun 25 18:47:45.681570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:47:45.694678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:47:45.835461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:47:45.840492 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 25 18:47:45.884337 kubelet[2512]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 25 18:47:45.884337 kubelet[2512]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jun 25 18:47:45.884337 kubelet[2512]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 25 18:47:45.884754 kubelet[2512]: I0625 18:47:45.884366 2512 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 25 18:47:45.888863 kubelet[2512]: I0625 18:47:45.888837 2512 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jun 25 18:47:45.888863 kubelet[2512]: I0625 18:47:45.888868 2512 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 25 18:47:45.889111 kubelet[2512]: I0625 18:47:45.889095 2512 server.go:895] "Client rotation is on, will bootstrap in background"
Jun 25 18:47:45.890681 kubelet[2512]: I0625 18:47:45.890655 2512 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jun 25 18:47:45.891596 kubelet[2512]: I0625 18:47:45.891564 2512 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 25 18:47:45.900678 kubelet[2512]: I0625 18:47:45.900650 2512 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 25 18:47:45.900882 kubelet[2512]: I0625 18:47:45.900856 2512 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 25 18:47:45.901038 kubelet[2512]: I0625 18:47:45.901010 2512 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jun 25 18:47:45.901149 kubelet[2512]: I0625 18:47:45.901041 2512 topology_manager.go:138] "Creating topology manager with none policy"
Jun 25 18:47:45.901149 kubelet[2512]: I0625 18:47:45.901052 2512 container_manager_linux.go:301] "Creating device plugin manager"
Jun 25 18:47:45.901149 kubelet[2512]: I0625 18:47:45.901088 2512 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 18:47:45.901236 kubelet[2512]: I0625 18:47:45.901178 2512 kubelet.go:393] "Attempting to sync node with API server"
Jun 25 18:47:45.901236 kubelet[2512]: I0625 18:47:45.901192 2512 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 25 18:47:45.901236 kubelet[2512]: I0625 18:47:45.901212 2512 kubelet.go:309] "Adding apiserver pod source"
Jun 25 18:47:45.901236 kubelet[2512]: I0625 18:47:45.901228 2512 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 25 18:47:45.902114 kubelet[2512]: I0625 18:47:45.902082 2512 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Jun 25 18:47:45.904398 kubelet[2512]: I0625 18:47:45.902592 2512 server.go:1232] "Started kubelet"
Jun 25 18:47:45.904398 kubelet[2512]: I0625 18:47:45.904201 2512 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jun 25 18:47:45.904495 kubelet[2512]: I0625 18:47:45.904471 2512 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 25 18:47:45.904539 kubelet[2512]: I0625 18:47:45.904524 2512 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jun 25 18:47:45.904583 kubelet[2512]: I0625 18:47:45.904551 2512 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 25 18:47:45.907391 kubelet[2512]: I0625 18:47:45.905280 2512 server.go:462] "Adding debug handlers to kubelet server"
Jun 25 18:47:45.908168 kubelet[2512]: I0625 18:47:45.908149 2512 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jun 25 18:47:45.908675 kubelet[2512]: E0625 18:47:45.908643 2512 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jun 25 18:47:45.908675 kubelet[2512]: E0625 18:47:45.908673 2512 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 25 18:47:45.909331 kubelet[2512]: I0625 18:47:45.909297 2512 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jun 25 18:47:45.911393 kubelet[2512]: I0625 18:47:45.909688 2512 reconciler_new.go:29] "Reconciler: start to sync state"
Jun 25 18:47:45.923497 kubelet[2512]: I0625 18:47:45.923463 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 25 18:47:45.925173 kubelet[2512]: I0625 18:47:45.925148 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 25 18:47:45.925235 kubelet[2512]: I0625 18:47:45.925179 2512 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 25 18:47:45.925235 kubelet[2512]: I0625 18:47:45.925197 2512 kubelet.go:2303] "Starting kubelet main sync loop"
Jun 25 18:47:45.925296 kubelet[2512]: E0625 18:47:45.925241 2512 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 25 18:47:45.972255 kubelet[2512]: I0625 18:47:45.972226 2512 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 25 18:47:45.972255 kubelet[2512]: I0625 18:47:45.972247 2512 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jun 25 18:47:45.972255 kubelet[2512]: I0625 18:47:45.972272 2512 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 18:47:45.972480 kubelet[2512]: I0625 18:47:45.972447 2512 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jun 25 18:47:45.972480 kubelet[2512]: I0625 18:47:45.972465 2512 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jun 25 18:47:45.972480 kubelet[2512]: I0625 18:47:45.972471 2512 policy_none.go:49] "None policy: Start"
Jun 25 18:47:45.973133 kubelet[2512]: I0625 18:47:45.973108 2512 memory_manager.go:169] "Starting memorymanager" policy="None"
Jun 25 18:47:45.973174 kubelet[2512]: I0625 18:47:45.973141 2512 state_mem.go:35] "Initializing new in-memory state store"
Jun 25 18:47:45.975668 kubelet[2512]: I0625 18:47:45.973520 2512 state_mem.go:75] "Updated machine memory state"
Jun 25 18:47:45.979746 kubelet[2512]: I0625 18:47:45.979713 2512 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 25 18:47:45.980029 kubelet[2512]: I0625 18:47:45.979989 2512 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 25 18:47:46.013299 kubelet[2512]: I0625 18:47:46.013268 2512 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jun 25 18:47:46.025605 kubelet[2512]: I0625 18:47:46.025574 2512 topology_manager.go:215] "Topology Admit Handler" podUID="2f0d1f4559fcac733f4f77c6c4aeda66" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jun 25 18:47:46.025725 kubelet[2512]: I0625 18:47:46.025674 2512 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jun 25 18:47:46.025725 kubelet[2512]: I0625 18:47:46.025709 2512 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jun 25 18:47:46.114030 kubelet[2512]: I0625 18:47:46.112930 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:47:46.114030 kubelet[2512]: I0625 18:47:46.112974 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:47:46.114030 kubelet[2512]: I0625 18:47:46.112993 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f0d1f4559fcac733f4f77c6c4aeda66-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f0d1f4559fcac733f4f77c6c4aeda66\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:47:46.114030 kubelet[2512]: I0625 18:47:46.113009 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:47:46.114030 kubelet[2512]: I0625 18:47:46.113028 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:47:46.114285 kubelet[2512]: I0625 18:47:46.113047 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:47:46.114285 kubelet[2512]: I0625 18:47:46.113100 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost"
Jun 25 18:47:46.114285 kubelet[2512]: I0625 18:47:46.113169 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f0d1f4559fcac733f4f77c6c4aeda66-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f0d1f4559fcac733f4f77c6c4aeda66\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:47:46.114285 kubelet[2512]: I0625 18:47:46.113203 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f0d1f4559fcac733f4f77c6c4aeda66-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2f0d1f4559fcac733f4f77c6c4aeda66\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:47:46.134555 kubelet[2512]: E0625 18:47:46.134524 2512 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jun 25 18:47:46.136276 kubelet[2512]: I0625 18:47:46.136238 2512 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Jun 25 18:47:46.136414 kubelet[2512]: I0625 18:47:46.136318 2512 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Jun 25 18:47:46.414172 kubelet[2512]: E0625 18:47:46.414066 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:46.415723 kubelet[2512]: E0625 18:47:46.415702 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:46.435691 kubelet[2512]: E0625 18:47:46.435653 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:46.901764 kubelet[2512]: I0625 18:47:46.901637 2512 apiserver.go:52] "Watching apiserver"
Jun 25 18:47:46.911519 kubelet[2512]: I0625 18:47:46.911471 2512 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jun 25 18:47:46.936302 kubelet[2512]: E0625 18:47:46.936262 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:46.937692 kubelet[2512]: E0625 18:47:46.937586 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:46.970411 kubelet[2512]: I0625 18:47:46.970256 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.970199833 podCreationTimestamp="2024-06-25 18:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:47:46.970141962 +0000 UTC m=+1.125380565" watchObservedRunningTime="2024-06-25 18:47:46.970199833 +0000 UTC m=+1.125438436"
Jun 25 18:47:46.991346 kubelet[2512]: E0625 18:47:46.991286 2512 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jun 25 18:47:46.992275 kubelet[2512]: E0625 18:47:46.991743 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:47.018632 kubelet[2512]: I0625 18:47:47.018583 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.018539719 podCreationTimestamp="2024-06-25 18:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:47:47.018496737 +0000 UTC m=+1.173735340" watchObservedRunningTime="2024-06-25 18:47:47.018539719 +0000 UTC m=+1.173778332"
Jun 25 18:47:47.121538 kubelet[2512]: I0625 18:47:47.121488 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.120327106 podCreationTimestamp="2024-06-25 18:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:47:47.111703567 +0000 UTC m=+1.266942170" watchObservedRunningTime="2024-06-25 18:47:47.120327106 +0000 UTC m=+1.275565709"
Jun 25 18:47:47.938894 kubelet[2512]: E0625 18:47:47.938806 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:48.939861 kubelet[2512]: E0625 18:47:48.939828 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:49.944102 kubelet[2512]: E0625 18:47:49.944076 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:51.208835 kubelet[2512]: E0625 18:47:51.208787 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:51.338779 sudo[1637]: pam_unix(sudo:session): session closed for user root
Jun 25 18:47:51.341003 sshd[1634]: pam_unix(sshd:session): session closed for user core
Jun 25 18:47:51.345935 systemd[1]: sshd@8-10.0.0.161:22-10.0.0.1:58666.service: Deactivated successfully.
Jun 25 18:47:51.348448 systemd[1]: session-9.scope: Deactivated successfully.
Jun 25 18:47:51.348742 systemd[1]: session-9.scope: Consumed 5.369s CPU time, 140.1M memory peak, 0B memory swap peak.
Jun 25 18:47:51.349198 systemd-logind[1429]: Session 9 logged out. Waiting for processes to exit.
Jun 25 18:47:51.350255 systemd-logind[1429]: Removed session 9.
Jun 25 18:47:51.943294 kubelet[2512]: E0625 18:47:51.943267 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:54.305970 kubelet[2512]: E0625 18:47:54.305932 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:54.948315 kubelet[2512]: E0625 18:47:54.948288 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:55.370163 update_engine[1431]: I0625 18:47:55.370110 1431 update_attempter.cc:509] Updating boot flags...
Jun 25 18:47:55.409405 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2621)
Jun 25 18:47:55.448156 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2626)
Jun 25 18:47:55.477409 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2626)
Jun 25 18:47:55.949730 kubelet[2512]: E0625 18:47:55.949698 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:47:59.944000 kubelet[2512]: I0625 18:47:59.943966 2512 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 25 18:47:59.944413 containerd[1445]: time="2024-06-25T18:47:59.944286104Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 25 18:47:59.944644 kubelet[2512]: I0625 18:47:59.944586 2512 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 25 18:47:59.947181 kubelet[2512]: E0625 18:47:59.947156 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:48:00.526982 kubelet[2512]: I0625 18:48:00.526917 2512 topology_manager.go:215] "Topology Admit Handler" podUID="e77ee541-5e2c-4089-8c39-6c2cd022bb86" podNamespace="kube-system" podName="kube-proxy-b2v9c"
Jun 25 18:48:00.534705 systemd[1]: Created slice kubepods-besteffort-pode77ee541_5e2c_4089_8c39_6c2cd022bb86.slice - libcontainer container kubepods-besteffort-pode77ee541_5e2c_4089_8c39_6c2cd022bb86.slice.
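
With the node registered, kube-controller-manager's node IPAM assigns it 192.168.0.0/24, and the "Updating runtime config through cri with podcidr" record is the kubelet relaying that value to containerd over CRI (the UpdateRuntimeConfig call). The allocation itself is stored on the Node object and can be read back with client-go; a small sketch, kubeconfig path again hypothetical:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // hypothetical path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := client.CoreV1().Nodes().Get(context.TODO(), "localhost", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Populated by kube-controller-manager's node IPAM and relayed by the
        // kubelet to the container runtime, as logged above.
        fmt.Println("PodCIDR:", node.Spec.PodCIDR) // e.g. 192.168.0.0/24
    }
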
Jun 25 18:48:00.613877 kubelet[2512]: I0625 18:48:00.613834 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e77ee541-5e2c-4089-8c39-6c2cd022bb86-kube-proxy\") pod \"kube-proxy-b2v9c\" (UID: \"e77ee541-5e2c-4089-8c39-6c2cd022bb86\") " pod="kube-system/kube-proxy-b2v9c"
Jun 25 18:48:00.613877 kubelet[2512]: I0625 18:48:00.613870 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e77ee541-5e2c-4089-8c39-6c2cd022bb86-xtables-lock\") pod \"kube-proxy-b2v9c\" (UID: \"e77ee541-5e2c-4089-8c39-6c2cd022bb86\") " pod="kube-system/kube-proxy-b2v9c"
Jun 25 18:48:00.614080 kubelet[2512]: I0625 18:48:00.613909 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e77ee541-5e2c-4089-8c39-6c2cd022bb86-lib-modules\") pod \"kube-proxy-b2v9c\" (UID: \"e77ee541-5e2c-4089-8c39-6c2cd022bb86\") " pod="kube-system/kube-proxy-b2v9c"
Jun 25 18:48:00.614080 kubelet[2512]: I0625 18:48:00.613929 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvsd8\" (UniqueName: \"kubernetes.io/projected/e77ee541-5e2c-4089-8c39-6c2cd022bb86-kube-api-access-kvsd8\") pod \"kube-proxy-b2v9c\" (UID: \"e77ee541-5e2c-4089-8c39-6c2cd022bb86\") " pod="kube-system/kube-proxy-b2v9c"
Jun 25 18:48:00.851270 kubelet[2512]: E0625 18:48:00.851229 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:48:00.851917 containerd[1445]: time="2024-06-25T18:48:00.851844089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b2v9c,Uid:e77ee541-5e2c-4089-8c39-6c2cd022bb86,Namespace:kube-system,Attempt:0,}"
Jun 25 18:48:00.884499 containerd[1445]: time="2024-06-25T18:48:00.884315002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:48:00.884499 containerd[1445]: time="2024-06-25T18:48:00.884422926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:48:00.884499 containerd[1445]: time="2024-06-25T18:48:00.884446190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:48:00.884499 containerd[1445]: time="2024-06-25T18:48:00.884463153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:48:00.901411 kubelet[2512]: I0625 18:48:00.901359 2512 topology_manager.go:215] "Topology Admit Handler" podUID="e928e107-fdd8-480e-b8dc-323a5f9e9ee6" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-8mdfd"
Jun 25 18:48:00.927656 systemd[1]: Started cri-containerd-e05f732a2b2b76d0ec10443da451ddf04905098e5664fdd4436342f98c8b7a41.scope - libcontainer container e05f732a2b2b76d0ec10443da451ddf04905098e5664fdd4436342f98c8b7a41.
Jun 25 18:48:00.928965 systemd[1]: Created slice kubepods-besteffort-pode928e107_fdd8_480e_b8dc_323a5f9e9ee6.slice - libcontainer container kubepods-besteffort-pode928e107_fdd8_480e_b8dc_323a5f9e9ee6.slice.
Jun 25 18:48:00.968426 containerd[1445]: time="2024-06-25T18:48:00.968080136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b2v9c,Uid:e77ee541-5e2c-4089-8c39-6c2cd022bb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"e05f732a2b2b76d0ec10443da451ddf04905098e5664fdd4436342f98c8b7a41\""
Jun 25 18:48:00.969443 kubelet[2512]: E0625 18:48:00.969366 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:48:00.971544 containerd[1445]: time="2024-06-25T18:48:00.971413305Z" level=info msg="CreateContainer within sandbox \"e05f732a2b2b76d0ec10443da451ddf04905098e5664fdd4436342f98c8b7a41\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 25 18:48:00.990794 containerd[1445]: time="2024-06-25T18:48:00.990753929Z" level=info msg="CreateContainer within sandbox \"e05f732a2b2b76d0ec10443da451ddf04905098e5664fdd4436342f98c8b7a41\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d023a2892a4ed7f02a4de25095805feb36549c11b92bbbf97ba4c1ba7cc379c5\""
Jun 25 18:48:00.991132 containerd[1445]: time="2024-06-25T18:48:00.991114831Z" level=info msg="StartContainer for \"d023a2892a4ed7f02a4de25095805feb36549c11b92bbbf97ba4c1ba7cc379c5\""
Jun 25 18:48:01.017207 kubelet[2512]: I0625 18:48:01.017178 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e928e107-fdd8-480e-b8dc-323a5f9e9ee6-var-lib-calico\") pod \"tigera-operator-76c4974c85-8mdfd\" (UID: \"e928e107-fdd8-480e-b8dc-323a5f9e9ee6\") " pod="tigera-operator/tigera-operator-76c4974c85-8mdfd"
Jun 25 18:48:01.017207 kubelet[2512]: I0625 18:48:01.017213 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7m8h\" (UniqueName: \"kubernetes.io/projected/e928e107-fdd8-480e-b8dc-323a5f9e9ee6-kube-api-access-t7m8h\") pod \"tigera-operator-76c4974c85-8mdfd\" (UID: \"e928e107-fdd8-480e-b8dc-323a5f9e9ee6\") " pod="tigera-operator/tigera-operator-76c4974c85-8mdfd"
Jun 25 18:48:01.020516 systemd[1]: Started cri-containerd-d023a2892a4ed7f02a4de25095805feb36549c11b92bbbf97ba4c1ba7cc379c5.scope - libcontainer container d023a2892a4ed7f02a4de25095805feb36549c11b92bbbf97ba4c1ba7cc379c5.
Jun 25 18:48:01.089879 containerd[1445]: time="2024-06-25T18:48:01.089833981Z" level=info msg="StartContainer for \"d023a2892a4ed7f02a4de25095805feb36549c11b92bbbf97ba4c1ba7cc379c5\" returns successfully"
Jun 25 18:48:01.238610 containerd[1445]: time="2024-06-25T18:48:01.238503516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-8mdfd,Uid:e928e107-fdd8-480e-b8dc-323a5f9e9ee6,Namespace:tigera-operator,Attempt:0,}"
Jun 25 18:48:01.261829 containerd[1445]: time="2024-06-25T18:48:01.261714142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:48:01.261829 containerd[1445]: time="2024-06-25T18:48:01.261792269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:48:01.261829 containerd[1445]: time="2024-06-25T18:48:01.261816475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:48:01.262049 containerd[1445]: time="2024-06-25T18:48:01.261832596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:48:01.287991 systemd[1]: Started cri-containerd-27f97ba0e6e674f8b1fa8ac7c9b87218021a122d47eb8cb59612275b7972e186.scope - libcontainer container 27f97ba0e6e674f8b1fa8ac7c9b87218021a122d47eb8cb59612275b7972e186.
Jun 25 18:48:01.322337 containerd[1445]: time="2024-06-25T18:48:01.322277802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-8mdfd,Uid:e928e107-fdd8-480e-b8dc-323a5f9e9ee6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"27f97ba0e6e674f8b1fa8ac7c9b87218021a122d47eb8cb59612275b7972e186\""
Jun 25 18:48:01.326192 containerd[1445]: time="2024-06-25T18:48:01.326161887Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jun 25 18:48:01.729023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124596496.mount: Deactivated successfully.
Jun 25 18:48:01.959771 kubelet[2512]: E0625 18:48:01.959743 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:48:01.966215 kubelet[2512]: I0625 18:48:01.966177 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-b2v9c" podStartSLOduration=1.966140416 podCreationTimestamp="2024-06-25 18:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:48:01.96608433 +0000 UTC m=+16.121322954" watchObservedRunningTime="2024-06-25 18:48:01.966140416 +0000 UTC m=+16.121379019"
Jun 25 18:48:02.958951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198158136.mount: Deactivated successfully.
Jun 25 18:48:03.299936 containerd[1445]: time="2024-06-25T18:48:03.299884745Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:48:03.300738 containerd[1445]: time="2024-06-25T18:48:03.300660068Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076096"
Jun 25 18:48:03.301885 containerd[1445]: time="2024-06-25T18:48:03.301857147Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:48:03.304509 containerd[1445]: time="2024-06-25T18:48:03.304474967Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:48:03.305134 containerd[1445]: time="2024-06-25T18:48:03.305105135Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 1.97891232s"
Jun 25 18:48:03.305180 containerd[1445]: time="2024-06-25T18:48:03.305133028Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Jun 25 18:48:03.306777 containerd[1445]: time="2024-06-25T18:48:03.306743127Z" level=info msg="CreateContainer within sandbox \"27f97ba0e6e674f8b1fa8ac7c9b87218021a122d47eb8cb59612275b7972e186\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jun 25 18:48:03.317590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77687950.mount: Deactivated successfully.
Jun 25 18:48:03.319524 containerd[1445]: time="2024-06-25T18:48:03.319482074Z" level=info msg="CreateContainer within sandbox \"27f97ba0e6e674f8b1fa8ac7c9b87218021a122d47eb8cb59612275b7972e186\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"36e357d7b229a19e9235cc02b99278236dc803d3c50c976b98ed4d62d1a93720\""
Jun 25 18:48:03.319895 containerd[1445]: time="2024-06-25T18:48:03.319855850Z" level=info msg="StartContainer for \"36e357d7b229a19e9235cc02b99278236dc803d3c50c976b98ed4d62d1a93720\""
Jun 25 18:48:03.346534 systemd[1]: Started cri-containerd-36e357d7b229a19e9235cc02b99278236dc803d3c50c976b98ed4d62d1a93720.scope - libcontainer container 36e357d7b229a19e9235cc02b99278236dc803d3c50c976b98ed4d62d1a93720.
Jun 25 18:48:03.373743 containerd[1445]: time="2024-06-25T18:48:03.373680434Z" level=info msg="StartContainer for \"36e357d7b229a19e9235cc02b99278236dc803d3c50c976b98ed4d62d1a93720\" returns successfully" Jun 25 18:48:03.973580 kubelet[2512]: I0625 18:48:03.973549 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-8mdfd" podStartSLOduration=1.992423654 podCreationTimestamp="2024-06-25 18:48:00 +0000 UTC" firstStartedPulling="2024-06-25 18:48:01.324434264 +0000 UTC m=+15.479672867" lastFinishedPulling="2024-06-25 18:48:03.305522693 +0000 UTC m=+17.460761296" observedRunningTime="2024-06-25 18:48:03.973460707 +0000 UTC m=+18.128699310" watchObservedRunningTime="2024-06-25 18:48:03.973512083 +0000 UTC m=+18.128750686" Jun 25 18:48:06.171301 kubelet[2512]: I0625 18:48:06.170333 2512 topology_manager.go:215] "Topology Admit Handler" podUID="9f4e7712-652c-4f63-9d13-5484d4e0e14d" podNamespace="calico-system" podName="calico-typha-558667c89b-swclv" Jun 25 18:48:06.179151 systemd[1]: Created slice kubepods-besteffort-pod9f4e7712_652c_4f63_9d13_5484d4e0e14d.slice - libcontainer container kubepods-besteffort-pod9f4e7712_652c_4f63_9d13_5484d4e0e14d.slice. Jun 25 18:48:06.216452 kubelet[2512]: I0625 18:48:06.216394 2512 topology_manager.go:215] "Topology Admit Handler" podUID="bac6a162-6e45-46a9-8a6e-d2c50044ec35" podNamespace="calico-system" podName="calico-node-gw9np" Jun 25 18:48:06.225788 systemd[1]: Created slice kubepods-besteffort-podbac6a162_6e45_46a9_8a6e_d2c50044ec35.slice - libcontainer container kubepods-besteffort-podbac6a162_6e45_46a9_8a6e_d2c50044ec35.slice. Jun 25 18:48:06.259126 kubelet[2512]: I0625 18:48:06.259077 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bac6a162-6e45-46a9-8a6e-d2c50044ec35-cni-log-dir\") pod \"calico-node-gw9np\" (UID: \"bac6a162-6e45-46a9-8a6e-d2c50044ec35\") " pod="calico-system/calico-node-gw9np" Jun 25 18:48:06.259126 kubelet[2512]: I0625 18:48:06.259132 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bac6a162-6e45-46a9-8a6e-d2c50044ec35-tigera-ca-bundle\") pod \"calico-node-gw9np\" (UID: \"bac6a162-6e45-46a9-8a6e-d2c50044ec35\") " pod="calico-system/calico-node-gw9np" Jun 25 18:48:06.259304 kubelet[2512]: I0625 18:48:06.259161 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bac6a162-6e45-46a9-8a6e-d2c50044ec35-var-lib-calico\") pod \"calico-node-gw9np\" (UID: \"bac6a162-6e45-46a9-8a6e-d2c50044ec35\") " pod="calico-system/calico-node-gw9np" Jun 25 18:48:06.259331 kubelet[2512]: I0625 18:48:06.259312 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bac6a162-6e45-46a9-8a6e-d2c50044ec35-policysync\") pod \"calico-node-gw9np\" (UID: \"bac6a162-6e45-46a9-8a6e-d2c50044ec35\") " pod="calico-system/calico-node-gw9np" Jun 25 18:48:06.259386 kubelet[2512]: I0625 18:48:06.259353 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bac6a162-6e45-46a9-8a6e-d2c50044ec35-flexvol-driver-host\") pod \"calico-node-gw9np\" (UID: \"bac6a162-6e45-46a9-8a6e-d2c50044ec35\") " 
pod="calico-system/calico-node-gw9np" Jun 25 18:48:06.259418 kubelet[2512]: I0625 18:48:06.259396 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bac6a162-6e45-46a9-8a6e-d2c50044ec35-lib-modules\") pod \"calico-node-gw9np\" (UID: \"bac6a162-6e45-46a9-8a6e-d2c50044ec35\") " pod="calico-system/calico-node-gw9np" Jun 25 18:48:06.259524 kubelet[2512]: I0625 18:48:06.259463 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9f4e7712-652c-4f63-9d13-5484d4e0e14d-typha-certs\") pod \"calico-typha-558667c89b-swclv\" (UID: \"9f4e7712-652c-4f63-9d13-5484d4e0e14d\") " pod="calico-system/calico-typha-558667c89b-swclv" Jun 25 18:48:06.259524 kubelet[2512]: I0625 18:48:06.259497 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bac6a162-6e45-46a9-8a6e-d2c50044ec35-xtables-lock\") pod \"calico-node-gw9np\" (UID: \"bac6a162-6e45-46a9-8a6e-d2c50044ec35\") " pod="calico-system/calico-node-gw9np" Jun 25 18:48:06.259524 kubelet[2512]: I0625 18:48:06.259525 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bac6a162-6e45-46a9-8a6e-d2c50044ec35-cni-net-dir\") pod \"calico-node-gw9np\" (UID: \"bac6a162-6e45-46a9-8a6e-d2c50044ec35\") " pod="calico-system/calico-node-gw9np" Jun 25 18:48:06.259635 kubelet[2512]: I0625 18:48:06.259551 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bac6a162-6e45-46a9-8a6e-d2c50044ec35-cni-bin-dir\") pod \"calico-node-gw9np\" (UID: \"bac6a162-6e45-46a9-8a6e-d2c50044ec35\") " pod="calico-system/calico-node-gw9np" Jun 25 18:48:06.259685 kubelet[2512]: I0625 18:48:06.259640 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f4e7712-652c-4f63-9d13-5484d4e0e14d-tigera-ca-bundle\") pod \"calico-typha-558667c89b-swclv\" (UID: \"9f4e7712-652c-4f63-9d13-5484d4e0e14d\") " pod="calico-system/calico-typha-558667c89b-swclv" Jun 25 18:48:06.259722 kubelet[2512]: I0625 18:48:06.259689 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bac6a162-6e45-46a9-8a6e-d2c50044ec35-var-run-calico\") pod \"calico-node-gw9np\" (UID: \"bac6a162-6e45-46a9-8a6e-d2c50044ec35\") " pod="calico-system/calico-node-gw9np" Jun 25 18:48:06.259757 kubelet[2512]: I0625 18:48:06.259729 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ncc4\" (UniqueName: \"kubernetes.io/projected/9f4e7712-652c-4f63-9d13-5484d4e0e14d-kube-api-access-2ncc4\") pod \"calico-typha-558667c89b-swclv\" (UID: \"9f4e7712-652c-4f63-9d13-5484d4e0e14d\") " pod="calico-system/calico-typha-558667c89b-swclv" Jun 25 18:48:06.259822 kubelet[2512]: I0625 18:48:06.259788 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bac6a162-6e45-46a9-8a6e-d2c50044ec35-node-certs\") pod \"calico-node-gw9np\" (UID: \"bac6a162-6e45-46a9-8a6e-d2c50044ec35\") " 
pod="calico-system/calico-node-gw9np" Jun 25 18:48:06.259866 kubelet[2512]: I0625 18:48:06.259830 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhtq5\" (UniqueName: \"kubernetes.io/projected/bac6a162-6e45-46a9-8a6e-d2c50044ec35-kube-api-access-mhtq5\") pod \"calico-node-gw9np\" (UID: \"bac6a162-6e45-46a9-8a6e-d2c50044ec35\") " pod="calico-system/calico-node-gw9np" Jun 25 18:48:06.323813 kubelet[2512]: I0625 18:48:06.323652 2512 topology_manager.go:215] "Topology Admit Handler" podUID="95ae610b-871e-4cb8-8b01-77db3f937baa" podNamespace="calico-system" podName="csi-node-driver-44zgj" Jun 25 18:48:06.323951 kubelet[2512]: E0625 18:48:06.323936 2512 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44zgj" podUID="95ae610b-871e-4cb8-8b01-77db3f937baa" Jun 25 18:48:06.360816 kubelet[2512]: I0625 18:48:06.360763 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j76cg\" (UniqueName: \"kubernetes.io/projected/95ae610b-871e-4cb8-8b01-77db3f937baa-kube-api-access-j76cg\") pod \"csi-node-driver-44zgj\" (UID: \"95ae610b-871e-4cb8-8b01-77db3f937baa\") " pod="calico-system/csi-node-driver-44zgj" Jun 25 18:48:06.360970 kubelet[2512]: I0625 18:48:06.360873 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95ae610b-871e-4cb8-8b01-77db3f937baa-kubelet-dir\") pod \"csi-node-driver-44zgj\" (UID: \"95ae610b-871e-4cb8-8b01-77db3f937baa\") " pod="calico-system/csi-node-driver-44zgj" Jun 25 18:48:06.361025 kubelet[2512]: I0625 18:48:06.361001 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/95ae610b-871e-4cb8-8b01-77db3f937baa-varrun\") pod \"csi-node-driver-44zgj\" (UID: \"95ae610b-871e-4cb8-8b01-77db3f937baa\") " pod="calico-system/csi-node-driver-44zgj" Jun 25 18:48:06.361128 kubelet[2512]: I0625 18:48:06.361103 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/95ae610b-871e-4cb8-8b01-77db3f937baa-socket-dir\") pod \"csi-node-driver-44zgj\" (UID: \"95ae610b-871e-4cb8-8b01-77db3f937baa\") " pod="calico-system/csi-node-driver-44zgj" Jun 25 18:48:06.361162 kubelet[2512]: I0625 18:48:06.361142 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/95ae610b-871e-4cb8-8b01-77db3f937baa-registration-dir\") pod \"csi-node-driver-44zgj\" (UID: \"95ae610b-871e-4cb8-8b01-77db3f937baa\") " pod="calico-system/csi-node-driver-44zgj" Jun 25 18:48:06.366125 kubelet[2512]: E0625 18:48:06.366077 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.366125 kubelet[2512]: W0625 18:48:06.366098 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.366319 kubelet[2512]: E0625 18:48:06.366262 2512 plugins.go:723] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.369667 kubelet[2512]: E0625 18:48:06.369631 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.369667 kubelet[2512]: W0625 18:48:06.369662 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.369752 kubelet[2512]: E0625 18:48:06.369692 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.381604 kubelet[2512]: E0625 18:48:06.381560 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.381604 kubelet[2512]: W0625 18:48:06.381591 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.381765 kubelet[2512]: E0625 18:48:06.381621 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.382151 kubelet[2512]: E0625 18:48:06.382125 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.382151 kubelet[2512]: W0625 18:48:06.382145 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.382218 kubelet[2512]: E0625 18:48:06.382161 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.462323 kubelet[2512]: E0625 18:48:06.462196 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.462323 kubelet[2512]: W0625 18:48:06.462223 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.462323 kubelet[2512]: E0625 18:48:06.462260 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.462674 kubelet[2512]: E0625 18:48:06.462651 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.462971 kubelet[2512]: W0625 18:48:06.462729 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.462971 kubelet[2512]: E0625 18:48:06.462772 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:48:06.463357 kubelet[2512]: E0625 18:48:06.463338 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.463466 kubelet[2512]: W0625 18:48:06.463438 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.463466 kubelet[2512]: E0625 18:48:06.463463 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.463837 kubelet[2512]: E0625 18:48:06.463802 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.463837 kubelet[2512]: W0625 18:48:06.463821 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.464207 kubelet[2512]: E0625 18:48:06.464041 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.464207 kubelet[2512]: E0625 18:48:06.464090 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.464207 kubelet[2512]: W0625 18:48:06.464102 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.464397 kubelet[2512]: E0625 18:48:06.464319 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.464397 kubelet[2512]: E0625 18:48:06.464367 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.464451 kubelet[2512]: W0625 18:48:06.464410 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.464451 kubelet[2512]: E0625 18:48:06.464444 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.464800 kubelet[2512]: E0625 18:48:06.464758 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.464800 kubelet[2512]: W0625 18:48:06.464774 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.464800 kubelet[2512]: E0625 18:48:06.464797 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:48:06.465120 kubelet[2512]: E0625 18:48:06.465108 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.465211 kubelet[2512]: W0625 18:48:06.465179 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.465428 kubelet[2512]: E0625 18:48:06.465351 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.465855 kubelet[2512]: E0625 18:48:06.465777 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.465855 kubelet[2512]: W0625 18:48:06.465786 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.466054 kubelet[2512]: E0625 18:48:06.465957 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.466149 kubelet[2512]: E0625 18:48:06.466138 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.466239 kubelet[2512]: W0625 18:48:06.466186 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.466400 kubelet[2512]: E0625 18:48:06.466319 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.466624 kubelet[2512]: E0625 18:48:06.466598 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.466624 kubelet[2512]: W0625 18:48:06.466609 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.466876 kubelet[2512]: E0625 18:48:06.466740 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.466958 kubelet[2512]: E0625 18:48:06.466949 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.467090 kubelet[2512]: W0625 18:48:06.467012 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.467210 kubelet[2512]: E0625 18:48:06.467154 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:48:06.467394 kubelet[2512]: E0625 18:48:06.467366 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.467521 kubelet[2512]: W0625 18:48:06.467448 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.467615 kubelet[2512]: E0625 18:48:06.467577 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.467853 kubelet[2512]: E0625 18:48:06.467817 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.467853 kubelet[2512]: W0625 18:48:06.467842 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.468007 kubelet[2512]: E0625 18:48:06.467986 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.468211 kubelet[2512]: E0625 18:48:06.468188 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.468211 kubelet[2512]: W0625 18:48:06.468200 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.468598 kubelet[2512]: E0625 18:48:06.468298 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.468598 kubelet[2512]: E0625 18:48:06.468484 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.468598 kubelet[2512]: W0625 18:48:06.468516 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.468598 kubelet[2512]: E0625 18:48:06.468600 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.468872 kubelet[2512]: E0625 18:48:06.468810 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.468872 kubelet[2512]: W0625 18:48:06.468834 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.469065 kubelet[2512]: E0625 18:48:06.468917 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:48:06.469065 kubelet[2512]: E0625 18:48:06.469000 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.469065 kubelet[2512]: W0625 18:48:06.469007 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.469065 kubelet[2512]: E0625 18:48:06.469024 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.469513 kubelet[2512]: E0625 18:48:06.469239 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.469513 kubelet[2512]: W0625 18:48:06.469257 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.469513 kubelet[2512]: E0625 18:48:06.469271 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.469513 kubelet[2512]: E0625 18:48:06.469483 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.469513 kubelet[2512]: W0625 18:48:06.469490 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.469513 kubelet[2512]: E0625 18:48:06.469501 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.470158 kubelet[2512]: E0625 18:48:06.469696 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.470158 kubelet[2512]: W0625 18:48:06.469705 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.470158 kubelet[2512]: E0625 18:48:06.469726 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.470685 kubelet[2512]: E0625 18:48:06.470514 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.470685 kubelet[2512]: W0625 18:48:06.470531 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.470685 kubelet[2512]: E0625 18:48:06.470558 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:48:06.470920 kubelet[2512]: E0625 18:48:06.470897 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.470920 kubelet[2512]: W0625 18:48:06.470917 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.471034 kubelet[2512]: E0625 18:48:06.470941 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.471210 kubelet[2512]: E0625 18:48:06.471191 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.471286 kubelet[2512]: W0625 18:48:06.471228 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.471286 kubelet[2512]: E0625 18:48:06.471256 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.471611 kubelet[2512]: E0625 18:48:06.471591 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.471611 kubelet[2512]: W0625 18:48:06.471603 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.471611 kubelet[2512]: E0625 18:48:06.471617 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:48:06.482408 kubelet[2512]: E0625 18:48:06.482321 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:48:06.482408 kubelet[2512]: W0625 18:48:06.482337 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:48:06.482408 kubelet[2512]: E0625 18:48:06.482353 2512 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:48:06.484661 kubelet[2512]: E0625 18:48:06.484612 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:06.485534 containerd[1445]: time="2024-06-25T18:48:06.485107849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-558667c89b-swclv,Uid:9f4e7712-652c-4f63-9d13-5484d4e0e14d,Namespace:calico-system,Attempt:0,}" Jun 25 18:48:06.532403 kubelet[2512]: E0625 18:48:06.529860 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:06.532549 containerd[1445]: time="2024-06-25T18:48:06.531593467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gw9np,Uid:bac6a162-6e45-46a9-8a6e-d2c50044ec35,Namespace:calico-system,Attempt:0,}" Jun 25 18:48:06.883458 containerd[1445]: time="2024-06-25T18:48:06.883022957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:48:06.883458 containerd[1445]: time="2024-06-25T18:48:06.883108087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:06.884748 containerd[1445]: time="2024-06-25T18:48:06.884493118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:48:06.884748 containerd[1445]: time="2024-06-25T18:48:06.884546408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:06.884748 containerd[1445]: time="2024-06-25T18:48:06.884565985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:48:06.884748 containerd[1445]: time="2024-06-25T18:48:06.884585131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:06.884748 containerd[1445]: time="2024-06-25T18:48:06.883870394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:48:06.885462 containerd[1445]: time="2024-06-25T18:48:06.885030010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:06.918577 systemd[1]: Started cri-containerd-0c15d0c08122ce9bb5f78be8783212c1a9c95a275b3f16240f9cb40f4b09848a.scope - libcontainer container 0c15d0c08122ce9bb5f78be8783212c1a9c95a275b3f16240f9cb40f4b09848a. Jun 25 18:48:06.921828 systemd[1]: Started cri-containerd-2feb7a0d6a4187c79c5e1d699578864a9a6c0198d9ad11ebe065d34e3f71f9e1.scope - libcontainer container 2feb7a0d6a4187c79c5e1d699578864a9a6c0198d9ad11ebe065d34e3f71f9e1. 
Jun 25 18:48:06.954317 containerd[1445]: time="2024-06-25T18:48:06.954210053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gw9np,Uid:bac6a162-6e45-46a9-8a6e-d2c50044ec35,Namespace:calico-system,Attempt:0,} returns sandbox id \"2feb7a0d6a4187c79c5e1d699578864a9a6c0198d9ad11ebe065d34e3f71f9e1\"" Jun 25 18:48:06.955071 kubelet[2512]: E0625 18:48:06.955039 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:06.956735 containerd[1445]: time="2024-06-25T18:48:06.956602283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 18:48:06.977655 containerd[1445]: time="2024-06-25T18:48:06.977526862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-558667c89b-swclv,Uid:9f4e7712-652c-4f63-9d13-5484d4e0e14d,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c15d0c08122ce9bb5f78be8783212c1a9c95a275b3f16240f9cb40f4b09848a\"" Jun 25 18:48:06.978806 kubelet[2512]: E0625 18:48:06.978783 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:07.926117 kubelet[2512]: E0625 18:48:07.926031 2512 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44zgj" podUID="95ae610b-871e-4cb8-8b01-77db3f937baa" Jun 25 18:48:08.221593 containerd[1445]: time="2024-06-25T18:48:08.221455095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:08.222262 containerd[1445]: time="2024-06-25T18:48:08.222206790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 18:48:08.223940 containerd[1445]: time="2024-06-25T18:48:08.223910630Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:08.228358 containerd[1445]: time="2024-06-25T18:48:08.228316108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:08.229528 containerd[1445]: time="2024-06-25T18:48:08.229499498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.272867047s" Jun 25 18:48:08.229584 containerd[1445]: time="2024-06-25T18:48:08.229531878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 18:48:08.230130 containerd[1445]: time="2024-06-25T18:48:08.230107062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 
18:48:08.231917 containerd[1445]: time="2024-06-25T18:48:08.231851257Z" level=info msg="CreateContainer within sandbox \"2feb7a0d6a4187c79c5e1d699578864a9a6c0198d9ad11ebe065d34e3f71f9e1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:48:08.249285 containerd[1445]: time="2024-06-25T18:48:08.249252125Z" level=info msg="CreateContainer within sandbox \"2feb7a0d6a4187c79c5e1d699578864a9a6c0198d9ad11ebe065d34e3f71f9e1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"355f23f4eb9c3b557bc22c9454835e0e27ac1d95cf85be127703343ce223497a\"" Jun 25 18:48:08.249766 containerd[1445]: time="2024-06-25T18:48:08.249732179Z" level=info msg="StartContainer for \"355f23f4eb9c3b557bc22c9454835e0e27ac1d95cf85be127703343ce223497a\"" Jun 25 18:48:08.279702 systemd[1]: Started cri-containerd-355f23f4eb9c3b557bc22c9454835e0e27ac1d95cf85be127703343ce223497a.scope - libcontainer container 355f23f4eb9c3b557bc22c9454835e0e27ac1d95cf85be127703343ce223497a. Jun 25 18:48:08.313199 containerd[1445]: time="2024-06-25T18:48:08.313155657Z" level=info msg="StartContainer for \"355f23f4eb9c3b557bc22c9454835e0e27ac1d95cf85be127703343ce223497a\" returns successfully" Jun 25 18:48:08.326675 systemd[1]: cri-containerd-355f23f4eb9c3b557bc22c9454835e0e27ac1d95cf85be127703343ce223497a.scope: Deactivated successfully. Jun 25 18:48:08.368632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-355f23f4eb9c3b557bc22c9454835e0e27ac1d95cf85be127703343ce223497a-rootfs.mount: Deactivated successfully. Jun 25 18:48:08.401576 containerd[1445]: time="2024-06-25T18:48:08.401516007Z" level=info msg="shim disconnected" id=355f23f4eb9c3b557bc22c9454835e0e27ac1d95cf85be127703343ce223497a namespace=k8s.io Jun 25 18:48:08.401576 containerd[1445]: time="2024-06-25T18:48:08.401571641Z" level=warning msg="cleaning up after shim disconnected" id=355f23f4eb9c3b557bc22c9454835e0e27ac1d95cf85be127703343ce223497a namespace=k8s.io Jun 25 18:48:08.401576 containerd[1445]: time="2024-06-25T18:48:08.401580037Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:48:09.000098 kubelet[2512]: E0625 18:48:09.000062 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:09.926815 kubelet[2512]: E0625 18:48:09.926749 2512 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44zgj" podUID="95ae610b-871e-4cb8-8b01-77db3f937baa" Jun 25 18:48:10.517181 containerd[1445]: time="2024-06-25T18:48:10.517128873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:10.518777 containerd[1445]: time="2024-06-25T18:48:10.518734095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 18:48:10.520087 containerd[1445]: time="2024-06-25T18:48:10.520058839Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:10.522379 containerd[1445]: time="2024-06-25T18:48:10.522326497Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:10.523028 containerd[1445]: time="2024-06-25T18:48:10.522982963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.292844963s" Jun 25 18:48:10.523028 containerd[1445]: time="2024-06-25T18:48:10.523022258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 18:48:10.524318 containerd[1445]: time="2024-06-25T18:48:10.524280446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 18:48:10.539869 containerd[1445]: time="2024-06-25T18:48:10.539729240Z" level=info msg="CreateContainer within sandbox \"0c15d0c08122ce9bb5f78be8783212c1a9c95a275b3f16240f9cb40f4b09848a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:48:10.557605 containerd[1445]: time="2024-06-25T18:48:10.557560590Z" level=info msg="CreateContainer within sandbox \"0c15d0c08122ce9bb5f78be8783212c1a9c95a275b3f16240f9cb40f4b09848a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ccee3818b1f2eefbf4f77998e5efa12129d9d1f97a34cf90758a4e0573e64d48\"" Jun 25 18:48:10.558042 containerd[1445]: time="2024-06-25T18:48:10.558007641Z" level=info msg="StartContainer for \"ccee3818b1f2eefbf4f77998e5efa12129d9d1f97a34cf90758a4e0573e64d48\"" Jun 25 18:48:10.588557 systemd[1]: Started cri-containerd-ccee3818b1f2eefbf4f77998e5efa12129d9d1f97a34cf90758a4e0573e64d48.scope - libcontainer container ccee3818b1f2eefbf4f77998e5efa12129d9d1f97a34cf90758a4e0573e64d48. 
Jun 25 18:48:10.632897 containerd[1445]: time="2024-06-25T18:48:10.632759299Z" level=info msg="StartContainer for \"ccee3818b1f2eefbf4f77998e5efa12129d9d1f97a34cf90758a4e0573e64d48\" returns successfully" Jun 25 18:48:11.005247 kubelet[2512]: E0625 18:48:11.005197 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:11.925632 kubelet[2512]: E0625 18:48:11.925553 2512 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44zgj" podUID="95ae610b-871e-4cb8-8b01-77db3f937baa" Jun 25 18:48:12.013239 kubelet[2512]: I0625 18:48:12.013194 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:48:12.013838 kubelet[2512]: E0625 18:48:12.013799 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:13.926178 kubelet[2512]: E0625 18:48:13.926096 2512 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44zgj" podUID="95ae610b-871e-4cb8-8b01-77db3f937baa" Jun 25 18:48:15.343862 containerd[1445]: time="2024-06-25T18:48:15.343802514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:15.344599 containerd[1445]: time="2024-06-25T18:48:15.344555930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 18:48:15.345689 containerd[1445]: time="2024-06-25T18:48:15.345665046Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:15.348006 containerd[1445]: time="2024-06-25T18:48:15.347961585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:15.348671 containerd[1445]: time="2024-06-25T18:48:15.348626706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 4.824317986s" Jun 25 18:48:15.348671 containerd[1445]: time="2024-06-25T18:48:15.348656171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 18:48:15.350388 containerd[1445]: time="2024-06-25T18:48:15.350340809Z" level=info msg="CreateContainer within sandbox \"2feb7a0d6a4187c79c5e1d699578864a9a6c0198d9ad11ebe065d34e3f71f9e1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 18:48:15.366014 containerd[1445]: time="2024-06-25T18:48:15.365975169Z" level=info 
msg="CreateContainer within sandbox \"2feb7a0d6a4187c79c5e1d699578864a9a6c0198d9ad11ebe065d34e3f71f9e1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f881ec488365e817d39d42e9f529eb0d1c97bf7c730912552f921814a5041263\"" Jun 25 18:48:15.366504 containerd[1445]: time="2024-06-25T18:48:15.366478185Z" level=info msg="StartContainer for \"f881ec488365e817d39d42e9f529eb0d1c97bf7c730912552f921814a5041263\"" Jun 25 18:48:15.404594 systemd[1]: Started cri-containerd-f881ec488365e817d39d42e9f529eb0d1c97bf7c730912552f921814a5041263.scope - libcontainer container f881ec488365e817d39d42e9f529eb0d1c97bf7c730912552f921814a5041263. Jun 25 18:48:15.441409 containerd[1445]: time="2024-06-25T18:48:15.441327229Z" level=info msg="StartContainer for \"f881ec488365e817d39d42e9f529eb0d1c97bf7c730912552f921814a5041263\" returns successfully" Jun 25 18:48:15.926675 kubelet[2512]: E0625 18:48:15.926636 2512 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44zgj" podUID="95ae610b-871e-4cb8-8b01-77db3f937baa" Jun 25 18:48:16.013628 kubelet[2512]: E0625 18:48:16.013602 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:16.034248 kubelet[2512]: I0625 18:48:16.033687 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-558667c89b-swclv" podStartSLOduration=6.489312874 podCreationTimestamp="2024-06-25 18:48:06 +0000 UTC" firstStartedPulling="2024-06-25 18:48:06.979197812 +0000 UTC m=+21.134436415" lastFinishedPulling="2024-06-25 18:48:10.52353304 +0000 UTC m=+24.678771643" observedRunningTime="2024-06-25 18:48:11.012763752 +0000 UTC m=+25.168002375" watchObservedRunningTime="2024-06-25 18:48:16.033648102 +0000 UTC m=+30.188886705" Jun 25 18:48:16.756039 containerd[1445]: time="2024-06-25T18:48:16.755951931Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:48:16.758616 kubelet[2512]: I0625 18:48:16.758589 2512 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 18:48:16.759458 systemd[1]: cri-containerd-f881ec488365e817d39d42e9f529eb0d1c97bf7c730912552f921814a5041263.scope: Deactivated successfully. 
Jun 25 18:48:16.776812 kubelet[2512]: I0625 18:48:16.776700 2512 topology_manager.go:215] "Topology Admit Handler" podUID="32242272-2e0c-40ca-8a2e-803560249411" podNamespace="kube-system" podName="coredns-5dd5756b68-54xzb" Jun 25 18:48:16.780671 kubelet[2512]: I0625 18:48:16.780638 2512 topology_manager.go:215] "Topology Admit Handler" podUID="7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94" podNamespace="calico-system" podName="calico-kube-controllers-fb5dbb684-c2xhl" Jun 25 18:48:16.781421 kubelet[2512]: I0625 18:48:16.780910 2512 topology_manager.go:215] "Topology Admit Handler" podUID="b4f157b2-6424-4d5a-852d-442ff6151575" podNamespace="kube-system" podName="coredns-5dd5756b68-926qz" Jun 25 18:48:16.794978 systemd[1]: Created slice kubepods-burstable-pod32242272_2e0c_40ca_8a2e_803560249411.slice - libcontainer container kubepods-burstable-pod32242272_2e0c_40ca_8a2e_803560249411.slice. Jun 25 18:48:16.801683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f881ec488365e817d39d42e9f529eb0d1c97bf7c730912552f921814a5041263-rootfs.mount: Deactivated successfully. Jun 25 18:48:16.807272 systemd[1]: Created slice kubepods-besteffort-pod7ee8e7db_b61a_4d82_9e63_5fbaf75d0a94.slice - libcontainer container kubepods-besteffort-pod7ee8e7db_b61a_4d82_9e63_5fbaf75d0a94.slice. Jun 25 18:48:16.813157 systemd[1]: Created slice kubepods-burstable-podb4f157b2_6424_4d5a_852d_442ff6151575.slice - libcontainer container kubepods-burstable-podb4f157b2_6424_4d5a_852d_442ff6151575.slice. Jun 25 18:48:16.839454 kubelet[2512]: I0625 18:48:16.839145 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4f157b2-6424-4d5a-852d-442ff6151575-config-volume\") pod \"coredns-5dd5756b68-926qz\" (UID: \"b4f157b2-6424-4d5a-852d-442ff6151575\") " pod="kube-system/coredns-5dd5756b68-926qz" Jun 25 18:48:16.839454 kubelet[2512]: I0625 18:48:16.839205 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj5n5\" (UniqueName: \"kubernetes.io/projected/b4f157b2-6424-4d5a-852d-442ff6151575-kube-api-access-tj5n5\") pod \"coredns-5dd5756b68-926qz\" (UID: \"b4f157b2-6424-4d5a-852d-442ff6151575\") " pod="kube-system/coredns-5dd5756b68-926qz" Jun 25 18:48:16.839454 kubelet[2512]: I0625 18:48:16.839228 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzdzc\" (UniqueName: \"kubernetes.io/projected/32242272-2e0c-40ca-8a2e-803560249411-kube-api-access-zzdzc\") pod \"coredns-5dd5756b68-54xzb\" (UID: \"32242272-2e0c-40ca-8a2e-803560249411\") " pod="kube-system/coredns-5dd5756b68-54xzb" Jun 25 18:48:16.839454 kubelet[2512]: I0625 18:48:16.839248 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94-tigera-ca-bundle\") pod \"calico-kube-controllers-fb5dbb684-c2xhl\" (UID: \"7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94\") " pod="calico-system/calico-kube-controllers-fb5dbb684-c2xhl" Jun 25 18:48:16.839454 kubelet[2512]: I0625 18:48:16.839269 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32242272-2e0c-40ca-8a2e-803560249411-config-volume\") pod \"coredns-5dd5756b68-54xzb\" (UID: \"32242272-2e0c-40ca-8a2e-803560249411\") " pod="kube-system/coredns-5dd5756b68-54xzb" Jun 
25 18:48:16.846017 kubelet[2512]: I0625 18:48:16.839300 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chrfm\" (UniqueName: \"kubernetes.io/projected/7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94-kube-api-access-chrfm\") pod \"calico-kube-controllers-fb5dbb684-c2xhl\" (UID: \"7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94\") " pod="calico-system/calico-kube-controllers-fb5dbb684-c2xhl" Jun 25 18:48:17.015702 kubelet[2512]: E0625 18:48:17.015578 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:17.090224 containerd[1445]: time="2024-06-25T18:48:17.090135602Z" level=info msg="shim disconnected" id=f881ec488365e817d39d42e9f529eb0d1c97bf7c730912552f921814a5041263 namespace=k8s.io Jun 25 18:48:17.090224 containerd[1445]: time="2024-06-25T18:48:17.090218688Z" level=warning msg="cleaning up after shim disconnected" id=f881ec488365e817d39d42e9f529eb0d1c97bf7c730912552f921814a5041263 namespace=k8s.io Jun 25 18:48:17.090224 containerd[1445]: time="2024-06-25T18:48:17.090233386Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:48:17.100166 kubelet[2512]: E0625 18:48:17.100118 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:17.101054 containerd[1445]: time="2024-06-25T18:48:17.100996959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-54xzb,Uid:32242272-2e0c-40ca-8a2e-803560249411,Namespace:kube-system,Attempt:0,}" Jun 25 18:48:17.110816 containerd[1445]: time="2024-06-25T18:48:17.110746187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fb5dbb684-c2xhl,Uid:7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94,Namespace:calico-system,Attempt:0,}" Jun 25 18:48:17.117246 kubelet[2512]: E0625 18:48:17.117208 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:17.117896 containerd[1445]: time="2024-06-25T18:48:17.117820315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-926qz,Uid:b4f157b2-6424-4d5a-852d-442ff6151575,Namespace:kube-system,Attempt:0,}" Jun 25 18:48:17.203916 containerd[1445]: time="2024-06-25T18:48:17.203841311Z" level=error msg="Failed to destroy network for sandbox \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.204655 containerd[1445]: time="2024-06-25T18:48:17.204494959Z" level=error msg="encountered an error cleaning up failed sandbox \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.204655 containerd[1445]: time="2024-06-25T18:48:17.204549713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-54xzb,Uid:32242272-2e0c-40ca-8a2e-803560249411,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.204851 kubelet[2512]: E0625 18:48:17.204815 2512 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.205509 kubelet[2512]: E0625 18:48:17.205115 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-54xzb" Jun 25 18:48:17.205509 kubelet[2512]: E0625 18:48:17.205143 2512 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-54xzb" Jun 25 18:48:17.205606 kubelet[2512]: E0625 18:48:17.205593 2512 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-54xzb_kube-system(32242272-2e0c-40ca-8a2e-803560249411)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-54xzb_kube-system(32242272-2e0c-40ca-8a2e-803560249411)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-54xzb" podUID="32242272-2e0c-40ca-8a2e-803560249411" Jun 25 18:48:17.211882 containerd[1445]: time="2024-06-25T18:48:17.211812886Z" level=error msg="Failed to destroy network for sandbox \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.212417 containerd[1445]: time="2024-06-25T18:48:17.212387265Z" level=error msg="encountered an error cleaning up failed sandbox \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.212491 containerd[1445]: time="2024-06-25T18:48:17.212453841Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-926qz,Uid:b4f157b2-6424-4d5a-852d-442ff6151575,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.212847 kubelet[2512]: E0625 18:48:17.212814 2512 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.212910 kubelet[2512]: E0625 18:48:17.212881 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-926qz" Jun 25 18:48:17.212965 kubelet[2512]: E0625 18:48:17.212951 2512 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-926qz" Jun 25 18:48:17.213037 kubelet[2512]: E0625 18:48:17.213024 2512 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-926qz_kube-system(b4f157b2-6424-4d5a-852d-442ff6151575)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-926qz_kube-system(b4f157b2-6424-4d5a-852d-442ff6151575)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-926qz" podUID="b4f157b2-6424-4d5a-852d-442ff6151575" Jun 25 18:48:17.213132 containerd[1445]: time="2024-06-25T18:48:17.213003273Z" level=error msg="Failed to destroy network for sandbox \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.213508 containerd[1445]: time="2024-06-25T18:48:17.213484719Z" level=error msg="encountered an error cleaning up failed sandbox \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.213567 containerd[1445]: 
time="2024-06-25T18:48:17.213534472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fb5dbb684-c2xhl,Uid:7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.213820 kubelet[2512]: E0625 18:48:17.213793 2512 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.213876 kubelet[2512]: E0625 18:48:17.213856 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fb5dbb684-c2xhl" Jun 25 18:48:17.213905 kubelet[2512]: E0625 18:48:17.213879 2512 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fb5dbb684-c2xhl" Jun 25 18:48:17.213975 kubelet[2512]: E0625 18:48:17.213959 2512 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fb5dbb684-c2xhl_calico-system(7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fb5dbb684-c2xhl_calico-system(7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fb5dbb684-c2xhl" podUID="7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94" Jun 25 18:48:17.802175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e-shm.mount: Deactivated successfully. Jun 25 18:48:17.931541 systemd[1]: Created slice kubepods-besteffort-pod95ae610b_871e_4cb8_8b01_77db3f937baa.slice - libcontainer container kubepods-besteffort-pod95ae610b_871e_4cb8_8b01_77db3f937baa.slice. 
Jun 25 18:48:17.933734 containerd[1445]: time="2024-06-25T18:48:17.933692504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44zgj,Uid:95ae610b-871e-4cb8-8b01-77db3f937baa,Namespace:calico-system,Attempt:0,}" Jun 25 18:48:17.995649 containerd[1445]: time="2024-06-25T18:48:17.995598497Z" level=error msg="Failed to destroy network for sandbox \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.997829 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d-shm.mount: Deactivated successfully. Jun 25 18:48:17.998422 containerd[1445]: time="2024-06-25T18:48:17.998362643Z" level=error msg="encountered an error cleaning up failed sandbox \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.998506 containerd[1445]: time="2024-06-25T18:48:17.998449857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44zgj,Uid:95ae610b-871e-4cb8-8b01-77db3f937baa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.998689 kubelet[2512]: E0625 18:48:17.998668 2512 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:17.998773 kubelet[2512]: E0625 18:48:17.998716 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-44zgj" Jun 25 18:48:17.998773 kubelet[2512]: E0625 18:48:17.998736 2512 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-44zgj" Jun 25 18:48:17.998842 kubelet[2512]: E0625 18:48:17.998788 2512 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-44zgj_calico-system(95ae610b-871e-4cb8-8b01-77db3f937baa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-44zgj_calico-system(95ae610b-871e-4cb8-8b01-77db3f937baa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-44zgj" podUID="95ae610b-871e-4cb8-8b01-77db3f937baa" Jun 25 18:48:18.018196 kubelet[2512]: I0625 18:48:18.018154 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Jun 25 18:48:18.018764 containerd[1445]: time="2024-06-25T18:48:18.018716380Z" level=info msg="StopPodSandbox for \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\"" Jun 25 18:48:18.019000 containerd[1445]: time="2024-06-25T18:48:18.018962733Z" level=info msg="Ensure that sandbox 59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d in task-service has been cleanup successfully" Jun 25 18:48:18.019366 kubelet[2512]: I0625 18:48:18.019342 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Jun 25 18:48:18.020659 kubelet[2512]: I0625 18:48:18.020631 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Jun 25 18:48:18.020706 containerd[1445]: time="2024-06-25T18:48:18.020215997Z" level=info msg="StopPodSandbox for \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\"" Jun 25 18:48:18.020706 containerd[1445]: time="2024-06-25T18:48:18.020442393Z" level=info msg="Ensure that sandbox d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e in task-service has been cleanup successfully" Jun 25 18:48:18.021015 containerd[1445]: time="2024-06-25T18:48:18.020988550Z" level=info msg="StopPodSandbox for \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\"" Jun 25 18:48:18.021200 containerd[1445]: time="2024-06-25T18:48:18.021172967Z" level=info msg="Ensure that sandbox f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d in task-service has been cleanup successfully" Jun 25 18:48:18.023466 kubelet[2512]: I0625 18:48:18.023436 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Jun 25 18:48:18.024649 containerd[1445]: time="2024-06-25T18:48:18.024248788Z" level=info msg="StopPodSandbox for \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\"" Jun 25 18:48:18.024649 containerd[1445]: time="2024-06-25T18:48:18.024492706Z" level=info msg="Ensure that sandbox fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb in task-service has been cleanup successfully" Jun 25 18:48:18.031912 kubelet[2512]: E0625 18:48:18.031880 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:18.037026 containerd[1445]: time="2024-06-25T18:48:18.036982802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 18:48:18.070422 containerd[1445]: time="2024-06-25T18:48:18.069762613Z" level=error msg="StopPodSandbox for 
\"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\" failed" error="failed to destroy network for sandbox \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:18.070618 kubelet[2512]: E0625 18:48:18.070168 2512 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Jun 25 18:48:18.070618 kubelet[2512]: E0625 18:48:18.070250 2512 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d"} Jun 25 18:48:18.070618 kubelet[2512]: E0625 18:48:18.070291 2512 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"95ae610b-871e-4cb8-8b01-77db3f937baa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:48:18.070618 kubelet[2512]: E0625 18:48:18.070319 2512 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"95ae610b-871e-4cb8-8b01-77db3f937baa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-44zgj" podUID="95ae610b-871e-4cb8-8b01-77db3f937baa" Jun 25 18:48:18.074543 containerd[1445]: time="2024-06-25T18:48:18.074486351Z" level=error msg="StopPodSandbox for \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\" failed" error="failed to destroy network for sandbox \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:18.074784 kubelet[2512]: E0625 18:48:18.074730 2512 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Jun 25 18:48:18.074915 kubelet[2512]: E0625 18:48:18.074890 2512 kuberuntime_manager.go:1380] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb"} Jun 25 18:48:18.074986 kubelet[2512]: E0625 18:48:18.074951 2512 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4f157b2-6424-4d5a-852d-442ff6151575\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:48:18.075065 kubelet[2512]: E0625 18:48:18.074993 2512 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4f157b2-6424-4d5a-852d-442ff6151575\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-926qz" podUID="b4f157b2-6424-4d5a-852d-442ff6151575" Jun 25 18:48:18.075808 containerd[1445]: time="2024-06-25T18:48:18.075759364Z" level=error msg="StopPodSandbox for \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\" failed" error="failed to destroy network for sandbox \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:18.075991 kubelet[2512]: E0625 18:48:18.075962 2512 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Jun 25 18:48:18.075991 kubelet[2512]: E0625 18:48:18.075990 2512 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e"} Jun 25 18:48:18.076094 kubelet[2512]: E0625 18:48:18.076027 2512 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"32242272-2e0c-40ca-8a2e-803560249411\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:48:18.076094 kubelet[2512]: E0625 18:48:18.076065 2512 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"32242272-2e0c-40ca-8a2e-803560249411\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-54xzb" podUID="32242272-2e0c-40ca-8a2e-803560249411" Jun 25 18:48:18.076701 containerd[1445]: time="2024-06-25T18:48:18.076666579Z" level=error msg="StopPodSandbox for \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\" failed" error="failed to destroy network for sandbox \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:48:18.076946 kubelet[2512]: E0625 18:48:18.076913 2512 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Jun 25 18:48:18.076946 kubelet[2512]: E0625 18:48:18.076940 2512 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d"} Jun 25 18:48:18.077045 kubelet[2512]: E0625 18:48:18.076977 2512 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:48:18.077045 kubelet[2512]: E0625 18:48:18.077007 2512 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fb5dbb684-c2xhl" podUID="7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94" Jun 25 18:48:19.065110 systemd[1]: Started sshd@9-10.0.0.161:22-10.0.0.1:52980.service - OpenSSH per-connection server daemon (10.0.0.1:52980). Jun 25 18:48:19.112785 sshd[3474]: Accepted publickey for core from 10.0.0.1 port 52980 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:48:19.114709 sshd[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:19.119963 systemd-logind[1429]: New session 10 of user core. Jun 25 18:48:19.129576 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:48:19.269754 sshd[3474]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:19.274202 systemd[1]: sshd@9-10.0.0.161:22-10.0.0.1:52980.service: Deactivated successfully. Jun 25 18:48:19.277176 systemd[1]: session-10.scope: Deactivated successfully. 
Jun 25 18:48:19.278141 systemd-logind[1429]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:48:19.279163 systemd-logind[1429]: Removed session 10. Jun 25 18:48:21.622995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount599359514.mount: Deactivated successfully. Jun 25 18:48:22.036873 containerd[1445]: time="2024-06-25T18:48:22.036702075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:22.038090 containerd[1445]: time="2024-06-25T18:48:22.038023447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 18:48:22.039753 containerd[1445]: time="2024-06-25T18:48:22.039697151Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:22.045367 containerd[1445]: time="2024-06-25T18:48:22.045315064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:22.045828 containerd[1445]: time="2024-06-25T18:48:22.045797120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 4.008763703s" Jun 25 18:48:22.045828 containerd[1445]: time="2024-06-25T18:48:22.045825904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 18:48:22.054399 containerd[1445]: time="2024-06-25T18:48:22.053409530Z" level=info msg="CreateContainer within sandbox \"2feb7a0d6a4187c79c5e1d699578864a9a6c0198d9ad11ebe065d34e3f71f9e1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 18:48:22.077685 containerd[1445]: time="2024-06-25T18:48:22.077639226Z" level=info msg="CreateContainer within sandbox \"2feb7a0d6a4187c79c5e1d699578864a9a6c0198d9ad11ebe065d34e3f71f9e1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"41c24332952c6fb1f8245c176adf4653f07f6517821eab51266374130720bf73\"" Jun 25 18:48:22.078193 containerd[1445]: time="2024-06-25T18:48:22.078130889Z" level=info msg="StartContainer for \"41c24332952c6fb1f8245c176adf4653f07f6517821eab51266374130720bf73\"" Jun 25 18:48:22.151599 systemd[1]: Started cri-containerd-41c24332952c6fb1f8245c176adf4653f07f6517821eab51266374130720bf73.scope - libcontainer container 41c24332952c6fb1f8245c176adf4653f07f6517821eab51266374130720bf73. Jun 25 18:48:22.427743 containerd[1445]: time="2024-06-25T18:48:22.427680269Z" level=info msg="StartContainer for \"41c24332952c6fb1f8245c176adf4653f07f6517821eab51266374130720bf73\" returns successfully" Jun 25 18:48:22.436414 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 18:48:22.436560 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jun 25 18:48:23.050693 kubelet[2512]: E0625 18:48:23.050649 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:23.060828 kubelet[2512]: I0625 18:48:23.060547 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-gw9np" podStartSLOduration=1.970480239 podCreationTimestamp="2024-06-25 18:48:06 +0000 UTC" firstStartedPulling="2024-06-25 18:48:06.95598562 +0000 UTC m=+21.111224223" lastFinishedPulling="2024-06-25 18:48:22.046001865 +0000 UTC m=+36.201240468" observedRunningTime="2024-06-25 18:48:23.059817519 +0000 UTC m=+37.215056122" watchObservedRunningTime="2024-06-25 18:48:23.060496484 +0000 UTC m=+37.215735087" Jun 25 18:48:24.281427 systemd[1]: Started sshd@10-10.0.0.161:22-10.0.0.1:53036.service - OpenSSH per-connection server daemon (10.0.0.1:53036). Jun 25 18:48:24.324156 sshd[3662]: Accepted publickey for core from 10.0.0.1 port 53036 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:48:24.326644 sshd[3662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:24.331297 systemd-logind[1429]: New session 11 of user core. Jun 25 18:48:24.339546 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:48:24.483598 sshd[3662]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:24.487445 systemd[1]: sshd@10-10.0.0.161:22-10.0.0.1:53036.service: Deactivated successfully. Jun 25 18:48:24.489551 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:48:24.490149 systemd-logind[1429]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:48:24.491006 systemd-logind[1429]: Removed session 11. Jun 25 18:48:28.927340 containerd[1445]: time="2024-06-25T18:48:28.927174242Z" level=info msg="StopPodSandbox for \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\"" Jun 25 18:48:29.328662 containerd[1445]: 2024-06-25 18:48:29.114 [INFO][3794] k8s.go 608: Cleaning up netns ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Jun 25 18:48:29.328662 containerd[1445]: 2024-06-25 18:48:29.114 [INFO][3794] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" iface="eth0" netns="/var/run/netns/cni-3dfe90e8-74b1-644e-1dd2-e43d8018613b" Jun 25 18:48:29.328662 containerd[1445]: 2024-06-25 18:48:29.115 [INFO][3794] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" iface="eth0" netns="/var/run/netns/cni-3dfe90e8-74b1-644e-1dd2-e43d8018613b" Jun 25 18:48:29.328662 containerd[1445]: 2024-06-25 18:48:29.115 [INFO][3794] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" iface="eth0" netns="/var/run/netns/cni-3dfe90e8-74b1-644e-1dd2-e43d8018613b" Jun 25 18:48:29.328662 containerd[1445]: 2024-06-25 18:48:29.115 [INFO][3794] k8s.go 615: Releasing IP address(es) ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Jun 25 18:48:29.328662 containerd[1445]: 2024-06-25 18:48:29.115 [INFO][3794] utils.go 188: Calico CNI releasing IP address ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Jun 25 18:48:29.328662 containerd[1445]: 2024-06-25 18:48:29.164 [INFO][3802] ipam_plugin.go 411: Releasing address using handleID ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" HandleID="k8s-pod-network.d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Workload="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:29.328662 containerd[1445]: 2024-06-25 18:48:29.164 [INFO][3802] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:29.328662 containerd[1445]: 2024-06-25 18:48:29.164 [INFO][3802] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:48:29.328662 containerd[1445]: 2024-06-25 18:48:29.248 [WARNING][3802] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" HandleID="k8s-pod-network.d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Workload="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:29.328662 containerd[1445]: 2024-06-25 18:48:29.248 [INFO][3802] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" HandleID="k8s-pod-network.d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Workload="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:29.328662 containerd[1445]: 2024-06-25 18:48:29.321 [INFO][3802] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:48:29.328662 containerd[1445]: 2024-06-25 18:48:29.325 [INFO][3794] k8s.go 621: Teardown processing complete. ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Jun 25 18:48:29.376705 containerd[1445]: time="2024-06-25T18:48:29.329580793Z" level=info msg="TearDown network for sandbox \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\" successfully" Jun 25 18:48:29.376705 containerd[1445]: time="2024-06-25T18:48:29.329609577Z" level=info msg="StopPodSandbox for \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\" returns successfully" Jun 25 18:48:29.376705 containerd[1445]: time="2024-06-25T18:48:29.330630875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-54xzb,Uid:32242272-2e0c-40ca-8a2e-803560249411,Namespace:kube-system,Attempt:1,}" Jun 25 18:48:29.334278 systemd[1]: run-netns-cni\x2d3dfe90e8\x2d74b1\x2d644e\x2d1dd2\x2de43d8018613b.mount: Deactivated successfully. Jun 25 18:48:29.377144 kubelet[2512]: E0625 18:48:29.329950 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:29.500479 systemd[1]: Started sshd@11-10.0.0.161:22-10.0.0.1:52750.service - OpenSSH per-connection server daemon (10.0.0.1:52750). 
Jun 25 18:48:29.551545 sshd[3834]: Accepted publickey for core from 10.0.0.1 port 52750 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:48:29.553285 sshd[3834]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:29.557960 systemd-logind[1429]: New session 12 of user core. Jun 25 18:48:29.568541 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:48:29.913867 sshd[3834]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:29.918311 systemd[1]: sshd@11-10.0.0.161:22-10.0.0.1:52750.service: Deactivated successfully. Jun 25 18:48:29.920484 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:48:29.921225 systemd-logind[1429]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:48:29.922201 systemd-logind[1429]: Removed session 12. Jun 25 18:48:29.927261 containerd[1445]: time="2024-06-25T18:48:29.927227524Z" level=info msg="StopPodSandbox for \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\"" Jun 25 18:48:29.928076 containerd[1445]: time="2024-06-25T18:48:29.927425977Z" level=info msg="StopPodSandbox for \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\"" Jun 25 18:48:30.110103 containerd[1445]: 2024-06-25 18:48:29.992 [INFO][3880] k8s.go 608: Cleaning up netns ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Jun 25 18:48:30.110103 containerd[1445]: 2024-06-25 18:48:29.992 [INFO][3880] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" iface="eth0" netns="/var/run/netns/cni-b0ea6840-a9aa-4931-9c10-8f93f95fe0c8" Jun 25 18:48:30.110103 containerd[1445]: 2024-06-25 18:48:29.992 [INFO][3880] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" iface="eth0" netns="/var/run/netns/cni-b0ea6840-a9aa-4931-9c10-8f93f95fe0c8" Jun 25 18:48:30.110103 containerd[1445]: 2024-06-25 18:48:29.993 [INFO][3880] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" iface="eth0" netns="/var/run/netns/cni-b0ea6840-a9aa-4931-9c10-8f93f95fe0c8" Jun 25 18:48:30.110103 containerd[1445]: 2024-06-25 18:48:29.993 [INFO][3880] k8s.go 615: Releasing IP address(es) ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Jun 25 18:48:30.110103 containerd[1445]: 2024-06-25 18:48:29.993 [INFO][3880] utils.go 188: Calico CNI releasing IP address ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Jun 25 18:48:30.110103 containerd[1445]: 2024-06-25 18:48:30.024 [INFO][3896] ipam_plugin.go 411: Releasing address using handleID ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" HandleID="k8s-pod-network.59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Workload="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:30.110103 containerd[1445]: 2024-06-25 18:48:30.024 [INFO][3896] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:30.110103 containerd[1445]: 2024-06-25 18:48:30.025 [INFO][3896] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:48:30.110103 containerd[1445]: 2024-06-25 18:48:30.101 [WARNING][3896] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" HandleID="k8s-pod-network.59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Workload="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:30.110103 containerd[1445]: 2024-06-25 18:48:30.101 [INFO][3896] ipam_plugin.go 439: Releasing address using workloadID ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" HandleID="k8s-pod-network.59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Workload="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:30.110103 containerd[1445]: 2024-06-25 18:48:30.102 [INFO][3896] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:48:30.110103 containerd[1445]: 2024-06-25 18:48:30.106 [INFO][3880] k8s.go 621: Teardown processing complete. ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Jun 25 18:48:30.110533 containerd[1445]: time="2024-06-25T18:48:30.110321288Z" level=info msg="TearDown network for sandbox \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\" successfully" Jun 25 18:48:30.110533 containerd[1445]: time="2024-06-25T18:48:30.110360752Z" level=info msg="StopPodSandbox for \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\" returns successfully" Jun 25 18:48:30.111631 containerd[1445]: time="2024-06-25T18:48:30.111559583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fb5dbb684-c2xhl,Uid:7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94,Namespace:calico-system,Attempt:1,}" Jun 25 18:48:30.113050 systemd[1]: run-netns-cni\x2db0ea6840\x2da9aa\x2d4931\x2d9c10\x2d8f93f95fe0c8.mount: Deactivated successfully. Jun 25 18:48:30.119255 containerd[1445]: 2024-06-25 18:48:30.027 [INFO][3881] k8s.go 608: Cleaning up netns ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Jun 25 18:48:30.119255 containerd[1445]: 2024-06-25 18:48:30.027 [INFO][3881] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" iface="eth0" netns="/var/run/netns/cni-d9092465-905e-38d7-29e3-55e9b78056a0" Jun 25 18:48:30.119255 containerd[1445]: 2024-06-25 18:48:30.027 [INFO][3881] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" iface="eth0" netns="/var/run/netns/cni-d9092465-905e-38d7-29e3-55e9b78056a0" Jun 25 18:48:30.119255 containerd[1445]: 2024-06-25 18:48:30.027 [INFO][3881] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" iface="eth0" netns="/var/run/netns/cni-d9092465-905e-38d7-29e3-55e9b78056a0" Jun 25 18:48:30.119255 containerd[1445]: 2024-06-25 18:48:30.028 [INFO][3881] k8s.go 615: Releasing IP address(es) ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Jun 25 18:48:30.119255 containerd[1445]: 2024-06-25 18:48:30.028 [INFO][3881] utils.go 188: Calico CNI releasing IP address ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Jun 25 18:48:30.119255 containerd[1445]: 2024-06-25 18:48:30.047 [INFO][3904] ipam_plugin.go 411: Releasing address using handleID ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" HandleID="k8s-pod-network.f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Workload="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:30.119255 containerd[1445]: 2024-06-25 18:48:30.047 [INFO][3904] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:30.119255 containerd[1445]: 2024-06-25 18:48:30.102 [INFO][3904] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:48:30.119255 containerd[1445]: 2024-06-25 18:48:30.111 [WARNING][3904] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" HandleID="k8s-pod-network.f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Workload="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:30.119255 containerd[1445]: 2024-06-25 18:48:30.111 [INFO][3904] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" HandleID="k8s-pod-network.f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Workload="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:30.119255 containerd[1445]: 2024-06-25 18:48:30.113 [INFO][3904] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:48:30.119255 containerd[1445]: 2024-06-25 18:48:30.116 [INFO][3881] k8s.go 621: Teardown processing complete. ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Jun 25 18:48:30.119640 containerd[1445]: time="2024-06-25T18:48:30.119511449Z" level=info msg="TearDown network for sandbox \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\" successfully" Jun 25 18:48:30.119640 containerd[1445]: time="2024-06-25T18:48:30.119536637Z" level=info msg="StopPodSandbox for \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\" returns successfully" Jun 25 18:48:30.120133 containerd[1445]: time="2024-06-25T18:48:30.120113559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44zgj,Uid:95ae610b-871e-4cb8-8b01-77db3f937baa,Namespace:calico-system,Attempt:1,}" Jun 25 18:48:30.121839 systemd[1]: run-netns-cni\x2dd9092465\x2d905e\x2d38d7\x2d29e3\x2d55e9b78056a0.mount: Deactivated successfully. 
Jun 25 18:48:30.662173 kubelet[2512]: I0625 18:48:30.661822 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:48:30.667701 kubelet[2512]: E0625 18:48:30.665275 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:30.680895 systemd-networkd[1382]: cali50e42519dbd: Link UP Jun 25 18:48:30.682172 systemd-networkd[1382]: cali50e42519dbd: Gained carrier Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.573 [INFO][3913] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.586 [INFO][3913] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--54xzb-eth0 coredns-5dd5756b68- kube-system 32242272-2e0c-40ca-8a2e-803560249411 792 0 2024-06-25 18:48:00 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-54xzb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali50e42519dbd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" Namespace="kube-system" Pod="coredns-5dd5756b68-54xzb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--54xzb-" Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.586 [INFO][3913] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" Namespace="kube-system" Pod="coredns-5dd5756b68-54xzb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.627 [INFO][3952] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" HandleID="k8s-pod-network.8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" Workload="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.636 [INFO][3952] ipam_plugin.go 264: Auto assigning IP ContainerID="8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" HandleID="k8s-pod-network.8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" Workload="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030a570), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-54xzb", "timestamp":"2024-06-25 18:48:30.627839166 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.636 [INFO][3952] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.636 [INFO][3952] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.636 [INFO][3952] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.638 [INFO][3952] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" host="localhost" Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.642 [INFO][3952] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.646 [INFO][3952] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.648 [INFO][3952] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.650 [INFO][3952] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.650 [INFO][3952] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" host="localhost" Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.651 [INFO][3952] ipam.go 1685: Creating new handle: k8s-pod-network.8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.655 [INFO][3952] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" host="localhost" Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.658 [INFO][3952] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" host="localhost" Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.658 [INFO][3952] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" host="localhost" Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.658 [INFO][3952] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:48:30.698307 containerd[1445]: 2024-06-25 18:48:30.658 [INFO][3952] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" HandleID="k8s-pod-network.8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" Workload="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:30.700254 containerd[1445]: 2024-06-25 18:48:30.661 [INFO][3913] k8s.go 386: Populated endpoint ContainerID="8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" Namespace="kube-system" Pod="coredns-5dd5756b68-54xzb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--54xzb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"32242272-2e0c-40ca-8a2e-803560249411", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-54xzb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50e42519dbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:30.700254 containerd[1445]: 2024-06-25 18:48:30.662 [INFO][3913] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" Namespace="kube-system" Pod="coredns-5dd5756b68-54xzb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:30.700254 containerd[1445]: 2024-06-25 18:48:30.662 [INFO][3913] dataplane_linux.go 68: Setting the host side veth name to cali50e42519dbd ContainerID="8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" Namespace="kube-system" Pod="coredns-5dd5756b68-54xzb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:30.700254 containerd[1445]: 2024-06-25 18:48:30.681 [INFO][3913] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" Namespace="kube-system" Pod="coredns-5dd5756b68-54xzb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:30.700254 containerd[1445]: 2024-06-25 18:48:30.682 [INFO][3913] k8s.go 414: Added Mac, interface name,
and active container ID to endpoint ContainerID="8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" Namespace="kube-system" Pod="coredns-5dd5756b68-54xzb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--54xzb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"32242272-2e0c-40ca-8a2e-803560249411", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da", Pod:"coredns-5dd5756b68-54xzb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50e42519dbd", MAC:"9e:f5:e5:95:4d:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:30.700254 containerd[1445]: 2024-06-25 18:48:30.694 [INFO][3913] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da" Namespace="kube-system" Pod="coredns-5dd5756b68-54xzb" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:30.734941 systemd-networkd[1382]: cali73913b11c51: Link UP Jun 25 18:48:30.735826 systemd-networkd[1382]: cali73913b11c51: Gained carrier Jun 25 18:48:30.749402 containerd[1445]: time="2024-06-25T18:48:30.748240160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:48:30.749402 containerd[1445]: time="2024-06-25T18:48:30.748870202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:30.749402 containerd[1445]: time="2024-06-25T18:48:30.748887725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:48:30.749402 containerd[1445]: time="2024-06-25T18:48:30.748897033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.601 [INFO][3935] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.613 [INFO][3935] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--44zgj-eth0 csi-node-driver- calico-system 95ae610b-871e-4cb8-8b01-77db3f937baa 802 0 2024-06-25 18:48:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-44zgj eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali73913b11c51 [] []}} ContainerID="720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" Namespace="calico-system" Pod="csi-node-driver-44zgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--44zgj-" Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.613 [INFO][3935] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" Namespace="calico-system" Pod="csi-node-driver-44zgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.658 [INFO][3966] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" HandleID="k8s-pod-network.720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" Workload="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.672 [INFO][3966] ipam_plugin.go 264: Auto assigning IP ContainerID="720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" HandleID="k8s-pod-network.720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" Workload="localhost-k8s-csi--node--driver--44zgj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000294bd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-44zgj", "timestamp":"2024-06-25 18:48:30.658225965 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.672 [INFO][3966] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.673 [INFO][3966] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.673 [INFO][3966] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.677 [INFO][3966] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" host="localhost" Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.693 [INFO][3966] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.706 [INFO][3966] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.710 [INFO][3966] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.713 [INFO][3966] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.713 [INFO][3966] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" host="localhost" Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.714 [INFO][3966] ipam.go 1685: Creating new handle: k8s-pod-network.720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.719 [INFO][3966] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" host="localhost" Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.727 [INFO][3966] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" host="localhost" Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.727 [INFO][3966] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" host="localhost" Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.727 [INFO][3966] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:48:30.758209 containerd[1445]: 2024-06-25 18:48:30.727 [INFO][3966] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" HandleID="k8s-pod-network.720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" Workload="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:30.759309 containerd[1445]: 2024-06-25 18:48:30.731 [INFO][3935] k8s.go 386: Populated endpoint ContainerID="720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" Namespace="calico-system" Pod="csi-node-driver-44zgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--44zgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--44zgj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"95ae610b-871e-4cb8-8b01-77db3f937baa", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-44zgj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali73913b11c51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:30.759309 containerd[1445]: 2024-06-25 18:48:30.731 [INFO][3935] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" Namespace="calico-system" Pod="csi-node-driver-44zgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:30.759309 containerd[1445]: 2024-06-25 18:48:30.731 [INFO][3935] dataplane_linux.go 68: Setting the host side veth name to cali73913b11c51 ContainerID="720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" Namespace="calico-system" Pod="csi-node-driver-44zgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:30.759309 containerd[1445]: 2024-06-25 18:48:30.735 [INFO][3935] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" Namespace="calico-system" Pod="csi-node-driver-44zgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:30.759309 containerd[1445]: 2024-06-25 18:48:30.736 [INFO][3935] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" Namespace="calico-system" Pod="csi-node-driver-44zgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--44zgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--44zgj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"95ae610b-871e-4cb8-8b01-77db3f937baa", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e", Pod:"csi-node-driver-44zgj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali73913b11c51", MAC:"d6:de:0d:04:cc:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:30.759309 containerd[1445]: 2024-06-25 18:48:30.752 [INFO][3935] k8s.go 500: Wrote updated endpoint to datastore ContainerID="720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e" Namespace="calico-system" Pod="csi-node-driver-44zgj" WorkloadEndpoint="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:30.775562 systemd[1]: Started cri-containerd-8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da.scope - libcontainer container 8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da. 
Jun 25 18:48:30.789113 systemd-networkd[1382]: cali6f9f239e6e6: Link UP Jun 25 18:48:30.790688 systemd-networkd[1382]: cali6f9f239e6e6: Gained carrier Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.598 [INFO][3925] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.613 [INFO][3925] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0 calico-kube-controllers-fb5dbb684- calico-system 7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94 801 0 2024-06-25 18:48:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fb5dbb684 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-fb5dbb684-c2xhl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6f9f239e6e6 [] []}} ContainerID="011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" Namespace="calico-system" Pod="calico-kube-controllers-fb5dbb684-c2xhl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-" Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.613 [INFO][3925] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" Namespace="calico-system" Pod="calico-kube-controllers-fb5dbb684-c2xhl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.661 [INFO][3959] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" HandleID="k8s-pod-network.011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" Workload="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.678 [INFO][3959] ipam_plugin.go 264: Auto assigning IP ContainerID="011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" HandleID="k8s-pod-network.011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" Workload="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027ff00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-fb5dbb684-c2xhl", "timestamp":"2024-06-25 18:48:30.661065936 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.679 [INFO][3959] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.727 [INFO][3959] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.728 [INFO][3959] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.732 [INFO][3959] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" host="localhost" Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.740 [INFO][3959] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.747 [INFO][3959] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.752 [INFO][3959] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.756 [INFO][3959] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.756 [INFO][3959] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" host="localhost" Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.759 [INFO][3959] ipam.go 1685: Creating new handle: k8s-pod-network.011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4 Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.765 [INFO][3959] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" host="localhost" Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.774 [INFO][3959] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" host="localhost" Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.775 [INFO][3959] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" host="localhost" Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.775 [INFO][3959] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:48:30.810195 containerd[1445]: 2024-06-25 18:48:30.775 [INFO][3959] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" HandleID="k8s-pod-network.011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" Workload="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:30.811158 containerd[1445]: 2024-06-25 18:48:30.785 [INFO][3925] k8s.go 386: Populated endpoint ContainerID="011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" Namespace="calico-system" Pod="calico-kube-controllers-fb5dbb684-c2xhl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0", GenerateName:"calico-kube-controllers-fb5dbb684-", Namespace:"calico-system", SelfLink:"", UID:"7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fb5dbb684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-fb5dbb684-c2xhl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f9f239e6e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:30.811158 containerd[1445]: 2024-06-25 18:48:30.785 [INFO][3925] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" Namespace="calico-system" Pod="calico-kube-controllers-fb5dbb684-c2xhl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:30.811158 containerd[1445]: 2024-06-25 18:48:30.785 [INFO][3925] dataplane_linux.go 68: Setting the host side veth name to cali6f9f239e6e6 ContainerID="011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" Namespace="calico-system" Pod="calico-kube-controllers-fb5dbb684-c2xhl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:30.811158 containerd[1445]: 2024-06-25 18:48:30.791 [INFO][3925] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" Namespace="calico-system" Pod="calico-kube-controllers-fb5dbb684-c2xhl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:30.811158 containerd[1445]: 2024-06-25 18:48:30.794 [INFO][3925] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" Namespace="calico-system" Pod="calico-kube-controllers-fb5dbb684-c2xhl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0", GenerateName:"calico-kube-controllers-fb5dbb684-", Namespace:"calico-system", SelfLink:"", UID:"7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fb5dbb684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4", Pod:"calico-kube-controllers-fb5dbb684-c2xhl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f9f239e6e6", MAC:"fa:24:93:f2:d0:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:30.811158 containerd[1445]: 2024-06-25 18:48:30.805 [INFO][3925] k8s.go 500: Wrote updated endpoint to datastore ContainerID="011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4" Namespace="calico-system" Pod="calico-kube-controllers-fb5dbb684-c2xhl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:30.820061 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:48:30.828919 containerd[1445]: time="2024-06-25T18:48:30.828809025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:48:30.829128 containerd[1445]: time="2024-06-25T18:48:30.828976439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:30.829128 containerd[1445]: time="2024-06-25T18:48:30.828998210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:48:30.829128 containerd[1445]: time="2024-06-25T18:48:30.829010673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:30.855976 systemd[1]: Started cri-containerd-720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e.scope - libcontainer container 720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e. 
Jun 25 18:48:30.864644 containerd[1445]: time="2024-06-25T18:48:30.864601261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-54xzb,Uid:32242272-2e0c-40ca-8a2e-803560249411,Namespace:kube-system,Attempt:1,} returns sandbox id \"8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da\"" Jun 25 18:48:30.865692 kubelet[2512]: E0625 18:48:30.865512 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:30.868670 containerd[1445]: time="2024-06-25T18:48:30.868650701Z" level=info msg="CreateContainer within sandbox \"8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:48:30.869464 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:48:30.882157 containerd[1445]: time="2024-06-25T18:48:30.882069479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:48:30.882157 containerd[1445]: time="2024-06-25T18:48:30.882118171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:30.882157 containerd[1445]: time="2024-06-25T18:48:30.882146664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:48:30.882157 containerd[1445]: time="2024-06-25T18:48:30.882159799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:30.884778 containerd[1445]: time="2024-06-25T18:48:30.884736757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44zgj,Uid:95ae610b-871e-4cb8-8b01-77db3f937baa,Namespace:calico-system,Attempt:1,} returns sandbox id \"720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e\"" Jun 25 18:48:30.886348 containerd[1445]: time="2024-06-25T18:48:30.886322443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 18:48:30.902514 systemd[1]: Started cri-containerd-011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4.scope - libcontainer container 011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4. 
Jun 25 18:48:30.916021 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:48:30.925916 containerd[1445]: time="2024-06-25T18:48:30.925870749Z" level=info msg="StopPodSandbox for \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\"" Jun 25 18:48:30.945230 containerd[1445]: time="2024-06-25T18:48:30.945200552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fb5dbb684-c2xhl,Uid:7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94,Namespace:calico-system,Attempt:1,} returns sandbox id \"011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4\"" Jun 25 18:48:31.085592 kubelet[2512]: E0625 18:48:31.085544 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:31.136076 containerd[1445]: time="2024-06-25T18:48:31.136000684Z" level=info msg="CreateContainer within sandbox \"8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"28c91d87a6052f10947954d342b984d210bc7d638d072ee41ea2af5c4b30b241\"" Jun 25 18:48:31.136837 containerd[1445]: time="2024-06-25T18:48:31.136805424Z" level=info msg="StartContainer for \"28c91d87a6052f10947954d342b984d210bc7d638d072ee41ea2af5c4b30b241\"" Jun 25 18:48:31.149579 containerd[1445]: 2024-06-25 18:48:31.104 [INFO][4173] k8s.go 608: Cleaning up netns ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Jun 25 18:48:31.149579 containerd[1445]: 2024-06-25 18:48:31.105 [INFO][4173] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" iface="eth0" netns="/var/run/netns/cni-354a4b8e-20e1-edf9-a023-a9f7bc0696ce" Jun 25 18:48:31.149579 containerd[1445]: 2024-06-25 18:48:31.106 [INFO][4173] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" iface="eth0" netns="/var/run/netns/cni-354a4b8e-20e1-edf9-a023-a9f7bc0696ce" Jun 25 18:48:31.149579 containerd[1445]: 2024-06-25 18:48:31.106 [INFO][4173] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" iface="eth0" netns="/var/run/netns/cni-354a4b8e-20e1-edf9-a023-a9f7bc0696ce" Jun 25 18:48:31.149579 containerd[1445]: 2024-06-25 18:48:31.106 [INFO][4173] k8s.go 615: Releasing IP address(es) ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Jun 25 18:48:31.149579 containerd[1445]: 2024-06-25 18:48:31.106 [INFO][4173] utils.go 188: Calico CNI releasing IP address ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Jun 25 18:48:31.149579 containerd[1445]: 2024-06-25 18:48:31.131 [INFO][4208] ipam_plugin.go 411: Releasing address using handleID ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" HandleID="k8s-pod-network.fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Workload="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:31.149579 containerd[1445]: 2024-06-25 18:48:31.131 [INFO][4208] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:31.149579 containerd[1445]: 2024-06-25 18:48:31.131 [INFO][4208] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:48:31.149579 containerd[1445]: 2024-06-25 18:48:31.137 [WARNING][4208] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" HandleID="k8s-pod-network.fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Workload="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:31.149579 containerd[1445]: 2024-06-25 18:48:31.138 [INFO][4208] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" HandleID="k8s-pod-network.fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Workload="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:31.149579 containerd[1445]: 2024-06-25 18:48:31.140 [INFO][4208] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:48:31.149579 containerd[1445]: 2024-06-25 18:48:31.144 [INFO][4173] k8s.go 621: Teardown processing complete. ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Jun 25 18:48:31.150176 containerd[1445]: time="2024-06-25T18:48:31.150032150Z" level=info msg="TearDown network for sandbox \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\" successfully" Jun 25 18:48:31.150176 containerd[1445]: time="2024-06-25T18:48:31.150060964Z" level=info msg="StopPodSandbox for \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\" returns successfully" Jun 25 18:48:31.150630 kubelet[2512]: E0625 18:48:31.150602 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:31.151589 containerd[1445]: time="2024-06-25T18:48:31.151361937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-926qz,Uid:b4f157b2-6424-4d5a-852d-442ff6151575,Namespace:kube-system,Attempt:1,}" Jun 25 18:48:31.172427 systemd[1]: Started cri-containerd-28c91d87a6052f10947954d342b984d210bc7d638d072ee41ea2af5c4b30b241.scope - libcontainer container 28c91d87a6052f10947954d342b984d210bc7d638d072ee41ea2af5c4b30b241. Jun 25 18:48:31.216089 containerd[1445]: time="2024-06-25T18:48:31.215641006Z" level=info msg="StartContainer for \"28c91d87a6052f10947954d342b984d210bc7d638d072ee41ea2af5c4b30b241\" returns successfully" Jun 25 18:48:31.354733 systemd-networkd[1382]: vxlan.calico: Link UP Jun 25 18:48:31.354957 systemd-networkd[1382]: vxlan.calico: Gained carrier Jun 25 18:48:31.521028 systemd-networkd[1382]: cali681b913a504: Link UP Jun 25 18:48:31.522179 systemd-networkd[1382]: cali681b913a504: Gained carrier Jun 25 18:48:31.538783 systemd[1]: run-netns-cni\x2d354a4b8e\x2d20e1\x2dedf9\x2da023\x2da9f7bc0696ce.mount: Deactivated successfully. 
Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.230 [INFO][4255] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.241 [INFO][4255] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--926qz-eth0 coredns-5dd5756b68- kube-system b4f157b2-6424-4d5a-852d-442ff6151575 830 0 2024-06-25 18:48:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-926qz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali681b913a504 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" Namespace="kube-system" Pod="coredns-5dd5756b68-926qz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--926qz-" Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.241 [INFO][4255] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" Namespace="kube-system" Pod="coredns-5dd5756b68-926qz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.274 [INFO][4272] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" HandleID="k8s-pod-network.10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" Workload="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.297 [INFO][4272] ipam_plugin.go 264: Auto assigning IP ContainerID="10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" HandleID="k8s-pod-network.10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" Workload="localhost-k8s-coredns--5dd5756b68--926qz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005906d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-926qz", "timestamp":"2024-06-25 18:48:31.274578048 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.297 [INFO][4272] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.297 [INFO][4272] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.297 [INFO][4272] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.300 [INFO][4272] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" host="localhost" Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.304 [INFO][4272] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.447 [INFO][4272] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.495 [INFO][4272] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.497 [INFO][4272] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.497 [INFO][4272] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" host="localhost" Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.499 [INFO][4272] ipam.go 1685: Creating new handle: k8s-pod-network.10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5 Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.502 [INFO][4272] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" host="localhost" Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.515 [INFO][4272] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" host="localhost" Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.516 [INFO][4272] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" host="localhost" Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.516 [INFO][4272] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:48:31.620133 containerd[1445]: 2024-06-25 18:48:31.516 [INFO][4272] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" HandleID="k8s-pod-network.10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" Workload="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:31.621348 containerd[1445]: 2024-06-25 18:48:31.518 [INFO][4255] k8s.go 386: Populated endpoint ContainerID="10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" Namespace="kube-system" Pod="coredns-5dd5756b68-926qz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--926qz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--926qz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b4f157b2-6424-4d5a-852d-442ff6151575", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-926qz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali681b913a504", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:31.621348 containerd[1445]: 2024-06-25 18:48:31.519 [INFO][4255] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" Namespace="kube-system" Pod="coredns-5dd5756b68-926qz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:31.621348 containerd[1445]: 2024-06-25 18:48:31.519 [INFO][4255] dataplane_linux.go 68: Setting the host side veth name to cali681b913a504 ContainerID="10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" Namespace="kube-system" Pod="coredns-5dd5756b68-926qz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:31.621348 containerd[1445]: 2024-06-25 18:48:31.522 [INFO][4255] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" Namespace="kube-system" Pod="coredns-5dd5756b68-926qz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:31.621348 containerd[1445]: 2024-06-25 18:48:31.522 [INFO][4255] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" Namespace="kube-system" Pod="coredns-5dd5756b68-926qz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--926qz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--926qz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b4f157b2-6424-4d5a-852d-442ff6151575", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5", Pod:"coredns-5dd5756b68-926qz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali681b913a504", MAC:"a2:21:9d:cc:79:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:31.621348 containerd[1445]: 2024-06-25 18:48:31.617 [INFO][4255] k8s.go 500: Wrote updated endpoint to datastore ContainerID="10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5" Namespace="kube-system" Pod="coredns-5dd5756b68-926qz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:31.653314 containerd[1445]: time="2024-06-25T18:48:31.653168728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:48:31.653314 containerd[1445]: time="2024-06-25T18:48:31.653240091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:31.653314 containerd[1445]: time="2024-06-25T18:48:31.653260160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:48:31.653314 containerd[1445]: time="2024-06-25T18:48:31.653275268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:31.675277 systemd[1]: run-containerd-runc-k8s.io-10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5-runc.wbrbvM.mount: Deactivated successfully. 
Jun 25 18:48:31.683542 systemd[1]: Started cri-containerd-10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5.scope - libcontainer container 10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5. Jun 25 18:48:31.698006 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:48:31.726929 containerd[1445]: time="2024-06-25T18:48:31.726878628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-926qz,Uid:b4f157b2-6424-4d5a-852d-442ff6151575,Namespace:kube-system,Attempt:1,} returns sandbox id \"10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5\"" Jun 25 18:48:31.732584 kubelet[2512]: E0625 18:48:31.732552 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:31.735920 containerd[1445]: time="2024-06-25T18:48:31.735843896Z" level=info msg="CreateContainer within sandbox \"10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:48:31.752897 containerd[1445]: time="2024-06-25T18:48:31.752844645Z" level=info msg="CreateContainer within sandbox \"10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9779edff303afce936ea27d1d4a828aaaddbb66b48032a5cd422be7e76b2dd69\"" Jun 25 18:48:31.753417 containerd[1445]: time="2024-06-25T18:48:31.753392393Z" level=info msg="StartContainer for \"9779edff303afce936ea27d1d4a828aaaddbb66b48032a5cd422be7e76b2dd69\"" Jun 25 18:48:31.781525 systemd[1]: Started cri-containerd-9779edff303afce936ea27d1d4a828aaaddbb66b48032a5cd422be7e76b2dd69.scope - libcontainer container 9779edff303afce936ea27d1d4a828aaaddbb66b48032a5cd422be7e76b2dd69. 
Jun 25 18:48:31.816959 containerd[1445]: time="2024-06-25T18:48:31.816419423Z" level=info msg="StartContainer for \"9779edff303afce936ea27d1d4a828aaaddbb66b48032a5cd422be7e76b2dd69\" returns successfully" Jun 25 18:48:31.854500 systemd-networkd[1382]: cali73913b11c51: Gained IPv6LL Jun 25 18:48:31.982520 systemd-networkd[1382]: cali6f9f239e6e6: Gained IPv6LL Jun 25 18:48:32.089091 kubelet[2512]: E0625 18:48:32.088794 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:32.090228 kubelet[2512]: E0625 18:48:32.090202 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:32.107305 kubelet[2512]: I0625 18:48:32.106913 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-926qz" podStartSLOduration=32.106877469 podCreationTimestamp="2024-06-25 18:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:48:32.106304984 +0000 UTC m=+46.261543597" watchObservedRunningTime="2024-06-25 18:48:32.106877469 +0000 UTC m=+46.262116062" Jun 25 18:48:32.574042 containerd[1445]: time="2024-06-25T18:48:32.573981667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:32.574825 containerd[1445]: time="2024-06-25T18:48:32.574781359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 18:48:32.576065 containerd[1445]: time="2024-06-25T18:48:32.576027497Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:32.578341 containerd[1445]: time="2024-06-25T18:48:32.578301214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:32.578909 containerd[1445]: time="2024-06-25T18:48:32.578867617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.692513314s" Jun 25 18:48:32.578940 containerd[1445]: time="2024-06-25T18:48:32.578910598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 18:48:32.579489 containerd[1445]: time="2024-06-25T18:48:32.579453757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 18:48:32.580872 containerd[1445]: time="2024-06-25T18:48:32.580700768Z" level=info msg="CreateContainer within sandbox \"720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 18:48:32.604794 containerd[1445]: time="2024-06-25T18:48:32.604738854Z" level=info msg="CreateContainer within sandbox 
\"720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7e4841968331d7d36155f0dbf938d1d7a291ee8c1b3eeeb23060e8d76cf11911\"" Jun 25 18:48:32.605261 containerd[1445]: time="2024-06-25T18:48:32.605229926Z" level=info msg="StartContainer for \"7e4841968331d7d36155f0dbf938d1d7a291ee8c1b3eeeb23060e8d76cf11911\"" Jun 25 18:48:32.623528 systemd-networkd[1382]: cali681b913a504: Gained IPv6LL Jun 25 18:48:32.624165 systemd-networkd[1382]: cali50e42519dbd: Gained IPv6LL Jun 25 18:48:32.638511 systemd[1]: Started cri-containerd-7e4841968331d7d36155f0dbf938d1d7a291ee8c1b3eeeb23060e8d76cf11911.scope - libcontainer container 7e4841968331d7d36155f0dbf938d1d7a291ee8c1b3eeeb23060e8d76cf11911. Jun 25 18:48:32.675326 containerd[1445]: time="2024-06-25T18:48:32.675286500Z" level=info msg="StartContainer for \"7e4841968331d7d36155f0dbf938d1d7a291ee8c1b3eeeb23060e8d76cf11911\" returns successfully" Jun 25 18:48:32.686604 systemd-networkd[1382]: vxlan.calico: Gained IPv6LL Jun 25 18:48:33.093434 kubelet[2512]: E0625 18:48:33.093405 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:33.093434 kubelet[2512]: E0625 18:48:33.093435 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:34.095943 kubelet[2512]: E0625 18:48:34.095889 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:34.096533 kubelet[2512]: E0625 18:48:34.096059 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:34.217450 containerd[1445]: time="2024-06-25T18:48:34.217394904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:34.218277 containerd[1445]: time="2024-06-25T18:48:34.218213391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 18:48:34.219699 containerd[1445]: time="2024-06-25T18:48:34.219669914Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:34.221730 containerd[1445]: time="2024-06-25T18:48:34.221682741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:34.222493 containerd[1445]: time="2024-06-25T18:48:34.222456774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 1.642976126s" Jun 25 18:48:34.222493 containerd[1445]: time="2024-06-25T18:48:34.222490267Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 18:48:34.223051 containerd[1445]: time="2024-06-25T18:48:34.223026242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 18:48:34.231324 containerd[1445]: time="2024-06-25T18:48:34.231288257Z" level=info msg="CreateContainer within sandbox \"011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 18:48:34.245932 containerd[1445]: time="2024-06-25T18:48:34.245874890Z" level=info msg="CreateContainer within sandbox \"011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bbae3cb0b3a7cf895b8e4ebf0dc79d215a2e7cee33e12d3255df9ad574651de9\"" Jun 25 18:48:34.246651 containerd[1445]: time="2024-06-25T18:48:34.246601605Z" level=info msg="StartContainer for \"bbae3cb0b3a7cf895b8e4ebf0dc79d215a2e7cee33e12d3255df9ad574651de9\"" Jun 25 18:48:34.276510 systemd[1]: Started cri-containerd-bbae3cb0b3a7cf895b8e4ebf0dc79d215a2e7cee33e12d3255df9ad574651de9.scope - libcontainer container bbae3cb0b3a7cf895b8e4ebf0dc79d215a2e7cee33e12d3255df9ad574651de9. Jun 25 18:48:34.321646 containerd[1445]: time="2024-06-25T18:48:34.321589037Z" level=info msg="StartContainer for \"bbae3cb0b3a7cf895b8e4ebf0dc79d215a2e7cee33e12d3255df9ad574651de9\" returns successfully" Jun 25 18:48:34.928941 systemd[1]: Started sshd@12-10.0.0.161:22-10.0.0.1:52856.service - OpenSSH per-connection server daemon (10.0.0.1:52856). Jun 25 18:48:34.989716 sshd[4552]: Accepted publickey for core from 10.0.0.1 port 52856 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:48:34.992047 sshd[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:34.996874 systemd-logind[1429]: New session 13 of user core. Jun 25 18:48:35.001490 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:48:35.115424 kubelet[2512]: I0625 18:48:35.114621 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-54xzb" podStartSLOduration=35.114585395 podCreationTimestamp="2024-06-25 18:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:48:32.221044952 +0000 UTC m=+46.376283555" watchObservedRunningTime="2024-06-25 18:48:35.114585395 +0000 UTC m=+49.269823998" Jun 25 18:48:35.170125 sshd[4552]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:35.179983 systemd[1]: sshd@12-10.0.0.161:22-10.0.0.1:52856.service: Deactivated successfully. Jun 25 18:48:35.182222 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:48:35.183086 systemd-logind[1429]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:48:35.192820 systemd[1]: Started sshd@13-10.0.0.161:22-10.0.0.1:52858.service - OpenSSH per-connection server daemon (10.0.0.1:52858). Jun 25 18:48:35.193571 systemd-logind[1429]: Removed session 13. Jun 25 18:48:35.209154 systemd[1]: run-containerd-runc-k8s.io-bbae3cb0b3a7cf895b8e4ebf0dc79d215a2e7cee33e12d3255df9ad574651de9-runc.z0U0ou.mount: Deactivated successfully. 
Jun 25 18:48:35.230416 sshd[4568]: Accepted publickey for core from 10.0.0.1 port 52858 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:48:35.233122 sshd[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:35.238575 systemd-logind[1429]: New session 14 of user core. Jun 25 18:48:35.246549 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:48:35.258607 kubelet[2512]: I0625 18:48:35.258572 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-fb5dbb684-c2xhl" podStartSLOduration=25.982601559 podCreationTimestamp="2024-06-25 18:48:06 +0000 UTC" firstStartedPulling="2024-06-25 18:48:30.946892418 +0000 UTC m=+45.102131021" lastFinishedPulling="2024-06-25 18:48:34.222823081 +0000 UTC m=+48.378061684" observedRunningTime="2024-06-25 18:48:35.115065555 +0000 UTC m=+49.270304158" watchObservedRunningTime="2024-06-25 18:48:35.258532222 +0000 UTC m=+49.413770825" Jun 25 18:48:35.671996 sshd[4568]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:35.684555 systemd[1]: sshd@13-10.0.0.161:22-10.0.0.1:52858.service: Deactivated successfully. Jun 25 18:48:35.689114 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:48:35.694689 systemd-logind[1429]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:48:35.706534 systemd[1]: Started sshd@14-10.0.0.161:22-10.0.0.1:52872.service - OpenSSH per-connection server daemon (10.0.0.1:52872). Jun 25 18:48:35.709009 systemd-logind[1429]: Removed session 14. Jun 25 18:48:35.754834 sshd[4599]: Accepted publickey for core from 10.0.0.1 port 52872 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:48:35.756665 sshd[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:35.762634 systemd-logind[1429]: New session 15 of user core. Jun 25 18:48:35.771601 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:48:35.903643 sshd[4599]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:35.908479 systemd[1]: sshd@14-10.0.0.161:22-10.0.0.1:52872.service: Deactivated successfully. Jun 25 18:48:35.910638 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:48:35.911434 systemd-logind[1429]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:48:35.912593 systemd-logind[1429]: Removed session 15. 
Jun 25 18:48:36.158537 containerd[1445]: time="2024-06-25T18:48:36.158464474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:36.159533 containerd[1445]: time="2024-06-25T18:48:36.159451255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 18:48:36.160808 containerd[1445]: time="2024-06-25T18:48:36.160734383Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:36.163284 containerd[1445]: time="2024-06-25T18:48:36.163246626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:36.164046 containerd[1445]: time="2024-06-25T18:48:36.164008626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.940889188s" Jun 25 18:48:36.164101 containerd[1445]: time="2024-06-25T18:48:36.164046828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 18:48:36.166068 containerd[1445]: time="2024-06-25T18:48:36.166031872Z" level=info msg="CreateContainer within sandbox \"720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 18:48:36.187674 containerd[1445]: time="2024-06-25T18:48:36.187596437Z" level=info msg="CreateContainer within sandbox \"720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d05f01e2970e9cc2901596fe3acc73dac797a7786a736ab1bea5f36f0418fa3b\"" Jun 25 18:48:36.188265 containerd[1445]: time="2024-06-25T18:48:36.188204919Z" level=info msg="StartContainer for \"d05f01e2970e9cc2901596fe3acc73dac797a7786a736ab1bea5f36f0418fa3b\"" Jun 25 18:48:36.222584 systemd[1]: Started cri-containerd-d05f01e2970e9cc2901596fe3acc73dac797a7786a736ab1bea5f36f0418fa3b.scope - libcontainer container d05f01e2970e9cc2901596fe3acc73dac797a7786a736ab1bea5f36f0418fa3b. 
Jun 25 18:48:36.294021 containerd[1445]: time="2024-06-25T18:48:36.293965259Z" level=info msg="StartContainer for \"d05f01e2970e9cc2901596fe3acc73dac797a7786a736ab1bea5f36f0418fa3b\" returns successfully" Jun 25 18:48:36.358482 kubelet[2512]: I0625 18:48:36.356534 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:48:36.358482 kubelet[2512]: E0625 18:48:36.357398 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:37.014811 kubelet[2512]: I0625 18:48:37.014774 2512 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 18:48:37.014811 kubelet[2512]: I0625 18:48:37.014814 2512 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 18:48:37.107590 kubelet[2512]: E0625 18:48:37.107533 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:37.116739 kubelet[2512]: I0625 18:48:37.116693 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-44zgj" podStartSLOduration=25.838335178 podCreationTimestamp="2024-06-25 18:48:06 +0000 UTC" firstStartedPulling="2024-06-25 18:48:30.885975211 +0000 UTC m=+45.041213814" lastFinishedPulling="2024-06-25 18:48:36.164294313 +0000 UTC m=+50.319532916" observedRunningTime="2024-06-25 18:48:37.1162977 +0000 UTC m=+51.271536333" watchObservedRunningTime="2024-06-25 18:48:37.11665428 +0000 UTC m=+51.271892883" Jun 25 18:48:40.918257 systemd[1]: Started sshd@15-10.0.0.161:22-10.0.0.1:57498.service - OpenSSH per-connection server daemon (10.0.0.1:57498). Jun 25 18:48:40.966181 sshd[4714]: Accepted publickey for core from 10.0.0.1 port 57498 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:48:40.968090 sshd[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:40.972464 systemd-logind[1429]: New session 16 of user core. Jun 25 18:48:40.978621 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:48:41.107113 sshd[4714]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:41.112697 systemd[1]: sshd@15-10.0.0.161:22-10.0.0.1:57498.service: Deactivated successfully. Jun 25 18:48:41.115030 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:48:41.116991 systemd-logind[1429]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:48:41.118274 systemd-logind[1429]: Removed session 16. Jun 25 18:48:45.915011 containerd[1445]: time="2024-06-25T18:48:45.914965035Z" level=info msg="StopPodSandbox for \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\"" Jun 25 18:48:46.065390 containerd[1445]: 2024-06-25 18:48:45.954 [WARNING][4755] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--54xzb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"32242272-2e0c-40ca-8a2e-803560249411", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da", Pod:"coredns-5dd5756b68-54xzb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50e42519dbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:46.065390 containerd[1445]: 2024-06-25 18:48:45.955 [INFO][4755] k8s.go 608: Cleaning up netns ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Jun 25 18:48:46.065390 containerd[1445]: 2024-06-25 18:48:45.955 [INFO][4755] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" iface="eth0" netns="" Jun 25 18:48:46.065390 containerd[1445]: 2024-06-25 18:48:45.955 [INFO][4755] k8s.go 615: Releasing IP address(es) ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Jun 25 18:48:46.065390 containerd[1445]: 2024-06-25 18:48:45.955 [INFO][4755] utils.go 188: Calico CNI releasing IP address ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Jun 25 18:48:46.065390 containerd[1445]: 2024-06-25 18:48:45.977 [INFO][4764] ipam_plugin.go 411: Releasing address using handleID ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" HandleID="k8s-pod-network.d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Workload="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:46.065390 containerd[1445]: 2024-06-25 18:48:45.977 [INFO][4764] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:46.065390 containerd[1445]: 2024-06-25 18:48:45.977 [INFO][4764] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:48:46.065390 containerd[1445]: 2024-06-25 18:48:46.058 [WARNING][4764] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" HandleID="k8s-pod-network.d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Workload="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:46.065390 containerd[1445]: 2024-06-25 18:48:46.058 [INFO][4764] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" HandleID="k8s-pod-network.d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Workload="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:46.065390 containerd[1445]: 2024-06-25 18:48:46.060 [INFO][4764] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:48:46.065390 containerd[1445]: 2024-06-25 18:48:46.062 [INFO][4755] k8s.go 621: Teardown processing complete. ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Jun 25 18:48:46.065907 containerd[1445]: time="2024-06-25T18:48:46.065431553Z" level=info msg="TearDown network for sandbox \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\" successfully" Jun 25 18:48:46.065907 containerd[1445]: time="2024-06-25T18:48:46.065465026Z" level=info msg="StopPodSandbox for \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\" returns successfully" Jun 25 18:48:46.066013 containerd[1445]: time="2024-06-25T18:48:46.065977941Z" level=info msg="RemovePodSandbox for \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\"" Jun 25 18:48:46.068201 containerd[1445]: time="2024-06-25T18:48:46.068173102Z" level=info msg="Forcibly stopping sandbox \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\"" Jun 25 18:48:46.119693 systemd[1]: Started sshd@16-10.0.0.161:22-10.0.0.1:36856.service - OpenSSH per-connection server daemon (10.0.0.1:36856). Jun 25 18:48:46.140501 containerd[1445]: 2024-06-25 18:48:46.107 [WARNING][4787] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--54xzb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"32242272-2e0c-40ca-8a2e-803560249411", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b888ed8ae9892c4ff152d2eb681f6f73a5b5039fe32b91cab2533fc6cc8f0da", Pod:"coredns-5dd5756b68-54xzb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50e42519dbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:46.140501 containerd[1445]: 2024-06-25 18:48:46.108 [INFO][4787] k8s.go 608: Cleaning up netns ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Jun 25 18:48:46.140501 containerd[1445]: 2024-06-25 18:48:46.108 [INFO][4787] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" iface="eth0" netns="" Jun 25 18:48:46.140501 containerd[1445]: 2024-06-25 18:48:46.108 [INFO][4787] k8s.go 615: Releasing IP address(es) ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Jun 25 18:48:46.140501 containerd[1445]: 2024-06-25 18:48:46.108 [INFO][4787] utils.go 188: Calico CNI releasing IP address ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Jun 25 18:48:46.140501 containerd[1445]: 2024-06-25 18:48:46.129 [INFO][4796] ipam_plugin.go 411: Releasing address using handleID ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" HandleID="k8s-pod-network.d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Workload="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:46.140501 containerd[1445]: 2024-06-25 18:48:46.129 [INFO][4796] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:46.140501 containerd[1445]: 2024-06-25 18:48:46.129 [INFO][4796] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:48:46.140501 containerd[1445]: 2024-06-25 18:48:46.134 [WARNING][4796] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" HandleID="k8s-pod-network.d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Workload="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:46.140501 containerd[1445]: 2024-06-25 18:48:46.134 [INFO][4796] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" HandleID="k8s-pod-network.d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Workload="localhost-k8s-coredns--5dd5756b68--54xzb-eth0" Jun 25 18:48:46.140501 containerd[1445]: 2024-06-25 18:48:46.135 [INFO][4796] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:48:46.140501 containerd[1445]: 2024-06-25 18:48:46.138 [INFO][4787] k8s.go 621: Teardown processing complete. ContainerID="d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e" Jun 25 18:48:46.140887 containerd[1445]: time="2024-06-25T18:48:46.140533700Z" level=info msg="TearDown network for sandbox \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\" successfully" Jun 25 18:48:46.171544 sshd[4802]: Accepted publickey for core from 10.0.0.1 port 36856 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:48:46.173235 sshd[4802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:46.173662 containerd[1445]: time="2024-06-25T18:48:46.173498733Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:48:46.173662 containerd[1445]: time="2024-06-25T18:48:46.173592780Z" level=info msg="RemovePodSandbox \"d48c2213c9d098b06876db0852f742b79929f0f5fbf7e1a1879fbdd54c85482e\" returns successfully" Jun 25 18:48:46.174211 containerd[1445]: time="2024-06-25T18:48:46.174169425Z" level=info msg="StopPodSandbox for \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\"" Jun 25 18:48:46.178230 systemd-logind[1429]: New session 17 of user core. Jun 25 18:48:46.185569 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:48:46.239148 containerd[1445]: 2024-06-25 18:48:46.209 [WARNING][4820] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0", GenerateName:"calico-kube-controllers-fb5dbb684-", Namespace:"calico-system", SelfLink:"", UID:"7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fb5dbb684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4", Pod:"calico-kube-controllers-fb5dbb684-c2xhl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f9f239e6e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:46.239148 containerd[1445]: 2024-06-25 18:48:46.209 [INFO][4820] k8s.go 608: Cleaning up netns ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Jun 25 18:48:46.239148 containerd[1445]: 2024-06-25 18:48:46.209 [INFO][4820] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" iface="eth0" netns="" Jun 25 18:48:46.239148 containerd[1445]: 2024-06-25 18:48:46.209 [INFO][4820] k8s.go 615: Releasing IP address(es) ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Jun 25 18:48:46.239148 containerd[1445]: 2024-06-25 18:48:46.209 [INFO][4820] utils.go 188: Calico CNI releasing IP address ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Jun 25 18:48:46.239148 containerd[1445]: 2024-06-25 18:48:46.228 [INFO][4829] ipam_plugin.go 411: Releasing address using handleID ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" HandleID="k8s-pod-network.59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Workload="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:46.239148 containerd[1445]: 2024-06-25 18:48:46.228 [INFO][4829] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:46.239148 containerd[1445]: 2024-06-25 18:48:46.228 [INFO][4829] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:48:46.239148 containerd[1445]: 2024-06-25 18:48:46.233 [WARNING][4829] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" HandleID="k8s-pod-network.59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Workload="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:46.239148 containerd[1445]: 2024-06-25 18:48:46.233 [INFO][4829] ipam_plugin.go 439: Releasing address using workloadID ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" HandleID="k8s-pod-network.59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Workload="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:46.239148 containerd[1445]: 2024-06-25 18:48:46.234 [INFO][4829] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:48:46.239148 containerd[1445]: 2024-06-25 18:48:46.236 [INFO][4820] k8s.go 621: Teardown processing complete. ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Jun 25 18:48:46.239699 containerd[1445]: time="2024-06-25T18:48:46.239158340Z" level=info msg="TearDown network for sandbox \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\" successfully" Jun 25 18:48:46.239699 containerd[1445]: time="2024-06-25T18:48:46.239183598Z" level=info msg="StopPodSandbox for \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\" returns successfully" Jun 25 18:48:46.239699 containerd[1445]: time="2024-06-25T18:48:46.239659234Z" level=info msg="RemovePodSandbox for \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\"" Jun 25 18:48:46.239699 containerd[1445]: time="2024-06-25T18:48:46.239695622Z" level=info msg="Forcibly stopping sandbox \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\"" Jun 25 18:48:46.309980 sshd[4802]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:46.311713 containerd[1445]: 2024-06-25 18:48:46.275 [WARNING][4858] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0", GenerateName:"calico-kube-controllers-fb5dbb684-", Namespace:"calico-system", SelfLink:"", UID:"7ee8e7db-b61a-4d82-9e63-5fbaf75d0a94", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fb5dbb684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"011bba0902124d003beeb6600d150ef7fd3ff038dc04cfce709bc559faff37f4", Pod:"calico-kube-controllers-fb5dbb684-c2xhl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f9f239e6e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:46.311713 containerd[1445]: 2024-06-25 18:48:46.275 [INFO][4858] k8s.go 608: Cleaning up netns ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Jun 25 18:48:46.311713 containerd[1445]: 2024-06-25 18:48:46.275 [INFO][4858] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" iface="eth0" netns="" Jun 25 18:48:46.311713 containerd[1445]: 2024-06-25 18:48:46.275 [INFO][4858] k8s.go 615: Releasing IP address(es) ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Jun 25 18:48:46.311713 containerd[1445]: 2024-06-25 18:48:46.275 [INFO][4858] utils.go 188: Calico CNI releasing IP address ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Jun 25 18:48:46.311713 containerd[1445]: 2024-06-25 18:48:46.299 [INFO][4866] ipam_plugin.go 411: Releasing address using handleID ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" HandleID="k8s-pod-network.59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Workload="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:46.311713 containerd[1445]: 2024-06-25 18:48:46.300 [INFO][4866] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:46.311713 containerd[1445]: 2024-06-25 18:48:46.300 [INFO][4866] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:48:46.311713 containerd[1445]: 2024-06-25 18:48:46.305 [WARNING][4866] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" HandleID="k8s-pod-network.59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Workload="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:46.311713 containerd[1445]: 2024-06-25 18:48:46.305 [INFO][4866] ipam_plugin.go 439: Releasing address using workloadID ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" HandleID="k8s-pod-network.59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Workload="localhost-k8s-calico--kube--controllers--fb5dbb684--c2xhl-eth0" Jun 25 18:48:46.311713 containerd[1445]: 2024-06-25 18:48:46.307 [INFO][4866] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:48:46.311713 containerd[1445]: 2024-06-25 18:48:46.309 [INFO][4858] k8s.go 621: Teardown processing complete. ContainerID="59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d" Jun 25 18:48:46.312658 containerd[1445]: time="2024-06-25T18:48:46.311825405Z" level=info msg="TearDown network for sandbox \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\" successfully" Jun 25 18:48:46.313173 systemd[1]: sshd@16-10.0.0.161:22-10.0.0.1:36856.service: Deactivated successfully. Jun 25 18:48:46.316038 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:48:46.317615 containerd[1445]: time="2024-06-25T18:48:46.317571538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:48:46.317686 containerd[1445]: time="2024-06-25T18:48:46.317627523Z" level=info msg="RemovePodSandbox \"59c20929ec75ddd916b1ef88aaeaeb2c17a7100e782232d53d2624256783678d\" returns successfully" Jun 25 18:48:46.318028 systemd-logind[1429]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:48:46.318546 containerd[1445]: time="2024-06-25T18:48:46.318047604Z" level=info msg="StopPodSandbox for \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\"" Jun 25 18:48:46.319299 systemd-logind[1429]: Removed session 17. Jun 25 18:48:46.383452 containerd[1445]: 2024-06-25 18:48:46.352 [WARNING][4890] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--44zgj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"95ae610b-871e-4cb8-8b01-77db3f937baa", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e", Pod:"csi-node-driver-44zgj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali73913b11c51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:46.383452 containerd[1445]: 2024-06-25 18:48:46.352 [INFO][4890] k8s.go 608: Cleaning up netns ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Jun 25 18:48:46.383452 containerd[1445]: 2024-06-25 18:48:46.352 [INFO][4890] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" iface="eth0" netns="" Jun 25 18:48:46.383452 containerd[1445]: 2024-06-25 18:48:46.352 [INFO][4890] k8s.go 615: Releasing IP address(es) ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Jun 25 18:48:46.383452 containerd[1445]: 2024-06-25 18:48:46.352 [INFO][4890] utils.go 188: Calico CNI releasing IP address ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Jun 25 18:48:46.383452 containerd[1445]: 2024-06-25 18:48:46.371 [INFO][4898] ipam_plugin.go 411: Releasing address using handleID ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" HandleID="k8s-pod-network.f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Workload="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:46.383452 containerd[1445]: 2024-06-25 18:48:46.371 [INFO][4898] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:46.383452 containerd[1445]: 2024-06-25 18:48:46.371 [INFO][4898] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:48:46.383452 containerd[1445]: 2024-06-25 18:48:46.376 [WARNING][4898] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" HandleID="k8s-pod-network.f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Workload="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:46.383452 containerd[1445]: 2024-06-25 18:48:46.377 [INFO][4898] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" HandleID="k8s-pod-network.f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Workload="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:46.383452 containerd[1445]: 2024-06-25 18:48:46.378 [INFO][4898] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:48:46.383452 containerd[1445]: 2024-06-25 18:48:46.381 [INFO][4890] k8s.go 621: Teardown processing complete. ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Jun 25 18:48:46.383869 containerd[1445]: time="2024-06-25T18:48:46.383501123Z" level=info msg="TearDown network for sandbox \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\" successfully" Jun 25 18:48:46.383869 containerd[1445]: time="2024-06-25T18:48:46.383531822Z" level=info msg="StopPodSandbox for \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\" returns successfully" Jun 25 18:48:46.384073 containerd[1445]: time="2024-06-25T18:48:46.384041180Z" level=info msg="RemovePodSandbox for \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\"" Jun 25 18:48:46.384102 containerd[1445]: time="2024-06-25T18:48:46.384080694Z" level=info msg="Forcibly stopping sandbox \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\"" Jun 25 18:48:46.456046 containerd[1445]: 2024-06-25 18:48:46.425 [WARNING][4921] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--44zgj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"95ae610b-871e-4cb8-8b01-77db3f937baa", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"720869a7ca71aa39b528fd96bfb8ff2d36d822454b0998d289794034fdbb5e0e", Pod:"csi-node-driver-44zgj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali73913b11c51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:46.456046 containerd[1445]: 2024-06-25 18:48:46.425 [INFO][4921] k8s.go 608: Cleaning up netns ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Jun 25 18:48:46.456046 containerd[1445]: 2024-06-25 18:48:46.425 [INFO][4921] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" iface="eth0" netns="" Jun 25 18:48:46.456046 containerd[1445]: 2024-06-25 18:48:46.425 [INFO][4921] k8s.go 615: Releasing IP address(es) ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Jun 25 18:48:46.456046 containerd[1445]: 2024-06-25 18:48:46.425 [INFO][4921] utils.go 188: Calico CNI releasing IP address ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Jun 25 18:48:46.456046 containerd[1445]: 2024-06-25 18:48:46.444 [INFO][4928] ipam_plugin.go 411: Releasing address using handleID ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" HandleID="k8s-pod-network.f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Workload="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:46.456046 containerd[1445]: 2024-06-25 18:48:46.444 [INFO][4928] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:46.456046 containerd[1445]: 2024-06-25 18:48:46.444 [INFO][4928] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:48:46.456046 containerd[1445]: 2024-06-25 18:48:46.450 [WARNING][4928] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" HandleID="k8s-pod-network.f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Workload="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:46.456046 containerd[1445]: 2024-06-25 18:48:46.450 [INFO][4928] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" HandleID="k8s-pod-network.f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Workload="localhost-k8s-csi--node--driver--44zgj-eth0" Jun 25 18:48:46.456046 containerd[1445]: 2024-06-25 18:48:46.451 [INFO][4928] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:48:46.456046 containerd[1445]: 2024-06-25 18:48:46.453 [INFO][4921] k8s.go 621: Teardown processing complete. ContainerID="f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d" Jun 25 18:48:46.456046 containerd[1445]: time="2024-06-25T18:48:46.456024989Z" level=info msg="TearDown network for sandbox \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\" successfully" Jun 25 18:48:46.459608 containerd[1445]: time="2024-06-25T18:48:46.459576522Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:48:46.459705 containerd[1445]: time="2024-06-25T18:48:46.459627627Z" level=info msg="RemovePodSandbox \"f6a9e87102bb588971c5efb258aa264cec90d3ae0494582f13847b1b7925279d\" returns successfully" Jun 25 18:48:46.460197 containerd[1445]: time="2024-06-25T18:48:46.460146033Z" level=info msg="StopPodSandbox for \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\"" Jun 25 18:48:46.523927 containerd[1445]: 2024-06-25 18:48:46.493 [WARNING][4951] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--926qz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b4f157b2-6424-4d5a-852d-442ff6151575", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5", Pod:"coredns-5dd5756b68-926qz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali681b913a504", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:46.523927 containerd[1445]: 2024-06-25 18:48:46.493 [INFO][4951] k8s.go 608: Cleaning up netns ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Jun 25 18:48:46.523927 containerd[1445]: 2024-06-25 18:48:46.493 [INFO][4951] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" iface="eth0" netns="" Jun 25 18:48:46.523927 containerd[1445]: 2024-06-25 18:48:46.493 [INFO][4951] k8s.go 615: Releasing IP address(es) ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Jun 25 18:48:46.523927 containerd[1445]: 2024-06-25 18:48:46.493 [INFO][4951] utils.go 188: Calico CNI releasing IP address ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Jun 25 18:48:46.523927 containerd[1445]: 2024-06-25 18:48:46.512 [INFO][4959] ipam_plugin.go 411: Releasing address using handleID ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" HandleID="k8s-pod-network.fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Workload="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:46.523927 containerd[1445]: 2024-06-25 18:48:46.512 [INFO][4959] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:46.523927 containerd[1445]: 2024-06-25 18:48:46.512 [INFO][4959] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:48:46.523927 containerd[1445]: 2024-06-25 18:48:46.517 [WARNING][4959] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" HandleID="k8s-pod-network.fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Workload="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:46.523927 containerd[1445]: 2024-06-25 18:48:46.518 [INFO][4959] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" HandleID="k8s-pod-network.fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Workload="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:46.523927 containerd[1445]: 2024-06-25 18:48:46.519 [INFO][4959] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:48:46.523927 containerd[1445]: 2024-06-25 18:48:46.521 [INFO][4951] k8s.go 621: Teardown processing complete. ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Jun 25 18:48:46.524333 containerd[1445]: time="2024-06-25T18:48:46.523971790Z" level=info msg="TearDown network for sandbox \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\" successfully" Jun 25 18:48:46.524333 containerd[1445]: time="2024-06-25T18:48:46.524002227Z" level=info msg="StopPodSandbox for \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\" returns successfully" Jun 25 18:48:46.524609 containerd[1445]: time="2024-06-25T18:48:46.524565497Z" level=info msg="RemovePodSandbox for \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\"" Jun 25 18:48:46.524609 containerd[1445]: time="2024-06-25T18:48:46.524608037Z" level=info msg="Forcibly stopping sandbox \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\"" Jun 25 18:48:46.594689 containerd[1445]: 2024-06-25 18:48:46.560 [WARNING][4983] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--926qz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b4f157b2-6424-4d5a-852d-442ff6151575", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"10eb14c819457b9548f91bd765b8318319f9e4713bd1cca3ab2d6946967111f5", Pod:"coredns-5dd5756b68-926qz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali681b913a504", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:48:46.594689 containerd[1445]: 2024-06-25 18:48:46.560 [INFO][4983] k8s.go 608: Cleaning up netns ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Jun 25 18:48:46.594689 containerd[1445]: 2024-06-25 18:48:46.560 [INFO][4983] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" iface="eth0" netns="" Jun 25 18:48:46.594689 containerd[1445]: 2024-06-25 18:48:46.560 [INFO][4983] k8s.go 615: Releasing IP address(es) ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Jun 25 18:48:46.594689 containerd[1445]: 2024-06-25 18:48:46.560 [INFO][4983] utils.go 188: Calico CNI releasing IP address ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Jun 25 18:48:46.594689 containerd[1445]: 2024-06-25 18:48:46.582 [INFO][4991] ipam_plugin.go 411: Releasing address using handleID ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" HandleID="k8s-pod-network.fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Workload="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:46.594689 containerd[1445]: 2024-06-25 18:48:46.582 [INFO][4991] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:48:46.594689 containerd[1445]: 2024-06-25 18:48:46.582 [INFO][4991] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:48:46.594689 containerd[1445]: 2024-06-25 18:48:46.588 [WARNING][4991] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" HandleID="k8s-pod-network.fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Workload="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:46.594689 containerd[1445]: 2024-06-25 18:48:46.588 [INFO][4991] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" HandleID="k8s-pod-network.fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Workload="localhost-k8s-coredns--5dd5756b68--926qz-eth0" Jun 25 18:48:46.594689 containerd[1445]: 2024-06-25 18:48:46.589 [INFO][4991] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:48:46.594689 containerd[1445]: 2024-06-25 18:48:46.592 [INFO][4983] k8s.go 621: Teardown processing complete. ContainerID="fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb" Jun 25 18:48:46.595123 containerd[1445]: time="2024-06-25T18:48:46.594775318Z" level=info msg="TearDown network for sandbox \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\" successfully" Jun 25 18:48:46.598487 containerd[1445]: time="2024-06-25T18:48:46.598454140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:48:46.598572 containerd[1445]: time="2024-06-25T18:48:46.598512159Z" level=info msg="RemovePodSandbox \"fd967a85d5e8a0c47b5fbfae1bb99991d5b90d3a5e89b682ca58c71ab77e85eb\" returns successfully" Jun 25 18:48:51.322988 systemd[1]: Started sshd@17-10.0.0.161:22-10.0.0.1:36864.service - OpenSSH per-connection server daemon (10.0.0.1:36864). Jun 25 18:48:51.362311 sshd[5020]: Accepted publickey for core from 10.0.0.1 port 36864 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:48:51.364129 sshd[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:51.368499 systemd-logind[1429]: New session 18 of user core. Jun 25 18:48:51.378672 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:48:51.514842 sshd[5020]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:51.518781 systemd[1]: sshd@17-10.0.0.161:22-10.0.0.1:36864.service: Deactivated successfully. Jun 25 18:48:51.520944 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:48:51.521780 systemd-logind[1429]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:48:51.522761 systemd-logind[1429]: Removed session 18. Jun 25 18:48:55.926726 kubelet[2512]: E0625 18:48:55.926665 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:56.526866 systemd[1]: Started sshd@18-10.0.0.161:22-10.0.0.1:47722.service - OpenSSH per-connection server daemon (10.0.0.1:47722). Jun 25 18:48:56.563041 sshd[5048]: Accepted publickey for core from 10.0.0.1 port 47722 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:48:56.564490 sshd[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:56.568136 systemd-logind[1429]: New session 19 of user core. Jun 25 18:48:56.577496 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 18:48:56.692111 sshd[5048]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:56.702342 systemd[1]: sshd@18-10.0.0.161:22-10.0.0.1:47722.service: Deactivated successfully. Jun 25 18:48:56.704222 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:48:56.706060 systemd-logind[1429]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:48:56.711878 systemd[1]: Started sshd@19-10.0.0.161:22-10.0.0.1:47730.service - OpenSSH per-connection server daemon (10.0.0.1:47730). Jun 25 18:48:56.712978 systemd-logind[1429]: Removed session 19. Jun 25 18:48:56.744744 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 47730 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:48:56.746531 sshd[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:56.750706 systemd-logind[1429]: New session 20 of user core. Jun 25 18:48:56.761486 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:48:57.032027 sshd[5062]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:57.046749 systemd[1]: sshd@19-10.0.0.161:22-10.0.0.1:47730.service: Deactivated successfully. Jun 25 18:48:57.049673 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:48:57.052029 systemd-logind[1429]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:48:57.057869 systemd[1]: Started sshd@20-10.0.0.161:22-10.0.0.1:47746.service - OpenSSH per-connection server daemon (10.0.0.1:47746). Jun 25 18:48:57.058946 systemd-logind[1429]: Removed session 20. Jun 25 18:48:57.094729 sshd[5075]: Accepted publickey for core from 10.0.0.1 port 47746 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:48:57.096164 sshd[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:57.100205 systemd-logind[1429]: New session 21 of user core. Jun 25 18:48:57.107504 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:48:58.160955 sshd[5075]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:58.170638 systemd[1]: sshd@20-10.0.0.161:22-10.0.0.1:47746.service: Deactivated successfully. Jun 25 18:48:58.173906 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:48:58.177333 systemd-logind[1429]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:48:58.187812 systemd[1]: Started sshd@21-10.0.0.161:22-10.0.0.1:47756.service - OpenSSH per-connection server daemon (10.0.0.1:47756). Jun 25 18:48:58.188673 systemd-logind[1429]: Removed session 21. Jun 25 18:48:58.222906 sshd[5094]: Accepted publickey for core from 10.0.0.1 port 47756 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:48:58.224413 sshd[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:58.228497 systemd-logind[1429]: New session 22 of user core. Jun 25 18:48:58.235511 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:48:58.577411 sshd[5094]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:58.591511 systemd[1]: sshd@21-10.0.0.161:22-10.0.0.1:47756.service: Deactivated successfully. Jun 25 18:48:58.593462 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:48:58.595137 systemd-logind[1429]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:48:58.600621 systemd[1]: Started sshd@22-10.0.0.161:22-10.0.0.1:47764.service - OpenSSH per-connection server daemon (10.0.0.1:47764). 
Jun 25 18:48:58.601628 systemd-logind[1429]: Removed session 22.
Jun 25 18:48:58.637438 sshd[5107]: Accepted publickey for core from 10.0.0.1 port 47764 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:48:58.638794 sshd[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:48:58.642981 systemd-logind[1429]: New session 23 of user core.
Jun 25 18:48:58.652486 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 25 18:48:58.760428 sshd[5107]: pam_unix(sshd:session): session closed for user core
Jun 25 18:48:58.764570 systemd[1]: sshd@22-10.0.0.161:22-10.0.0.1:47764.service: Deactivated successfully.
Jun 25 18:48:58.767478 systemd[1]: session-23.scope: Deactivated successfully.
Jun 25 18:48:58.768265 systemd-logind[1429]: Session 23 logged out. Waiting for processes to exit.
Jun 25 18:48:58.769273 systemd-logind[1429]: Removed session 23.
Jun 25 18:49:01.126990 kubelet[2512]: I0625 18:49:01.126921 2512 topology_manager.go:215] "Topology Admit Handler" podUID="132d7f1e-afb1-4a94-95d0-c658c3c53e7f" podNamespace="calico-apiserver" podName="calico-apiserver-647c77d5f9-445ww"
Jun 25 18:49:01.142334 systemd[1]: Created slice kubepods-besteffort-pod132d7f1e_afb1_4a94_95d0_c658c3c53e7f.slice - libcontainer container kubepods-besteffort-pod132d7f1e_afb1_4a94_95d0_c658c3c53e7f.slice.
Jun 25 18:49:01.207052 kubelet[2512]: I0625 18:49:01.207012 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6lql\" (UniqueName: \"kubernetes.io/projected/132d7f1e-afb1-4a94-95d0-c658c3c53e7f-kube-api-access-z6lql\") pod \"calico-apiserver-647c77d5f9-445ww\" (UID: \"132d7f1e-afb1-4a94-95d0-c658c3c53e7f\") " pod="calico-apiserver/calico-apiserver-647c77d5f9-445ww"
Jun 25 18:49:01.207052 kubelet[2512]: I0625 18:49:01.207058 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/132d7f1e-afb1-4a94-95d0-c658c3c53e7f-calico-apiserver-certs\") pod \"calico-apiserver-647c77d5f9-445ww\" (UID: \"132d7f1e-afb1-4a94-95d0-c658c3c53e7f\") " pod="calico-apiserver/calico-apiserver-647c77d5f9-445ww"
Jun 25 18:49:01.307715 kubelet[2512]: E0625 18:49:01.307671 2512 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Jun 25 18:49:01.310038 kubelet[2512]: E0625 18:49:01.310005 2512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/132d7f1e-afb1-4a94-95d0-c658c3c53e7f-calico-apiserver-certs podName:132d7f1e-afb1-4a94-95d0-c658c3c53e7f nodeName:}" failed. No retries permitted until 2024-06-25 18:49:01.807751343 +0000 UTC m=+75.962989946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/132d7f1e-afb1-4a94-95d0-c658c3c53e7f-calico-apiserver-certs") pod "calico-apiserver-647c77d5f9-445ww" (UID: "132d7f1e-afb1-4a94-95d0-c658c3c53e7f") : secret "calico-apiserver-certs" not found
Jun 25 18:49:01.811863 kubelet[2512]: E0625 18:49:01.811817 2512 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Jun 25 18:49:01.812051 kubelet[2512]: E0625 18:49:01.811893 2512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/132d7f1e-afb1-4a94-95d0-c658c3c53e7f-calico-apiserver-certs podName:132d7f1e-afb1-4a94-95d0-c658c3c53e7f nodeName:}" failed. No retries permitted until 2024-06-25 18:49:02.811877292 +0000 UTC m=+76.967115905 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/132d7f1e-afb1-4a94-95d0-c658c3c53e7f-calico-apiserver-certs") pod "calico-apiserver-647c77d5f9-445ww" (UID: "132d7f1e-afb1-4a94-95d0-c658c3c53e7f") : secret "calico-apiserver-certs" not found
Jun 25 18:49:02.949982 containerd[1445]: time="2024-06-25T18:49:02.949936171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647c77d5f9-445ww,Uid:132d7f1e-afb1-4a94-95d0-c658c3c53e7f,Namespace:calico-apiserver,Attempt:0,}"
Jun 25 18:49:03.182894 systemd-networkd[1382]: cali1f69829326d: Link UP
Jun 25 18:49:03.183681 systemd-networkd[1382]: cali1f69829326d: Gained carrier
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.112 [INFO][5156] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--647c77d5f9--445ww-eth0 calico-apiserver-647c77d5f9- calico-apiserver 132d7f1e-afb1-4a94-95d0-c658c3c53e7f 1106 0 2024-06-25 18:49:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:647c77d5f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-647c77d5f9-445ww eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1f69829326d [] []}} ContainerID="d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" Namespace="calico-apiserver" Pod="calico-apiserver-647c77d5f9-445ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c77d5f9--445ww-"
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.112 [INFO][5156] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" Namespace="calico-apiserver" Pod="calico-apiserver-647c77d5f9-445ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c77d5f9--445ww-eth0"
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.136 [INFO][5169] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" HandleID="k8s-pod-network.d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" Workload="localhost-k8s-calico--apiserver--647c77d5f9--445ww-eth0"
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.143 [INFO][5169] ipam_plugin.go 264: Auto assigning IP ContainerID="d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" HandleID="k8s-pod-network.d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" Workload="localhost-k8s-calico--apiserver--647c77d5f9--445ww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051b80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-647c77d5f9-445ww", "timestamp":"2024-06-25 18:49:03.136539831 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.143 [INFO][5169] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.143 [INFO][5169] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.143 [INFO][5169] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.144 [INFO][5169] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" host="localhost"
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.159 [INFO][5169] ipam.go 372: Looking up existing affinities for host host="localhost"
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.162 [INFO][5169] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.164 [INFO][5169] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.166 [INFO][5169] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.166 [INFO][5169] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" host="localhost"
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.167 [INFO][5169] ipam.go 1685: Creating new handle: k8s-pod-network.d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.170 [INFO][5169] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" host="localhost"
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.178 [INFO][5169] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" host="localhost"
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.178 [INFO][5169] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" host="localhost"
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.178 [INFO][5169] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 25 18:49:03.237198 containerd[1445]: 2024-06-25 18:49:03.178 [INFO][5169] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" HandleID="k8s-pod-network.d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" Workload="localhost-k8s-calico--apiserver--647c77d5f9--445ww-eth0"
Jun 25 18:49:03.237746 containerd[1445]: 2024-06-25 18:49:03.181 [INFO][5156] k8s.go 386: Populated endpoint ContainerID="d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" Namespace="calico-apiserver" Pod="calico-apiserver-647c77d5f9-445ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c77d5f9--445ww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647c77d5f9--445ww-eth0", GenerateName:"calico-apiserver-647c77d5f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"132d7f1e-afb1-4a94-95d0-c658c3c53e7f", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647c77d5f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-647c77d5f9-445ww", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f69829326d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 25 18:49:03.237746 containerd[1445]: 2024-06-25 18:49:03.181 [INFO][5156] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" Namespace="calico-apiserver" Pod="calico-apiserver-647c77d5f9-445ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c77d5f9--445ww-eth0"
Jun 25 18:49:03.237746 containerd[1445]: 2024-06-25 18:49:03.181 [INFO][5156] dataplane_linux.go 68: Setting the host side veth name to cali1f69829326d ContainerID="d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" Namespace="calico-apiserver" Pod="calico-apiserver-647c77d5f9-445ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c77d5f9--445ww-eth0"
Jun 25 18:49:03.237746 containerd[1445]: 2024-06-25 18:49:03.183 [INFO][5156] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" Namespace="calico-apiserver" Pod="calico-apiserver-647c77d5f9-445ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c77d5f9--445ww-eth0"
Jun 25 18:49:03.237746 containerd[1445]: 2024-06-25 18:49:03.183 [INFO][5156] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" Namespace="calico-apiserver" Pod="calico-apiserver-647c77d5f9-445ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c77d5f9--445ww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647c77d5f9--445ww-eth0", GenerateName:"calico-apiserver-647c77d5f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"132d7f1e-afb1-4a94-95d0-c658c3c53e7f", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647c77d5f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2", Pod:"calico-apiserver-647c77d5f9-445ww", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f69829326d", MAC:"c2:b8:34:18:4f:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 25 18:49:03.237746 containerd[1445]: 2024-06-25 18:49:03.234 [INFO][5156] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2" Namespace="calico-apiserver" Pod="calico-apiserver-647c77d5f9-445ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c77d5f9--445ww-eth0"
Jun 25 18:49:03.271650 containerd[1445]: time="2024-06-25T18:49:03.271538408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:49:03.272209 containerd[1445]: time="2024-06-25T18:49:03.272124197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:49:03.272209 containerd[1445]: time="2024-06-25T18:49:03.272153393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:49:03.272209 containerd[1445]: time="2024-06-25T18:49:03.272164223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:49:03.293563 systemd[1]: Started cri-containerd-d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2.scope - libcontainer container d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2.
Jun 25 18:49:03.305951 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jun 25 18:49:03.330093 containerd[1445]: time="2024-06-25T18:49:03.330038542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647c77d5f9-445ww,Uid:132d7f1e-afb1-4a94-95d0-c658c3c53e7f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2\""
Jun 25 18:49:03.331599 containerd[1445]: time="2024-06-25T18:49:03.331566841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jun 25 18:49:03.774360 systemd[1]: Started sshd@23-10.0.0.161:22-10.0.0.1:47774.service - OpenSSH per-connection server daemon (10.0.0.1:47774).
Jun 25 18:49:03.813073 sshd[5234]: Accepted publickey for core from 10.0.0.1 port 47774 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:49:03.814709 sshd[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:49:03.818788 systemd-logind[1429]: New session 24 of user core.
Jun 25 18:49:03.828503 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 25 18:49:03.931138 kubelet[2512]: E0625 18:49:03.931075 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:49:03.932918 sshd[5234]: pam_unix(sshd:session): session closed for user core
Jun 25 18:49:03.938631 systemd[1]: sshd@23-10.0.0.161:22-10.0.0.1:47774.service: Deactivated successfully.
Jun 25 18:49:03.940490 systemd[1]: session-24.scope: Deactivated successfully.
Jun 25 18:49:03.941100 systemd-logind[1429]: Session 24 logged out. Waiting for processes to exit.
Jun 25 18:49:03.941939 systemd-logind[1429]: Removed session 24.
Jun 25 18:49:04.942535 systemd-networkd[1382]: cali1f69829326d: Gained IPv6LL
Jun 25 18:49:05.235710 containerd[1445]: time="2024-06-25T18:49:05.235562867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:49:05.236506 containerd[1445]: time="2024-06-25T18:49:05.236412408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Jun 25 18:49:05.237652 containerd[1445]: time="2024-06-25T18:49:05.237617267Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:49:05.239855 containerd[1445]: time="2024-06-25T18:49:05.239801075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:49:05.240550 containerd[1445]: time="2024-06-25T18:49:05.240511881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 1.908907388s"
Jun 25 18:49:05.240550 containerd[1445]: time="2024-06-25T18:49:05.240547870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jun 25 18:49:05.242993 containerd[1445]: time="2024-06-25T18:49:05.242963119Z" level=info msg="CreateContainer within sandbox \"d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jun 25 18:49:05.254857 containerd[1445]: time="2024-06-25T18:49:05.254819375Z" level=info msg="CreateContainer within sandbox \"d924191fdfb81600ac36ac1987636d93a36af544dc1ee2ab51ac7367c3ec8ef2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d1de2cb4d60fcbac37fbd50237b90d23821c3aa193c24f84959ff799a4c7c493\""
Jun 25 18:49:05.255894 containerd[1445]: time="2024-06-25T18:49:05.255226562Z" level=info msg="StartContainer for \"d1de2cb4d60fcbac37fbd50237b90d23821c3aa193c24f84959ff799a4c7c493\""
Jun 25 18:49:05.292664 systemd[1]: Started cri-containerd-d1de2cb4d60fcbac37fbd50237b90d23821c3aa193c24f84959ff799a4c7c493.scope - libcontainer container d1de2cb4d60fcbac37fbd50237b90d23821c3aa193c24f84959ff799a4c7c493.
Jun 25 18:49:05.390937 containerd[1445]: time="2024-06-25T18:49:05.390891400Z" level=info msg="StartContainer for \"d1de2cb4d60fcbac37fbd50237b90d23821c3aa193c24f84959ff799a4c7c493\" returns successfully"
Jun 25 18:49:06.248806 kubelet[2512]: I0625 18:49:06.248758 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-647c77d5f9-445ww" podStartSLOduration=3.338982984 podCreationTimestamp="2024-06-25 18:49:01 +0000 UTC" firstStartedPulling="2024-06-25 18:49:03.33104769 +0000 UTC m=+77.486286293" lastFinishedPulling="2024-06-25 18:49:05.24078356 +0000 UTC m=+79.396022163" observedRunningTime="2024-06-25 18:49:06.181390055 +0000 UTC m=+80.336628648" watchObservedRunningTime="2024-06-25 18:49:06.248718854 +0000 UTC m=+80.403957457"
Jun 25 18:49:08.946564 systemd[1]: Started sshd@24-10.0.0.161:22-10.0.0.1:47366.service - OpenSSH per-connection server daemon (10.0.0.1:47366).
Jun 25 18:49:08.989941 sshd[5328]: Accepted publickey for core from 10.0.0.1 port 47366 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:49:08.992087 sshd[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:49:08.996212 systemd-logind[1429]: New session 25 of user core.
Jun 25 18:49:09.003492 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 25 18:49:09.122854 sshd[5328]: pam_unix(sshd:session): session closed for user core
Jun 25 18:49:09.127165 systemd[1]: sshd@24-10.0.0.161:22-10.0.0.1:47366.service: Deactivated successfully.
Jun 25 18:49:09.129432 systemd[1]: session-25.scope: Deactivated successfully.
Jun 25 18:49:09.130071 systemd-logind[1429]: Session 25 logged out. Waiting for processes to exit.
Jun 25 18:49:09.130966 systemd-logind[1429]: Removed session 25.
Jun 25 18:49:14.134281 systemd[1]: Started sshd@25-10.0.0.161:22-10.0.0.1:47370.service - OpenSSH per-connection server daemon (10.0.0.1:47370).
Jun 25 18:49:14.171429 sshd[5356]: Accepted publickey for core from 10.0.0.1 port 47370 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:49:14.172876 sshd[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:49:14.176763 systemd-logind[1429]: New session 26 of user core.
Jun 25 18:49:14.181568 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 25 18:49:14.283747 sshd[5356]: pam_unix(sshd:session): session closed for user core
Jun 25 18:49:14.288695 systemd[1]: sshd@25-10.0.0.161:22-10.0.0.1:47370.service: Deactivated successfully.
Jun 25 18:49:14.290601 systemd[1]: session-26.scope: Deactivated successfully.
Jun 25 18:49:14.291188 systemd-logind[1429]: Session 26 logged out. Waiting for processes to exit.
Jun 25 18:49:14.291963 systemd-logind[1429]: Removed session 26.
Jun 25 18:49:19.294052 systemd[1]: Started sshd@26-10.0.0.161:22-10.0.0.1:34830.service - OpenSSH per-connection server daemon (10.0.0.1:34830).
Jun 25 18:49:19.330602 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 34830 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:49:19.332031 sshd[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:49:19.335582 systemd-logind[1429]: New session 27 of user core.
Jun 25 18:49:19.346479 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 25 18:49:19.448773 sshd[5391]: pam_unix(sshd:session): session closed for user core
Jun 25 18:49:19.452247 systemd[1]: sshd@26-10.0.0.161:22-10.0.0.1:34830.service: Deactivated successfully.
Jun 25 18:49:19.454358 systemd[1]: session-27.scope: Deactivated successfully.
Jun 25 18:49:19.455113 systemd-logind[1429]: Session 27 logged out. Waiting for processes to exit.
Jun 25 18:49:19.455979 systemd-logind[1429]: Removed session 27.