Sep  4 17:51:19.898685 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep  4 15:54:07 -00 2024
Sep  4 17:51:19.898706 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep  4 17:51:19.898717 kernel: BIOS-provided physical RAM map:
Sep  4 17:51:19.898723 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep  4 17:51:19.898729 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep  4 17:51:19.898735 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep  4 17:51:19.898742 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep  4 17:51:19.898749 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep  4 17:51:19.898755 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep  4 17:51:19.898761 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep  4 17:51:19.898770 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep  4 17:51:19.898776 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Sep  4 17:51:19.898795 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Sep  4 17:51:19.898802 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Sep  4 17:51:19.898810 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep  4 17:51:19.898817 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep  4 17:51:19.898826 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep  4 17:51:19.898833 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep  4 17:51:19.898839 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep  4 17:51:19.898846 kernel: NX (Execute Disable) protection: active
Sep  4 17:51:19.898853 kernel: APIC: Static calls initialized
Sep  4 17:51:19.898859 kernel: efi: EFI v2.7 by EDK II
Sep  4 17:51:19.898866 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b4f9018 
Sep  4 17:51:19.898873 kernel: SMBIOS 2.8 present.
Sep  4 17:51:19.898880 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Sep  4 17:51:19.898886 kernel: Hypervisor detected: KVM
Sep  4 17:51:19.898893 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep  4 17:51:19.898902 kernel: kvm-clock: using sched offset of 4212840998 cycles
Sep  4 17:51:19.898909 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep  4 17:51:19.898916 kernel: tsc: Detected 2794.744 MHz processor
Sep  4 17:51:19.898923 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep  4 17:51:19.898930 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep  4 17:51:19.898937 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep  4 17:51:19.898944 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep  4 17:51:19.898951 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Sep  4 17:51:19.898957 kernel: Using GB pages for direct mapping
Sep  4 17:51:19.898966 kernel: Secure boot disabled
Sep  4 17:51:19.898973 kernel: ACPI: Early table checksum verification disabled
Sep  4 17:51:19.898980 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep  4 17:51:19.898987 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS  BXPC     00000001      01000013)
Sep  4 17:51:19.898997 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 17:51:19.899004 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 17:51:19.899011 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep  4 17:51:19.899021 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 17:51:19.899028 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 17:51:19.899036 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 17:51:19.899043 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL  EDK2     00000002      01000013)
Sep  4 17:51:19.899050 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Sep  4 17:51:19.899057 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Sep  4 17:51:19.899064 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep  4 17:51:19.899074 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Sep  4 17:51:19.899081 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Sep  4 17:51:19.899088 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Sep  4 17:51:19.899095 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Sep  4 17:51:19.899102 kernel: No NUMA configuration found
Sep  4 17:51:19.899109 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep  4 17:51:19.899116 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep  4 17:51:19.899123 kernel: Zone ranges:
Sep  4 17:51:19.899130 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Sep  4 17:51:19.899139 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cf3ffff]
Sep  4 17:51:19.899146 kernel:   Normal   empty
Sep  4 17:51:19.899153 kernel: Movable zone start for each node
Sep  4 17:51:19.899160 kernel: Early memory node ranges
Sep  4 17:51:19.899167 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Sep  4 17:51:19.899174 kernel:   node   0: [mem 0x0000000000100000-0x00000000007fffff]
Sep  4 17:51:19.899182 kernel:   node   0: [mem 0x0000000000808000-0x000000000080afff]
Sep  4 17:51:19.899189 kernel:   node   0: [mem 0x000000000080c000-0x000000000080ffff]
Sep  4 17:51:19.899196 kernel:   node   0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep  4 17:51:19.899203 kernel:   node   0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep  4 17:51:19.899212 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep  4 17:51:19.899219 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep  4 17:51:19.899226 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep  4 17:51:19.899233 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep  4 17:51:19.899240 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep  4 17:51:19.899247 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep  4 17:51:19.899254 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep  4 17:51:19.899261 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep  4 17:51:19.899268 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep  4 17:51:19.899278 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep  4 17:51:19.899285 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep  4 17:51:19.899292 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep  4 17:51:19.899299 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep  4 17:51:19.899306 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep  4 17:51:19.899313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep  4 17:51:19.899320 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep  4 17:51:19.899327 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep  4 17:51:19.899334 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep  4 17:51:19.899343 kernel: TSC deadline timer available
Sep  4 17:51:19.899350 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep  4 17:51:19.899358 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep  4 17:51:19.899371 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep  4 17:51:19.899378 kernel: kvm-guest: setup PV sched yield
Sep  4 17:51:19.899385 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Sep  4 17:51:19.899392 kernel: Booting paravirtualized kernel on KVM
Sep  4 17:51:19.899399 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep  4 17:51:19.899407 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep  4 17:51:19.899414 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Sep  4 17:51:19.899423 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Sep  4 17:51:19.899430 kernel: pcpu-alloc: [0] 0 1 2 3 
Sep  4 17:51:19.899437 kernel: kvm-guest: PV spinlocks enabled
Sep  4 17:51:19.899444 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep  4 17:51:19.899452 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep  4 17:51:19.899460 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep  4 17:51:19.899467 kernel: random: crng init done
Sep  4 17:51:19.899474 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep  4 17:51:19.899484 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep  4 17:51:19.899491 kernel: Fallback order for Node 0: 0 
Sep  4 17:51:19.899498 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 629759
Sep  4 17:51:19.899505 kernel: Policy zone: DMA32
Sep  4 17:51:19.899512 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep  4 17:51:19.899520 kernel: Memory: 2394348K/2567000K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 172392K reserved, 0K cma-reserved)
Sep  4 17:51:19.899527 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep  4 17:51:19.899534 kernel: ftrace: allocating 37748 entries in 148 pages
Sep  4 17:51:19.899541 kernel: ftrace: allocated 148 pages with 3 groups
Sep  4 17:51:19.899551 kernel: Dynamic Preempt: voluntary
Sep  4 17:51:19.899558 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep  4 17:51:19.899566 kernel: rcu:         RCU event tracing is enabled.
Sep  4 17:51:19.899573 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep  4 17:51:19.899590 kernel:         Trampoline variant of Tasks RCU enabled.
Sep  4 17:51:19.899598 kernel:         Rude variant of Tasks RCU enabled.
Sep  4 17:51:19.899605 kernel:         Tracing variant of Tasks RCU enabled.
Sep  4 17:51:19.899615 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep  4 17:51:19.899625 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep  4 17:51:19.899636 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep  4 17:51:19.899646 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep  4 17:51:19.899656 kernel: Console: colour dummy device 80x25
Sep  4 17:51:19.899667 kernel: printk: console [ttyS0] enabled
Sep  4 17:51:19.899675 kernel: ACPI: Core revision 20230628
Sep  4 17:51:19.899683 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep  4 17:51:19.899690 kernel: APIC: Switch to symmetric I/O mode setup
Sep  4 17:51:19.899698 kernel: x2apic enabled
Sep  4 17:51:19.899708 kernel: APIC: Switched APIC routing to: physical x2apic
Sep  4 17:51:19.899716 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep  4 17:51:19.899724 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep  4 17:51:19.899731 kernel: kvm-guest: setup PV IPIs
Sep  4 17:51:19.899739 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep  4 17:51:19.899747 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep  4 17:51:19.899755 kernel: Calibrating delay loop (skipped) preset value.. 5589.48 BogoMIPS (lpj=2794744)
Sep  4 17:51:19.899762 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep  4 17:51:19.899770 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep  4 17:51:19.899780 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep  4 17:51:19.899821 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep  4 17:51:19.899829 kernel: Spectre V2 : Mitigation: Retpolines
Sep  4 17:51:19.899837 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep  4 17:51:19.899845 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep  4 17:51:19.899853 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep  4 17:51:19.899860 kernel: RETBleed: Mitigation: untrained return thunk
Sep  4 17:51:19.899869 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep  4 17:51:19.899876 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep  4 17:51:19.899887 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep  4 17:51:19.899896 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep  4 17:51:19.899904 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep  4 17:51:19.899911 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep  4 17:51:19.899919 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep  4 17:51:19.899927 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep  4 17:51:19.899935 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Sep  4 17:51:19.899943 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep  4 17:51:19.899953 kernel: Freeing SMP alternatives memory: 32K
Sep  4 17:51:19.899960 kernel: pid_max: default: 32768 minimum: 301
Sep  4 17:51:19.899968 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep  4 17:51:19.899976 kernel: landlock: Up and running.
Sep  4 17:51:19.899984 kernel: SELinux:  Initializing.
Sep  4 17:51:19.899991 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep  4 17:51:19.899999 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep  4 17:51:19.900007 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep  4 17:51:19.900015 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep  4 17:51:19.900025 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep  4 17:51:19.900033 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep  4 17:51:19.900041 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep  4 17:51:19.900049 kernel: ... version:                0
Sep  4 17:51:19.900056 kernel: ... bit width:              48
Sep  4 17:51:19.900064 kernel: ... generic registers:      6
Sep  4 17:51:19.900072 kernel: ... value mask:             0000ffffffffffff
Sep  4 17:51:19.900079 kernel: ... max period:             00007fffffffffff
Sep  4 17:51:19.900087 kernel: ... fixed-purpose events:   0
Sep  4 17:51:19.900097 kernel: ... event mask:             000000000000003f
Sep  4 17:51:19.900105 kernel: signal: max sigframe size: 1776
Sep  4 17:51:19.900112 kernel: rcu: Hierarchical SRCU implementation.
Sep  4 17:51:19.900120 kernel: rcu:         Max phase no-delay instances is 400.
Sep  4 17:51:19.900128 kernel: smp: Bringing up secondary CPUs ...
Sep  4 17:51:19.900136 kernel: smpboot: x86: Booting SMP configuration:
Sep  4 17:51:19.900143 kernel: .... node  #0, CPUs:      #1 #2 #3
Sep  4 17:51:19.900151 kernel: smp: Brought up 1 node, 4 CPUs
Sep  4 17:51:19.900159 kernel: smpboot: Max logical packages: 1
Sep  4 17:51:19.900169 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS)
Sep  4 17:51:19.900177 kernel: devtmpfs: initialized
Sep  4 17:51:19.900185 kernel: x86/mm: Memory block size: 128MB
Sep  4 17:51:19.900192 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep  4 17:51:19.900200 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep  4 17:51:19.900208 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep  4 17:51:19.900216 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep  4 17:51:19.900224 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep  4 17:51:19.900231 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep  4 17:51:19.900241 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep  4 17:51:19.900249 kernel: pinctrl core: initialized pinctrl subsystem
Sep  4 17:51:19.900257 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep  4 17:51:19.900264 kernel: audit: initializing netlink subsys (disabled)
Sep  4 17:51:19.900272 kernel: audit: type=2000 audit(1725472279.881:1): state=initialized audit_enabled=0 res=1
Sep  4 17:51:19.900280 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep  4 17:51:19.900288 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep  4 17:51:19.900295 kernel: cpuidle: using governor menu
Sep  4 17:51:19.900303 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep  4 17:51:19.900312 kernel: dca service started, version 1.12.1
Sep  4 17:51:19.900320 kernel: PCI: Using configuration type 1 for base access
Sep  4 17:51:19.900328 kernel: PCI: Using configuration type 1 for extended access
Sep  4 17:51:19.900336 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep  4 17:51:19.900344 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep  4 17:51:19.900351 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep  4 17:51:19.900359 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep  4 17:51:19.900373 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep  4 17:51:19.900381 kernel: ACPI: Added _OSI(Module Device)
Sep  4 17:51:19.900390 kernel: ACPI: Added _OSI(Processor Device)
Sep  4 17:51:19.900398 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep  4 17:51:19.900406 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep  4 17:51:19.900413 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep  4 17:51:19.900421 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep  4 17:51:19.900428 kernel: ACPI: Interpreter enabled
Sep  4 17:51:19.900436 kernel: ACPI: PM: (supports S0 S3 S5)
Sep  4 17:51:19.900444 kernel: ACPI: Using IOAPIC for interrupt routing
Sep  4 17:51:19.900451 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep  4 17:51:19.900461 kernel: PCI: Using E820 reservations for host bridge windows
Sep  4 17:51:19.900469 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep  4 17:51:19.900476 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep  4 17:51:19.900655 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep  4 17:51:19.900667 kernel: acpiphp: Slot [3] registered
Sep  4 17:51:19.900675 kernel: acpiphp: Slot [4] registered
Sep  4 17:51:19.900683 kernel: acpiphp: Slot [5] registered
Sep  4 17:51:19.900690 kernel: acpiphp: Slot [6] registered
Sep  4 17:51:19.900705 kernel: acpiphp: Slot [7] registered
Sep  4 17:51:19.900715 kernel: acpiphp: Slot [8] registered
Sep  4 17:51:19.900726 kernel: acpiphp: Slot [9] registered
Sep  4 17:51:19.900736 kernel: acpiphp: Slot [10] registered
Sep  4 17:51:19.900744 kernel: acpiphp: Slot [11] registered
Sep  4 17:51:19.900751 kernel: acpiphp: Slot [12] registered
Sep  4 17:51:19.900758 kernel: acpiphp: Slot [13] registered
Sep  4 17:51:19.900766 kernel: acpiphp: Slot [14] registered
Sep  4 17:51:19.900773 kernel: acpiphp: Slot [15] registered
Sep  4 17:51:19.900780 kernel: acpiphp: Slot [16] registered
Sep  4 17:51:19.900898 kernel: acpiphp: Slot [17] registered
Sep  4 17:51:19.900906 kernel: acpiphp: Slot [18] registered
Sep  4 17:51:19.900913 kernel: acpiphp: Slot [19] registered
Sep  4 17:51:19.900920 kernel: acpiphp: Slot [20] registered
Sep  4 17:51:19.900928 kernel: acpiphp: Slot [21] registered
Sep  4 17:51:19.900935 kernel: acpiphp: Slot [22] registered
Sep  4 17:51:19.900942 kernel: acpiphp: Slot [23] registered
Sep  4 17:51:19.900950 kernel: acpiphp: Slot [24] registered
Sep  4 17:51:19.900957 kernel: acpiphp: Slot [25] registered
Sep  4 17:51:19.900967 kernel: acpiphp: Slot [26] registered
Sep  4 17:51:19.900975 kernel: acpiphp: Slot [27] registered
Sep  4 17:51:19.900982 kernel: acpiphp: Slot [28] registered
Sep  4 17:51:19.900990 kernel: acpiphp: Slot [29] registered
Sep  4 17:51:19.900997 kernel: acpiphp: Slot [30] registered
Sep  4 17:51:19.901004 kernel: acpiphp: Slot [31] registered
Sep  4 17:51:19.901012 kernel: PCI host bridge to bus 0000:00
Sep  4 17:51:19.901154 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Sep  4 17:51:19.901269 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Sep  4 17:51:19.901396 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep  4 17:51:19.901509 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Sep  4 17:51:19.901620 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Sep  4 17:51:19.901731 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep  4 17:51:19.901905 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep  4 17:51:19.902040 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep  4 17:51:19.902178 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep  4 17:51:19.902300 kernel: pci 0000:00:01.1: reg 0x20: [io  0xc0c0-0xc0cf]
Sep  4 17:51:19.902430 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Sep  4 17:51:19.902552 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Sep  4 17:51:19.902673 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Sep  4 17:51:19.902811 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Sep  4 17:51:19.902998 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep  4 17:51:19.903131 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Sep  4 17:51:19.903257 kernel: pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
Sep  4 17:51:19.903400 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Sep  4 17:51:19.903527 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep  4 17:51:19.903649 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Sep  4 17:51:19.903769 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep  4 17:51:19.903913 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Sep  4 17:51:19.904050 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep  4 17:51:19.904187 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Sep  4 17:51:19.904308 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc0a0-0xc0bf]
Sep  4 17:51:19.904442 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep  4 17:51:19.904565 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep  4 17:51:19.904698 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Sep  4 17:51:19.904907 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc07f]
Sep  4 17:51:19.905034 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep  4 17:51:19.905168 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep  4 17:51:19.905300 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Sep  4 17:51:19.905431 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Sep  4 17:51:19.905552 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Sep  4 17:51:19.905673 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep  4 17:51:19.905818 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep  4 17:51:19.905829 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep  4 17:51:19.905837 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep  4 17:51:19.905845 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep  4 17:51:19.905853 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep  4 17:51:19.905861 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep  4 17:51:19.905868 kernel: iommu: Default domain type: Translated
Sep  4 17:51:19.905876 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep  4 17:51:19.905884 kernel: efivars: Registered efivars operations
Sep  4 17:51:19.905895 kernel: PCI: Using ACPI for IRQ routing
Sep  4 17:51:19.905902 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep  4 17:51:19.905910 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep  4 17:51:19.905918 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep  4 17:51:19.905925 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep  4 17:51:19.905933 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep  4 17:51:19.906056 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep  4 17:51:19.906193 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep  4 17:51:19.906316 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep  4 17:51:19.906330 kernel: vgaarb: loaded
Sep  4 17:51:19.906338 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep  4 17:51:19.906346 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep  4 17:51:19.906354 kernel: clocksource: Switched to clocksource kvm-clock
Sep  4 17:51:19.906369 kernel: VFS: Disk quotas dquot_6.6.0
Sep  4 17:51:19.906378 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep  4 17:51:19.906386 kernel: pnp: PnP ACPI init
Sep  4 17:51:19.906519 kernel: pnp 00:02: [dma 2]
Sep  4 17:51:19.906533 kernel: pnp: PnP ACPI: found 6 devices
Sep  4 17:51:19.906541 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep  4 17:51:19.906549 kernel: NET: Registered PF_INET protocol family
Sep  4 17:51:19.906557 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep  4 17:51:19.906564 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep  4 17:51:19.906572 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep  4 17:51:19.906580 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep  4 17:51:19.906588 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep  4 17:51:19.906595 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep  4 17:51:19.906606 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep  4 17:51:19.906613 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep  4 17:51:19.906621 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep  4 17:51:19.906629 kernel: NET: Registered PF_XDP protocol family
Sep  4 17:51:19.906750 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep  4 17:51:19.906913 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep  4 17:51:19.907027 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Sep  4 17:51:19.907137 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Sep  4 17:51:19.907267 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep  4 17:51:19.907388 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Sep  4 17:51:19.907499 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Sep  4 17:51:19.907622 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep  4 17:51:19.907743 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep  4 17:51:19.907753 kernel: PCI: CLS 0 bytes, default 64
Sep  4 17:51:19.907761 kernel: Initialise system trusted keyrings
Sep  4 17:51:19.907769 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep  4 17:51:19.907794 kernel: Key type asymmetric registered
Sep  4 17:51:19.907803 kernel: Asymmetric key parser 'x509' registered
Sep  4 17:51:19.907811 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep  4 17:51:19.907818 kernel: io scheduler mq-deadline registered
Sep  4 17:51:19.907826 kernel: io scheduler kyber registered
Sep  4 17:51:19.907834 kernel: io scheduler bfq registered
Sep  4 17:51:19.907841 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep  4 17:51:19.907850 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep  4 17:51:19.907857 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep  4 17:51:19.907868 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep  4 17:51:19.907876 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep  4 17:51:19.907884 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep  4 17:51:19.907908 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep  4 17:51:19.907918 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep  4 17:51:19.907926 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep  4 17:51:19.908057 kernel: rtc_cmos 00:05: RTC can wake from S4
Sep  4 17:51:19.908173 kernel: rtc_cmos 00:05: registered as rtc0
Sep  4 17:51:19.908188 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep  4 17:51:19.908310 kernel: rtc_cmos 00:05: setting system clock to 2024-09-04T17:51:19 UTC (1725472279)
Sep  4 17:51:19.908441 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep  4 17:51:19.908452 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep  4 17:51:19.908460 kernel: efifb: probing for efifb
Sep  4 17:51:19.908468 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Sep  4 17:51:19.908476 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Sep  4 17:51:19.908484 kernel: efifb: scrolling: redraw
Sep  4 17:51:19.908491 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Sep  4 17:51:19.908503 kernel: Console: switching to colour frame buffer device 100x37
Sep  4 17:51:19.908511 kernel: fb0: EFI VGA frame buffer device
Sep  4 17:51:19.908520 kernel: pstore: Using crash dump compression: deflate
Sep  4 17:51:19.908528 kernel: pstore: Registered efi_pstore as persistent store backend
Sep  4 17:51:19.908536 kernel: NET: Registered PF_INET6 protocol family
Sep  4 17:51:19.908544 kernel: Segment Routing with IPv6
Sep  4 17:51:19.908552 kernel: In-situ OAM (IOAM) with IPv6
Sep  4 17:51:19.908560 kernel: NET: Registered PF_PACKET protocol family
Sep  4 17:51:19.908568 kernel: Key type dns_resolver registered
Sep  4 17:51:19.908578 kernel: IPI shorthand broadcast: enabled
Sep  4 17:51:19.908586 kernel: sched_clock: Marking stable (637002602, 112480693)->(789716739, -40233444)
Sep  4 17:51:19.908594 kernel: registered taskstats version 1
Sep  4 17:51:19.908602 kernel: Loading compiled-in X.509 certificates
Sep  4 17:51:19.908610 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18'
Sep  4 17:51:19.908618 kernel: Key type .fscrypt registered
Sep  4 17:51:19.908629 kernel: Key type fscrypt-provisioning registered
Sep  4 17:51:19.908637 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep  4 17:51:19.908645 kernel: ima: Allocated hash algorithm: sha1
Sep  4 17:51:19.908653 kernel: ima: No architecture policies found
Sep  4 17:51:19.908661 kernel: clk: Disabling unused clocks
Sep  4 17:51:19.908669 kernel: Freeing unused kernel image (initmem) memory: 42704K
Sep  4 17:51:19.908677 kernel: Write protecting the kernel read-only data: 36864k
Sep  4 17:51:19.908685 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K
Sep  4 17:51:19.908696 kernel: Run /init as init process
Sep  4 17:51:19.908704 kernel:   with arguments:
Sep  4 17:51:19.908712 kernel:     /init
Sep  4 17:51:19.908719 kernel:   with environment:
Sep  4 17:51:19.908727 kernel:     HOME=/
Sep  4 17:51:19.908735 kernel:     TERM=linux
Sep  4 17:51:19.908743 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep  4 17:51:19.908754 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep  4 17:51:19.908766 systemd[1]: Detected virtualization kvm.
Sep  4 17:51:19.908775 systemd[1]: Detected architecture x86-64.
Sep  4 17:51:19.908822 systemd[1]: Running in initrd.
Sep  4 17:51:19.908831 systemd[1]: No hostname configured, using default hostname.
Sep  4 17:51:19.908839 systemd[1]: Hostname set to <localhost>.
Sep  4 17:51:19.908848 systemd[1]: Initializing machine ID from VM UUID.
Sep  4 17:51:19.908856 systemd[1]: Queued start job for default target initrd.target.
Sep  4 17:51:19.908865 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep  4 17:51:19.908877 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep  4 17:51:19.908890 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep  4 17:51:19.908899 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep  4 17:51:19.908908 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep  4 17:51:19.908917 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep  4 17:51:19.908927 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep  4 17:51:19.908938 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep  4 17:51:19.908947 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep  4 17:51:19.908955 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep  4 17:51:19.908964 systemd[1]: Reached target paths.target - Path Units.
Sep  4 17:51:19.908972 systemd[1]: Reached target slices.target - Slice Units.
Sep  4 17:51:19.908981 systemd[1]: Reached target swap.target - Swaps.
Sep  4 17:51:19.908989 systemd[1]: Reached target timers.target - Timer Units.
Sep  4 17:51:19.908998 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep  4 17:51:19.909006 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep  4 17:51:19.909017 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep  4 17:51:19.909026 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep  4 17:51:19.909035 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep  4 17:51:19.909043 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep  4 17:51:19.909052 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep  4 17:51:19.909061 systemd[1]: Reached target sockets.target - Socket Units.
Sep  4 17:51:19.909069 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep  4 17:51:19.909078 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep  4 17:51:19.909087 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep  4 17:51:19.909098 systemd[1]: Starting systemd-fsck-usr.service...
Sep  4 17:51:19.909106 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep  4 17:51:19.909115 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep  4 17:51:19.909123 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 17:51:19.909132 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep  4 17:51:19.909161 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep  4 17:51:19.909170 systemd[1]: Finished systemd-fsck-usr.service.
Sep  4 17:51:19.909182 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep  4 17:51:19.909209 systemd-journald[192]: Collecting audit messages is disabled.
Sep  4 17:51:19.909231 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep  4 17:51:19.909240 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep  4 17:51:19.909249 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:51:19.909258 systemd-journald[192]: Journal started
Sep  4 17:51:19.909276 systemd-journald[192]: Runtime Journal (/run/log/journal/cf4179534d9d48ea9723be59e987cdd6) is 6.0M, max 48.3M, 42.2M free.
Sep  4 17:51:19.913820 systemd[1]: Started systemd-journald.service - Journal Service.
Sep  4 17:51:19.916590 systemd-modules-load[194]: Inserted module 'overlay'
Sep  4 17:51:19.924953 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep  4 17:51:19.929183 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep  4 17:51:19.930571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep  4 17:51:19.942322 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep  4 17:51:19.944432 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 17:51:19.954806 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep  4 17:51:19.956011 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep  4 17:51:19.958338 kernel: Bridge firewalling registered
Sep  4 17:51:19.958305 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep  4 17:51:19.960597 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep  4 17:51:19.963485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep  4 17:51:19.970042 dracut-cmdline[222]: dracut-dracut-053
Sep  4 17:51:19.973342 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep  4 17:51:19.977331 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep  4 17:51:19.988004 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep  4 17:51:20.016677 systemd-resolved[246]: Positive Trust Anchors:
Sep  4 17:51:20.016692 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep  4 17:51:20.016722 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep  4 17:51:20.027483 systemd-resolved[246]: Defaulting to hostname 'linux'.
Sep  4 17:51:20.029373 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep  4 17:51:20.029710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep  4 17:51:20.059823 kernel: SCSI subsystem initialized
Sep  4 17:51:20.068808 kernel: Loading iSCSI transport class v2.0-870.
Sep  4 17:51:20.079811 kernel: iscsi: registered transport (tcp)
Sep  4 17:51:20.100088 kernel: iscsi: registered transport (qla4xxx)
Sep  4 17:51:20.100113 kernel: QLogic iSCSI HBA Driver
Sep  4 17:51:20.148945 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep  4 17:51:20.158997 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep  4 17:51:20.185712 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep  4 17:51:20.185758 kernel: device-mapper: uevent: version 1.0.3
Sep  4 17:51:20.185770 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep  4 17:51:20.226810 kernel: raid6: avx2x4   gen() 30313 MB/s
Sep  4 17:51:20.243805 kernel: raid6: avx2x2   gen() 30810 MB/s
Sep  4 17:51:20.260905 kernel: raid6: avx2x1   gen() 25958 MB/s
Sep  4 17:51:20.260925 kernel: raid6: using algorithm avx2x2 gen() 30810 MB/s
Sep  4 17:51:20.278906 kernel: raid6: .... xor() 19920 MB/s, rmw enabled
Sep  4 17:51:20.278941 kernel: raid6: using avx2x2 recovery algorithm
Sep  4 17:51:20.298810 kernel: xor: automatically using best checksumming function   avx       
Sep  4 17:51:20.448821 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep  4 17:51:20.461530 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep  4 17:51:20.468017 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep  4 17:51:20.480530 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Sep  4 17:51:20.485007 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep  4 17:51:20.488938 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep  4 17:51:20.506958 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Sep  4 17:51:20.539218 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep  4 17:51:20.552969 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep  4 17:51:20.617250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep  4 17:51:20.628958 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep  4 17:51:20.643544 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep  4 17:51:20.645282 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep  4 17:51:20.645770 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep  4 17:51:20.650954 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep  4 17:51:20.661330 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep  4 17:51:20.675807 kernel: cryptd: max_cpu_qlen set to 1000
Sep  4 17:51:20.675862 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep  4 17:51:20.676743 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep  4 17:51:20.681708 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep  4 17:51:20.684026 kernel: AVX2 version of gcm_enc/dec engaged.
Sep  4 17:51:20.684042 kernel: AES CTR mode by8 optimization enabled
Sep  4 17:51:20.681877 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 17:51:20.689936 kernel: libata version 3.00 loaded.
Sep  4 17:51:20.689950 kernel: ata_piix 0000:00:01.1: version 2.13
Sep  4 17:51:20.690117 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep  4 17:51:20.690251 kernel: scsi host0: ata_piix
Sep  4 17:51:20.691804 kernel: scsi host1: ata_piix
Sep  4 17:51:20.691974 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Sep  4 17:51:20.691991 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Sep  4 17:51:20.694513 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep  4 17:51:20.701427 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep  4 17:51:20.701466 kernel: GPT:9289727 != 19775487
Sep  4 17:51:20.701483 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep  4 17:51:20.701496 kernel: GPT:9289727 != 19775487
Sep  4 17:51:20.701506 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep  4 17:51:20.701516 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep  4 17:51:20.700527 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep  4 17:51:20.700700 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:51:20.702912 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 17:51:20.713200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 17:51:20.717838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep  4 17:51:20.717961 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:51:20.721348 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 17:51:20.737650 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:51:20.740690 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep  4 17:51:20.762256 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 17:51:20.847868 kernel: ata2: found unknown device (class 0)
Sep  4 17:51:20.848847 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep  4 17:51:20.850810 kernel: scsi 1:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Sep  4 17:51:20.882818 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (477)
Sep  4 17:51:20.882861 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (470)
Sep  4 17:51:20.886851 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep  4 17:51:20.887101 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep  4 17:51:20.890273 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep  4 17:51:20.897059 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep  4 17:51:20.906746 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep  4 17:51:20.911413 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Sep  4 17:51:20.908140 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep  4 17:51:20.917535 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep  4 17:51:20.928917 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep  4 17:51:20.936195 disk-uuid[567]: Primary Header is updated.
Sep  4 17:51:20.936195 disk-uuid[567]: Secondary Entries is updated.
Sep  4 17:51:20.936195 disk-uuid[567]: Secondary Header is updated.
Sep  4 17:51:20.940811 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep  4 17:51:20.944812 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep  4 17:51:21.946535 disk-uuid[568]: The operation has completed successfully.
Sep  4 17:51:21.947935 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep  4 17:51:21.975727 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep  4 17:51:21.975869 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep  4 17:51:21.999049 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep  4 17:51:22.002410 sh[584]: Success
Sep  4 17:51:22.014810 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep  4 17:51:22.047905 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep  4 17:51:22.059222 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep  4 17:51:22.062384 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep  4 17:51:22.076386 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772
Sep  4 17:51:22.076414 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep  4 17:51:22.076426 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep  4 17:51:22.077416 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep  4 17:51:22.078163 kernel: BTRFS info (device dm-0): using free space tree
Sep  4 17:51:22.082707 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep  4 17:51:22.083727 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep  4 17:51:22.087968 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep  4 17:51:22.088955 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep  4 17:51:22.100933 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep  4 17:51:22.100986 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep  4 17:51:22.101002 kernel: BTRFS info (device vda6): using free space tree
Sep  4 17:51:22.103825 kernel: BTRFS info (device vda6): auto enabling async discard
Sep  4 17:51:22.113765 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep  4 17:51:22.115811 kernel: BTRFS info (device vda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep  4 17:51:22.124159 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep  4 17:51:22.128924 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep  4 17:51:22.183016 ignition[678]: Ignition 2.19.0
Sep  4 17:51:22.183031 ignition[678]: Stage: fetch-offline
Sep  4 17:51:22.183083 ignition[678]: no configs at "/usr/lib/ignition/base.d"
Sep  4 17:51:22.183097 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep  4 17:51:22.183202 ignition[678]: parsed url from cmdline: ""
Sep  4 17:51:22.183207 ignition[678]: no config URL provided
Sep  4 17:51:22.183214 ignition[678]: reading system config file "/usr/lib/ignition/user.ign"
Sep  4 17:51:22.183225 ignition[678]: no config at "/usr/lib/ignition/user.ign"
Sep  4 17:51:22.183254 ignition[678]: op(1): [started]  loading QEMU firmware config module
Sep  4 17:51:22.183261 ignition[678]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep  4 17:51:22.190838 ignition[678]: op(1): [finished] loading QEMU firmware config module
Sep  4 17:51:22.212347 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep  4 17:51:22.220933 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep  4 17:51:22.235519 ignition[678]: parsing config with SHA512: 3d344f6517ccb4fef3e07fcd656785548d93a0dd129ed4c9f77aecc8489f93701bda9a991882c1296f2eb80dde540cde27cd5379a90aac63c1f5d2b2654108d5
Sep  4 17:51:22.241043 unknown[678]: fetched base config from "system"
Sep  4 17:51:22.241057 unknown[678]: fetched user config from "qemu"
Sep  4 17:51:22.241590 ignition[678]: fetch-offline: fetch-offline passed
Sep  4 17:51:22.241668 ignition[678]: Ignition finished successfully
Sep  4 17:51:22.244177 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep  4 17:51:22.244418 systemd-networkd[774]: lo: Link UP
Sep  4 17:51:22.244423 systemd-networkd[774]: lo: Gained carrier
Sep  4 17:51:22.245953 systemd-networkd[774]: Enumeration completed
Sep  4 17:51:22.246128 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep  4 17:51:22.246324 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 17:51:22.246328 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep  4 17:51:22.247330 systemd-networkd[774]: eth0: Link UP
Sep  4 17:51:22.247333 systemd-networkd[774]: eth0: Gained carrier
Sep  4 17:51:22.247339 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 17:51:22.248884 systemd[1]: Reached target network.target - Network.
Sep  4 17:51:22.249833 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep  4 17:51:22.256904 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep  4 17:51:22.261826 systemd-networkd[774]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep  4 17:51:22.272470 ignition[779]: Ignition 2.19.0
Sep  4 17:51:22.272482 ignition[779]: Stage: kargs
Sep  4 17:51:22.272666 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Sep  4 17:51:22.272680 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep  4 17:51:22.273772 ignition[779]: kargs: kargs passed
Sep  4 17:51:22.273843 ignition[779]: Ignition finished successfully
Sep  4 17:51:22.277107 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep  4 17:51:22.293951 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep  4 17:51:22.306314 ignition[789]: Ignition 2.19.0
Sep  4 17:51:22.306324 ignition[789]: Stage: disks
Sep  4 17:51:22.306484 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Sep  4 17:51:22.306495 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep  4 17:51:22.307307 ignition[789]: disks: disks passed
Sep  4 17:51:22.309744 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep  4 17:51:22.307350 ignition[789]: Ignition finished successfully
Sep  4 17:51:22.311168 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep  4 17:51:22.312706 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep  4 17:51:22.314875 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep  4 17:51:22.315901 systemd[1]: Reached target sysinit.target - System Initialization.
Sep  4 17:51:22.316940 systemd[1]: Reached target basic.target - Basic System.
Sep  4 17:51:22.327907 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep  4 17:51:22.339373 systemd-resolved[246]: Detected conflict on linux IN A 10.0.0.147
Sep  4 17:51:22.339389 systemd-resolved[246]: Hostname conflict, changing published hostname from 'linux' to 'linux2'.
Sep  4 17:51:22.340314 systemd-fsck[799]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep  4 17:51:22.347015 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep  4 17:51:22.358857 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep  4 17:51:22.441807 kernel: EXT4-fs (vda9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none.
Sep  4 17:51:22.442178 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep  4 17:51:22.443315 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep  4 17:51:22.454861 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep  4 17:51:22.456666 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep  4 17:51:22.458313 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep  4 17:51:22.467085 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (807)
Sep  4 17:51:22.467108 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep  4 17:51:22.467122 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep  4 17:51:22.467136 kernel: BTRFS info (device vda6): using free space tree
Sep  4 17:51:22.467150 kernel: BTRFS info (device vda6): auto enabling async discard
Sep  4 17:51:22.458360 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep  4 17:51:22.458386 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep  4 17:51:22.468696 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep  4 17:51:22.470506 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep  4 17:51:22.479915 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep  4 17:51:22.509267 initrd-setup-root[831]: cut: /sysroot/etc/passwd: No such file or directory
Sep  4 17:51:22.514489 initrd-setup-root[838]: cut: /sysroot/etc/group: No such file or directory
Sep  4 17:51:22.519732 initrd-setup-root[845]: cut: /sysroot/etc/shadow: No such file or directory
Sep  4 17:51:22.523447 initrd-setup-root[852]: cut: /sysroot/etc/gshadow: No such file or directory
Sep  4 17:51:22.602863 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep  4 17:51:22.613873 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep  4 17:51:22.615108 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep  4 17:51:22.623815 kernel: BTRFS info (device vda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep  4 17:51:22.641057 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep  4 17:51:22.645760 ignition[921]: INFO     : Ignition 2.19.0
Sep  4 17:51:22.645760 ignition[921]: INFO     : Stage: mount
Sep  4 17:51:22.647752 ignition[921]: INFO     : no configs at "/usr/lib/ignition/base.d"
Sep  4 17:51:22.647752 ignition[921]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep  4 17:51:22.647752 ignition[921]: INFO     : mount: mount passed
Sep  4 17:51:22.647752 ignition[921]: INFO     : Ignition finished successfully
Sep  4 17:51:22.649587 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep  4 17:51:22.659869 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep  4 17:51:23.075716 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep  4 17:51:23.089097 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep  4 17:51:23.096387 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (934)
Sep  4 17:51:23.096432 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep  4 17:51:23.096447 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep  4 17:51:23.097248 kernel: BTRFS info (device vda6): using free space tree
Sep  4 17:51:23.100810 kernel: BTRFS info (device vda6): auto enabling async discard
Sep  4 17:51:23.102190 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep  4 17:51:23.132445 ignition[951]: INFO     : Ignition 2.19.0
Sep  4 17:51:23.132445 ignition[951]: INFO     : Stage: files
Sep  4 17:51:23.134245 ignition[951]: INFO     : no configs at "/usr/lib/ignition/base.d"
Sep  4 17:51:23.134245 ignition[951]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep  4 17:51:23.134245 ignition[951]: DEBUG    : files: compiled without relabeling support, skipping
Sep  4 17:51:23.134245 ignition[951]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Sep  4 17:51:23.134245 ignition[951]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep  4 17:51:23.140852 ignition[951]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep  4 17:51:23.140852 ignition[951]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Sep  4 17:51:23.140852 ignition[951]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep  4 17:51:23.140852 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Sep  4 17:51:23.140852 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep  4 17:51:23.140852 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep  4 17:51:23.140852 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep  4 17:51:23.137526 unknown[951]: wrote ssh authorized keys file for user: core
Sep  4 17:51:23.219881 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep  4 17:51:23.368485 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep  4 17:51:23.368485 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Sep  4 17:51:23.372468 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep  4 17:51:23.372468 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Sep  4 17:51:23.372468 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep  4 17:51:23.372468 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Sep  4 17:51:23.372468 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep  4 17:51:23.372468 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep  4 17:51:23.372468 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep  4 17:51:23.372468 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Sep  4 17:51:23.372468 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep  4 17:51:23.372468 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep  4 17:51:23.372468 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep  4 17:51:23.372468 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep  4 17:51:23.372468 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Sep  4 17:51:23.623058 systemd-networkd[774]: eth0: Gained IPv6LL
Sep  4 17:51:23.703504 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep  4 17:51:24.100050 ignition[951]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep  4 17:51:24.100050 ignition[951]: INFO     : files: op(c): [started]  processing unit "containerd.service"
Sep  4 17:51:24.104320 ignition[951]: INFO     : files: op(c): op(d): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep  4 17:51:24.104320 ignition[951]: INFO     : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep  4 17:51:24.104320 ignition[951]: INFO     : files: op(c): [finished] processing unit "containerd.service"
Sep  4 17:51:24.104320 ignition[951]: INFO     : files: op(e): [started]  processing unit "prepare-helm.service"
Sep  4 17:51:24.104320 ignition[951]: INFO     : files: op(e): op(f): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep  4 17:51:24.104320 ignition[951]: INFO     : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep  4 17:51:24.104320 ignition[951]: INFO     : files: op(e): [finished] processing unit "prepare-helm.service"
Sep  4 17:51:24.104320 ignition[951]: INFO     : files: op(10): [started]  processing unit "coreos-metadata.service"
Sep  4 17:51:24.104320 ignition[951]: INFO     : files: op(10): op(11): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep  4 17:51:24.104320 ignition[951]: INFO     : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep  4 17:51:24.104320 ignition[951]: INFO     : files: op(10): [finished] processing unit "coreos-metadata.service"
Sep  4 17:51:24.104320 ignition[951]: INFO     : files: op(12): [started]  setting preset to disabled for "coreos-metadata.service"
Sep  4 17:51:24.127405 ignition[951]: INFO     : files: op(12): op(13): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Sep  4 17:51:24.129437 ignition[951]: INFO     : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep  4 17:51:24.131214 ignition[951]: INFO     : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Sep  4 17:51:24.131214 ignition[951]: INFO     : files: op(14): [started]  setting preset to enabled for "prepare-helm.service"
Sep  4 17:51:24.131214 ignition[951]: INFO     : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Sep  4 17:51:24.131214 ignition[951]: INFO     : files: createResultFile: createFiles: op(15): [started]  writing file "/sysroot/etc/.ignition-result.json"
Sep  4 17:51:24.131214 ignition[951]: INFO     : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep  4 17:51:24.131214 ignition[951]: INFO     : files: files passed
Sep  4 17:51:24.131214 ignition[951]: INFO     : Ignition finished successfully
Sep  4 17:51:24.131875 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep  4 17:51:24.140976 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep  4 17:51:24.142154 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep  4 17:51:24.144610 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep  4 17:51:24.144741 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep  4 17:51:24.153293 initrd-setup-root-after-ignition[979]: grep: /sysroot/oem/oem-release: No such file or directory
Sep  4 17:51:24.156110 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep  4 17:51:24.156110 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep  4 17:51:24.161166 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep  4 17:51:24.159571 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep  4 17:51:24.161388 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep  4 17:51:24.170990 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep  4 17:51:24.197314 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep  4 17:51:24.197444 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep  4 17:51:24.200041 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep  4 17:51:24.202209 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep  4 17:51:24.202757 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep  4 17:51:24.203727 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep  4 17:51:24.221980 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep  4 17:51:24.232968 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep  4 17:51:24.244936 systemd[1]: Stopped target network.target - Network.
Sep  4 17:51:24.247133 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep  4 17:51:24.248518 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep  4 17:51:24.251206 systemd[1]: Stopped target timers.target - Timer Units.
Sep  4 17:51:24.253447 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep  4 17:51:24.253613 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep  4 17:51:24.255849 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep  4 17:51:24.257764 systemd[1]: Stopped target basic.target - Basic System.
Sep  4 17:51:24.260228 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep  4 17:51:24.262623 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep  4 17:51:24.264862 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep  4 17:51:24.267201 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep  4 17:51:24.269444 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep  4 17:51:24.272142 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep  4 17:51:24.274477 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep  4 17:51:24.276775 systemd[1]: Stopped target swap.target - Swaps.
Sep  4 17:51:24.278591 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep  4 17:51:24.278760 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep  4 17:51:24.281240 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep  4 17:51:24.283027 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep  4 17:51:24.285270 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep  4 17:51:24.285403 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep  4 17:51:24.287573 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep  4 17:51:24.287720 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep  4 17:51:24.290136 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep  4 17:51:24.290281 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep  4 17:51:24.292293 systemd[1]: Stopped target paths.target - Path Units.
Sep  4 17:51:24.294055 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep  4 17:51:24.297846 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep  4 17:51:24.300077 systemd[1]: Stopped target slices.target - Slice Units.
Sep  4 17:51:24.302133 systemd[1]: Stopped target sockets.target - Socket Units.
Sep  4 17:51:24.304243 systemd[1]: iscsid.socket: Deactivated successfully.
Sep  4 17:51:24.304391 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep  4 17:51:24.306659 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep  4 17:51:24.306768 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep  4 17:51:24.309341 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep  4 17:51:24.309490 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep  4 17:51:24.311836 systemd[1]: ignition-files.service: Deactivated successfully.
Sep  4 17:51:24.311969 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep  4 17:51:24.326046 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep  4 17:51:24.328054 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep  4 17:51:24.328224 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep  4 17:51:24.331472 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep  4 17:51:24.332876 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep  4 17:51:24.335342 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep  4 17:51:24.338442 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep  4 17:51:24.338634 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep  4 17:51:24.343294 ignition[1005]: INFO     : Ignition 2.19.0
Sep  4 17:51:24.343294 ignition[1005]: INFO     : Stage: umount
Sep  4 17:51:24.343294 ignition[1005]: INFO     : no configs at "/usr/lib/ignition/base.d"
Sep  4 17:51:24.343294 ignition[1005]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep  4 17:51:24.340921 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep  4 17:51:24.350226 ignition[1005]: INFO     : umount: umount passed
Sep  4 17:51:24.350226 ignition[1005]: INFO     : Ignition finished successfully
Sep  4 17:51:24.341065 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep  4 17:51:24.342875 systemd-networkd[774]: eth0: DHCPv6 lease lost
Sep  4 17:51:24.345971 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep  4 17:51:24.346414 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep  4 17:51:24.349406 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep  4 17:51:24.349554 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep  4 17:51:24.352160 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep  4 17:51:24.352297 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep  4 17:51:24.362401 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep  4 17:51:24.363417 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep  4 17:51:24.363479 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep  4 17:51:24.365933 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep  4 17:51:24.365983 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep  4 17:51:24.366568 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep  4 17:51:24.366611 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep  4 17:51:24.367201 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep  4 17:51:24.367244 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep  4 17:51:24.373319 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep  4 17:51:24.373993 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep  4 17:51:24.374105 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep  4 17:51:24.374899 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep  4 17:51:24.375004 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep  4 17:51:24.378564 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep  4 17:51:24.378634 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep  4 17:51:24.379182 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep  4 17:51:24.379228 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep  4 17:51:24.379517 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep  4 17:51:24.379559 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep  4 17:51:24.380010 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep  4 17:51:24.380051 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep  4 17:51:24.385986 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep  4 17:51:24.413227 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep  4 17:51:24.413472 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep  4 17:51:24.414731 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep  4 17:51:24.414828 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep  4 17:51:24.417926 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep  4 17:51:24.417970 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep  4 17:51:24.420055 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep  4 17:51:24.420110 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep  4 17:51:24.422253 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep  4 17:51:24.422301 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep  4 17:51:24.424459 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep  4 17:51:24.424509 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 17:51:24.427376 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep  4 17:51:24.428676 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep  4 17:51:24.428728 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep  4 17:51:24.431394 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep  4 17:51:24.431444 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:51:24.434171 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep  4 17:51:24.434303 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep  4 17:51:24.440833 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep  4 17:51:24.440951 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep  4 17:51:24.553765 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep  4 17:51:24.553919 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep  4 17:51:24.556047 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep  4 17:51:24.557757 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep  4 17:51:24.557830 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep  4 17:51:24.571093 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep  4 17:51:24.578232 systemd[1]: Switching root.
Sep  4 17:51:24.616655 systemd-journald[192]: Journal stopped
Sep  4 17:51:25.817145 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Sep  4 17:51:25.817234 kernel: SELinux:  policy capability network_peer_controls=1
Sep  4 17:51:25.817248 kernel: SELinux:  policy capability open_perms=1
Sep  4 17:51:25.817260 kernel: SELinux:  policy capability extended_socket_class=1
Sep  4 17:51:25.817271 kernel: SELinux:  policy capability always_check_network=0
Sep  4 17:51:25.817284 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep  4 17:51:25.817295 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep  4 17:51:25.817310 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Sep  4 17:51:25.817321 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Sep  4 17:51:25.817332 kernel: audit: type=1403 audit(1725472285.069:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep  4 17:51:25.817344 systemd[1]: Successfully loaded SELinux policy in 46.644ms.
Sep  4 17:51:25.817373 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.128ms.
Sep  4 17:51:25.817386 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep  4 17:51:25.817398 systemd[1]: Detected virtualization kvm.
Sep  4 17:51:25.817410 systemd[1]: Detected architecture x86-64.
Sep  4 17:51:25.817422 systemd[1]: Detected first boot.
Sep  4 17:51:25.817436 systemd[1]: Initializing machine ID from VM UUID.
Sep  4 17:51:25.817453 zram_generator::config[1067]: No configuration found.
Sep  4 17:51:25.817467 systemd[1]: Populated /etc with preset unit settings.
Sep  4 17:51:25.817479 systemd[1]: Queued start job for default target multi-user.target.
Sep  4 17:51:25.817491 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep  4 17:51:25.817503 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep  4 17:51:25.817515 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep  4 17:51:25.817528 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep  4 17:51:25.817542 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep  4 17:51:25.817559 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep  4 17:51:25.817572 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep  4 17:51:25.817584 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep  4 17:51:25.817596 systemd[1]: Created slice user.slice - User and Session Slice.
Sep  4 17:51:25.817608 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep  4 17:51:25.817621 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep  4 17:51:25.817633 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep  4 17:51:25.817645 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep  4 17:51:25.817660 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep  4 17:51:25.817672 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep  4 17:51:25.817684 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep  4 17:51:25.817695 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep  4 17:51:25.817707 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep  4 17:51:25.817719 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep  4 17:51:25.817731 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep  4 17:51:25.817743 systemd[1]: Reached target slices.target - Slice Units.
Sep  4 17:51:25.817757 systemd[1]: Reached target swap.target - Swaps.
Sep  4 17:51:25.817769 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep  4 17:51:25.817781 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep  4 17:51:25.817807 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep  4 17:51:25.817820 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep  4 17:51:25.817832 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep  4 17:51:25.817845 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep  4 17:51:25.817857 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep  4 17:51:25.817869 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep  4 17:51:25.817881 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep  4 17:51:25.817895 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep  4 17:51:25.817907 systemd[1]: Mounting media.mount - External Media Directory...
Sep  4 17:51:25.817919 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 17:51:25.817931 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep  4 17:51:25.817943 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep  4 17:51:25.817955 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep  4 17:51:25.817967 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep  4 17:51:25.817980 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 17:51:25.817994 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep  4 17:51:25.818006 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep  4 17:51:25.818018 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep  4 17:51:25.818030 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep  4 17:51:25.818042 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep  4 17:51:25.818054 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep  4 17:51:25.818066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep  4 17:51:25.818078 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep  4 17:51:25.818093 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep  4 17:51:25.818107 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep  4 17:51:25.818118 kernel: loop: module loaded
Sep  4 17:51:25.818130 kernel: fuse: init (API version 7.39)
Sep  4 17:51:25.818141 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep  4 17:51:25.818153 kernel: ACPI: bus type drm_connector registered
Sep  4 17:51:25.818165 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep  4 17:51:25.818177 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep  4 17:51:25.818214 systemd-journald[1150]: Collecting audit messages is disabled.
Sep  4 17:51:25.818247 systemd-journald[1150]: Journal started
Sep  4 17:51:25.818274 systemd-journald[1150]: Runtime Journal (/run/log/journal/cf4179534d9d48ea9723be59e987cdd6) is 6.0M, max 48.3M, 42.2M free.
Sep  4 17:51:25.821917 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep  4 17:51:25.826866 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep  4 17:51:25.830741 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 17:51:25.832870 systemd[1]: Started systemd-journald.service - Journal Service.
Sep  4 17:51:25.834946 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep  4 17:51:25.836113 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep  4 17:51:25.837320 systemd[1]: Mounted media.mount - External Media Directory.
Sep  4 17:51:25.838491 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep  4 17:51:25.839673 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep  4 17:51:25.840886 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep  4 17:51:25.842201 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep  4 17:51:25.843739 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep  4 17:51:25.843965 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep  4 17:51:25.845446 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep  4 17:51:25.845656 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep  4 17:51:25.847106 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep  4 17:51:25.847324 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep  4 17:51:25.848959 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep  4 17:51:25.849272 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep  4 17:51:25.850984 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep  4 17:51:25.851247 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep  4 17:51:25.852732 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep  4 17:51:25.853032 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep  4 17:51:25.854637 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep  4 17:51:25.856290 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep  4 17:51:25.858014 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep  4 17:51:25.872010 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep  4 17:51:25.880891 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep  4 17:51:25.885453 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep  4 17:51:25.886653 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep  4 17:51:25.905440 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep  4 17:51:25.907761 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep  4 17:51:25.908972 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep  4 17:51:25.911395 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep  4 17:51:25.912594 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep  4 17:51:25.913964 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep  4 17:51:25.918915 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep  4 17:51:25.921605 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep  4 17:51:25.927505 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep  4 17:51:25.929015 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep  4 17:51:25.933541 systemd-journald[1150]: Time spent on flushing to /var/log/journal/cf4179534d9d48ea9723be59e987cdd6 is 13.282ms for 983 entries.
Sep  4 17:51:25.933541 systemd-journald[1150]: System Journal (/var/log/journal/cf4179534d9d48ea9723be59e987cdd6) is 8.0M, max 195.6M, 187.6M free.
Sep  4 17:51:26.250936 systemd-journald[1150]: Received client request to flush runtime journal.
Sep  4 17:51:25.941002 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep  4 17:51:25.948948 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep  4 17:51:25.951932 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep  4 17:51:25.955885 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Sep  4 17:51:25.955898 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Sep  4 17:51:25.962892 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep  4 17:51:26.119969 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep  4 17:51:26.123322 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep  4 17:51:26.124950 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep  4 17:51:26.137968 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep  4 17:51:26.212017 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep  4 17:51:26.220070 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep  4 17:51:26.237660 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Sep  4 17:51:26.237674 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Sep  4 17:51:26.242642 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep  4 17:51:26.253113 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep  4 17:51:26.757419 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep  4 17:51:26.777011 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep  4 17:51:26.801238 systemd-udevd[1236]: Using default interface naming scheme 'v255'.
Sep  4 17:51:26.817252 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep  4 17:51:26.830191 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep  4 17:51:26.844930 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep  4 17:51:26.847692 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Sep  4 17:51:26.860810 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1237)
Sep  4 17:51:26.860882 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1255)
Sep  4 17:51:26.872814 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1237)
Sep  4 17:51:26.898908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep  4 17:51:26.903805 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep  4 17:51:26.916636 kernel: ACPI: button: Power Button [PWRF]
Sep  4 17:51:26.916670 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
Sep  4 17:51:26.917355 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep  4 17:51:26.924806 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep  4 17:51:26.948808 kernel: mousedev: PS/2 mouse device common for all mice
Sep  4 17:51:26.964060 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 17:51:27.016910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:51:27.044893 kernel: kvm_amd: TSC scaling supported
Sep  4 17:51:27.044957 kernel: kvm_amd: Nested Virtualization enabled
Sep  4 17:51:27.044976 kernel: kvm_amd: Nested Paging enabled
Sep  4 17:51:27.044994 kernel: kvm_amd: LBR virtualization supported
Sep  4 17:51:27.046016 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep  4 17:51:27.046054 kernel: kvm_amd: Virtual GIF supported
Sep  4 17:51:27.057446 systemd-networkd[1243]: lo: Link UP
Sep  4 17:51:27.057455 systemd-networkd[1243]: lo: Gained carrier
Sep  4 17:51:27.059466 systemd-networkd[1243]: Enumeration completed
Sep  4 17:51:27.059636 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep  4 17:51:27.059951 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 17:51:27.059955 systemd-networkd[1243]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep  4 17:51:27.061022 systemd-networkd[1243]: eth0: Link UP
Sep  4 17:51:27.061032 systemd-networkd[1243]: eth0: Gained carrier
Sep  4 17:51:27.061045 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 17:51:27.064815 kernel: EDAC MC: Ver: 3.0.0
Sep  4 17:51:27.069940 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep  4 17:51:27.090853 systemd-networkd[1243]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep  4 17:51:27.108317 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep  4 17:51:27.120907 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep  4 17:51:27.129459 lvm[1283]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep  4 17:51:27.158845 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep  4 17:51:27.160454 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep  4 17:51:27.170918 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep  4 17:51:27.176368 lvm[1286]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep  4 17:51:27.210032 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep  4 17:51:27.211592 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep  4 17:51:27.212880 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep  4 17:51:27.212906 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep  4 17:51:27.213958 systemd[1]: Reached target machines.target - Containers.
Sep  4 17:51:27.215981 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep  4 17:51:27.227991 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep  4 17:51:27.230559 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep  4 17:51:27.231713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 17:51:27.232691 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep  4 17:51:27.235022 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep  4 17:51:27.240035 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep  4 17:51:27.242509 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep  4 17:51:27.255116 kernel: loop0: detected capacity change from 0 to 89336
Sep  4 17:51:27.258676 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep  4 17:51:27.265002 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep  4 17:51:27.265865 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep  4 17:51:27.275814 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep  4 17:51:27.300820 kernel: loop1: detected capacity change from 0 to 209816
Sep  4 17:51:27.332808 kernel: loop2: detected capacity change from 0 to 140728
Sep  4 17:51:27.367818 kernel: loop3: detected capacity change from 0 to 89336
Sep  4 17:51:27.374812 kernel: loop4: detected capacity change from 0 to 209816
Sep  4 17:51:27.381812 kernel: loop5: detected capacity change from 0 to 140728
Sep  4 17:51:27.390553 (sd-merge)[1310]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep  4 17:51:27.391138 (sd-merge)[1310]: Merged extensions into '/usr'.
Sep  4 17:51:27.395100 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)...
Sep  4 17:51:27.395119 systemd[1]: Reloading...
Sep  4 17:51:27.439817 zram_generator::config[1336]: No configuration found.
Sep  4 17:51:27.484319 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep  4 17:51:27.565740 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 17:51:27.628406 systemd[1]: Reloading finished in 232 ms.
Sep  4 17:51:27.649687 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep  4 17:51:27.651303 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep  4 17:51:27.661910 systemd[1]: Starting ensure-sysext.service...
Sep  4 17:51:27.663942 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep  4 17:51:27.668106 systemd[1]: Reloading requested from client PID 1380 ('systemctl') (unit ensure-sysext.service)...
Sep  4 17:51:27.668119 systemd[1]: Reloading...
Sep  4 17:51:27.687775 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep  4 17:51:27.688182 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep  4 17:51:27.689171 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep  4 17:51:27.689472 systemd-tmpfiles[1391]: ACLs are not supported, ignoring.
Sep  4 17:51:27.689552 systemd-tmpfiles[1391]: ACLs are not supported, ignoring.
Sep  4 17:51:27.694439 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot.
Sep  4 17:51:27.694451 systemd-tmpfiles[1391]: Skipping /boot
Sep  4 17:51:27.706876 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot.
Sep  4 17:51:27.706959 systemd-tmpfiles[1391]: Skipping /boot
Sep  4 17:51:27.712823 zram_generator::config[1420]: No configuration found.
Sep  4 17:51:27.816995 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 17:51:27.880253 systemd[1]: Reloading finished in 211 ms.
Sep  4 17:51:27.903436 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep  4 17:51:27.920139 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep  4 17:51:27.922628 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep  4 17:51:27.925038 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep  4 17:51:27.928504 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep  4 17:51:27.931959 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep  4 17:51:27.937286 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 17:51:27.937919 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 17:51:27.941027 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep  4 17:51:27.944100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep  4 17:51:27.947668 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep  4 17:51:27.948883 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 17:51:27.948984 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 17:51:27.950047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep  4 17:51:27.950313 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep  4 17:51:27.957412 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 17:51:27.957621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 17:51:27.960001 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep  4 17:51:27.961857 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 17:51:27.961947 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 17:51:27.962675 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep  4 17:51:27.967566 augenrules[1491]: No rules
Sep  4 17:51:27.966437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep  4 17:51:27.966644 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep  4 17:51:27.969538 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep  4 17:51:27.971697 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep  4 17:51:27.973754 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep  4 17:51:27.973981 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep  4 17:51:27.976049 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep  4 17:51:27.976327 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep  4 17:51:27.983963 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep  4 17:51:27.984217 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep  4 17:51:27.992110 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep  4 17:51:27.996584 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 17:51:27.997067 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 17:51:28.000983 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep  4 17:51:28.004397 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep  4 17:51:28.008070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep  4 17:51:28.013108 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep  4 17:51:28.013779 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 17:51:28.014106 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 17:51:28.015843 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep  4 17:51:28.020451 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep  4 17:51:28.022094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep  4 17:51:28.022321 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep  4 17:51:28.024063 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep  4 17:51:28.024401 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep  4 17:51:28.026091 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep  4 17:51:28.026310 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep  4 17:51:28.028180 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep  4 17:51:28.028414 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep  4 17:51:28.031732 systemd[1]: Finished ensure-sysext.service.
Sep  4 17:51:28.035414 systemd-resolved[1466]: Positive Trust Anchors:
Sep  4 17:51:28.035431 systemd-resolved[1466]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep  4 17:51:28.035462 systemd-resolved[1466]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep  4 17:51:28.038988 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep  4 17:51:28.038989 systemd-resolved[1466]: Defaulting to hostname 'linux'.
Sep  4 17:51:28.039037 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep  4 17:51:28.049908 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep  4 17:51:28.051078 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep  4 17:51:28.051176 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep  4 17:51:28.052416 systemd[1]: Reached target network.target - Network.
Sep  4 17:51:28.053341 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep  4 17:51:28.102919 systemd-networkd[1243]: eth0: Gained IPv6LL
Sep  4 17:51:28.105844 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep  4 17:51:28.107314 systemd[1]: Reached target network-online.target - Network is Online.
Sep  4 17:51:28.116506 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep  4 17:51:28.117884 systemd[1]: Reached target sysinit.target - System Initialization.
Sep  4 17:51:28.119059 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep  4 17:51:28.120347 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep  4 17:51:29.154905 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep  4 17:51:29.154957 systemd-resolved[1466]: Clock change detected. Flushing caches.
Sep  4 17:51:29.156161 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep  4 17:51:29.156190 systemd[1]: Reached target paths.target - Path Units.
Sep  4 17:51:29.157094 systemd-timesyncd[1527]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep  4 17:51:29.157096 systemd[1]: Reached target time-set.target - System Time Set.
Sep  4 17:51:29.158090 systemd-timesyncd[1527]: Initial clock synchronization to Wed 2024-09-04 17:51:29.154898 UTC.
Sep  4 17:51:29.158297 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep  4 17:51:29.159506 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep  4 17:51:29.160855 systemd[1]: Reached target timers.target - Timer Units.
Sep  4 17:51:29.162598 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep  4 17:51:29.165606 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep  4 17:51:29.167821 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep  4 17:51:29.170208 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep  4 17:51:29.171331 systemd[1]: Reached target sockets.target - Socket Units.
Sep  4 17:51:29.172315 systemd[1]: Reached target basic.target - Basic System.
Sep  4 17:51:29.173426 systemd[1]: System is tainted: cgroupsv1
Sep  4 17:51:29.173462 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep  4 17:51:29.173483 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep  4 17:51:29.174648 systemd[1]: Starting containerd.service - containerd container runtime...
Sep  4 17:51:29.176769 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep  4 17:51:29.178903 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep  4 17:51:29.182189 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep  4 17:51:29.185698 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep  4 17:51:29.186770 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep  4 17:51:29.190714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:51:29.191463 jq[1537]: false
Sep  4 17:51:29.193538 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep  4 17:51:29.198227 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep  4 17:51:29.200965 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep  4 17:51:29.207417 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep  4 17:51:29.211320 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep  4 17:51:29.213538 extend-filesystems[1539]: Found loop3
Sep  4 17:51:29.213538 extend-filesystems[1539]: Found loop4
Sep  4 17:51:29.213538 extend-filesystems[1539]: Found loop5
Sep  4 17:51:29.213538 extend-filesystems[1539]: Found sr0
Sep  4 17:51:29.213538 extend-filesystems[1539]: Found vda
Sep  4 17:51:29.213538 extend-filesystems[1539]: Found vda1
Sep  4 17:51:29.213538 extend-filesystems[1539]: Found vda2
Sep  4 17:51:29.213538 extend-filesystems[1539]: Found vda3
Sep  4 17:51:29.213538 extend-filesystems[1539]: Found usr
Sep  4 17:51:29.213538 extend-filesystems[1539]: Found vda4
Sep  4 17:51:29.213538 extend-filesystems[1539]: Found vda6
Sep  4 17:51:29.213538 extend-filesystems[1539]: Found vda7
Sep  4 17:51:29.213538 extend-filesystems[1539]: Found vda9
Sep  4 17:51:29.213538 extend-filesystems[1539]: Checking size of /dev/vda9
Sep  4 17:51:29.224820 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep  4 17:51:29.226320 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep  4 17:51:29.228699 systemd[1]: Starting update-engine.service - Update Engine...
Sep  4 17:51:29.231137 extend-filesystems[1539]: Resized partition /dev/vda9
Sep  4 17:51:29.233729 dbus-daemon[1535]: [system] SELinux support is enabled
Sep  4 17:51:29.237812 extend-filesystems[1569]: resize2fs 1.47.1 (20-May-2024)
Sep  4 17:51:29.246452 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep  4 17:51:29.238242 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep  4 17:51:29.247628 jq[1568]: true
Sep  4 17:51:29.240705 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep  4 17:51:29.246912 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep  4 17:51:29.247239 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep  4 17:51:29.249083 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1241)
Sep  4 17:51:29.256907 update_engine[1567]: I0904 17:51:29.256847  1567 main.cc:92] Flatcar Update Engine starting
Sep  4 17:51:29.262797 update_engine[1567]: I0904 17:51:29.259179  1567 update_check_scheduler.cc:74] Next update check in 3m31s
Sep  4 17:51:29.260612 systemd[1]: motdgen.service: Deactivated successfully.
Sep  4 17:51:29.260931 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep  4 17:51:29.263478 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep  4 17:51:29.267602 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep  4 17:51:29.267902 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep  4 17:51:29.289069 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep  4 17:51:29.292655 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep  4 17:51:29.319424 tar[1579]: linux-amd64/helm
Sep  4 17:51:29.292981 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep  4 17:51:29.293895 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep  4 17:51:29.320071 jq[1582]: true
Sep  4 17:51:29.323107 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep  4 17:51:29.323107 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1
Sep  4 17:51:29.323107 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep  4 17:51:29.330415 extend-filesystems[1539]: Resized filesystem in /dev/vda9
Sep  4 17:51:29.324158 systemd-logind[1557]: Watching system buttons on /dev/input/event1 (Power Button)
Sep  4 17:51:29.324179 systemd-logind[1557]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep  4 17:51:29.325371 systemd-logind[1557]: New seat seat0.
Sep  4 17:51:29.327878 systemd[1]: Started systemd-logind.service - User Login Management.
Sep  4 17:51:29.331938 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep  4 17:51:29.332298 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep  4 17:51:29.343900 systemd[1]: Started update-engine.service - Update Engine.
Sep  4 17:51:29.348953 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep  4 17:51:29.349144 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep  4 17:51:29.349263 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep  4 17:51:29.351823 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep  4 17:51:29.351932 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep  4 17:51:29.354596 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep  4 17:51:29.366457 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep  4 17:51:29.389150 bash[1618]: Updated "/home/core/.ssh/authorized_keys"
Sep  4 17:51:29.395322 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep  4 17:51:29.398483 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep  4 17:51:29.413380 locksmithd[1619]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep  4 17:51:29.523800 containerd[1583]: time="2024-09-04T17:51:29.523234478Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20
Sep  4 17:51:29.547205 containerd[1583]: time="2024-09-04T17:51:29.546921935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep  4 17:51:29.549450 containerd[1583]: time="2024-09-04T17:51:29.549425593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep  4 17:51:29.550169 containerd[1583]: time="2024-09-04T17:51:29.549493611Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep  4 17:51:29.550169 containerd[1583]: time="2024-09-04T17:51:29.549510974Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep  4 17:51:29.550169 containerd[1583]: time="2024-09-04T17:51:29.549683187Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep  4 17:51:29.550169 containerd[1583]: time="2024-09-04T17:51:29.549698005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep  4 17:51:29.550169 containerd[1583]: time="2024-09-04T17:51:29.549762716Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep  4 17:51:29.550169 containerd[1583]: time="2024-09-04T17:51:29.549773917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep  4 17:51:29.550169 containerd[1583]: time="2024-09-04T17:51:29.550020851Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep  4 17:51:29.550169 containerd[1583]: time="2024-09-04T17:51:29.550038424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep  4 17:51:29.550169 containerd[1583]: time="2024-09-04T17:51:29.550074040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep  4 17:51:29.550169 containerd[1583]: time="2024-09-04T17:51:29.550084029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep  4 17:51:29.550457 containerd[1583]: time="2024-09-04T17:51:29.550440839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep  4 17:51:29.550739 containerd[1583]: time="2024-09-04T17:51:29.550723449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep  4 17:51:29.550942 containerd[1583]: time="2024-09-04T17:51:29.550926981Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep  4 17:51:29.550987 containerd[1583]: time="2024-09-04T17:51:29.550976755Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep  4 17:51:29.551171 containerd[1583]: time="2024-09-04T17:51:29.551143718Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep  4 17:51:29.551282 containerd[1583]: time="2024-09-04T17:51:29.551269414Z" level=info msg="metadata content store policy set" policy=shared
Sep  4 17:51:29.557194 containerd[1583]: time="2024-09-04T17:51:29.557165459Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep  4 17:51:29.557277 containerd[1583]: time="2024-09-04T17:51:29.557265327Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep  4 17:51:29.557368 containerd[1583]: time="2024-09-04T17:51:29.557356258Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep  4 17:51:29.557434 containerd[1583]: time="2024-09-04T17:51:29.557422482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep  4 17:51:29.557493 containerd[1583]: time="2024-09-04T17:51:29.557482014Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep  4 17:51:29.557654 containerd[1583]: time="2024-09-04T17:51:29.557639930Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep  4 17:51:29.558151 containerd[1583]: time="2024-09-04T17:51:29.558135240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep  4 17:51:29.558311 containerd[1583]: time="2024-09-04T17:51:29.558296753Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep  4 17:51:29.558363 containerd[1583]: time="2024-09-04T17:51:29.558352127Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep  4 17:51:29.558418 containerd[1583]: time="2024-09-04T17:51:29.558406869Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep  4 17:51:29.558464 containerd[1583]: time="2024-09-04T17:51:29.558454108Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep  4 17:51:29.558507 containerd[1583]: time="2024-09-04T17:51:29.558497359Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep  4 17:51:29.558565 containerd[1583]: time="2024-09-04T17:51:29.558552783Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep  4 17:51:29.558612 containerd[1583]: time="2024-09-04T17:51:29.558602236Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep  4 17:51:29.558659 containerd[1583]: time="2024-09-04T17:51:29.558648773Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep  4 17:51:29.558703 containerd[1583]: time="2024-09-04T17:51:29.558692956Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep  4 17:51:29.558746 containerd[1583]: time="2024-09-04T17:51:29.558736398Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep  4 17:51:29.558789 containerd[1583]: time="2024-09-04T17:51:29.558779579Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep  4 17:51:29.558853 containerd[1583]: time="2024-09-04T17:51:29.558841405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.558903 containerd[1583]: time="2024-09-04T17:51:29.558893583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.558959 containerd[1583]: time="2024-09-04T17:51:29.558948095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.559005 containerd[1583]: time="2024-09-04T17:51:29.558995324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.559081 containerd[1583]: time="2024-09-04T17:51:29.559068100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.559132 containerd[1583]: time="2024-09-04T17:51:29.559121571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.559206 containerd[1583]: time="2024-09-04T17:51:29.559193395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.559269 containerd[1583]: time="2024-09-04T17:51:29.559258157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.559316 containerd[1583]: time="2024-09-04T17:51:29.559305867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.559364 containerd[1583]: time="2024-09-04T17:51:29.559354388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.559417 containerd[1583]: time="2024-09-04T17:51:29.559405393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.559476 containerd[1583]: time="2024-09-04T17:51:29.559464384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.559521 containerd[1583]: time="2024-09-04T17:51:29.559511643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.559572 containerd[1583]: time="2024-09-04T17:51:29.559561496Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep  4 17:51:29.559624 containerd[1583]: time="2024-09-04T17:51:29.559614005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.559673 containerd[1583]: time="2024-09-04T17:51:29.559663337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.559717 containerd[1583]: time="2024-09-04T17:51:29.559707581Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep  4 17:51:29.559811 containerd[1583]: time="2024-09-04T17:51:29.559798792Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep  4 17:51:29.559894 containerd[1583]: time="2024-09-04T17:51:29.559879914Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep  4 17:51:29.559987 containerd[1583]: time="2024-09-04T17:51:29.559975964Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep  4 17:51:29.560038 containerd[1583]: time="2024-09-04T17:51:29.560026409Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep  4 17:51:29.560930 containerd[1583]: time="2024-09-04T17:51:29.560240010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.560930 containerd[1583]: time="2024-09-04T17:51:29.560265027Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep  4 17:51:29.560930 containerd[1583]: time="2024-09-04T17:51:29.560282970Z" level=info msg="NRI interface is disabled by configuration."
Sep  4 17:51:29.560930 containerd[1583]: time="2024-09-04T17:51:29.560295674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep  4 17:51:29.561022 containerd[1583]: time="2024-09-04T17:51:29.560606057Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep  4 17:51:29.561022 containerd[1583]: time="2024-09-04T17:51:29.560654738Z" level=info msg="Connect containerd service"
Sep  4 17:51:29.561022 containerd[1583]: time="2024-09-04T17:51:29.560675297Z" level=info msg="using legacy CRI server"
Sep  4 17:51:29.561022 containerd[1583]: time="2024-09-04T17:51:29.560682009Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep  4 17:51:29.561022 containerd[1583]: time="2024-09-04T17:51:29.560765827Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep  4 17:51:29.563314 containerd[1583]: time="2024-09-04T17:51:29.563287309Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep  4 17:51:29.563876 containerd[1583]: time="2024-09-04T17:51:29.563860545Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep  4 17:51:29.563997 containerd[1583]: time="2024-09-04T17:51:29.563982514Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep  4 17:51:29.564163 containerd[1583]: time="2024-09-04T17:51:29.564075398Z" level=info msg="Start subscribing containerd event"
Sep  4 17:51:29.564226 containerd[1583]: time="2024-09-04T17:51:29.564215301Z" level=info msg="Start recovering state"
Sep  4 17:51:29.564561 containerd[1583]: time="2024-09-04T17:51:29.564545941Z" level=info msg="Start event monitor"
Sep  4 17:51:29.564805 containerd[1583]: time="2024-09-04T17:51:29.564790710Z" level=info msg="Start snapshots syncer"
Sep  4 17:51:29.565841 sshd_keygen[1577]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep  4 17:51:29.566125 containerd[1583]: time="2024-09-04T17:51:29.566110537Z" level=info msg="Start cni network conf syncer for default"
Sep  4 17:51:29.566183 containerd[1583]: time="2024-09-04T17:51:29.566170871Z" level=info msg="Start streaming server"
Sep  4 17:51:29.566363 systemd[1]: Started containerd.service - containerd container runtime.
Sep  4 17:51:29.566748 containerd[1583]: time="2024-09-04T17:51:29.566734298Z" level=info msg="containerd successfully booted in 0.045112s"
Sep  4 17:51:29.591168 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep  4 17:51:29.598233 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep  4 17:51:29.606738 systemd[1]: issuegen.service: Deactivated successfully.
Sep  4 17:51:29.607609 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep  4 17:51:29.610887 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep  4 17:51:29.624281 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep  4 17:51:29.639377 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep  4 17:51:29.641880 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep  4 17:51:29.643247 systemd[1]: Reached target getty.target - Login Prompts.
Sep  4 17:51:29.718034 tar[1579]: linux-amd64/LICENSE
Sep  4 17:51:29.718342 tar[1579]: linux-amd64/README.md
Sep  4 17:51:29.730588 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep  4 17:51:29.980094 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:51:29.981715 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep  4 17:51:29.982928 systemd[1]: Startup finished in 6.129s (kernel) + 3.923s (userspace) = 10.053s.
Sep  4 17:51:29.985804 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep  4 17:51:30.456673 kubelet[1669]: E0904 17:51:30.456474    1669 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep  4 17:51:30.460632 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep  4 17:51:30.460897 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep  4 17:51:38.582749 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep  4 17:51:38.595380 systemd[1]: Started sshd@0-10.0.0.147:22-10.0.0.1:50666.service - OpenSSH per-connection server daemon (10.0.0.1:50666).
Sep  4 17:51:38.631075 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 50666 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:51:38.633026 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:51:38.641752 systemd-logind[1557]: New session 1 of user core.
Sep  4 17:51:38.642869 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep  4 17:51:38.655387 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep  4 17:51:38.669250 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep  4 17:51:38.679405 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep  4 17:51:38.682767 (systemd)[1689]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:51:38.808446 systemd[1689]: Queued start job for default target default.target.
Sep  4 17:51:38.808847 systemd[1689]: Created slice app.slice - User Application Slice.
Sep  4 17:51:38.808867 systemd[1689]: Reached target paths.target - Paths.
Sep  4 17:51:38.808881 systemd[1689]: Reached target timers.target - Timers.
Sep  4 17:51:38.816143 systemd[1689]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep  4 17:51:38.822504 systemd[1689]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep  4 17:51:38.822575 systemd[1689]: Reached target sockets.target - Sockets.
Sep  4 17:51:38.822590 systemd[1689]: Reached target basic.target - Basic System.
Sep  4 17:51:38.822628 systemd[1689]: Reached target default.target - Main User Target.
Sep  4 17:51:38.822662 systemd[1689]: Startup finished in 133ms.
Sep  4 17:51:38.823571 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep  4 17:51:38.825346 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep  4 17:51:38.891275 systemd[1]: Started sshd@1-10.0.0.147:22-10.0.0.1:50674.service - OpenSSH per-connection server daemon (10.0.0.1:50674).
Sep  4 17:51:38.921074 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 50674 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:51:38.922632 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:51:38.926740 systemd-logind[1557]: New session 2 of user core.
Sep  4 17:51:38.940305 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep  4 17:51:38.993695 sshd[1701]: pam_unix(sshd:session): session closed for user core
Sep  4 17:51:39.002287 systemd[1]: Started sshd@2-10.0.0.147:22-10.0.0.1:50690.service - OpenSSH per-connection server daemon (10.0.0.1:50690).
Sep  4 17:51:39.002739 systemd[1]: sshd@1-10.0.0.147:22-10.0.0.1:50674.service: Deactivated successfully.
Sep  4 17:51:39.005181 systemd-logind[1557]: Session 2 logged out. Waiting for processes to exit.
Sep  4 17:51:39.006137 systemd[1]: session-2.scope: Deactivated successfully.
Sep  4 17:51:39.006955 systemd-logind[1557]: Removed session 2.
Sep  4 17:51:39.027466 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 50690 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:51:39.028843 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:51:39.032816 systemd-logind[1557]: New session 3 of user core.
Sep  4 17:51:39.042283 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep  4 17:51:39.091163 sshd[1706]: pam_unix(sshd:session): session closed for user core
Sep  4 17:51:39.103285 systemd[1]: Started sshd@3-10.0.0.147:22-10.0.0.1:50702.service - OpenSSH per-connection server daemon (10.0.0.1:50702).
Sep  4 17:51:39.103751 systemd[1]: sshd@2-10.0.0.147:22-10.0.0.1:50690.service: Deactivated successfully.
Sep  4 17:51:39.106249 systemd-logind[1557]: Session 3 logged out. Waiting for processes to exit.
Sep  4 17:51:39.107166 systemd[1]: session-3.scope: Deactivated successfully.
Sep  4 17:51:39.108328 systemd-logind[1557]: Removed session 3.
Sep  4 17:51:39.128650 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 50702 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:51:39.130103 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:51:39.133848 systemd-logind[1557]: New session 4 of user core.
Sep  4 17:51:39.143293 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep  4 17:51:39.195358 sshd[1714]: pam_unix(sshd:session): session closed for user core
Sep  4 17:51:39.212268 systemd[1]: Started sshd@4-10.0.0.147:22-10.0.0.1:50708.service - OpenSSH per-connection server daemon (10.0.0.1:50708).
Sep  4 17:51:39.212717 systemd[1]: sshd@3-10.0.0.147:22-10.0.0.1:50702.service: Deactivated successfully.
Sep  4 17:51:39.214979 systemd-logind[1557]: Session 4 logged out. Waiting for processes to exit.
Sep  4 17:51:39.215890 systemd[1]: session-4.scope: Deactivated successfully.
Sep  4 17:51:39.217035 systemd-logind[1557]: Removed session 4.
Sep  4 17:51:39.238284 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 50708 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:51:39.239843 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:51:39.243821 systemd-logind[1557]: New session 5 of user core.
Sep  4 17:51:39.253286 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep  4 17:51:39.311565 sudo[1729]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep  4 17:51:39.311915 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep  4 17:51:39.334652 sudo[1729]: pam_unix(sudo:session): session closed for user root
Sep  4 17:51:39.336757 sshd[1722]: pam_unix(sshd:session): session closed for user core
Sep  4 17:51:39.346338 systemd[1]: Started sshd@5-10.0.0.147:22-10.0.0.1:50712.service - OpenSSH per-connection server daemon (10.0.0.1:50712).
Sep  4 17:51:39.346848 systemd[1]: sshd@4-10.0.0.147:22-10.0.0.1:50708.service: Deactivated successfully.
Sep  4 17:51:39.349683 systemd-logind[1557]: Session 5 logged out. Waiting for processes to exit.
Sep  4 17:51:39.350484 systemd[1]: session-5.scope: Deactivated successfully.
Sep  4 17:51:39.351516 systemd-logind[1557]: Removed session 5.
Sep  4 17:51:39.371682 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 50712 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:51:39.373187 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:51:39.377027 systemd-logind[1557]: New session 6 of user core.
Sep  4 17:51:39.387306 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep  4 17:51:39.441506 sudo[1739]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep  4 17:51:39.441887 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep  4 17:51:39.445495 sudo[1739]: pam_unix(sudo:session): session closed for user root
Sep  4 17:51:39.451579 sudo[1738]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep  4 17:51:39.451910 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep  4 17:51:39.472252 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep  4 17:51:39.474012 auditctl[1742]: No rules
Sep  4 17:51:39.475413 systemd[1]: audit-rules.service: Deactivated successfully.
Sep  4 17:51:39.475742 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep  4 17:51:39.477538 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep  4 17:51:39.506971 augenrules[1761]: No rules
Sep  4 17:51:39.508970 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep  4 17:51:39.510307 sudo[1738]: pam_unix(sudo:session): session closed for user root
Sep  4 17:51:39.512131 sshd[1731]: pam_unix(sshd:session): session closed for user core
Sep  4 17:51:39.520299 systemd[1]: Started sshd@6-10.0.0.147:22-10.0.0.1:50720.service - OpenSSH per-connection server daemon (10.0.0.1:50720).
Sep  4 17:51:39.520797 systemd[1]: sshd@5-10.0.0.147:22-10.0.0.1:50712.service: Deactivated successfully.
Sep  4 17:51:39.523322 systemd-logind[1557]: Session 6 logged out. Waiting for processes to exit.
Sep  4 17:51:39.524679 systemd[1]: session-6.scope: Deactivated successfully.
Sep  4 17:51:39.525613 systemd-logind[1557]: Removed session 6.
Sep  4 17:51:39.549186 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 50720 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:51:39.550730 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:51:39.554572 systemd-logind[1557]: New session 7 of user core.
Sep  4 17:51:39.565374 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep  4 17:51:39.618381 sudo[1774]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep  4 17:51:39.618757 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep  4 17:51:39.728269 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep  4 17:51:39.728552 (dockerd)[1784]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep  4 17:51:40.003416 dockerd[1784]: time="2024-09-04T17:51:40.003255520Z" level=info msg="Starting up"
Sep  4 17:51:40.559371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep  4 17:51:40.565232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:51:40.603024 dockerd[1784]: time="2024-09-04T17:51:40.602966669Z" level=info msg="Loading containers: start."
Sep  4 17:51:40.704014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:51:40.708831 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep  4 17:51:40.754359 kubelet[1820]: E0904 17:51:40.754297    1820 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep  4 17:51:40.762254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep  4 17:51:40.762506 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep  4 17:51:40.912087 kernel: Initializing XFRM netlink socket
Sep  4 17:51:40.991813 systemd-networkd[1243]: docker0: Link UP
Sep  4 17:51:41.018846 dockerd[1784]: time="2024-09-04T17:51:41.018787175Z" level=info msg="Loading containers: done."
Sep  4 17:51:41.034888 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4189427425-merged.mount: Deactivated successfully.
Sep  4 17:51:41.037483 dockerd[1784]: time="2024-09-04T17:51:41.037440333Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep  4 17:51:41.037572 dockerd[1784]: time="2024-09-04T17:51:41.037550810Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep  4 17:51:41.037694 dockerd[1784]: time="2024-09-04T17:51:41.037673029Z" level=info msg="Daemon has completed initialization"
Sep  4 17:51:41.075615 dockerd[1784]: time="2024-09-04T17:51:41.075538224Z" level=info msg="API listen on /run/docker.sock"
Sep  4 17:51:41.076433 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep  4 17:51:41.706490 containerd[1583]: time="2024-09-04T17:51:41.706446504Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\""
Sep  4 17:51:42.377394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2441578127.mount: Deactivated successfully.
Sep  4 17:51:43.379985 containerd[1583]: time="2024-09-04T17:51:43.379924613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:43.380547 containerd[1583]: time="2024-09-04T17:51:43.380499342Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=34530735"
Sep  4 17:51:43.381646 containerd[1583]: time="2024-09-04T17:51:43.381615948Z" level=info msg="ImageCreate event name:\"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:43.384239 containerd[1583]: time="2024-09-04T17:51:43.384181553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:43.385365 containerd[1583]: time="2024-09-04T17:51:43.385333645Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"34527535\" in 1.678837338s"
Sep  4 17:51:43.385403 containerd[1583]: time="2024-09-04T17:51:43.385370043Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\""
Sep  4 17:51:43.407341 containerd[1583]: time="2024-09-04T17:51:43.407315520Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\""
Sep  4 17:51:44.795012 containerd[1583]: time="2024-09-04T17:51:44.794947037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:44.795746 containerd[1583]: time="2024-09-04T17:51:44.795679952Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=31849709"
Sep  4 17:51:44.796908 containerd[1583]: time="2024-09-04T17:51:44.796862692Z" level=info msg="ImageCreate event name:\"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:44.799678 containerd[1583]: time="2024-09-04T17:51:44.799649072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:44.800566 containerd[1583]: time="2024-09-04T17:51:44.800531307Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"33399655\" in 1.393188005s"
Sep  4 17:51:44.800566 containerd[1583]: time="2024-09-04T17:51:44.800563588Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\""
Sep  4 17:51:44.823357 containerd[1583]: time="2024-09-04T17:51:44.823305459Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\""
Sep  4 17:51:45.962160 containerd[1583]: time="2024-09-04T17:51:45.962099311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:45.962739 containerd[1583]: time="2024-09-04T17:51:45.962696732Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=17097777"
Sep  4 17:51:45.963844 containerd[1583]: time="2024-09-04T17:51:45.963816474Z" level=info msg="ImageCreate event name:\"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:45.966445 containerd[1583]: time="2024-09-04T17:51:45.966405653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:45.967636 containerd[1583]: time="2024-09-04T17:51:45.967592861Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"18647741\" in 1.144249952s"
Sep  4 17:51:45.967673 containerd[1583]: time="2024-09-04T17:51:45.967636884Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\""
Sep  4 17:51:45.991506 containerd[1583]: time="2024-09-04T17:51:45.991460185Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\""
Sep  4 17:51:46.927893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851925424.mount: Deactivated successfully.
Sep  4 17:51:47.947318 containerd[1583]: time="2024-09-04T17:51:47.947250785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:47.948107 containerd[1583]: time="2024-09-04T17:51:47.948071926Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=28303449"
Sep  4 17:51:47.949357 containerd[1583]: time="2024-09-04T17:51:47.949330939Z" level=info msg="ImageCreate event name:\"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:47.951512 containerd[1583]: time="2024-09-04T17:51:47.951461067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:47.951919 containerd[1583]: time="2024-09-04T17:51:47.951889060Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"28302468\" in 1.960387386s"
Sep  4 17:51:47.951966 containerd[1583]: time="2024-09-04T17:51:47.951919006Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\""
Sep  4 17:51:47.976655 containerd[1583]: time="2024-09-04T17:51:47.976624513Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep  4 17:51:48.485375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3440046005.mount: Deactivated successfully.
Sep  4 17:51:48.490994 containerd[1583]: time="2024-09-04T17:51:48.490948609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:48.491778 containerd[1583]: time="2024-09-04T17:51:48.491707874Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Sep  4 17:51:48.493038 containerd[1583]: time="2024-09-04T17:51:48.493003065Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:48.495556 containerd[1583]: time="2024-09-04T17:51:48.495516492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:48.496136 containerd[1583]: time="2024-09-04T17:51:48.496099035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 519.445678ms"
Sep  4 17:51:48.496136 containerd[1583]: time="2024-09-04T17:51:48.496130294Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Sep  4 17:51:48.524291 containerd[1583]: time="2024-09-04T17:51:48.524225953Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Sep  4 17:51:49.080832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2614461785.mount: Deactivated successfully.
Sep  4 17:51:50.647488 containerd[1583]: time="2024-09-04T17:51:50.647426345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:50.648232 containerd[1583]: time="2024-09-04T17:51:50.648186642Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Sep  4 17:51:50.649334 containerd[1583]: time="2024-09-04T17:51:50.649305051Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:50.651999 containerd[1583]: time="2024-09-04T17:51:50.651949915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:50.653092 containerd[1583]: time="2024-09-04T17:51:50.653061992Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.128774072s"
Sep  4 17:51:50.653134 containerd[1583]: time="2024-09-04T17:51:50.653093140Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Sep  4 17:51:50.673278 containerd[1583]: time="2024-09-04T17:51:50.673255691Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Sep  4 17:51:51.012678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep  4 17:51:51.025185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:51:51.167097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:51:51.171402 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep  4 17:51:51.431172 kubelet[2135]: E0904 17:51:51.430999    2135 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep  4 17:51:51.435997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep  4 17:51:51.436272 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep  4 17:51:52.808613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476170965.mount: Deactivated successfully.
Sep  4 17:51:53.071108 containerd[1583]: time="2024-09-04T17:51:53.070958415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:53.071740 containerd[1583]: time="2024-09-04T17:51:53.071678256Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Sep  4 17:51:53.072740 containerd[1583]: time="2024-09-04T17:51:53.072709792Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:53.075003 containerd[1583]: time="2024-09-04T17:51:53.074956969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:51:53.075746 containerd[1583]: time="2024-09-04T17:51:53.075704122Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 2.402421249s"
Sep  4 17:51:53.075746 containerd[1583]: time="2024-09-04T17:51:53.075740470Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Sep  4 17:51:55.169939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:51:55.185246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:51:55.201256 systemd[1]: Reloading requested from client PID 2232 ('systemctl') (unit session-7.scope)...
Sep  4 17:51:55.201272 systemd[1]: Reloading...
Sep  4 17:51:55.282077 zram_generator::config[2269]: No configuration found.
Sep  4 17:51:55.570636 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 17:51:55.641685 systemd[1]: Reloading finished in 440 ms.
Sep  4 17:51:55.685093 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep  4 17:51:55.685196 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep  4 17:51:55.685610 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:51:55.699412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:51:55.831413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:51:55.835748 (kubelet)[2329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep  4 17:51:55.877103 kubelet[2329]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 17:51:55.877103 kubelet[2329]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep  4 17:51:55.877103 kubelet[2329]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 17:51:55.877505 kubelet[2329]: I0904 17:51:55.877146    2329 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep  4 17:51:56.568563 kubelet[2329]: I0904 17:51:56.568531    2329 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Sep  4 17:51:56.568563 kubelet[2329]: I0904 17:51:56.568557    2329 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep  4 17:51:56.568810 kubelet[2329]: I0904 17:51:56.568790    2329 server.go:895] "Client rotation is on, will bootstrap in background"
Sep  4 17:51:56.581658 kubelet[2329]: I0904 17:51:56.581625    2329 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep  4 17:51:56.583030 kubelet[2329]: E0904 17:51:56.583003    2329 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:56.592715 kubelet[2329]: I0904 17:51:56.592686    2329 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Sep  4 17:51:56.593754 kubelet[2329]: I0904 17:51:56.593726    2329 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep  4 17:51:56.593893 kubelet[2329]: I0904 17:51:56.593873    2329 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep  4 17:51:56.594180 kubelet[2329]: I0904 17:51:56.594156    2329 topology_manager.go:138] "Creating topology manager with none policy"
Sep  4 17:51:56.594180 kubelet[2329]: I0904 17:51:56.594173    2329 container_manager_linux.go:301] "Creating device plugin manager"
Sep  4 17:51:56.594681 kubelet[2329]: I0904 17:51:56.594658    2329 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 17:51:56.596442 kubelet[2329]: I0904 17:51:56.596409    2329 kubelet.go:393] "Attempting to sync node with API server"
Sep  4 17:51:56.596442 kubelet[2329]: I0904 17:51:56.596427    2329 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep  4 17:51:56.596509 kubelet[2329]: I0904 17:51:56.596452    2329 kubelet.go:309] "Adding apiserver pod source"
Sep  4 17:51:56.596509 kubelet[2329]: I0904 17:51:56.596468    2329 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep  4 17:51:56.597076 kubelet[2329]: W0904 17:51:56.596984    2329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:56.597076 kubelet[2329]: E0904 17:51:56.597034    2329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:56.597427 kubelet[2329]: I0904 17:51:56.597406    2329 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1"
Sep  4 17:51:56.597792 kubelet[2329]: W0904 17:51:56.597752    2329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:56.597792 kubelet[2329]: E0904 17:51:56.597791    2329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:56.599433 kubelet[2329]: W0904 17:51:56.599414    2329 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep  4 17:51:56.600035 kubelet[2329]: I0904 17:51:56.599905    2329 server.go:1232] "Started kubelet"
Sep  4 17:51:56.600035 kubelet[2329]: I0904 17:51:56.599996    2329 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep  4 17:51:56.600035 kubelet[2329]: I0904 17:51:56.600019    2329 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep  4 17:51:56.600670 kubelet[2329]: I0904 17:51:56.600251    2329 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep  4 17:51:56.600806 kubelet[2329]: E0904 17:51:56.600789    2329 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Sep  4 17:51:56.600891 kubelet[2329]: E0904 17:51:56.600810    2329 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep  4 17:51:56.601231 kubelet[2329]: I0904 17:51:56.601203    2329 server.go:462] "Adding debug handlers to kubelet server"
Sep  4 17:51:56.601957 kubelet[2329]: E0904 17:51:56.601881    2329 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17f21bee14b4ea05", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 51, 56, 599888389, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 51, 56, 599888389, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.147:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.147:6443: connect: connection refused'(may retry after sleeping)
Sep  4 17:51:56.602308 kubelet[2329]: I0904 17:51:56.602289    2329 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep  4 17:51:56.603014 kubelet[2329]: E0904 17:51:56.602375    2329 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep  4 17:51:56.603014 kubelet[2329]: I0904 17:51:56.602386    2329 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep  4 17:51:56.603014 kubelet[2329]: I0904 17:51:56.602374    2329 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep  4 17:51:56.603014 kubelet[2329]: I0904 17:51:56.602644    2329 reconciler_new.go:29] "Reconciler: start to sync state"
Sep  4 17:51:56.603014 kubelet[2329]: W0904 17:51:56.602708    2329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:56.603014 kubelet[2329]: E0904 17:51:56.602738    2329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:56.603014 kubelet[2329]: E0904 17:51:56.602907    2329 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="200ms"
Sep  4 17:51:56.617430 kubelet[2329]: I0904 17:51:56.617397    2329 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep  4 17:51:56.618711 kubelet[2329]: I0904 17:51:56.618683    2329 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep  4 17:51:56.618764 kubelet[2329]: I0904 17:51:56.618714    2329 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep  4 17:51:56.618764 kubelet[2329]: I0904 17:51:56.618730    2329 kubelet.go:2303] "Starting kubelet main sync loop"
Sep  4 17:51:56.618816 kubelet[2329]: E0904 17:51:56.618773    2329 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep  4 17:51:56.625066 kubelet[2329]: W0904 17:51:56.623901    2329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:56.625066 kubelet[2329]: E0904 17:51:56.623943    2329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:56.650581 kubelet[2329]: I0904 17:51:56.650558    2329 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep  4 17:51:56.650581 kubelet[2329]: I0904 17:51:56.650573    2329 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep  4 17:51:56.650581 kubelet[2329]: I0904 17:51:56.650590    2329 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 17:51:56.703412 kubelet[2329]: I0904 17:51:56.703384    2329 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Sep  4 17:51:56.703672 kubelet[2329]: E0904 17:51:56.703658    2329 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost"
Sep  4 17:51:56.719880 kubelet[2329]: E0904 17:51:56.719849    2329 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep  4 17:51:56.803315 kubelet[2329]: E0904 17:51:56.803297    2329 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="400ms"
Sep  4 17:51:56.905368 kubelet[2329]: I0904 17:51:56.905306    2329 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Sep  4 17:51:56.905676 kubelet[2329]: E0904 17:51:56.905501    2329 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost"
Sep  4 17:51:56.920723 kubelet[2329]: E0904 17:51:56.920697    2329 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep  4 17:51:57.113772 kubelet[2329]: I0904 17:51:57.113740    2329 policy_none.go:49] "None policy: Start"
Sep  4 17:51:57.114309 kubelet[2329]: I0904 17:51:57.114293    2329 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep  4 17:51:57.114343 kubelet[2329]: I0904 17:51:57.114322    2329 state_mem.go:35] "Initializing new in-memory state store"
Sep  4 17:51:57.121464 kubelet[2329]: I0904 17:51:57.121440    2329 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep  4 17:51:57.122115 kubelet[2329]: I0904 17:51:57.121734    2329 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep  4 17:51:57.122873 kubelet[2329]: E0904 17:51:57.122655    2329 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep  4 17:51:57.204710 kubelet[2329]: E0904 17:51:57.204623    2329 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="800ms"
Sep  4 17:51:57.307152 kubelet[2329]: I0904 17:51:57.307125    2329 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Sep  4 17:51:57.307455 kubelet[2329]: E0904 17:51:57.307429    2329 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost"
Sep  4 17:51:57.321574 kubelet[2329]: I0904 17:51:57.321540    2329 topology_manager.go:215] "Topology Admit Handler" podUID="a9a3e852f1fb13a9c43d871839330c6e" podNamespace="kube-system" podName="kube-apiserver-localhost"
Sep  4 17:51:57.322463 kubelet[2329]: I0904 17:51:57.322444    2329 topology_manager.go:215] "Topology Admit Handler" podUID="f5bf8d52acd7337c82951a97b42c345d" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Sep  4 17:51:57.323169 kubelet[2329]: I0904 17:51:57.323151    2329 topology_manager.go:215] "Topology Admit Handler" podUID="cacd2a680dbc59f99275412e0ba6e38b" podNamespace="kube-system" podName="kube-scheduler-localhost"
Sep  4 17:51:57.405147 kubelet[2329]: I0904 17:51:57.405097    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:51:57.405147 kubelet[2329]: I0904 17:51:57.405148    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacd2a680dbc59f99275412e0ba6e38b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cacd2a680dbc59f99275412e0ba6e38b\") " pod="kube-system/kube-scheduler-localhost"
Sep  4 17:51:57.405293 kubelet[2329]: I0904 17:51:57.405178    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:51:57.405293 kubelet[2329]: I0904 17:51:57.405213    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:51:57.405293 kubelet[2329]: I0904 17:51:57.405230    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9a3e852f1fb13a9c43d871839330c6e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9a3e852f1fb13a9c43d871839330c6e\") " pod="kube-system/kube-apiserver-localhost"
Sep  4 17:51:57.405293 kubelet[2329]: I0904 17:51:57.405249    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9a3e852f1fb13a9c43d871839330c6e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9a3e852f1fb13a9c43d871839330c6e\") " pod="kube-system/kube-apiserver-localhost"
Sep  4 17:51:57.405293 kubelet[2329]: I0904 17:51:57.405266    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9a3e852f1fb13a9c43d871839330c6e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a9a3e852f1fb13a9c43d871839330c6e\") " pod="kube-system/kube-apiserver-localhost"
Sep  4 17:51:57.405452 kubelet[2329]: I0904 17:51:57.405284    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:51:57.405452 kubelet[2329]: I0904 17:51:57.405301    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:51:57.626915 kubelet[2329]: E0904 17:51:57.626803    2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:51:57.627435 containerd[1583]: time="2024-09-04T17:51:57.627395981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a9a3e852f1fb13a9c43d871839330c6e,Namespace:kube-system,Attempt:0,}"
Sep  4 17:51:57.628525 kubelet[2329]: E0904 17:51:57.628501    2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:51:57.628986 containerd[1583]: time="2024-09-04T17:51:57.628795277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f5bf8d52acd7337c82951a97b42c345d,Namespace:kube-system,Attempt:0,}"
Sep  4 17:51:57.629988 kubelet[2329]: E0904 17:51:57.629965    2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:51:57.630251 containerd[1583]: time="2024-09-04T17:51:57.630219560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cacd2a680dbc59f99275412e0ba6e38b,Namespace:kube-system,Attempt:0,}"
Sep  4 17:51:57.708009 kubelet[2329]: W0904 17:51:57.707960    2329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:57.708009 kubelet[2329]: E0904 17:51:57.708001    2329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:57.849803 kubelet[2329]: W0904 17:51:57.849742    2329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:57.849803 kubelet[2329]: E0904 17:51:57.849800    2329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:57.915652 kubelet[2329]: W0904 17:51:57.915502    2329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:57.915652 kubelet[2329]: E0904 17:51:57.915554    2329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:58.005207 kubelet[2329]: E0904 17:51:58.005166    2329 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="1.6s"
Sep  4 17:51:58.039700 kubelet[2329]: W0904 17:51:58.039616    2329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:58.039700 kubelet[2329]: E0904 17:51:58.039700    2329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Sep  4 17:51:58.109329 kubelet[2329]: I0904 17:51:58.109301    2329 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Sep  4 17:51:58.109687 kubelet[2329]: E0904 17:51:58.109650    2329 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost"
Sep  4 17:51:58.164605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553468722.mount: Deactivated successfully.
Sep  4 17:51:58.168996 containerd[1583]: time="2024-09-04T17:51:58.168901821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep  4 17:51:58.172476 containerd[1583]: time="2024-09-04T17:51:58.172430684Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep  4 17:51:58.173578 containerd[1583]: time="2024-09-04T17:51:58.173537441Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep  4 17:51:58.174605 containerd[1583]: time="2024-09-04T17:51:58.174566162Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep  4 17:51:58.175693 containerd[1583]: time="2024-09-04T17:51:58.175652320Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep  4 17:51:58.176815 containerd[1583]: time="2024-09-04T17:51:58.176754329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep  4 17:51:58.177679 containerd[1583]: time="2024-09-04T17:51:58.177628609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep  4 17:51:58.180593 containerd[1583]: time="2024-09-04T17:51:58.180564459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep  4 17:51:58.181333 containerd[1583]: time="2024-09-04T17:51:58.181301593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 551.025837ms"
Sep  4 17:51:58.182032 containerd[1583]: time="2024-09-04T17:51:58.182008610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 554.53329ms"
Sep  4 17:51:58.184267 containerd[1583]: time="2024-09-04T17:51:58.184230910Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 555.384698ms"
Sep  4 17:51:58.333355 containerd[1583]: time="2024-09-04T17:51:58.333268262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:51:58.333581 containerd[1583]: time="2024-09-04T17:51:58.333268191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:51:58.333581 containerd[1583]: time="2024-09-04T17:51:58.333320339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:51:58.333581 containerd[1583]: time="2024-09-04T17:51:58.333333795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:51:58.333581 containerd[1583]: time="2024-09-04T17:51:58.333414406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:51:58.334341 containerd[1583]: time="2024-09-04T17:51:58.334265653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:51:58.334341 containerd[1583]: time="2024-09-04T17:51:58.334286543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:51:58.334458 containerd[1583]: time="2024-09-04T17:51:58.334419312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:51:58.335670 containerd[1583]: time="2024-09-04T17:51:58.335396506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:51:58.335670 containerd[1583]: time="2024-09-04T17:51:58.335460766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:51:58.335670 containerd[1583]: time="2024-09-04T17:51:58.335477598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:51:58.335670 containerd[1583]: time="2024-09-04T17:51:58.335567417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:51:58.386889 containerd[1583]: time="2024-09-04T17:51:58.386847568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f5bf8d52acd7337c82951a97b42c345d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b40499f526aaaa4c0c8249039e7a7d7e9bd9b09a101460afc76ac84d9212e482\""
Sep  4 17:51:58.389220 kubelet[2329]: E0904 17:51:58.389191    2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:51:58.390738 containerd[1583]: time="2024-09-04T17:51:58.390706992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a9a3e852f1fb13a9c43d871839330c6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"226aae35ffa03173ec553c9956b97b8f5e40ea480cbf49f2c9672a004b1f179d\""
Sep  4 17:51:58.392035 kubelet[2329]: E0904 17:51:58.392022    2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:51:58.393680 containerd[1583]: time="2024-09-04T17:51:58.393659172Z" level=info msg="CreateContainer within sandbox \"226aae35ffa03173ec553c9956b97b8f5e40ea480cbf49f2c9672a004b1f179d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep  4 17:51:58.393824 containerd[1583]: time="2024-09-04T17:51:58.393722361Z" level=info msg="CreateContainer within sandbox \"b40499f526aaaa4c0c8249039e7a7d7e9bd9b09a101460afc76ac84d9212e482\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep  4 17:51:58.395515 containerd[1583]: time="2024-09-04T17:51:58.395493876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cacd2a680dbc59f99275412e0ba6e38b,Namespace:kube-system,Attempt:0,} returns sandbox id \"145e2d3090b4d82046c73c6a38aab0ab6d0d551da66f66dcf87126d4b1bc133d\""
Sep  4 17:51:58.396177 kubelet[2329]: E0904 17:51:58.396150    2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:51:58.398586 containerd[1583]: time="2024-09-04T17:51:58.398550332Z" level=info msg="CreateContainer within sandbox \"145e2d3090b4d82046c73c6a38aab0ab6d0d551da66f66dcf87126d4b1bc133d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep  4 17:51:58.416600 containerd[1583]: time="2024-09-04T17:51:58.416562307Z" level=info msg="CreateContainer within sandbox \"226aae35ffa03173ec553c9956b97b8f5e40ea480cbf49f2c9672a004b1f179d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dfa7f9d79ebb4e7c516d1a5543352db2eafb4e857c2cbe84b379433a594e740a\""
Sep  4 17:51:58.417219 containerd[1583]: time="2024-09-04T17:51:58.417185366Z" level=info msg="StartContainer for \"dfa7f9d79ebb4e7c516d1a5543352db2eafb4e857c2cbe84b379433a594e740a\""
Sep  4 17:51:58.427394 containerd[1583]: time="2024-09-04T17:51:58.427108409Z" level=info msg="CreateContainer within sandbox \"145e2d3090b4d82046c73c6a38aab0ab6d0d551da66f66dcf87126d4b1bc133d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d8da46cd31103821aad97f0370a8b8bea2231ccb53f9ee33e7cd25c236e8c1f3\""
Sep  4 17:51:58.427604 containerd[1583]: time="2024-09-04T17:51:58.427584102Z" level=info msg="StartContainer for \"d8da46cd31103821aad97f0370a8b8bea2231ccb53f9ee33e7cd25c236e8c1f3\""
Sep  4 17:51:58.432558 containerd[1583]: time="2024-09-04T17:51:58.432516650Z" level=info msg="CreateContainer within sandbox \"b40499f526aaaa4c0c8249039e7a7d7e9bd9b09a101460afc76ac84d9212e482\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d6f604b2788344617eedcb66de49c3f279f07949cc2915f65e6b4d51545bb2a8\""
Sep  4 17:51:58.433238 containerd[1583]: time="2024-09-04T17:51:58.433200082Z" level=info msg="StartContainer for \"d6f604b2788344617eedcb66de49c3f279f07949cc2915f65e6b4d51545bb2a8\""
Sep  4 17:51:58.493545 containerd[1583]: time="2024-09-04T17:51:58.493443045Z" level=info msg="StartContainer for \"dfa7f9d79ebb4e7c516d1a5543352db2eafb4e857c2cbe84b379433a594e740a\" returns successfully"
Sep  4 17:51:58.500805 containerd[1583]: time="2024-09-04T17:51:58.500707820Z" level=info msg="StartContainer for \"d6f604b2788344617eedcb66de49c3f279f07949cc2915f65e6b4d51545bb2a8\" returns successfully"
Sep  4 17:51:58.500805 containerd[1583]: time="2024-09-04T17:51:58.500736303Z" level=info msg="StartContainer for \"d8da46cd31103821aad97f0370a8b8bea2231ccb53f9ee33e7cd25c236e8c1f3\" returns successfully"
Sep  4 17:51:58.628896 kubelet[2329]: E0904 17:51:58.628864    2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:51:58.631077 kubelet[2329]: E0904 17:51:58.631061    2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:51:58.634431 kubelet[2329]: E0904 17:51:58.634417    2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:51:59.608661 kubelet[2329]: E0904 17:51:59.608609    2329 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep  4 17:51:59.635819 kubelet[2329]: E0904 17:51:59.635786    2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:51:59.711828 kubelet[2329]: I0904 17:51:59.711808    2329 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Sep  4 17:51:59.718747 kubelet[2329]: I0904 17:51:59.718717    2329 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Sep  4 17:51:59.724723 kubelet[2329]: E0904 17:51:59.724696    2329 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep  4 17:51:59.825377 kubelet[2329]: E0904 17:51:59.825314    2329 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep  4 17:51:59.926162 kubelet[2329]: E0904 17:51:59.926031    2329 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep  4 17:52:00.026660 kubelet[2329]: E0904 17:52:00.026593    2329 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep  4 17:52:00.127585 kubelet[2329]: E0904 17:52:00.127528    2329 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep  4 17:52:00.228635 kubelet[2329]: E0904 17:52:00.228466    2329 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep  4 17:52:00.329467 kubelet[2329]: E0904 17:52:00.329366    2329 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep  4 17:52:00.429968 kubelet[2329]: E0904 17:52:00.429896    2329 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep  4 17:52:00.435755 kubelet[2329]: E0904 17:52:00.435730    2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:00.531008 kubelet[2329]: E0904 17:52:00.530884    2329 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep  4 17:52:00.631776 kubelet[2329]: E0904 17:52:00.631728    2329 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep  4 17:52:01.600813 kubelet[2329]: I0904 17:52:01.600783    2329 apiserver.go:52] "Watching apiserver"
Sep  4 17:52:01.602483 kubelet[2329]: I0904 17:52:01.602440    2329 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep  4 17:52:01.642714 kubelet[2329]: E0904 17:52:01.642693    2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:01.905828 systemd[1]: Reloading requested from client PID 2606 ('systemctl') (unit session-7.scope)...
Sep  4 17:52:01.905844 systemd[1]: Reloading...
Sep  4 17:52:01.983078 zram_generator::config[2646]: No configuration found.
Sep  4 17:52:02.468790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 17:52:02.547408 systemd[1]: Reloading finished in 641 ms.
Sep  4 17:52:02.577089 kubelet[2329]: I0904 17:52:02.576999    2329 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep  4 17:52:02.577071 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:52:02.589365 systemd[1]: kubelet.service: Deactivated successfully.
Sep  4 17:52:02.589826 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:52:02.600238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:52:02.736908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:52:02.746489 (kubelet)[2698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep  4 17:52:02.795514 kubelet[2698]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 17:52:02.795514 kubelet[2698]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep  4 17:52:02.795514 kubelet[2698]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 17:52:02.795879 kubelet[2698]: I0904 17:52:02.795557    2698 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep  4 17:52:02.802452 kubelet[2698]: I0904 17:52:02.802426    2698 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Sep  4 17:52:02.802452 kubelet[2698]: I0904 17:52:02.802445    2698 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep  4 17:52:02.802620 kubelet[2698]: I0904 17:52:02.802605    2698 server.go:895] "Client rotation is on, will bootstrap in background"
Sep  4 17:52:02.804054 kubelet[2698]: I0904 17:52:02.804024    2698 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep  4 17:52:02.805405 kubelet[2698]: I0904 17:52:02.804968    2698 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep  4 17:52:02.811713 kubelet[2698]: I0904 17:52:02.811696    2698 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Sep  4 17:52:02.812278 kubelet[2698]: I0904 17:52:02.812266    2698 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep  4 17:52:02.812462 kubelet[2698]: I0904 17:52:02.812449    2698 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep  4 17:52:02.812556 kubelet[2698]: I0904 17:52:02.812548    2698 topology_manager.go:138] "Creating topology manager with none policy"
Sep  4 17:52:02.812623 kubelet[2698]: I0904 17:52:02.812614    2698 container_manager_linux.go:301] "Creating device plugin manager"
Sep  4 17:52:02.812693 kubelet[2698]: I0904 17:52:02.812684    2698 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 17:52:02.812843 kubelet[2698]: I0904 17:52:02.812833    2698 kubelet.go:393] "Attempting to sync node with API server"
Sep  4 17:52:02.812902 kubelet[2698]: I0904 17:52:02.812894    2698 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep  4 17:52:02.812963 kubelet[2698]: I0904 17:52:02.812956    2698 kubelet.go:309] "Adding apiserver pod source"
Sep  4 17:52:02.813011 kubelet[2698]: I0904 17:52:02.813003    2698 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep  4 17:52:02.813658 kubelet[2698]: I0904 17:52:02.813637    2698 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1"
Sep  4 17:52:02.814201 kubelet[2698]: I0904 17:52:02.814184    2698 server.go:1232] "Started kubelet"
Sep  4 17:52:02.814821 kubelet[2698]: I0904 17:52:02.814788    2698 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep  4 17:52:02.821380 kubelet[2698]: E0904 17:52:02.819353    2698 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Sep  4 17:52:02.823103 kubelet[2698]: I0904 17:52:02.814796    2698 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep  4 17:52:02.823418 kubelet[2698]: E0904 17:52:02.823398    2698 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep  4 17:52:02.823607 kubelet[2698]: I0904 17:52:02.823574    2698 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep  4 17:52:02.825001 kubelet[2698]: I0904 17:52:02.824977    2698 server.go:462] "Adding debug handlers to kubelet server"
Sep  4 17:52:02.826924 kubelet[2698]: I0904 17:52:02.826901    2698 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep  4 17:52:02.827370 kubelet[2698]: I0904 17:52:02.827347    2698 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep  4 17:52:02.827469 kubelet[2698]: I0904 17:52:02.827447    2698 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep  4 17:52:02.827638 kubelet[2698]: I0904 17:52:02.827621    2698 reconciler_new.go:29] "Reconciler: start to sync state"
Sep  4 17:52:02.836520 kubelet[2698]: I0904 17:52:02.836489    2698 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep  4 17:52:02.837633 kubelet[2698]: I0904 17:52:02.837620    2698 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep  4 17:52:02.837669 kubelet[2698]: I0904 17:52:02.837639    2698 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep  4 17:52:02.837669 kubelet[2698]: I0904 17:52:02.837658    2698 kubelet.go:2303] "Starting kubelet main sync loop"
Sep  4 17:52:02.837737 kubelet[2698]: E0904 17:52:02.837710    2698 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep  4 17:52:02.912258 kubelet[2698]: I0904 17:52:02.912166    2698 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep  4 17:52:02.912258 kubelet[2698]: I0904 17:52:02.912183    2698 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep  4 17:52:02.912258 kubelet[2698]: I0904 17:52:02.912198    2698 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 17:52:02.912423 kubelet[2698]: I0904 17:52:02.912365    2698 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep  4 17:52:02.912423 kubelet[2698]: I0904 17:52:02.912382    2698 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep  4 17:52:02.912423 kubelet[2698]: I0904 17:52:02.912389    2698 policy_none.go:49] "None policy: Start"
Sep  4 17:52:02.913071 kubelet[2698]: I0904 17:52:02.913037    2698 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep  4 17:52:02.913113 kubelet[2698]: I0904 17:52:02.913075    2698 state_mem.go:35] "Initializing new in-memory state store"
Sep  4 17:52:02.913225 kubelet[2698]: I0904 17:52:02.913202    2698 state_mem.go:75] "Updated machine memory state"
Sep  4 17:52:02.914576 kubelet[2698]: I0904 17:52:02.914545    2698 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep  4 17:52:02.915200 kubelet[2698]: I0904 17:52:02.914974    2698 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep  4 17:52:02.933040 kubelet[2698]: I0904 17:52:02.932851    2698 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Sep  4 17:52:02.937890 kubelet[2698]: I0904 17:52:02.937856    2698 topology_manager.go:215] "Topology Admit Handler" podUID="a9a3e852f1fb13a9c43d871839330c6e" podNamespace="kube-system" podName="kube-apiserver-localhost"
Sep  4 17:52:02.938056 kubelet[2698]: I0904 17:52:02.938013    2698 topology_manager.go:215] "Topology Admit Handler" podUID="f5bf8d52acd7337c82951a97b42c345d" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Sep  4 17:52:02.938607 kubelet[2698]: I0904 17:52:02.938514    2698 topology_manager.go:215] "Topology Admit Handler" podUID="cacd2a680dbc59f99275412e0ba6e38b" podNamespace="kube-system" podName="kube-scheduler-localhost"
Sep  4 17:52:02.938607 kubelet[2698]: I0904 17:52:02.938549    2698 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Sep  4 17:52:02.938708 kubelet[2698]: I0904 17:52:02.938613    2698 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Sep  4 17:52:02.943619 kubelet[2698]: E0904 17:52:02.943531    2698 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep  4 17:52:03.132320 kubelet[2698]: I0904 17:52:03.132202    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9a3e852f1fb13a9c43d871839330c6e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9a3e852f1fb13a9c43d871839330c6e\") " pod="kube-system/kube-apiserver-localhost"
Sep  4 17:52:03.135062 kubelet[2698]: I0904 17:52:03.132463    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9a3e852f1fb13a9c43d871839330c6e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9a3e852f1fb13a9c43d871839330c6e\") " pod="kube-system/kube-apiserver-localhost"
Sep  4 17:52:03.135062 kubelet[2698]: I0904 17:52:03.132499    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacd2a680dbc59f99275412e0ba6e38b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cacd2a680dbc59f99275412e0ba6e38b\") " pod="kube-system/kube-scheduler-localhost"
Sep  4 17:52:03.135062 kubelet[2698]: I0904 17:52:03.132527    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9a3e852f1fb13a9c43d871839330c6e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a9a3e852f1fb13a9c43d871839330c6e\") " pod="kube-system/kube-apiserver-localhost"
Sep  4 17:52:03.135062 kubelet[2698]: I0904 17:52:03.132552    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:52:03.135062 kubelet[2698]: I0904 17:52:03.132612    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:52:03.135254 kubelet[2698]: I0904 17:52:03.132637    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:52:03.135254 kubelet[2698]: I0904 17:52:03.132659    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:52:03.135254 kubelet[2698]: I0904 17:52:03.132684    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:52:03.244460 kubelet[2698]: E0904 17:52:03.244429    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:03.245655 kubelet[2698]: E0904 17:52:03.245037    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:03.245655 kubelet[2698]: E0904 17:52:03.245302    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:03.815071 kubelet[2698]: I0904 17:52:03.814306    2698 apiserver.go:52] "Watching apiserver"
Sep  4 17:52:03.828524 kubelet[2698]: I0904 17:52:03.828468    2698 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep  4 17:52:03.856418 kubelet[2698]: E0904 17:52:03.856388    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:03.857690 kubelet[2698]: E0904 17:52:03.857668    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:03.864400 kubelet[2698]: E0904 17:52:03.864371    2698 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep  4 17:52:03.864795 kubelet[2698]: E0904 17:52:03.864772    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:03.873780 kubelet[2698]: I0904 17:52:03.873738    2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.873668201 podCreationTimestamp="2024-09-04 17:52:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:52:03.871585097 +0000 UTC m=+1.120527646" watchObservedRunningTime="2024-09-04 17:52:03.873668201 +0000 UTC m=+1.122610750"
Sep  4 17:52:03.904437 kubelet[2698]: I0904 17:52:03.901941    2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.901901079 podCreationTimestamp="2024-09-04 17:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:52:03.901691685 +0000 UTC m=+1.150634224" watchObservedRunningTime="2024-09-04 17:52:03.901901079 +0000 UTC m=+1.150843628"
Sep  4 17:52:03.904437 kubelet[2698]: I0904 17:52:03.902011    2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.9019980969999999 podCreationTimestamp="2024-09-04 17:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:52:03.889648535 +0000 UTC m=+1.138591084" watchObservedRunningTime="2024-09-04 17:52:03.901998097 +0000 UTC m=+1.150940656"
Sep  4 17:52:04.859075 kubelet[2698]: E0904 17:52:04.857618    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:05.667317 kubelet[2698]: E0904 17:52:05.667277    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:07.589318 sudo[1774]: pam_unix(sudo:session): session closed for user root
Sep  4 17:52:07.591246 sshd[1767]: pam_unix(sshd:session): session closed for user core
Sep  4 17:52:07.594992 systemd[1]: sshd@6-10.0.0.147:22-10.0.0.1:50720.service: Deactivated successfully.
Sep  4 17:52:07.597118 systemd-logind[1557]: Session 7 logged out. Waiting for processes to exit.
Sep  4 17:52:07.597162 systemd[1]: session-7.scope: Deactivated successfully.
Sep  4 17:52:07.598346 systemd-logind[1557]: Removed session 7.
Sep  4 17:52:10.772877 kubelet[2698]: E0904 17:52:10.772829    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:10.867392 kubelet[2698]: E0904 17:52:10.867352    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:13.095449 kubelet[2698]: E0904 17:52:13.095379    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:13.871830 kubelet[2698]: E0904 17:52:13.871793    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:14.936564 update_engine[1567]: I0904 17:52:14.936501  1567 update_attempter.cc:509] Updating boot flags...
Sep  4 17:52:14.962726 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2812)
Sep  4 17:52:14.996505 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2811)
Sep  4 17:52:15.022079 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2811)
Sep  4 17:52:15.670543 kubelet[2698]: E0904 17:52:15.670518    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:16.619642 kubelet[2698]: I0904 17:52:16.619602    2698 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep  4 17:52:16.620055 containerd[1583]: time="2024-09-04T17:52:16.620016626Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep  4 17:52:16.620477 kubelet[2698]: I0904 17:52:16.620211    2698 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep  4 17:52:17.380740 kubelet[2698]: I0904 17:52:17.380688    2698 topology_manager.go:215] "Topology Admit Handler" podUID="7c61eee8-d17d-442f-8ff3-cd5a48b262cb" podNamespace="kube-system" podName="kube-proxy-ftn2r"
Sep  4 17:52:17.425506 kubelet[2698]: I0904 17:52:17.425453    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7c61eee8-d17d-442f-8ff3-cd5a48b262cb-kube-proxy\") pod \"kube-proxy-ftn2r\" (UID: \"7c61eee8-d17d-442f-8ff3-cd5a48b262cb\") " pod="kube-system/kube-proxy-ftn2r"
Sep  4 17:52:17.425506 kubelet[2698]: I0904 17:52:17.425500    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c61eee8-d17d-442f-8ff3-cd5a48b262cb-xtables-lock\") pod \"kube-proxy-ftn2r\" (UID: \"7c61eee8-d17d-442f-8ff3-cd5a48b262cb\") " pod="kube-system/kube-proxy-ftn2r"
Sep  4 17:52:17.425605 kubelet[2698]: I0904 17:52:17.425524    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c61eee8-d17d-442f-8ff3-cd5a48b262cb-lib-modules\") pod \"kube-proxy-ftn2r\" (UID: \"7c61eee8-d17d-442f-8ff3-cd5a48b262cb\") " pod="kube-system/kube-proxy-ftn2r"
Sep  4 17:52:17.425605 kubelet[2698]: I0904 17:52:17.425549    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nsht\" (UniqueName: \"kubernetes.io/projected/7c61eee8-d17d-442f-8ff3-cd5a48b262cb-kube-api-access-4nsht\") pod \"kube-proxy-ftn2r\" (UID: \"7c61eee8-d17d-442f-8ff3-cd5a48b262cb\") " pod="kube-system/kube-proxy-ftn2r"
Sep  4 17:52:17.598292 kubelet[2698]: I0904 17:52:17.598252    2698 topology_manager.go:215] "Topology Admit Handler" podUID="3008f931-bcb3-472d-82d6-41897a4200df" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-6gqp9"
Sep  4 17:52:17.627468 kubelet[2698]: I0904 17:52:17.627426    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v757z\" (UniqueName: \"kubernetes.io/projected/3008f931-bcb3-472d-82d6-41897a4200df-kube-api-access-v757z\") pod \"tigera-operator-5d56685c77-6gqp9\" (UID: \"3008f931-bcb3-472d-82d6-41897a4200df\") " pod="tigera-operator/tigera-operator-5d56685c77-6gqp9"
Sep  4 17:52:17.627468 kubelet[2698]: I0904 17:52:17.627471    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3008f931-bcb3-472d-82d6-41897a4200df-var-lib-calico\") pod \"tigera-operator-5d56685c77-6gqp9\" (UID: \"3008f931-bcb3-472d-82d6-41897a4200df\") " pod="tigera-operator/tigera-operator-5d56685c77-6gqp9"
Sep  4 17:52:17.684473 kubelet[2698]: E0904 17:52:17.684328    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:17.685016 containerd[1583]: time="2024-09-04T17:52:17.684964599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ftn2r,Uid:7c61eee8-d17d-442f-8ff3-cd5a48b262cb,Namespace:kube-system,Attempt:0,}"
Sep  4 17:52:17.705808 containerd[1583]: time="2024-09-04T17:52:17.705685916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:52:17.705808 containerd[1583]: time="2024-09-04T17:52:17.705784543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:52:17.705948 containerd[1583]: time="2024-09-04T17:52:17.705799081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:17.706656 containerd[1583]: time="2024-09-04T17:52:17.706578100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:17.748193 containerd[1583]: time="2024-09-04T17:52:17.748153909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ftn2r,Uid:7c61eee8-d17d-442f-8ff3-cd5a48b262cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d69fb96cffb6bf247b760b092a4e35773cf8223e39a44834460f4ac1cc1185ec\""
Sep  4 17:52:17.748819 kubelet[2698]: E0904 17:52:17.748799    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:17.750924 containerd[1583]: time="2024-09-04T17:52:17.750886888Z" level=info msg="CreateContainer within sandbox \"d69fb96cffb6bf247b760b092a4e35773cf8223e39a44834460f4ac1cc1185ec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep  4 17:52:17.765603 containerd[1583]: time="2024-09-04T17:52:17.765566302Z" level=info msg="CreateContainer within sandbox \"d69fb96cffb6bf247b760b092a4e35773cf8223e39a44834460f4ac1cc1185ec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2bd77c7a618b530007e97cea5bf6ef4b3166b7513f1af886e1ac4eacc92824eb\""
Sep  4 17:52:17.766230 containerd[1583]: time="2024-09-04T17:52:17.765996500Z" level=info msg="StartContainer for \"2bd77c7a618b530007e97cea5bf6ef4b3166b7513f1af886e1ac4eacc92824eb\""
Sep  4 17:52:17.830349 containerd[1583]: time="2024-09-04T17:52:17.830303222Z" level=info msg="StartContainer for \"2bd77c7a618b530007e97cea5bf6ef4b3166b7513f1af886e1ac4eacc92824eb\" returns successfully"
Sep  4 17:52:17.879648 kubelet[2698]: E0904 17:52:17.879609    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:17.886263 kubelet[2698]: I0904 17:52:17.886075    2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ftn2r" podStartSLOduration=0.885985556 podCreationTimestamp="2024-09-04 17:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:52:17.885854619 +0000 UTC m=+15.134797168" watchObservedRunningTime="2024-09-04 17:52:17.885985556 +0000 UTC m=+15.134928105"
Sep  4 17:52:17.902633 containerd[1583]: time="2024-09-04T17:52:17.902590728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-6gqp9,Uid:3008f931-bcb3-472d-82d6-41897a4200df,Namespace:tigera-operator,Attempt:0,}"
Sep  4 17:52:17.926097 containerd[1583]: time="2024-09-04T17:52:17.925490102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:52:17.926224 containerd[1583]: time="2024-09-04T17:52:17.926174652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:52:17.926252 containerd[1583]: time="2024-09-04T17:52:17.926215840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:17.926354 containerd[1583]: time="2024-09-04T17:52:17.926324947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:17.972714 containerd[1583]: time="2024-09-04T17:52:17.972595541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-6gqp9,Uid:3008f931-bcb3-472d-82d6-41897a4200df,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5a22cd13ff0600c862db4e936aad19afd163c6c7766d9c0ee7a3eb200604aba9\""
Sep  4 17:52:17.973942 containerd[1583]: time="2024-09-04T17:52:17.973921148Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Sep  4 17:52:19.096993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1115503386.mount: Deactivated successfully.
Sep  4 17:52:19.478263 containerd[1583]: time="2024-09-04T17:52:19.478214352Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:19.479075 containerd[1583]: time="2024-09-04T17:52:19.479000472Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136549"
Sep  4 17:52:19.480253 containerd[1583]: time="2024-09-04T17:52:19.480228832Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:19.482558 containerd[1583]: time="2024-09-04T17:52:19.482532470Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:19.483254 containerd[1583]: time="2024-09-04T17:52:19.483223832Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.509273318s"
Sep  4 17:52:19.483292 containerd[1583]: time="2024-09-04T17:52:19.483260030Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\""
Sep  4 17:52:19.484786 containerd[1583]: time="2024-09-04T17:52:19.484757820Z" level=info msg="CreateContainer within sandbox \"5a22cd13ff0600c862db4e936aad19afd163c6c7766d9c0ee7a3eb200604aba9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep  4 17:52:19.497176 containerd[1583]: time="2024-09-04T17:52:19.497141933Z" level=info msg="CreateContainer within sandbox \"5a22cd13ff0600c862db4e936aad19afd163c6c7766d9c0ee7a3eb200604aba9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6308e2a0588401d6899f82ad3402d58aa7bd12b1b051415ac352eabe3ee1f5e1\""
Sep  4 17:52:19.497840 containerd[1583]: time="2024-09-04T17:52:19.497482889Z" level=info msg="StartContainer for \"6308e2a0588401d6899f82ad3402d58aa7bd12b1b051415ac352eabe3ee1f5e1\""
Sep  4 17:52:19.625990 containerd[1583]: time="2024-09-04T17:52:19.625934232Z" level=info msg="StartContainer for \"6308e2a0588401d6899f82ad3402d58aa7bd12b1b051415ac352eabe3ee1f5e1\" returns successfully"
Sep  4 17:52:19.889751 kubelet[2698]: I0904 17:52:19.889597    2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-6gqp9" podStartSLOduration=1.379553537 podCreationTimestamp="2024-09-04 17:52:17 +0000 UTC" firstStartedPulling="2024-09-04 17:52:17.973566454 +0000 UTC m=+15.222509003" lastFinishedPulling="2024-09-04 17:52:19.483574506 +0000 UTC m=+16.732517055" observedRunningTime="2024-09-04 17:52:19.888757263 +0000 UTC m=+17.137699812" watchObservedRunningTime="2024-09-04 17:52:19.889561589 +0000 UTC m=+17.138504138"
Sep  4 17:52:22.263486 kubelet[2698]: I0904 17:52:22.263434    2698 topology_manager.go:215] "Topology Admit Handler" podUID="2d92f6c6-1066-417b-a523-0bc1b62c209b" podNamespace="calico-system" podName="calico-typha-6b597897f9-b5dgf"
Sep  4 17:52:22.294814 kubelet[2698]: I0904 17:52:22.294440    2698 topology_manager.go:215] "Topology Admit Handler" podUID="3a2f2db1-2adc-4688-8f47-225429ef8e67" podNamespace="calico-system" podName="calico-node-5vc62"
Sep  4 17:52:22.359170 kubelet[2698]: I0904 17:52:22.359128    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2d92f6c6-1066-417b-a523-0bc1b62c209b-typha-certs\") pod \"calico-typha-6b597897f9-b5dgf\" (UID: \"2d92f6c6-1066-417b-a523-0bc1b62c209b\") " pod="calico-system/calico-typha-6b597897f9-b5dgf"
Sep  4 17:52:22.359170 kubelet[2698]: I0904 17:52:22.359166    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a2f2db1-2adc-4688-8f47-225429ef8e67-lib-modules\") pod \"calico-node-5vc62\" (UID: \"3a2f2db1-2adc-4688-8f47-225429ef8e67\") " pod="calico-system/calico-node-5vc62"
Sep  4 17:52:22.359327 kubelet[2698]: I0904 17:52:22.359188    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khjqx\" (UniqueName: \"kubernetes.io/projected/3a2f2db1-2adc-4688-8f47-225429ef8e67-kube-api-access-khjqx\") pod \"calico-node-5vc62\" (UID: \"3a2f2db1-2adc-4688-8f47-225429ef8e67\") " pod="calico-system/calico-node-5vc62"
Sep  4 17:52:22.359327 kubelet[2698]: I0904 17:52:22.359208    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3a2f2db1-2adc-4688-8f47-225429ef8e67-cni-bin-dir\") pod \"calico-node-5vc62\" (UID: \"3a2f2db1-2adc-4688-8f47-225429ef8e67\") " pod="calico-system/calico-node-5vc62"
Sep  4 17:52:22.359327 kubelet[2698]: I0904 17:52:22.359303    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3a2f2db1-2adc-4688-8f47-225429ef8e67-var-run-calico\") pod \"calico-node-5vc62\" (UID: \"3a2f2db1-2adc-4688-8f47-225429ef8e67\") " pod="calico-system/calico-node-5vc62"
Sep  4 17:52:22.359396 kubelet[2698]: I0904 17:52:22.359360    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3a2f2db1-2adc-4688-8f47-225429ef8e67-flexvol-driver-host\") pod \"calico-node-5vc62\" (UID: \"3a2f2db1-2adc-4688-8f47-225429ef8e67\") " pod="calico-system/calico-node-5vc62"
Sep  4 17:52:22.359430 kubelet[2698]: I0904 17:52:22.359409    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d92f6c6-1066-417b-a523-0bc1b62c209b-tigera-ca-bundle\") pod \"calico-typha-6b597897f9-b5dgf\" (UID: \"2d92f6c6-1066-417b-a523-0bc1b62c209b\") " pod="calico-system/calico-typha-6b597897f9-b5dgf"
Sep  4 17:52:22.359455 kubelet[2698]: I0904 17:52:22.359438    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcjbj\" (UniqueName: \"kubernetes.io/projected/2d92f6c6-1066-417b-a523-0bc1b62c209b-kube-api-access-dcjbj\") pod \"calico-typha-6b597897f9-b5dgf\" (UID: \"2d92f6c6-1066-417b-a523-0bc1b62c209b\") " pod="calico-system/calico-typha-6b597897f9-b5dgf"
Sep  4 17:52:22.359528 kubelet[2698]: I0904 17:52:22.359484    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3a2f2db1-2adc-4688-8f47-225429ef8e67-node-certs\") pod \"calico-node-5vc62\" (UID: \"3a2f2db1-2adc-4688-8f47-225429ef8e67\") " pod="calico-system/calico-node-5vc62"
Sep  4 17:52:22.359556 kubelet[2698]: I0904 17:52:22.359528    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3a2f2db1-2adc-4688-8f47-225429ef8e67-cni-log-dir\") pod \"calico-node-5vc62\" (UID: \"3a2f2db1-2adc-4688-8f47-225429ef8e67\") " pod="calico-system/calico-node-5vc62"
Sep  4 17:52:22.359667 kubelet[2698]: I0904 17:52:22.359633    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a2f2db1-2adc-4688-8f47-225429ef8e67-xtables-lock\") pod \"calico-node-5vc62\" (UID: \"3a2f2db1-2adc-4688-8f47-225429ef8e67\") " pod="calico-system/calico-node-5vc62"
Sep  4 17:52:22.359701 kubelet[2698]: I0904 17:52:22.359674    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3a2f2db1-2adc-4688-8f47-225429ef8e67-cni-net-dir\") pod \"calico-node-5vc62\" (UID: \"3a2f2db1-2adc-4688-8f47-225429ef8e67\") " pod="calico-system/calico-node-5vc62"
Sep  4 17:52:22.359735 kubelet[2698]: I0904 17:52:22.359727    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3a2f2db1-2adc-4688-8f47-225429ef8e67-policysync\") pod \"calico-node-5vc62\" (UID: \"3a2f2db1-2adc-4688-8f47-225429ef8e67\") " pod="calico-system/calico-node-5vc62"
Sep  4 17:52:22.359760 kubelet[2698]: I0904 17:52:22.359752    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a2f2db1-2adc-4688-8f47-225429ef8e67-tigera-ca-bundle\") pod \"calico-node-5vc62\" (UID: \"3a2f2db1-2adc-4688-8f47-225429ef8e67\") " pod="calico-system/calico-node-5vc62"
Sep  4 17:52:22.359786 kubelet[2698]: I0904 17:52:22.359774    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3a2f2db1-2adc-4688-8f47-225429ef8e67-var-lib-calico\") pod \"calico-node-5vc62\" (UID: \"3a2f2db1-2adc-4688-8f47-225429ef8e67\") " pod="calico-system/calico-node-5vc62"
Sep  4 17:52:22.413322 kubelet[2698]: I0904 17:52:22.413282    2698 topology_manager.go:215] "Topology Admit Handler" podUID="520ef3bc-9622-4072-8027-438b0db6b0ef" podNamespace="calico-system" podName="csi-node-driver-22jch"
Sep  4 17:52:22.413588 kubelet[2698]: E0904 17:52:22.413527    2698 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-22jch" podUID="520ef3bc-9622-4072-8027-438b0db6b0ef"
Sep  4 17:52:22.460116 kubelet[2698]: I0904 17:52:22.460039    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/520ef3bc-9622-4072-8027-438b0db6b0ef-varrun\") pod \"csi-node-driver-22jch\" (UID: \"520ef3bc-9622-4072-8027-438b0db6b0ef\") " pod="calico-system/csi-node-driver-22jch"
Sep  4 17:52:22.460585 kubelet[2698]: I0904 17:52:22.460335    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/520ef3bc-9622-4072-8027-438b0db6b0ef-registration-dir\") pod \"csi-node-driver-22jch\" (UID: \"520ef3bc-9622-4072-8027-438b0db6b0ef\") " pod="calico-system/csi-node-driver-22jch"
Sep  4 17:52:22.461741 kubelet[2698]: E0904 17:52:22.461679    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.461741 kubelet[2698]: W0904 17:52:22.461701    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.462146 kubelet[2698]: E0904 17:52:22.462120    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.462463 kubelet[2698]: E0904 17:52:22.462314    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.462463 kubelet[2698]: W0904 17:52:22.462331    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.462463 kubelet[2698]: E0904 17:52:22.462390    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.462925 kubelet[2698]: I0904 17:52:22.462752    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whgwd\" (UniqueName: \"kubernetes.io/projected/520ef3bc-9622-4072-8027-438b0db6b0ef-kube-api-access-whgwd\") pod \"csi-node-driver-22jch\" (UID: \"520ef3bc-9622-4072-8027-438b0db6b0ef\") " pod="calico-system/csi-node-driver-22jch"
Sep  4 17:52:22.463992 kubelet[2698]: E0904 17:52:22.463479    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.464669 kubelet[2698]: W0904 17:52:22.464082    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.464669 kubelet[2698]: E0904 17:52:22.464110    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.465305 kubelet[2698]: E0904 17:52:22.465266    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.465305 kubelet[2698]: W0904 17:52:22.465287    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.465483 kubelet[2698]: E0904 17:52:22.465309    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.465582 kubelet[2698]: E0904 17:52:22.465553    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.465582 kubelet[2698]: W0904 17:52:22.465567    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.465663 kubelet[2698]: E0904 17:52:22.465643    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.467183 kubelet[2698]: E0904 17:52:22.466874    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.467183 kubelet[2698]: W0904 17:52:22.466888    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.467183 kubelet[2698]: E0904 17:52:22.466973    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.467183 kubelet[2698]: E0904 17:52:22.467127    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.467183 kubelet[2698]: W0904 17:52:22.467134    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.467624 kubelet[2698]: E0904 17:52:22.467227    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.467715 kubelet[2698]: E0904 17:52:22.467683    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.467715 kubelet[2698]: W0904 17:52:22.467699    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.468241 kubelet[2698]: E0904 17:52:22.468158    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.469694 kubelet[2698]: E0904 17:52:22.469678    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.469694 kubelet[2698]: W0904 17:52:22.469691    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.470116 kubelet[2698]: E0904 17:52:22.470095    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.470795 kubelet[2698]: E0904 17:52:22.470307    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.470795 kubelet[2698]: W0904 17:52:22.470327    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.470985 kubelet[2698]: E0904 17:52:22.470942    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.471239 kubelet[2698]: E0904 17:52:22.471207    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.471354 kubelet[2698]: W0904 17:52:22.471288    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.471474 kubelet[2698]: E0904 17:52:22.471437    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.471704 kubelet[2698]: E0904 17:52:22.471687    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.471861 kubelet[2698]: W0904 17:52:22.471757    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.471942 kubelet[2698]: E0904 17:52:22.471916    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.472226 kubelet[2698]: E0904 17:52:22.472188    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.472226 kubelet[2698]: W0904 17:52:22.472197    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.472400 kubelet[2698]: E0904 17:52:22.472351    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.472655 kubelet[2698]: E0904 17:52:22.472639    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.473250 kubelet[2698]: W0904 17:52:22.473002    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.473250 kubelet[2698]: E0904 17:52:22.473139    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.473250 kubelet[2698]: I0904 17:52:22.473170    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/520ef3bc-9622-4072-8027-438b0db6b0ef-kubelet-dir\") pod \"csi-node-driver-22jch\" (UID: \"520ef3bc-9622-4072-8027-438b0db6b0ef\") " pod="calico-system/csi-node-driver-22jch"
Sep  4 17:52:22.473459 kubelet[2698]: E0904 17:52:22.473279    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.473459 kubelet[2698]: W0904 17:52:22.473290    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.473459 kubelet[2698]: E0904 17:52:22.473315    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.474794 kubelet[2698]: E0904 17:52:22.474779    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.474794 kubelet[2698]: W0904 17:52:22.474791    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.474862 kubelet[2698]: E0904 17:52:22.474810    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.475068 kubelet[2698]: E0904 17:52:22.475040    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.475068 kubelet[2698]: W0904 17:52:22.475064    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.475180 kubelet[2698]: E0904 17:52:22.475159    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.475312 kubelet[2698]: E0904 17:52:22.475297    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.475312 kubelet[2698]: W0904 17:52:22.475310    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.475364 kubelet[2698]: E0904 17:52:22.475349    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.475391 kubelet[2698]: I0904 17:52:22.475382    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/520ef3bc-9622-4072-8027-438b0db6b0ef-socket-dir\") pod \"csi-node-driver-22jch\" (UID: \"520ef3bc-9622-4072-8027-438b0db6b0ef\") " pod="calico-system/csi-node-driver-22jch"
Sep  4 17:52:22.475551 kubelet[2698]: E0904 17:52:22.475538    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.475551 kubelet[2698]: W0904 17:52:22.475549    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.475607 kubelet[2698]: E0904 17:52:22.475566    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.475803 kubelet[2698]: E0904 17:52:22.475773    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.475803 kubelet[2698]: W0904 17:52:22.475785    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.475803 kubelet[2698]: E0904 17:52:22.475801    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.476030 kubelet[2698]: E0904 17:52:22.476013    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.476030 kubelet[2698]: W0904 17:52:22.476026    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.476123 kubelet[2698]: E0904 17:52:22.476074    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.476360 kubelet[2698]: E0904 17:52:22.476320    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.476360 kubelet[2698]: W0904 17:52:22.476333    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.476360 kubelet[2698]: E0904 17:52:22.476346    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.476642 kubelet[2698]: E0904 17:52:22.476625    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.476642 kubelet[2698]: W0904 17:52:22.476638    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.476703 kubelet[2698]: E0904 17:52:22.476655    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.476873 kubelet[2698]: E0904 17:52:22.476860    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.476932 kubelet[2698]: W0904 17:52:22.476913    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.476991 kubelet[2698]: E0904 17:52:22.476938    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.477166 kubelet[2698]: E0904 17:52:22.477152    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.477166 kubelet[2698]: W0904 17:52:22.477162    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.477231 kubelet[2698]: E0904 17:52:22.477186    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.477443 kubelet[2698]: E0904 17:52:22.477429    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.477443 kubelet[2698]: W0904 17:52:22.477440    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.477492 kubelet[2698]: E0904 17:52:22.477455    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.477704 kubelet[2698]: E0904 17:52:22.477678    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.477704 kubelet[2698]: W0904 17:52:22.477690    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.477985 kubelet[2698]: E0904 17:52:22.477733    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.477985 kubelet[2698]: E0904 17:52:22.477937    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.477985 kubelet[2698]: W0904 17:52:22.477945    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.478104 kubelet[2698]: E0904 17:52:22.478088    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.478209 kubelet[2698]: E0904 17:52:22.478196    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.478209 kubelet[2698]: W0904 17:52:22.478207    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.478324 kubelet[2698]: E0904 17:52:22.478295    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.478473 kubelet[2698]: E0904 17:52:22.478459    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.478473 kubelet[2698]: W0904 17:52:22.478469    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.478515 kubelet[2698]: E0904 17:52:22.478482    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.478797 kubelet[2698]: E0904 17:52:22.478777    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.478835 kubelet[2698]: W0904 17:52:22.478798    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.478835 kubelet[2698]: E0904 17:52:22.478817    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.479468 kubelet[2698]: E0904 17:52:22.479032    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.479468 kubelet[2698]: W0904 17:52:22.479056    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.479468 kubelet[2698]: E0904 17:52:22.479081    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.479468 kubelet[2698]: E0904 17:52:22.479266    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.479468 kubelet[2698]: W0904 17:52:22.479273    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.479468 kubelet[2698]: E0904 17:52:22.479289    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.479635 kubelet[2698]: E0904 17:52:22.479481    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.479635 kubelet[2698]: W0904 17:52:22.479487    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.479635 kubelet[2698]: E0904 17:52:22.479515    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.479701 kubelet[2698]: E0904 17:52:22.479674    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.479701 kubelet[2698]: W0904 17:52:22.479681    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.479760 kubelet[2698]: E0904 17:52:22.479717    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.479893 kubelet[2698]: E0904 17:52:22.479874    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.479893 kubelet[2698]: W0904 17:52:22.479888    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.480331 kubelet[2698]: E0904 17:52:22.479908    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.480331 kubelet[2698]: E0904 17:52:22.480302    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.480331 kubelet[2698]: W0904 17:52:22.480309    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.480331 kubelet[2698]: E0904 17:52:22.480320    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.480500 kubelet[2698]: E0904 17:52:22.480483    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.480500 kubelet[2698]: W0904 17:52:22.480494    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.480500 kubelet[2698]: E0904 17:52:22.480508    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.480769 kubelet[2698]: E0904 17:52:22.480753    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.480769 kubelet[2698]: W0904 17:52:22.480766    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.480826 kubelet[2698]: E0904 17:52:22.480803    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.481002 kubelet[2698]: E0904 17:52:22.480987    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.481002 kubelet[2698]: W0904 17:52:22.480999    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.481117 kubelet[2698]: E0904 17:52:22.481098    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.482270 kubelet[2698]: E0904 17:52:22.481407    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.482270 kubelet[2698]: W0904 17:52:22.481421    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.482270 kubelet[2698]: E0904 17:52:22.481441    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.482270 kubelet[2698]: E0904 17:52:22.482120    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.482270 kubelet[2698]: W0904 17:52:22.482128    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.482739 kubelet[2698]: E0904 17:52:22.482689    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.484274 kubelet[2698]: E0904 17:52:22.484217    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.484274 kubelet[2698]: W0904 17:52:22.484227    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.484274 kubelet[2698]: E0904 17:52:22.484239    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.489334 kubelet[2698]: E0904 17:52:22.489312    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.489334 kubelet[2698]: W0904 17:52:22.489329    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.489398 kubelet[2698]: E0904 17:52:22.489350    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.575904 kubelet[2698]: E0904 17:52:22.575795    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:22.576669 containerd[1583]: time="2024-09-04T17:52:22.576264770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b597897f9-b5dgf,Uid:2d92f6c6-1066-417b-a523-0bc1b62c209b,Namespace:calico-system,Attempt:0,}"
Sep  4 17:52:22.582012 kubelet[2698]: E0904 17:52:22.581970    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.582012 kubelet[2698]: W0904 17:52:22.581996    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.582210 kubelet[2698]: E0904 17:52:22.582025    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.582423 kubelet[2698]: E0904 17:52:22.582387    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.582423 kubelet[2698]: W0904 17:52:22.582402    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.582572 kubelet[2698]: E0904 17:52:22.582430    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.582802 kubelet[2698]: E0904 17:52:22.582783    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.582802 kubelet[2698]: W0904 17:52:22.582794    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.582873 kubelet[2698]: E0904 17:52:22.582816    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.583192 kubelet[2698]: E0904 17:52:22.583170    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.583192 kubelet[2698]: W0904 17:52:22.583189    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.583272 kubelet[2698]: E0904 17:52:22.583215    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.583625 kubelet[2698]: E0904 17:52:22.583599    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.583625 kubelet[2698]: W0904 17:52:22.583611    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.583692 kubelet[2698]: E0904 17:52:22.583666    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.584304 kubelet[2698]: E0904 17:52:22.583918    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.584304 kubelet[2698]: W0904 17:52:22.583929    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.584304 kubelet[2698]: E0904 17:52:22.583989    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.584304 kubelet[2698]: E0904 17:52:22.584189    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.584304 kubelet[2698]: W0904 17:52:22.584196    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.584304 kubelet[2698]: E0904 17:52:22.584248    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.584520 kubelet[2698]: E0904 17:52:22.584396    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.584520 kubelet[2698]: W0904 17:52:22.584403    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.584520 kubelet[2698]: E0904 17:52:22.584495    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.584647 kubelet[2698]: E0904 17:52:22.584634    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.584647 kubelet[2698]: W0904 17:52:22.584644    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.584767 kubelet[2698]: E0904 17:52:22.584739    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:52:22.584883 kubelet[2698]: E0904 17:52:22.584869    2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:52:22.584883 kubelet[2698]: W0904 17:52:22.584879    2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:52:22.584938 kubelet[2698]: E0904 17:52:22.584897    2698 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[…the preceding three kubelet messages (driver-call.go:262, driver-call.go:149, plugins.go:723) repeat verbatim 16 more times between 17:52:22.585 and 17:52:22.595…]
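Editor's note: the three-line pattern above comes from the kubelet's FlexVolume prober. It scans the plugin directory for `vendor~driver` subdirectories and calls `<dir>/<driver> init`, expecting a JSON reply; here `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` is absent, so the call returns empty output and the JSON unmarshal fails. A minimal sketch of that probe logic (the staged directory and variable names are illustrative, not the kubelet source):

```shell
# Stage an empty vendor~driver directory, mirroring the log's missing
# /opt/libexec/.../nodeagent~uds/uds binary.
plugin_root=$(mktemp -d)
mkdir -p "$plugin_root/nodeagent~uds"        # no 'uds' executable inside

status=""
for d in "$plugin_root"/*~*; do
  name=$(basename "$d")                      # e.g. nodeagent~uds
  driver="$d/${name#*~}"                     # driver binary named after the ~ suffix
  if [ -x "$driver" ]; then
    status="ok"
  else
    status="driver missing"                  # corresponds to driver-call.go:149 above
  fi
done
echo "$name: $status"
rm -rf "$plugin_root"
```

Installing an executable `uds` binary at that path (or removing the stale `nodeagent~uds` directory) would stop the probe loop from logging these errors.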
Sep  4 17:52:22.598091 kubelet[2698]: E0904 17:52:22.598071    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
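Editor's note: the recurring `dns.go:153` warning means the host's `resolv.conf` listed more nameservers than the resolver limit (three, per glibc's MAXNS), so the kubelet truncated the list to the three shown. A sketch of the truncation, assuming a hypothetical fourth entry `9.9.9.9` was the one dropped:

```shell
# Build a resolv.conf with four nameservers and apply the three-entry limit,
# reproducing the "applied nameserver line" from the log.
resolv=$(mktemp)
printf 'nameserver %s\n' 1.1.1.1 1.0.0.1 8.8.8.8 9.9.9.9 > "$resolv"
applied=$(awk '/^nameserver/ {print $2}' "$resolv" | head -n 3 | paste -sd' ' -)
echo "applied nameserver line is: $applied"
rm -f "$resolv"
```

Trimming the node's `resolv.conf` to three entries would silence the warning.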
Sep  4 17:52:22.598556 containerd[1583]: time="2024-09-04T17:52:22.598525637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5vc62,Uid:3a2f2db1-2adc-4688-8f47-225429ef8e67,Namespace:calico-system,Attempt:0,}"
Sep  4 17:52:22.608689 containerd[1583]: time="2024-09-04T17:52:22.608586469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:52:22.608689 containerd[1583]: time="2024-09-04T17:52:22.608660970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:52:22.608689 containerd[1583]: time="2024-09-04T17:52:22.608680468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:22.608851 containerd[1583]: time="2024-09-04T17:52:22.608797809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:22.628504 containerd[1583]: time="2024-09-04T17:52:22.628307959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:52:22.628755 containerd[1583]: time="2024-09-04T17:52:22.628505142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:52:22.628755 containerd[1583]: time="2024-09-04T17:52:22.628524588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:22.628755 containerd[1583]: time="2024-09-04T17:52:22.628673079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:22.672414 containerd[1583]: time="2024-09-04T17:52:22.671564034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5vc62,Uid:3a2f2db1-2adc-4688-8f47-225429ef8e67,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f7e76e6b1caed0e8f1b4b22f0d6089a90fdec255d9c1595b70d1fbb768fa02f\""
Sep  4 17:52:22.672678 kubelet[2698]: E0904 17:52:22.672633    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:22.675142 containerd[1583]: time="2024-09-04T17:52:22.675110836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Sep  4 17:52:22.676676 containerd[1583]: time="2024-09-04T17:52:22.676640873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b597897f9-b5dgf,Uid:2d92f6c6-1066-417b-a523-0bc1b62c209b,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6e27d7f3a9e5b01245f91b82418bd3b7ffbd63228d3c7e098b2f48489e717db\""
Sep  4 17:52:22.677454 kubelet[2698]: E0904 17:52:22.677243    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:23.838370 kubelet[2698]: E0904 17:52:23.838324    2698 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-22jch" podUID="520ef3bc-9622-4072-8027-438b0db6b0ef"
Sep  4 17:52:24.234885 containerd[1583]: time="2024-09-04T17:52:24.234831113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:24.235747 containerd[1583]: time="2024-09-04T17:52:24.235685759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007"
Sep  4 17:52:24.236570 containerd[1583]: time="2024-09-04T17:52:24.236516881Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:24.239197 containerd[1583]: time="2024-09-04T17:52:24.239149399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:24.239814 containerd[1583]: time="2024-09-04T17:52:24.239773860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.563584702s"
Sep  4 17:52:24.239879 containerd[1583]: time="2024-09-04T17:52:24.239810749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\""
Sep  4 17:52:24.240858 containerd[1583]: time="2024-09-04T17:52:24.240820559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Sep  4 17:52:24.241951 containerd[1583]: time="2024-09-04T17:52:24.241918685Z" level=info msg="CreateContainer within sandbox \"2f7e76e6b1caed0e8f1b4b22f0d6089a90fdec255d9c1595b70d1fbb768fa02f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep  4 17:52:24.263484 containerd[1583]: time="2024-09-04T17:52:24.263427976Z" level=info msg="CreateContainer within sandbox \"2f7e76e6b1caed0e8f1b4b22f0d6089a90fdec255d9c1595b70d1fbb768fa02f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f21f86c440070b73e59df099fb0fb6ac3476ffdb1365fa049a87c70ec1f15d2c\""
Sep  4 17:52:24.263790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3748082489.mount: Deactivated successfully.
Sep  4 17:52:24.268653 containerd[1583]: time="2024-09-04T17:52:24.266965204Z" level=info msg="StartContainer for \"f21f86c440070b73e59df099fb0fb6ac3476ffdb1365fa049a87c70ec1f15d2c\""
Sep  4 17:52:24.356139 containerd[1583]: time="2024-09-04T17:52:24.356100477Z" level=info msg="StartContainer for \"f21f86c440070b73e59df099fb0fb6ac3476ffdb1365fa049a87c70ec1f15d2c\" returns successfully"
Sep  4 17:52:24.422699 containerd[1583]: time="2024-09-04T17:52:24.421126181Z" level=info msg="shim disconnected" id=f21f86c440070b73e59df099fb0fb6ac3476ffdb1365fa049a87c70ec1f15d2c namespace=k8s.io
Sep  4 17:52:24.422699 containerd[1583]: time="2024-09-04T17:52:24.422680510Z" level=warning msg="cleaning up after shim disconnected" id=f21f86c440070b73e59df099fb0fb6ac3476ffdb1365fa049a87c70ec1f15d2c namespace=k8s.io
Sep  4 17:52:24.422699 containerd[1583]: time="2024-09-04T17:52:24.422698044Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:52:24.465061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f21f86c440070b73e59df099fb0fb6ac3476ffdb1365fa049a87c70ec1f15d2c-rootfs.mount: Deactivated successfully.
Sep  4 17:52:24.892787 kubelet[2698]: E0904 17:52:24.892698    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:25.830162 containerd[1583]: time="2024-09-04T17:52:25.830110717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:25.830940 containerd[1583]: time="2024-09-04T17:52:25.830875052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335"
Sep  4 17:52:25.832186 containerd[1583]: time="2024-09-04T17:52:25.832148208Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:25.834437 containerd[1583]: time="2024-09-04T17:52:25.834391127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:25.835256 containerd[1583]: time="2024-09-04T17:52:25.835226506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 1.594370961s"
Sep  4 17:52:25.835323 containerd[1583]: time="2024-09-04T17:52:25.835258907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\""
Sep  4 17:52:25.836095 containerd[1583]: time="2024-09-04T17:52:25.836067866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\""
Sep  4 17:52:25.837944 kubelet[2698]: E0904 17:52:25.837891    2698 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-22jch" podUID="520ef3bc-9622-4072-8027-438b0db6b0ef"
Sep  4 17:52:25.844384 containerd[1583]: time="2024-09-04T17:52:25.844349728Z" level=info msg="CreateContainer within sandbox \"a6e27d7f3a9e5b01245f91b82418bd3b7ffbd63228d3c7e098b2f48489e717db\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep  4 17:52:25.864066 containerd[1583]: time="2024-09-04T17:52:25.864000127Z" level=info msg="CreateContainer within sandbox \"a6e27d7f3a9e5b01245f91b82418bd3b7ffbd63228d3c7e098b2f48489e717db\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"109b54cae5011abf7d7e3b393ccba858abf4a0d76875dbeb68ae5232b031ed85\""
Sep  4 17:52:25.864705 containerd[1583]: time="2024-09-04T17:52:25.864651158Z" level=info msg="StartContainer for \"109b54cae5011abf7d7e3b393ccba858abf4a0d76875dbeb68ae5232b031ed85\""
Sep  4 17:52:26.073556 containerd[1583]: time="2024-09-04T17:52:26.073511932Z" level=info msg="StartContainer for \"109b54cae5011abf7d7e3b393ccba858abf4a0d76875dbeb68ae5232b031ed85\" returns successfully"
Sep  4 17:52:26.898773 kubelet[2698]: E0904 17:52:26.898738    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:26.908938 kubelet[2698]: I0904 17:52:26.908892    2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6b597897f9-b5dgf" podStartSLOduration=1.7502716 podCreationTimestamp="2024-09-04 17:52:22 +0000 UTC" firstStartedPulling="2024-09-04 17:52:22.677713823 +0000 UTC m=+19.926656372" lastFinishedPulling="2024-09-04 17:52:25.83556766 +0000 UTC m=+23.084510210" observedRunningTime="2024-09-04 17:52:26.907388706 +0000 UTC m=+24.156331256" watchObservedRunningTime="2024-09-04 17:52:26.908125438 +0000 UTC m=+24.157067987"
Sep  4 17:52:27.838563 kubelet[2698]: E0904 17:52:27.838505    2698 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-22jch" podUID="520ef3bc-9622-4072-8027-438b0db6b0ef"
Sep  4 17:52:27.906313 kubelet[2698]: I0904 17:52:27.906286    2698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep  4 17:52:27.906939 kubelet[2698]: E0904 17:52:27.906927    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:28.957010 containerd[1583]: time="2024-09-04T17:52:28.956954173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:28.957669 containerd[1583]: time="2024-09-04T17:52:28.957617284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736"
Sep  4 17:52:28.958731 containerd[1583]: time="2024-09-04T17:52:28.958704707Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:28.967303 containerd[1583]: time="2024-09-04T17:52:28.967254997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:28.967878 containerd[1583]: time="2024-09-04T17:52:28.967836334Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 3.131740045s"
Sep  4 17:52:28.967925 containerd[1583]: time="2024-09-04T17:52:28.967879085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\""
Sep  4 17:52:28.969484 containerd[1583]: time="2024-09-04T17:52:28.969455370Z" level=info msg="CreateContainer within sandbox \"2f7e76e6b1caed0e8f1b4b22f0d6089a90fdec255d9c1595b70d1fbb768fa02f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep  4 17:52:28.985349 containerd[1583]: time="2024-09-04T17:52:28.985297676Z" level=info msg="CreateContainer within sandbox \"2f7e76e6b1caed0e8f1b4b22f0d6089a90fdec255d9c1595b70d1fbb768fa02f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b0599768e5b72cb974b4239d74f6e63906c2584a6beba14dad5b311734a32dad\""
Sep  4 17:52:28.985782 containerd[1583]: time="2024-09-04T17:52:28.985745931Z" level=info msg="StartContainer for \"b0599768e5b72cb974b4239d74f6e63906c2584a6beba14dad5b311734a32dad\""
Sep  4 17:52:29.012038 systemd[1]: run-containerd-runc-k8s.io-b0599768e5b72cb974b4239d74f6e63906c2584a6beba14dad5b311734a32dad-runc.LVFErK.mount: Deactivated successfully.
Sep  4 17:52:29.074488 containerd[1583]: time="2024-09-04T17:52:29.074418325Z" level=info msg="StartContainer for \"b0599768e5b72cb974b4239d74f6e63906c2584a6beba14dad5b311734a32dad\" returns successfully"
Sep  4 17:52:29.838910 kubelet[2698]: E0904 17:52:29.838881    2698 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-22jch" podUID="520ef3bc-9622-4072-8027-438b0db6b0ef"
Sep  4 17:52:29.842686 containerd[1583]: time="2024-09-04T17:52:29.842609466Z" level=info msg="shim disconnected" id=b0599768e5b72cb974b4239d74f6e63906c2584a6beba14dad5b311734a32dad namespace=k8s.io
Sep  4 17:52:29.842686 containerd[1583]: time="2024-09-04T17:52:29.842678456Z" level=warning msg="cleaning up after shim disconnected" id=b0599768e5b72cb974b4239d74f6e63906c2584a6beba14dad5b311734a32dad namespace=k8s.io
Sep  4 17:52:29.842686 containerd[1583]: time="2024-09-04T17:52:29.842686882Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:52:29.902387 kubelet[2698]: I0904 17:52:29.902351    2698 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Sep  4 17:52:29.910794 kubelet[2698]: E0904 17:52:29.910658    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:29.911711 containerd[1583]: time="2024-09-04T17:52:29.911678929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Sep  4 17:52:29.920466 kubelet[2698]: I0904 17:52:29.920424    2698 topology_manager.go:215] "Topology Admit Handler" podUID="a094ec3c-d81c-474b-b6c7-8209bd24a732" podNamespace="kube-system" podName="coredns-5dd5756b68-wckr9"
Sep  4 17:52:29.923072 kubelet[2698]: I0904 17:52:29.921312    2698 topology_manager.go:215] "Topology Admit Handler" podUID="92f82df8-66ef-4892-866f-ff21ef05099e" podNamespace="kube-system" podName="coredns-5dd5756b68-4wmwz"
Sep  4 17:52:29.923072 kubelet[2698]: I0904 17:52:29.921757    2698 topology_manager.go:215] "Topology Admit Handler" podUID="97263c85-7a7c-4dd5-bb45-86c5230fb6f6" podNamespace="calico-system" podName="calico-kube-controllers-5f5c9f4dcd-zx9vp"
Sep  4 17:52:29.983095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0599768e5b72cb974b4239d74f6e63906c2584a6beba14dad5b311734a32dad-rootfs.mount: Deactivated successfully.
Sep  4 17:52:30.035724 kubelet[2698]: I0904 17:52:30.035678    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92f82df8-66ef-4892-866f-ff21ef05099e-config-volume\") pod \"coredns-5dd5756b68-4wmwz\" (UID: \"92f82df8-66ef-4892-866f-ff21ef05099e\") " pod="kube-system/coredns-5dd5756b68-4wmwz"
Sep  4 17:52:30.035724 kubelet[2698]: I0904 17:52:30.035726    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97263c85-7a7c-4dd5-bb45-86c5230fb6f6-tigera-ca-bundle\") pod \"calico-kube-controllers-5f5c9f4dcd-zx9vp\" (UID: \"97263c85-7a7c-4dd5-bb45-86c5230fb6f6\") " pod="calico-system/calico-kube-controllers-5f5c9f4dcd-zx9vp"
Sep  4 17:52:30.035724 kubelet[2698]: I0904 17:52:30.035834    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl77n\" (UniqueName: \"kubernetes.io/projected/92f82df8-66ef-4892-866f-ff21ef05099e-kube-api-access-xl77n\") pod \"coredns-5dd5756b68-4wmwz\" (UID: \"92f82df8-66ef-4892-866f-ff21ef05099e\") " pod="kube-system/coredns-5dd5756b68-4wmwz"
Sep  4 17:52:30.035724 kubelet[2698]: I0904 17:52:30.035872    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a094ec3c-d81c-474b-b6c7-8209bd24a732-config-volume\") pod \"coredns-5dd5756b68-wckr9\" (UID: \"a094ec3c-d81c-474b-b6c7-8209bd24a732\") " pod="kube-system/coredns-5dd5756b68-wckr9"
Sep  4 17:52:30.035724 kubelet[2698]: I0904 17:52:30.035927    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rjnv\" (UniqueName: \"kubernetes.io/projected/97263c85-7a7c-4dd5-bb45-86c5230fb6f6-kube-api-access-9rjnv\") pod \"calico-kube-controllers-5f5c9f4dcd-zx9vp\" (UID: \"97263c85-7a7c-4dd5-bb45-86c5230fb6f6\") " pod="calico-system/calico-kube-controllers-5f5c9f4dcd-zx9vp"
Sep  4 17:52:30.036601 kubelet[2698]: I0904 17:52:30.035958    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsmc7\" (UniqueName: \"kubernetes.io/projected/a094ec3c-d81c-474b-b6c7-8209bd24a732-kube-api-access-zsmc7\") pod \"coredns-5dd5756b68-wckr9\" (UID: \"a094ec3c-d81c-474b-b6c7-8209bd24a732\") " pod="kube-system/coredns-5dd5756b68-wckr9"
Sep  4 17:52:30.228743 containerd[1583]: time="2024-09-04T17:52:30.228706168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f5c9f4dcd-zx9vp,Uid:97263c85-7a7c-4dd5-bb45-86c5230fb6f6,Namespace:calico-system,Attempt:0,}"
Sep  4 17:52:30.233987 kubelet[2698]: E0904 17:52:30.233958    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:30.234438 containerd[1583]: time="2024-09-04T17:52:30.234398546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wckr9,Uid:a094ec3c-d81c-474b-b6c7-8209bd24a732,Namespace:kube-system,Attempt:0,}"
Sep  4 17:52:30.235692 kubelet[2698]: E0904 17:52:30.235657    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:30.236141 containerd[1583]: time="2024-09-04T17:52:30.236025565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4wmwz,Uid:92f82df8-66ef-4892-866f-ff21ef05099e,Namespace:kube-system,Attempt:0,}"
Sep  4 17:52:30.342124 containerd[1583]: time="2024-09-04T17:52:30.342067065Z" level=error msg="Failed to destroy network for sandbox \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.342492 containerd[1583]: time="2024-09-04T17:52:30.342453444Z" level=error msg="encountered an error cleaning up failed sandbox \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.342613 containerd[1583]: time="2024-09-04T17:52:30.342576867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wckr9,Uid:a094ec3c-d81c-474b-b6c7-8209bd24a732,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.347804 containerd[1583]: time="2024-09-04T17:52:30.347671898Z" level=error msg="Failed to destroy network for sandbox \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.348022 containerd[1583]: time="2024-09-04T17:52:30.347989387Z" level=error msg="encountered an error cleaning up failed sandbox \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.348086 containerd[1583]: time="2024-09-04T17:52:30.348028051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4wmwz,Uid:92f82df8-66ef-4892-866f-ff21ef05099e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.348532 containerd[1583]: time="2024-09-04T17:52:30.348491244Z" level=error msg="Failed to destroy network for sandbox \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.348836 containerd[1583]: time="2024-09-04T17:52:30.348810546Z" level=error msg="encountered an error cleaning up failed sandbox \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.348879 containerd[1583]: time="2024-09-04T17:52:30.348842687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f5c9f4dcd-zx9vp,Uid:97263c85-7a7c-4dd5-bb45-86c5230fb6f6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.354848 kubelet[2698]: E0904 17:52:30.354809    2698 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.354970 kubelet[2698]: E0904 17:52:30.354852    2698 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.354970 kubelet[2698]: E0904 17:52:30.354883    2698 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-wckr9"
Sep  4 17:52:30.354970 kubelet[2698]: E0904 17:52:30.354906    2698 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-wckr9"
Sep  4 17:52:30.354970 kubelet[2698]: E0904 17:52:30.354811    2698 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.355095 kubelet[2698]: E0904 17:52:30.354927    2698 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-4wmwz"
Sep  4 17:52:30.355095 kubelet[2698]: E0904 17:52:30.354949    2698 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f5c9f4dcd-zx9vp"
Sep  4 17:52:30.355095 kubelet[2698]: E0904 17:52:30.354955    2698 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-4wmwz"
Sep  4 17:52:30.355169 kubelet[2698]: E0904 17:52:30.354960    2698 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-wckr9_kube-system(a094ec3c-d81c-474b-b6c7-8209bd24a732)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-wckr9_kube-system(a094ec3c-d81c-474b-b6c7-8209bd24a732)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-wckr9" podUID="a094ec3c-d81c-474b-b6c7-8209bd24a732"
Sep  4 17:52:30.355169 kubelet[2698]: E0904 17:52:30.354966    2698 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f5c9f4dcd-zx9vp"
Sep  4 17:52:30.355169 kubelet[2698]: E0904 17:52:30.355008    2698 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f5c9f4dcd-zx9vp_calico-system(97263c85-7a7c-4dd5-bb45-86c5230fb6f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f5c9f4dcd-zx9vp_calico-system(97263c85-7a7c-4dd5-bb45-86c5230fb6f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f5c9f4dcd-zx9vp" podUID="97263c85-7a7c-4dd5-bb45-86c5230fb6f6"
Sep  4 17:52:30.355309 kubelet[2698]: E0904 17:52:30.355023    2698 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-4wmwz_kube-system(92f82df8-66ef-4892-866f-ff21ef05099e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-4wmwz_kube-system(92f82df8-66ef-4892-866f-ff21ef05099e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-4wmwz" podUID="92f82df8-66ef-4892-866f-ff21ef05099e"
Sep  4 17:52:30.913243 kubelet[2698]: I0904 17:52:30.913213    2698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"
Sep  4 17:52:30.913862 containerd[1583]: time="2024-09-04T17:52:30.913818435Z" level=info msg="StopPodSandbox for \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\""
Sep  4 17:52:30.914506 kubelet[2698]: I0904 17:52:30.914476    2698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"
Sep  4 17:52:30.914987 containerd[1583]: time="2024-09-04T17:52:30.914958375Z" level=info msg="StopPodSandbox for \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\""
Sep  4 17:52:30.915829 kubelet[2698]: I0904 17:52:30.915784    2698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"
Sep  4 17:52:30.916349 containerd[1583]: time="2024-09-04T17:52:30.916324863Z" level=info msg="StopPodSandbox for \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\""
Sep  4 17:52:30.917417 containerd[1583]: time="2024-09-04T17:52:30.917357651Z" level=info msg="Ensure that sandbox 6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701 in task-service has been cleanup successfully"
Sep  4 17:52:30.917417 containerd[1583]: time="2024-09-04T17:52:30.917381155Z" level=info msg="Ensure that sandbox 431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a in task-service has been cleanup successfully"
Sep  4 17:52:30.917585 containerd[1583]: time="2024-09-04T17:52:30.917359945Z" level=info msg="Ensure that sandbox 8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c in task-service has been cleanup successfully"
Sep  4 17:52:30.947928 containerd[1583]: time="2024-09-04T17:52:30.947871601Z" level=error msg="StopPodSandbox for \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\" failed" error="failed to destroy network for sandbox \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.948165 kubelet[2698]: E0904 17:52:30.948145    2698 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"
Sep  4 17:52:30.948242 kubelet[2698]: E0904 17:52:30.948203    2698 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"}
Sep  4 17:52:30.948242 kubelet[2698]: E0904 17:52:30.948233    2698 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92f82df8-66ef-4892-866f-ff21ef05099e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep  4 17:52:30.948327 kubelet[2698]: E0904 17:52:30.948259    2698 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92f82df8-66ef-4892-866f-ff21ef05099e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-4wmwz" podUID="92f82df8-66ef-4892-866f-ff21ef05099e"
Sep  4 17:52:30.948939 containerd[1583]: time="2024-09-04T17:52:30.948887197Z" level=error msg="StopPodSandbox for \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\" failed" error="failed to destroy network for sandbox \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.949122 kubelet[2698]: E0904 17:52:30.949100    2698 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"
Sep  4 17:52:30.949122 kubelet[2698]: E0904 17:52:30.949123    2698 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"}
Sep  4 17:52:30.949190 kubelet[2698]: E0904 17:52:30.949147    2698 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a094ec3c-d81c-474b-b6c7-8209bd24a732\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep  4 17:52:30.949190 kubelet[2698]: E0904 17:52:30.949168    2698 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a094ec3c-d81c-474b-b6c7-8209bd24a732\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-wckr9" podUID="a094ec3c-d81c-474b-b6c7-8209bd24a732"
Sep  4 17:52:30.949990 containerd[1583]: time="2024-09-04T17:52:30.949958106Z" level=error msg="StopPodSandbox for \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\" failed" error="failed to destroy network for sandbox \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:30.950260 kubelet[2698]: E0904 17:52:30.950222    2698 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"
Sep  4 17:52:30.950260 kubelet[2698]: E0904 17:52:30.950274    2698 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"}
Sep  4 17:52:30.950457 kubelet[2698]: E0904 17:52:30.950317    2698 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"97263c85-7a7c-4dd5-bb45-86c5230fb6f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep  4 17:52:30.950457 kubelet[2698]: E0904 17:52:30.950346    2698 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"97263c85-7a7c-4dd5-bb45-86c5230fb6f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f5c9f4dcd-zx9vp" podUID="97263c85-7a7c-4dd5-bb45-86c5230fb6f6"
Sep  4 17:52:30.983479 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701-shm.mount: Deactivated successfully.
Sep  4 17:52:31.841041 containerd[1583]: time="2024-09-04T17:52:31.840989655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-22jch,Uid:520ef3bc-9622-4072-8027-438b0db6b0ef,Namespace:calico-system,Attempt:0,}"
Sep  4 17:52:31.948361 containerd[1583]: time="2024-09-04T17:52:31.948283735Z" level=error msg="Failed to destroy network for sandbox \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:31.948987 containerd[1583]: time="2024-09-04T17:52:31.948780381Z" level=error msg="encountered an error cleaning up failed sandbox \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:31.948987 containerd[1583]: time="2024-09-04T17:52:31.948830426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-22jch,Uid:520ef3bc-9622-4072-8027-438b0db6b0ef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:31.949206 kubelet[2698]: E0904 17:52:31.949126    2698 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:31.949206 kubelet[2698]: E0904 17:52:31.949182    2698 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-22jch"
Sep  4 17:52:31.949206 kubelet[2698]: E0904 17:52:31.949201    2698 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-22jch"
Sep  4 17:52:31.950359 kubelet[2698]: E0904 17:52:31.949264    2698 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-22jch_calico-system(520ef3bc-9622-4072-8027-438b0db6b0ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-22jch_calico-system(520ef3bc-9622-4072-8027-438b0db6b0ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-22jch" podUID="520ef3bc-9622-4072-8027-438b0db6b0ef"
Sep  4 17:52:31.953464 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f-shm.mount: Deactivated successfully.
Sep  4 17:52:32.923826 kubelet[2698]: I0904 17:52:32.923793    2698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"
Sep  4 17:52:32.924424 containerd[1583]: time="2024-09-04T17:52:32.924392780Z" level=info msg="StopPodSandbox for \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\""
Sep  4 17:52:32.924794 containerd[1583]: time="2024-09-04T17:52:32.924613697Z" level=info msg="Ensure that sandbox f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f in task-service has been cleanup successfully"
Sep  4 17:52:32.954681 containerd[1583]: time="2024-09-04T17:52:32.954582133Z" level=error msg="StopPodSandbox for \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\" failed" error="failed to destroy network for sandbox \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:52:32.954869 kubelet[2698]: E0904 17:52:32.954844    2698 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"
Sep  4 17:52:32.955328 kubelet[2698]: E0904 17:52:32.954901    2698 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"}
Sep  4 17:52:32.955328 kubelet[2698]: E0904 17:52:32.955215    2698 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"520ef3bc-9622-4072-8027-438b0db6b0ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep  4 17:52:32.955328 kubelet[2698]: E0904 17:52:32.955263    2698 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"520ef3bc-9622-4072-8027-438b0db6b0ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-22jch" podUID="520ef3bc-9622-4072-8027-438b0db6b0ef"
Sep  4 17:52:33.503211 kubelet[2698]: I0904 17:52:33.503165    2698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep  4 17:52:33.504019 kubelet[2698]: E0904 17:52:33.503774    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:33.549345 systemd[1]: Started sshd@7-10.0.0.147:22-10.0.0.1:51936.service - OpenSSH per-connection server daemon (10.0.0.1:51936).
Sep  4 17:52:33.582783 sshd[3694]: Accepted publickey for core from 10.0.0.1 port 51936 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:52:33.584803 sshd[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:52:33.590716 systemd-logind[1557]: New session 8 of user core.
Sep  4 17:52:33.598384 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep  4 17:52:33.680461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130335868.mount: Deactivated successfully.
Sep  4 17:52:33.926122 kubelet[2698]: E0904 17:52:33.925996    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:34.338867 containerd[1583]: time="2024-09-04T17:52:34.338785775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:34.339590 containerd[1583]: time="2024-09-04T17:52:34.339544144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564"
Sep  4 17:52:34.340059 sshd[3694]: pam_unix(sshd:session): session closed for user core
Sep  4 17:52:34.341301 containerd[1583]: time="2024-09-04T17:52:34.341250369Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:34.343168 containerd[1583]: time="2024-09-04T17:52:34.343123438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:34.344040 containerd[1583]: time="2024-09-04T17:52:34.344009959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 4.432287779s"
Sep  4 17:52:34.344098 containerd[1583]: time="2024-09-04T17:52:34.344079810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\""
Sep  4 17:52:34.344239 systemd[1]: sshd@7-10.0.0.147:22-10.0.0.1:51936.service: Deactivated successfully.
Sep  4 17:52:34.352284 containerd[1583]: time="2024-09-04T17:52:34.352250826Z" level=info msg="CreateContainer within sandbox \"2f7e76e6b1caed0e8f1b4b22f0d6089a90fdec255d9c1595b70d1fbb768fa02f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep  4 17:52:34.355216 systemd[1]: session-8.scope: Deactivated successfully.
Sep  4 17:52:34.355912 systemd-logind[1557]: Session 8 logged out. Waiting for processes to exit.
Sep  4 17:52:34.357025 systemd-logind[1557]: Removed session 8.
Sep  4 17:52:34.370937 containerd[1583]: time="2024-09-04T17:52:34.370894554Z" level=info msg="CreateContainer within sandbox \"2f7e76e6b1caed0e8f1b4b22f0d6089a90fdec255d9c1595b70d1fbb768fa02f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4bae9f6e825d87b84cbc6940e7662389054016301ddaf5f921ff5c64656de4af\""
Sep  4 17:52:34.371517 containerd[1583]: time="2024-09-04T17:52:34.371309095Z" level=info msg="StartContainer for \"4bae9f6e825d87b84cbc6940e7662389054016301ddaf5f921ff5c64656de4af\""
Sep  4 17:52:34.461981 containerd[1583]: time="2024-09-04T17:52:34.461936291Z" level=info msg="StartContainer for \"4bae9f6e825d87b84cbc6940e7662389054016301ddaf5f921ff5c64656de4af\" returns successfully"
Sep  4 17:52:34.526496 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep  4 17:52:34.526608 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Sep  4 17:52:34.929078 kubelet[2698]: E0904 17:52:34.929034    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:34.937346 kubelet[2698]: I0904 17:52:34.937324    2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-5vc62" podStartSLOduration=1.265772016 podCreationTimestamp="2024-09-04 17:52:22 +0000 UTC" firstStartedPulling="2024-09-04 17:52:22.673212564 +0000 UTC m=+19.922155113" lastFinishedPulling="2024-09-04 17:52:34.344736076 +0000 UTC m=+31.593678615" observedRunningTime="2024-09-04 17:52:34.936933877 +0000 UTC m=+32.185876436" watchObservedRunningTime="2024-09-04 17:52:34.937295518 +0000 UTC m=+32.186238067"
Sep  4 17:52:35.886258 kernel: bpftool[3921]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Sep  4 17:52:35.929705 kubelet[2698]: I0904 17:52:35.929666    2698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep  4 17:52:35.930402 kubelet[2698]: E0904 17:52:35.930380    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:36.131775 systemd-networkd[1243]: vxlan.calico: Link UP
Sep  4 17:52:36.131782 systemd-networkd[1243]: vxlan.calico: Gained carrier
Sep  4 17:52:37.936223 systemd-networkd[1243]: vxlan.calico: Gained IPv6LL
Sep  4 17:52:39.350267 systemd[1]: Started sshd@8-10.0.0.147:22-10.0.0.1:35376.service - OpenSSH per-connection server daemon (10.0.0.1:35376).
Sep  4 17:52:39.379342 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 35376 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:52:39.380983 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:52:39.384744 systemd-logind[1557]: New session 9 of user core.
Sep  4 17:52:39.392280 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep  4 17:52:39.777676 sshd[3996]: pam_unix(sshd:session): session closed for user core
Sep  4 17:52:39.781317 systemd[1]: sshd@8-10.0.0.147:22-10.0.0.1:35376.service: Deactivated successfully.
Sep  4 17:52:39.783784 systemd[1]: session-9.scope: Deactivated successfully.
Sep  4 17:52:39.783832 systemd-logind[1557]: Session 9 logged out. Waiting for processes to exit.
Sep  4 17:52:39.785024 systemd-logind[1557]: Removed session 9.
Sep  4 17:52:40.639927 kubelet[2698]: I0904 17:52:40.639853    2698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep  4 17:52:40.641268 kubelet[2698]: E0904 17:52:40.641243    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:40.938867 kubelet[2698]: E0904 17:52:40.938760    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:41.839703 containerd[1583]: time="2024-09-04T17:52:41.839479292Z" level=info msg="StopPodSandbox for \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\""
Sep  4 17:52:41.839703 containerd[1583]: time="2024-09-04T17:52:41.839544816Z" level=info msg="StopPodSandbox for \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\""
Sep  4 17:52:42.004279 containerd[1583]: 2024-09-04 17:52:41.885 [INFO][4103] k8s.go 608: Cleaning up netns ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"
Sep  4 17:52:42.004279 containerd[1583]: 2024-09-04 17:52:41.887 [INFO][4103] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" iface="eth0" netns="/var/run/netns/cni-e218188d-b1da-d6a7-8bc2-2545d605cee3"
Sep  4 17:52:42.004279 containerd[1583]: 2024-09-04 17:52:41.888 [INFO][4103] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" iface="eth0" netns="/var/run/netns/cni-e218188d-b1da-d6a7-8bc2-2545d605cee3"
Sep  4 17:52:42.004279 containerd[1583]: 2024-09-04 17:52:41.888 [INFO][4103] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" iface="eth0" netns="/var/run/netns/cni-e218188d-b1da-d6a7-8bc2-2545d605cee3"
Sep  4 17:52:42.004279 containerd[1583]: 2024-09-04 17:52:41.888 [INFO][4103] k8s.go 615: Releasing IP address(es) ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"
Sep  4 17:52:42.004279 containerd[1583]: 2024-09-04 17:52:41.888 [INFO][4103] utils.go 188: Calico CNI releasing IP address ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"
Sep  4 17:52:42.004279 containerd[1583]: 2024-09-04 17:52:41.958 [INFO][4112] ipam_plugin.go 417: Releasing address using handleID ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" HandleID="k8s-pod-network.8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" Workload="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:52:42.004279 containerd[1583]: 2024-09-04 17:52:41.959 [INFO][4112] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:52:42.004279 containerd[1583]: 2024-09-04 17:52:41.959 [INFO][4112] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:52:42.004279 containerd[1583]: 2024-09-04 17:52:41.966 [WARNING][4112] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" HandleID="k8s-pod-network.8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" Workload="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:52:42.004279 containerd[1583]: 2024-09-04 17:52:41.966 [INFO][4112] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" HandleID="k8s-pod-network.8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" Workload="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:52:42.004279 containerd[1583]: 2024-09-04 17:52:41.967 [INFO][4112] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:52:42.004279 containerd[1583]: 2024-09-04 17:52:41.986 [INFO][4103] k8s.go 621: Teardown processing complete. ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"
Sep  4 17:52:42.007136 containerd[1583]: 2024-09-04 17:52:41.887 [INFO][4093] k8s.go 608: Cleaning up netns ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"
Sep  4 17:52:42.007136 containerd[1583]: 2024-09-04 17:52:41.887 [INFO][4093] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" iface="eth0" netns="/var/run/netns/cni-11fb7368-27ed-8d8f-0427-65c1b0d83f43"
Sep  4 17:52:42.007136 containerd[1583]: 2024-09-04 17:52:41.888 [INFO][4093] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" iface="eth0" netns="/var/run/netns/cni-11fb7368-27ed-8d8f-0427-65c1b0d83f43"
Sep  4 17:52:42.007136 containerd[1583]: 2024-09-04 17:52:41.888 [INFO][4093] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" iface="eth0" netns="/var/run/netns/cni-11fb7368-27ed-8d8f-0427-65c1b0d83f43"
Sep  4 17:52:42.007136 containerd[1583]: 2024-09-04 17:52:41.888 [INFO][4093] k8s.go 615: Releasing IP address(es) ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"
Sep  4 17:52:42.007136 containerd[1583]: 2024-09-04 17:52:41.888 [INFO][4093] utils.go 188: Calico CNI releasing IP address ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"
Sep  4 17:52:42.007136 containerd[1583]: 2024-09-04 17:52:41.963 [INFO][4113] ipam_plugin.go 417: Releasing address using handleID ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" HandleID="k8s-pod-network.6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" Workload="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:52:42.007136 containerd[1583]: 2024-09-04 17:52:41.963 [INFO][4113] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:52:42.007136 containerd[1583]: 2024-09-04 17:52:41.967 [INFO][4113] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:52:42.007136 containerd[1583]: 2024-09-04 17:52:41.975 [WARNING][4113] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" HandleID="k8s-pod-network.6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" Workload="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:52:42.007136 containerd[1583]: 2024-09-04 17:52:41.975 [INFO][4113] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" HandleID="k8s-pod-network.6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" Workload="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:52:42.007136 containerd[1583]: 2024-09-04 17:52:41.981 [INFO][4113] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:52:42.007136 containerd[1583]: 2024-09-04 17:52:41.995 [INFO][4093] k8s.go 621: Teardown processing complete. ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"
Sep  4 17:52:42.009480 systemd[1]: run-netns-cni\x2d11fb7368\x2d27ed\x2d8d8f\x2d0427\x2d65c1b0d83f43.mount: Deactivated successfully.
Sep  4 17:52:42.012830 containerd[1583]: time="2024-09-04T17:52:42.010600542Z" level=info msg="TearDown network for sandbox \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\" successfully"
Sep  4 17:52:42.012830 containerd[1583]: time="2024-09-04T17:52:42.010668349Z" level=info msg="StopPodSandbox for \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\" returns successfully"
Sep  4 17:52:42.012830 containerd[1583]: time="2024-09-04T17:52:42.010625809Z" level=info msg="TearDown network for sandbox \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\" successfully"
Sep  4 17:52:42.012830 containerd[1583]: time="2024-09-04T17:52:42.010717823Z" level=info msg="StopPodSandbox for \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\" returns successfully"
Sep  4 17:52:42.012999 kubelet[2698]: E0904 17:52:42.011293    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:42.016405 containerd[1583]: time="2024-09-04T17:52:42.013582381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wckr9,Uid:a094ec3c-d81c-474b-b6c7-8209bd24a732,Namespace:kube-system,Attempt:1,}"
Sep  4 17:52:42.016405 containerd[1583]: time="2024-09-04T17:52:42.014036315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f5c9f4dcd-zx9vp,Uid:97263c85-7a7c-4dd5-bb45-86c5230fb6f6,Namespace:calico-system,Attempt:1,}"
Sep  4 17:52:42.018313 systemd[1]: run-netns-cni\x2de218188d\x2db1da\x2dd6a7\x2d8bc2\x2d2545d605cee3.mount: Deactivated successfully.
Sep  4 17:52:42.374507 systemd-networkd[1243]: calib13a4ec6a6a: Link UP
Sep  4 17:52:42.375416 systemd-networkd[1243]: calib13a4ec6a6a: Gained carrier
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.313 [INFO][4138] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0 calico-kube-controllers-5f5c9f4dcd- calico-system  97263c85-7a7c-4dd5-bb45-86c5230fb6f6 812 0 2024-09-04 17:52:22 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f5c9f4dcd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s  localhost  calico-kube-controllers-5f5c9f4dcd-zx9vp eth0 calico-kube-controllers [] []   [kns.calico-system ksa.calico-system.calico-kube-controllers] calib13a4ec6a6a  [] []}} ContainerID="e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" Namespace="calico-system" Pod="calico-kube-controllers-5f5c9f4dcd-zx9vp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-"
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.313 [INFO][4138] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" Namespace="calico-system" Pod="calico-kube-controllers-5f5c9f4dcd-zx9vp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.338 [INFO][4157] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" HandleID="k8s-pod-network.e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" Workload="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.345 [INFO][4157] ipam_plugin.go 270: Auto assigning IP ContainerID="e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" HandleID="k8s-pod-network.e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" Workload="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003662b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f5c9f4dcd-zx9vp", "timestamp":"2024-09-04 17:52:42.338392721 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.345 [INFO][4157] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.345 [INFO][4157] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.345 [INFO][4157] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.347 [INFO][4157] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" host="localhost"
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.353 [INFO][4157] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.356 [INFO][4157] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.357 [INFO][4157] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.359 [INFO][4157] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.359 [INFO][4157] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" host="localhost"
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.360 [INFO][4157] ipam.go 1685: Creating new handle: k8s-pod-network.e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.362 [INFO][4157] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" host="localhost"
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.366 [INFO][4157] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" host="localhost"
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.366 [INFO][4157] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" host="localhost"
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.366 [INFO][4157] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:52:42.383947 containerd[1583]: 2024-09-04 17:52:42.366 [INFO][4157] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" HandleID="k8s-pod-network.e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" Workload="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:52:42.384666 containerd[1583]: 2024-09-04 17:52:42.371 [INFO][4138] k8s.go 386: Populated endpoint ContainerID="e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" Namespace="calico-system" Pod="calico-kube-controllers-5f5c9f4dcd-zx9vp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0", GenerateName:"calico-kube-controllers-5f5c9f4dcd-", Namespace:"calico-system", SelfLink:"", UID:"97263c85-7a7c-4dd5-bb45-86c5230fb6f6", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f5c9f4dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f5c9f4dcd-zx9vp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib13a4ec6a6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:52:42.384666 containerd[1583]: 2024-09-04 17:52:42.371 [INFO][4138] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" Namespace="calico-system" Pod="calico-kube-controllers-5f5c9f4dcd-zx9vp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:52:42.384666 containerd[1583]: 2024-09-04 17:52:42.371 [INFO][4138] dataplane_linux.go 68: Setting the host side veth name to calib13a4ec6a6a ContainerID="e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" Namespace="calico-system" Pod="calico-kube-controllers-5f5c9f4dcd-zx9vp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:52:42.384666 containerd[1583]: 2024-09-04 17:52:42.373 [INFO][4138] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" Namespace="calico-system" Pod="calico-kube-controllers-5f5c9f4dcd-zx9vp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:52:42.384666 containerd[1583]: 2024-09-04 17:52:42.373 [INFO][4138] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" Namespace="calico-system" Pod="calico-kube-controllers-5f5c9f4dcd-zx9vp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0", GenerateName:"calico-kube-controllers-5f5c9f4dcd-", Namespace:"calico-system", SelfLink:"", UID:"97263c85-7a7c-4dd5-bb45-86c5230fb6f6", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f5c9f4dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8", Pod:"calico-kube-controllers-5f5c9f4dcd-zx9vp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib13a4ec6a6a", MAC:"22:79:4b:c2:c4:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:52:42.384666 containerd[1583]: 2024-09-04 17:52:42.380 [INFO][4138] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8" Namespace="calico-system" Pod="calico-kube-controllers-5f5c9f4dcd-zx9vp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:52:42.397932 systemd-networkd[1243]: cali1d30a9aecf1: Link UP
Sep  4 17:52:42.398695 systemd-networkd[1243]: cali1d30a9aecf1: Gained carrier
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.310 [INFO][4127] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--wckr9-eth0 coredns-5dd5756b68- kube-system  a094ec3c-d81c-474b-b6c7-8209bd24a732 811 0 2024-09-04 17:52:17 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  localhost  coredns-5dd5756b68-wckr9 eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali1d30a9aecf1  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" Namespace="kube-system" Pod="coredns-5dd5756b68-wckr9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wckr9-"
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.310 [INFO][4127] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" Namespace="kube-system" Pod="coredns-5dd5756b68-wckr9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.339 [INFO][4156] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" HandleID="k8s-pod-network.b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" Workload="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.347 [INFO][4156] ipam_plugin.go 270: Auto assigning IP ContainerID="b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" HandleID="k8s-pod-network.b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" Workload="localhost-k8s-coredns--5dd5756b68--wckr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ffe10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-wckr9", "timestamp":"2024-09-04 17:52:42.339662711 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.348 [INFO][4156] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.366 [INFO][4156] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.366 [INFO][4156] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.367 [INFO][4156] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" host="localhost"
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.370 [INFO][4156] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.374 [INFO][4156] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.376 [INFO][4156] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.379 [INFO][4156] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.379 [INFO][4156] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" host="localhost"
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.381 [INFO][4156] ipam.go 1685: Creating new handle: k8s-pod-network.b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.384 [INFO][4156] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" host="localhost"
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.389 [INFO][4156] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" host="localhost"
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.390 [INFO][4156] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" host="localhost"
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.390 [INFO][4156] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:52:42.410199 containerd[1583]: 2024-09-04 17:52:42.390 [INFO][4156] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" HandleID="k8s-pod-network.b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" Workload="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:52:42.411192 containerd[1583]: 2024-09-04 17:52:42.395 [INFO][4127] k8s.go 386: Populated endpoint ContainerID="b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" Namespace="kube-system" Pod="coredns-5dd5756b68-wckr9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wckr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--wckr9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a094ec3c-d81c-474b-b6c7-8209bd24a732", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-wckr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d30a9aecf1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:52:42.411192 containerd[1583]: 2024-09-04 17:52:42.395 [INFO][4127] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" Namespace="kube-system" Pod="coredns-5dd5756b68-wckr9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:52:42.411192 containerd[1583]: 2024-09-04 17:52:42.395 [INFO][4127] dataplane_linux.go 68: Setting the host side veth name to cali1d30a9aecf1 ContainerID="b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" Namespace="kube-system" Pod="coredns-5dd5756b68-wckr9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:52:42.411192 containerd[1583]: 2024-09-04 17:52:42.398 [INFO][4127] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" Namespace="kube-system" Pod="coredns-5dd5756b68-wckr9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:52:42.411192 containerd[1583]: 2024-09-04 17:52:42.399 [INFO][4127] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" Namespace="kube-system" Pod="coredns-5dd5756b68-wckr9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wckr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--wckr9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a094ec3c-d81c-474b-b6c7-8209bd24a732", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968", Pod:"coredns-5dd5756b68-wckr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d30a9aecf1", MAC:"26:cc:6e:ec:70:48", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:52:42.411467 containerd[1583]: 2024-09-04 17:52:42.405 [INFO][4127] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968" Namespace="kube-system" Pod="coredns-5dd5756b68-wckr9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:52:42.419134 containerd[1583]: time="2024-09-04T17:52:42.418958391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:52:42.419134 containerd[1583]: time="2024-09-04T17:52:42.419006912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:52:42.419134 containerd[1583]: time="2024-09-04T17:52:42.419023093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:42.420072 containerd[1583]: time="2024-09-04T17:52:42.419210786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:42.434512 containerd[1583]: time="2024-09-04T17:52:42.434402214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:52:42.434641 containerd[1583]: time="2024-09-04T17:52:42.434494537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:52:42.434641 containerd[1583]: time="2024-09-04T17:52:42.434507662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:42.434735 containerd[1583]: time="2024-09-04T17:52:42.434644299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:42.444312 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep  4 17:52:42.454277 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep  4 17:52:42.472455 containerd[1583]: time="2024-09-04T17:52:42.472417498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f5c9f4dcd-zx9vp,Uid:97263c85-7a7c-4dd5-bb45-86c5230fb6f6,Namespace:calico-system,Attempt:1,} returns sandbox id \"e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8\""
Sep  4 17:52:42.475014 containerd[1583]: time="2024-09-04T17:52:42.474890691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\""
Sep  4 17:52:42.480542 containerd[1583]: time="2024-09-04T17:52:42.480511684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wckr9,Uid:a094ec3c-d81c-474b-b6c7-8209bd24a732,Namespace:kube-system,Attempt:1,} returns sandbox id \"b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968\""
Sep  4 17:52:42.481154 kubelet[2698]: E0904 17:52:42.481137    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:42.483307 containerd[1583]: time="2024-09-04T17:52:42.483261327Z" level=info msg="CreateContainer within sandbox \"b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep  4 17:52:42.500486 containerd[1583]: time="2024-09-04T17:52:42.500431136Z" level=info msg="CreateContainer within sandbox \"b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d6c48d440cf17814f30b2d18acfb449436d82bbb058e7baa25c1a59d82e4d60a\""
Sep  4 17:52:42.501284 containerd[1583]: time="2024-09-04T17:52:42.501236071Z" level=info msg="StartContainer for \"d6c48d440cf17814f30b2d18acfb449436d82bbb058e7baa25c1a59d82e4d60a\""
Sep  4 17:52:42.555355 containerd[1583]: time="2024-09-04T17:52:42.555309553Z" level=info msg="StartContainer for \"d6c48d440cf17814f30b2d18acfb449436d82bbb058e7baa25c1a59d82e4d60a\" returns successfully"
Sep  4 17:52:42.945114 kubelet[2698]: E0904 17:52:42.945084    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:42.953261 kubelet[2698]: I0904 17:52:42.953223    2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wckr9" podStartSLOduration=25.953162891 podCreationTimestamp="2024-09-04 17:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:52:42.95273135 +0000 UTC m=+40.201673899" watchObservedRunningTime="2024-09-04 17:52:42.953162891 +0000 UTC m=+40.202105440"
Sep  4 17:52:43.950545 kubelet[2698]: E0904 17:52:43.950514    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:44.177265 containerd[1583]: time="2024-09-04T17:52:44.177204368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:44.178202 containerd[1583]: time="2024-09-04T17:52:44.178152591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125"
Sep  4 17:52:44.179715 containerd[1583]: time="2024-09-04T17:52:44.179682007Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:44.182552 containerd[1583]: time="2024-09-04T17:52:44.182522228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:44.183139 containerd[1583]: time="2024-09-04T17:52:44.183091870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 1.708172866s"
Sep  4 17:52:44.183176 containerd[1583]: time="2024-09-04T17:52:44.183137887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\""
Sep  4 17:52:44.190475 containerd[1583]: time="2024-09-04T17:52:44.190432956Z" level=info msg="CreateContainer within sandbox \"e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep  4 17:52:44.204459 containerd[1583]: time="2024-09-04T17:52:44.204365699Z" level=info msg="CreateContainer within sandbox \"e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"939d91d18f05ce55fbe15909e484b2b1e0484b61951eb09265db0b0f93c58991\""
Sep  4 17:52:44.204968 containerd[1583]: time="2024-09-04T17:52:44.204870087Z" level=info msg="StartContainer for \"939d91d18f05ce55fbe15909e484b2b1e0484b61951eb09265db0b0f93c58991\""
Sep  4 17:52:44.209249 systemd-networkd[1243]: calib13a4ec6a6a: Gained IPv6LL
Sep  4 17:52:44.336363 systemd-networkd[1243]: cali1d30a9aecf1: Gained IPv6LL
Sep  4 17:52:44.464792 containerd[1583]: time="2024-09-04T17:52:44.464662588Z" level=info msg="StartContainer for \"939d91d18f05ce55fbe15909e484b2b1e0484b61951eb09265db0b0f93c58991\" returns successfully"
Sep  4 17:52:44.790304 systemd[1]: Started sshd@9-10.0.0.147:22-10.0.0.1:35378.service - OpenSSH per-connection server daemon (10.0.0.1:35378).
Sep  4 17:52:44.821898 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 35378 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:52:44.823704 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:52:44.827659 systemd-logind[1557]: New session 10 of user core.
Sep  4 17:52:44.834308 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep  4 17:52:44.841603 containerd[1583]: time="2024-09-04T17:52:44.841287421Z" level=info msg="StopPodSandbox for \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\""
Sep  4 17:52:44.841603 containerd[1583]: time="2024-09-04T17:52:44.841413407Z" level=info msg="StopPodSandbox for \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\""
Sep  4 17:52:44.953269 kubelet[2698]: E0904 17:52:44.953201    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:44.994924 sshd[4367]: pam_unix(sshd:session): session closed for user core
Sep  4 17:52:45.002326 systemd[1]: Started sshd@10-10.0.0.147:22-10.0.0.1:35388.service - OpenSSH per-connection server daemon (10.0.0.1:35388).
Sep  4 17:52:45.002811 systemd[1]: sshd@9-10.0.0.147:22-10.0.0.1:35378.service: Deactivated successfully.
Sep  4 17:52:45.006571 systemd[1]: session-10.scope: Deactivated successfully.
Sep  4 17:52:45.012475 systemd-logind[1557]: Session 10 logged out. Waiting for processes to exit.
Sep  4 17:52:45.014792 systemd-logind[1557]: Removed session 10.
Sep  4 17:52:45.018603 kubelet[2698]: I0904 17:52:45.017611    2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f5c9f4dcd-zx9vp" podStartSLOduration=21.308642527 podCreationTimestamp="2024-09-04 17:52:22 +0000 UTC" firstStartedPulling="2024-09-04 17:52:42.474502059 +0000 UTC m=+39.723444608" lastFinishedPulling="2024-09-04 17:52:44.183428012 +0000 UTC m=+41.432370561" observedRunningTime="2024-09-04 17:52:45.009473639 +0000 UTC m=+42.258416198" watchObservedRunningTime="2024-09-04 17:52:45.01756848 +0000 UTC m=+42.266511019"
Sep  4 17:52:45.034566 sshd[4429]: Accepted publickey for core from 10.0.0.1 port 35388 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:52:45.036434 sshd[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:52:45.040822 systemd-logind[1557]: New session 11 of user core.
Sep  4 17:52:45.047622 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep  4 17:52:45.054422 containerd[1583]: 2024-09-04 17:52:45.015 [INFO][4404] k8s.go 608: Cleaning up netns ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"
Sep  4 17:52:45.054422 containerd[1583]: 2024-09-04 17:52:45.016 [INFO][4404] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" iface="eth0" netns="/var/run/netns/cni-2ce52a38-7d0b-02fb-da39-ef1a8330f439"
Sep  4 17:52:45.054422 containerd[1583]: 2024-09-04 17:52:45.017 [INFO][4404] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" iface="eth0" netns="/var/run/netns/cni-2ce52a38-7d0b-02fb-da39-ef1a8330f439"
Sep  4 17:52:45.054422 containerd[1583]: 2024-09-04 17:52:45.017 [INFO][4404] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" iface="eth0" netns="/var/run/netns/cni-2ce52a38-7d0b-02fb-da39-ef1a8330f439"
Sep  4 17:52:45.054422 containerd[1583]: 2024-09-04 17:52:45.017 [INFO][4404] k8s.go 615: Releasing IP address(es) ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"
Sep  4 17:52:45.054422 containerd[1583]: 2024-09-04 17:52:45.018 [INFO][4404] utils.go 188: Calico CNI releasing IP address ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"
Sep  4 17:52:45.054422 containerd[1583]: 2024-09-04 17:52:45.042 [INFO][4434] ipam_plugin.go 417: Releasing address using handleID ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" HandleID="k8s-pod-network.431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" Workload="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:52:45.054422 containerd[1583]: 2024-09-04 17:52:45.042 [INFO][4434] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:52:45.054422 containerd[1583]: 2024-09-04 17:52:45.042 [INFO][4434] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:52:45.054422 containerd[1583]: 2024-09-04 17:52:45.047 [WARNING][4434] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" HandleID="k8s-pod-network.431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" Workload="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:52:45.054422 containerd[1583]: 2024-09-04 17:52:45.047 [INFO][4434] ipam_plugin.go 445: Releasing address using workloadID ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" HandleID="k8s-pod-network.431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" Workload="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:52:45.054422 containerd[1583]: 2024-09-04 17:52:45.048 [INFO][4434] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:52:45.054422 containerd[1583]: 2024-09-04 17:52:45.051 [INFO][4404] k8s.go 621: Teardown processing complete. ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"
Sep  4 17:52:45.054803 containerd[1583]: time="2024-09-04T17:52:45.054614813Z" level=info msg="TearDown network for sandbox \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\" successfully"
Sep  4 17:52:45.054803 containerd[1583]: time="2024-09-04T17:52:45.054654627Z" level=info msg="StopPodSandbox for \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\" returns successfully"
Sep  4 17:52:45.055368 kubelet[2698]: E0904 17:52:45.055155    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:45.055658 containerd[1583]: time="2024-09-04T17:52:45.055630222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4wmwz,Uid:92f82df8-66ef-4892-866f-ff21ef05099e,Namespace:kube-system,Attempt:1,}"
Sep  4 17:52:45.063878 containerd[1583]: 2024-09-04 17:52:45.020 [INFO][4398] k8s.go 608: Cleaning up netns ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"
Sep  4 17:52:45.063878 containerd[1583]: 2024-09-04 17:52:45.021 [INFO][4398] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" iface="eth0" netns="/var/run/netns/cni-6afe1cc7-5692-fa0f-aea1-92058918361c"
Sep  4 17:52:45.063878 containerd[1583]: 2024-09-04 17:52:45.022 [INFO][4398] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" iface="eth0" netns="/var/run/netns/cni-6afe1cc7-5692-fa0f-aea1-92058918361c"
Sep  4 17:52:45.063878 containerd[1583]: 2024-09-04 17:52:45.022 [INFO][4398] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" iface="eth0" netns="/var/run/netns/cni-6afe1cc7-5692-fa0f-aea1-92058918361c"
Sep  4 17:52:45.063878 containerd[1583]: 2024-09-04 17:52:45.022 [INFO][4398] k8s.go 615: Releasing IP address(es) ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"
Sep  4 17:52:45.063878 containerd[1583]: 2024-09-04 17:52:45.022 [INFO][4398] utils.go 188: Calico CNI releasing IP address ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"
Sep  4 17:52:45.063878 containerd[1583]: 2024-09-04 17:52:45.042 [INFO][4440] ipam_plugin.go 417: Releasing address using handleID ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" HandleID="k8s-pod-network.f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" Workload="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:52:45.063878 containerd[1583]: 2024-09-04 17:52:45.043 [INFO][4440] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:52:45.063878 containerd[1583]: 2024-09-04 17:52:45.049 [INFO][4440] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:52:45.063878 containerd[1583]: 2024-09-04 17:52:45.056 [WARNING][4440] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" HandleID="k8s-pod-network.f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" Workload="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:52:45.063878 containerd[1583]: 2024-09-04 17:52:45.056 [INFO][4440] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" HandleID="k8s-pod-network.f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" Workload="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:52:45.063878 containerd[1583]: 2024-09-04 17:52:45.058 [INFO][4440] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:52:45.063878 containerd[1583]: 2024-09-04 17:52:45.060 [INFO][4398] k8s.go 621: Teardown processing complete. ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"
Sep  4 17:52:45.064325 containerd[1583]: time="2024-09-04T17:52:45.064099028Z" level=info msg="TearDown network for sandbox \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\" successfully"
Sep  4 17:52:45.064325 containerd[1583]: time="2024-09-04T17:52:45.064129986Z" level=info msg="StopPodSandbox for \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\" returns successfully"
Sep  4 17:52:45.064803 containerd[1583]: time="2024-09-04T17:52:45.064766332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-22jch,Uid:520ef3bc-9622-4072-8027-438b0db6b0ef,Namespace:calico-system,Attempt:1,}"
Sep  4 17:52:45.172185 systemd-networkd[1243]: calic6c7d530a3e: Link UP
Sep  4 17:52:45.172466 systemd-networkd[1243]: calic6c7d530a3e: Gained carrier
Sep  4 17:52:45.199666 systemd[1]: run-netns-cni\x2d6afe1cc7\x2d5692\x2dfa0f\x2daea1\x2d92058918361c.mount: Deactivated successfully.
Sep  4 17:52:45.199837 systemd[1]: run-netns-cni\x2d2ce52a38\x2d7d0b\x2d02fb\x2dda39\x2def1a8330f439.mount: Deactivated successfully.
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.104 [INFO][4464] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--22jch-eth0 csi-node-driver- calico-system  520ef3bc-9622-4072-8027-438b0db6b0ef 863 0 2024-09-04 17:52:22 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  localhost  csi-node-driver-22jch eth0 default [] []   [kns.calico-system ksa.calico-system.default] calic6c7d530a3e  [] []}} ContainerID="eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" Namespace="calico-system" Pod="csi-node-driver-22jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--22jch-"
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.104 [INFO][4464] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" Namespace="calico-system" Pod="csi-node-driver-22jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.134 [INFO][4485] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" HandleID="k8s-pod-network.eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" Workload="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.142 [INFO][4485] ipam_plugin.go 270: Auto assigning IP ContainerID="eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" HandleID="k8s-pod-network.eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" Workload="localhost-k8s-csi--node--driver--22jch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030bc70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-22jch", "timestamp":"2024-09-04 17:52:45.134218091 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.142 [INFO][4485] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.142 [INFO][4485] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.142 [INFO][4485] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.143 [INFO][4485] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" host="localhost"
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.146 [INFO][4485] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.150 [INFO][4485] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.151 [INFO][4485] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.153 [INFO][4485] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.153 [INFO][4485] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" host="localhost"
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.154 [INFO][4485] ipam.go 1685: Creating new handle: k8s-pod-network.eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.157 [INFO][4485] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" host="localhost"
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.161 [INFO][4485] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" host="localhost"
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.161 [INFO][4485] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" host="localhost"
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.161 [INFO][4485] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:52:45.202524 containerd[1583]: 2024-09-04 17:52:45.161 [INFO][4485] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" HandleID="k8s-pod-network.eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" Workload="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:52:45.204406 containerd[1583]: 2024-09-04 17:52:45.164 [INFO][4464] k8s.go 386: Populated endpoint ContainerID="eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" Namespace="calico-system" Pod="csi-node-driver-22jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--22jch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--22jch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"520ef3bc-9622-4072-8027-438b0db6b0ef", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-22jch", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic6c7d530a3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:52:45.204406 containerd[1583]: 2024-09-04 17:52:45.165 [INFO][4464] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" Namespace="calico-system" Pod="csi-node-driver-22jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:52:45.204406 containerd[1583]: 2024-09-04 17:52:45.167 [INFO][4464] dataplane_linux.go 68: Setting the host side veth name to calic6c7d530a3e ContainerID="eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" Namespace="calico-system" Pod="csi-node-driver-22jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:52:45.204406 containerd[1583]: 2024-09-04 17:52:45.171 [INFO][4464] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" Namespace="calico-system" Pod="csi-node-driver-22jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:52:45.204406 containerd[1583]: 2024-09-04 17:52:45.174 [INFO][4464] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" Namespace="calico-system" Pod="csi-node-driver-22jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--22jch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--22jch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"520ef3bc-9622-4072-8027-438b0db6b0ef", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568", Pod:"csi-node-driver-22jch", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic6c7d530a3e", MAC:"26:47:52:e4:5b:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:52:45.204406 containerd[1583]: 2024-09-04 17:52:45.193 [INFO][4464] k8s.go 500: Wrote updated endpoint to datastore ContainerID="eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568" Namespace="calico-system" Pod="csi-node-driver-22jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:52:45.218539 systemd-networkd[1243]: calif0337c27e11: Link UP
Sep  4 17:52:45.218759 systemd-networkd[1243]: calif0337c27e11: Gained carrier
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.105 [INFO][4453] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--4wmwz-eth0 coredns-5dd5756b68- kube-system  92f82df8-66ef-4892-866f-ff21ef05099e 862 0 2024-09-04 17:52:17 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  localhost  coredns-5dd5756b68-4wmwz eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] calif0337c27e11  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" Namespace="kube-system" Pod="coredns-5dd5756b68-4wmwz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4wmwz-"
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.105 [INFO][4453] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" Namespace="kube-system" Pod="coredns-5dd5756b68-4wmwz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.134 [INFO][4486] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" HandleID="k8s-pod-network.e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" Workload="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.144 [INFO][4486] ipam_plugin.go 270: Auto assigning IP ContainerID="e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" HandleID="k8s-pod-network.e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" Workload="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027de30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-4wmwz", "timestamp":"2024-09-04 17:52:45.134761733 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.144 [INFO][4486] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.161 [INFO][4486] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.161 [INFO][4486] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.163 [INFO][4486] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" host="localhost"
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.166 [INFO][4486] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.170 [INFO][4486] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.172 [INFO][4486] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.183 [INFO][4486] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.183 [INFO][4486] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" host="localhost"
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.192 [INFO][4486] ipam.go 1685: Creating new handle: k8s-pod-network.e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.203 [INFO][4486] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" host="localhost"
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.210 [INFO][4486] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" host="localhost"
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.210 [INFO][4486] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" host="localhost"
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.210 [INFO][4486] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:52:45.235010 containerd[1583]: 2024-09-04 17:52:45.211 [INFO][4486] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" HandleID="k8s-pod-network.e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" Workload="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:52:45.237329 containerd[1583]: 2024-09-04 17:52:45.214 [INFO][4453] k8s.go 386: Populated endpoint ContainerID="e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" Namespace="kube-system" Pod="coredns-5dd5756b68-4wmwz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--4wmwz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"92f82df8-66ef-4892-866f-ff21ef05099e", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-4wmwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0337c27e11", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:52:45.237329 containerd[1583]: 2024-09-04 17:52:45.214 [INFO][4453] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" Namespace="kube-system" Pod="coredns-5dd5756b68-4wmwz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:52:45.237329 containerd[1583]: 2024-09-04 17:52:45.214 [INFO][4453] dataplane_linux.go 68: Setting the host side veth name to calif0337c27e11 ContainerID="e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" Namespace="kube-system" Pod="coredns-5dd5756b68-4wmwz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:52:45.237329 containerd[1583]: 2024-09-04 17:52:45.218 [INFO][4453] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" Namespace="kube-system" Pod="coredns-5dd5756b68-4wmwz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:52:45.237329 containerd[1583]: 2024-09-04 17:52:45.219 [INFO][4453] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" Namespace="kube-system" Pod="coredns-5dd5756b68-4wmwz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--4wmwz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"92f82df8-66ef-4892-866f-ff21ef05099e", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94", Pod:"coredns-5dd5756b68-4wmwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0337c27e11", MAC:"72:92:e2:c2:21:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:52:45.237509 containerd[1583]: 2024-09-04 17:52:45.230 [INFO][4453] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94" Namespace="kube-system" Pod="coredns-5dd5756b68-4wmwz" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:52:45.243572 containerd[1583]: time="2024-09-04T17:52:45.243449501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:52:45.243572 containerd[1583]: time="2024-09-04T17:52:45.243538047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:52:45.243661 containerd[1583]: time="2024-09-04T17:52:45.243594123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:45.243808 containerd[1583]: time="2024-09-04T17:52:45.243761948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:45.278353 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep  4 17:52:45.285789 containerd[1583]: time="2024-09-04T17:52:45.285690442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:52:45.285789 containerd[1583]: time="2024-09-04T17:52:45.285763128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:52:45.286001 containerd[1583]: time="2024-09-04T17:52:45.285777625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:45.286001 containerd[1583]: time="2024-09-04T17:52:45.285893052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:45.299400 containerd[1583]: time="2024-09-04T17:52:45.299277571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-22jch,Uid:520ef3bc-9622-4072-8027-438b0db6b0ef,Namespace:calico-system,Attempt:1,} returns sandbox id \"eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568\""
Sep  4 17:52:45.302787 containerd[1583]: time="2024-09-04T17:52:45.302538823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\""
Sep  4 17:52:45.327516 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep  4 17:52:45.354750 sshd[4429]: pam_unix(sshd:session): session closed for user core
Sep  4 17:52:45.365583 systemd[1]: Started sshd@11-10.0.0.147:22-10.0.0.1:35392.service - OpenSSH per-connection server daemon (10.0.0.1:35392).
Sep  4 17:52:45.366495 systemd[1]: sshd@10-10.0.0.147:22-10.0.0.1:35388.service: Deactivated successfully.
Sep  4 17:52:45.371887 containerd[1583]: time="2024-09-04T17:52:45.371830430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4wmwz,Uid:92f82df8-66ef-4892-866f-ff21ef05099e,Namespace:kube-system,Attempt:1,} returns sandbox id \"e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94\""
Sep  4 17:52:45.374101 kubelet[2698]: E0904 17:52:45.373303    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:45.373612 systemd[1]: session-11.scope: Deactivated successfully.
Sep  4 17:52:45.375788 systemd-logind[1557]: Session 11 logged out. Waiting for processes to exit.
Sep  4 17:52:45.379000 containerd[1583]: time="2024-09-04T17:52:45.378643542Z" level=info msg="CreateContainer within sandbox \"e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep  4 17:52:45.379396 systemd-logind[1557]: Removed session 11.
Sep  4 17:52:45.397578 containerd[1583]: time="2024-09-04T17:52:45.397540756Z" level=info msg="CreateContainer within sandbox \"e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db0cea0f9960d6c7fa30ad18827b9cf7588490529eada2bf75c542c8f6d4d5f9\""
Sep  4 17:52:45.398676 containerd[1583]: time="2024-09-04T17:52:45.398024616Z" level=info msg="StartContainer for \"db0cea0f9960d6c7fa30ad18827b9cf7588490529eada2bf75c542c8f6d4d5f9\""
Sep  4 17:52:45.408484 sshd[4606]: Accepted publickey for core from 10.0.0.1 port 35392 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:52:45.410228 sshd[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:52:45.415240 systemd-logind[1557]: New session 12 of user core.
Sep  4 17:52:45.424313 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep  4 17:52:45.451395 containerd[1583]: time="2024-09-04T17:52:45.451367513Z" level=info msg="StartContainer for \"db0cea0f9960d6c7fa30ad18827b9cf7588490529eada2bf75c542c8f6d4d5f9\" returns successfully"
Sep  4 17:52:45.542731 sshd[4606]: pam_unix(sshd:session): session closed for user core
Sep  4 17:52:45.546969 systemd[1]: sshd@11-10.0.0.147:22-10.0.0.1:35392.service: Deactivated successfully.
Sep  4 17:52:45.549611 systemd[1]: session-12.scope: Deactivated successfully.
Sep  4 17:52:45.551207 systemd-logind[1557]: Session 12 logged out. Waiting for processes to exit.
Sep  4 17:52:45.552421 systemd-logind[1557]: Removed session 12.
Sep  4 17:52:45.957779 kubelet[2698]: E0904 17:52:45.957660    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:45.985353 kubelet[2698]: I0904 17:52:45.985305    2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-4wmwz" podStartSLOduration=28.985247089 podCreationTimestamp="2024-09-04 17:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:52:45.983806079 +0000 UTC m=+43.232748638" watchObservedRunningTime="2024-09-04 17:52:45.985247089 +0000 UTC m=+43.234189638"
Sep  4 17:52:46.256626 systemd-networkd[1243]: calic6c7d530a3e: Gained IPv6LL
Sep  4 17:52:46.576187 systemd-networkd[1243]: calif0337c27e11: Gained IPv6LL
Sep  4 17:52:46.960566 kubelet[2698]: E0904 17:52:46.959296    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:46.980688 containerd[1583]: time="2024-09-04T17:52:46.980624109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:47.046621 containerd[1583]: time="2024-09-04T17:52:47.046540244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081"
Sep  4 17:52:47.121380 containerd[1583]: time="2024-09-04T17:52:47.121300061Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:47.178286 containerd[1583]: time="2024-09-04T17:52:47.178178029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:47.181206 containerd[1583]: time="2024-09-04T17:52:47.180947436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.878369049s"
Sep  4 17:52:47.181206 containerd[1583]: time="2024-09-04T17:52:47.181191866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\""
Sep  4 17:52:47.185514 containerd[1583]: time="2024-09-04T17:52:47.185367856Z" level=info msg="CreateContainer within sandbox \"eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep  4 17:52:47.852686 containerd[1583]: time="2024-09-04T17:52:47.852623481Z" level=info msg="CreateContainer within sandbox \"eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"87f9722be6cd01f7621427c376dc434ca07975eea5272b78e43712af8eb11b13\""
Sep  4 17:52:47.853389 containerd[1583]: time="2024-09-04T17:52:47.853207299Z" level=info msg="StartContainer for \"87f9722be6cd01f7621427c376dc434ca07975eea5272b78e43712af8eb11b13\""
Sep  4 17:52:48.029503 containerd[1583]: time="2024-09-04T17:52:48.029455155Z" level=info msg="StartContainer for \"87f9722be6cd01f7621427c376dc434ca07975eea5272b78e43712af8eb11b13\" returns successfully"
Sep  4 17:52:48.030578 containerd[1583]: time="2024-09-04T17:52:48.030393378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\""
Sep  4 17:52:48.032664 kubelet[2698]: E0904 17:52:48.032637    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:49.034108 kubelet[2698]: E0904 17:52:49.034079    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:52:49.750516 containerd[1583]: time="2024-09-04T17:52:49.750459997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:49.751204 containerd[1583]: time="2024-09-04T17:52:49.751146989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822"
Sep  4 17:52:49.752295 containerd[1583]: time="2024-09-04T17:52:49.752257556Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:49.754397 containerd[1583]: time="2024-09-04T17:52:49.754362533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:52:49.755080 containerd[1583]: time="2024-09-04T17:52:49.755036149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.724605421s"
Sep  4 17:52:49.755121 containerd[1583]: time="2024-09-04T17:52:49.755086473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\""
Sep  4 17:52:49.756920 containerd[1583]: time="2024-09-04T17:52:49.756883501Z" level=info msg="CreateContainer within sandbox \"eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep  4 17:52:49.770034 containerd[1583]: time="2024-09-04T17:52:49.769996558Z" level=info msg="CreateContainer within sandbox \"eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8f8fe0692563835e51bc77e8d104211fd21910d1c514431933b88d027b51276b\""
Sep  4 17:52:49.770556 containerd[1583]: time="2024-09-04T17:52:49.770523189Z" level=info msg="StartContainer for \"8f8fe0692563835e51bc77e8d104211fd21910d1c514431933b88d027b51276b\""
Sep  4 17:52:49.824752 containerd[1583]: time="2024-09-04T17:52:49.824713364Z" level=info msg="StartContainer for \"8f8fe0692563835e51bc77e8d104211fd21910d1c514431933b88d027b51276b\" returns successfully"
Sep  4 17:52:49.937429 kubelet[2698]: I0904 17:52:49.937389    2698 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep  4 17:52:49.937429 kubelet[2698]: I0904 17:52:49.937428    2698 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep  4 17:52:50.047960 kubelet[2698]: I0904 17:52:50.047275    2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-22jch" podStartSLOduration=23.593407407 podCreationTimestamp="2024-09-04 17:52:22 +0000 UTC" firstStartedPulling="2024-09-04 17:52:45.301502524 +0000 UTC m=+42.550445063" lastFinishedPulling="2024-09-04 17:52:49.755325563 +0000 UTC m=+47.004268112" observedRunningTime="2024-09-04 17:52:50.045928819 +0000 UTC m=+47.294871369" watchObservedRunningTime="2024-09-04 17:52:50.047230456 +0000 UTC m=+47.296173005"
Sep  4 17:52:50.557289 systemd[1]: Started sshd@12-10.0.0.147:22-10.0.0.1:57312.service - OpenSSH per-connection server daemon (10.0.0.1:57312).
Sep  4 17:52:50.589232 sshd[4789]: Accepted publickey for core from 10.0.0.1 port 57312 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:52:50.591217 sshd[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:52:50.597515 systemd-logind[1557]: New session 13 of user core.
Sep  4 17:52:50.605336 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep  4 17:52:50.728726 sshd[4789]: pam_unix(sshd:session): session closed for user core
Sep  4 17:52:50.733566 systemd[1]: sshd@12-10.0.0.147:22-10.0.0.1:57312.service: Deactivated successfully.
Sep  4 17:52:50.736177 systemd[1]: session-13.scope: Deactivated successfully.
Sep  4 17:52:50.736977 systemd-logind[1557]: Session 13 logged out. Waiting for processes to exit.
Sep  4 17:52:50.737846 systemd-logind[1557]: Removed session 13.
Sep  4 17:52:55.739274 systemd[1]: Started sshd@13-10.0.0.147:22-10.0.0.1:57314.service - OpenSSH per-connection server daemon (10.0.0.1:57314).
Sep  4 17:52:55.764812 sshd[4807]: Accepted publickey for core from 10.0.0.1 port 57314 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:52:55.766377 sshd[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:52:55.770298 systemd-logind[1557]: New session 14 of user core.
Sep  4 17:52:55.779384 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep  4 17:52:55.893289 sshd[4807]: pam_unix(sshd:session): session closed for user core
Sep  4 17:52:55.896878 systemd[1]: sshd@13-10.0.0.147:22-10.0.0.1:57314.service: Deactivated successfully.
Sep  4 17:52:55.899028 systemd-logind[1557]: Session 14 logged out. Waiting for processes to exit.
Sep  4 17:52:55.899122 systemd[1]: session-14.scope: Deactivated successfully.
Sep  4 17:52:55.900069 systemd-logind[1557]: Removed session 14.
Sep  4 17:52:57.697555 kubelet[2698]: I0904 17:52:57.697075    2698 topology_manager.go:215] "Topology Admit Handler" podUID="9b13132f-8566-45e5-aae0-c0b256a1c5c7" podNamespace="calico-apiserver" podName="calico-apiserver-554d65cfbc-vrghk"
Sep  4 17:52:57.782883 kubelet[2698]: I0904 17:52:57.782812    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9b13132f-8566-45e5-aae0-c0b256a1c5c7-calico-apiserver-certs\") pod \"calico-apiserver-554d65cfbc-vrghk\" (UID: \"9b13132f-8566-45e5-aae0-c0b256a1c5c7\") " pod="calico-apiserver/calico-apiserver-554d65cfbc-vrghk"
Sep  4 17:52:57.782883 kubelet[2698]: I0904 17:52:57.782859    2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzkjg\" (UniqueName: \"kubernetes.io/projected/9b13132f-8566-45e5-aae0-c0b256a1c5c7-kube-api-access-bzkjg\") pod \"calico-apiserver-554d65cfbc-vrghk\" (UID: \"9b13132f-8566-45e5-aae0-c0b256a1c5c7\") " pod="calico-apiserver/calico-apiserver-554d65cfbc-vrghk"
Sep  4 17:52:57.884191 kubelet[2698]: E0904 17:52:57.884139    2698 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Sep  4 17:52:57.884795 kubelet[2698]: E0904 17:52:57.884767    2698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b13132f-8566-45e5-aae0-c0b256a1c5c7-calico-apiserver-certs podName:9b13132f-8566-45e5-aae0-c0b256a1c5c7 nodeName:}" failed. No retries permitted until 2024-09-04 17:52:58.384199723 +0000 UTC m=+55.633142272 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/9b13132f-8566-45e5-aae0-c0b256a1c5c7-calico-apiserver-certs") pod "calico-apiserver-554d65cfbc-vrghk" (UID: "9b13132f-8566-45e5-aae0-c0b256a1c5c7") : secret "calico-apiserver-certs" not found
Sep  4 17:52:58.385986 kubelet[2698]: E0904 17:52:58.385949    2698 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Sep  4 17:52:58.386235 kubelet[2698]: E0904 17:52:58.386027    2698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b13132f-8566-45e5-aae0-c0b256a1c5c7-calico-apiserver-certs podName:9b13132f-8566-45e5-aae0-c0b256a1c5c7 nodeName:}" failed. No retries permitted until 2024-09-04 17:52:59.386012147 +0000 UTC m=+56.634954696 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/9b13132f-8566-45e5-aae0-c0b256a1c5c7-calico-apiserver-certs") pod "calico-apiserver-554d65cfbc-vrghk" (UID: "9b13132f-8566-45e5-aae0-c0b256a1c5c7") : secret "calico-apiserver-certs" not found
Sep  4 17:52:59.521114 containerd[1583]: time="2024-09-04T17:52:59.521069254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554d65cfbc-vrghk,Uid:9b13132f-8566-45e5-aae0-c0b256a1c5c7,Namespace:calico-apiserver,Attempt:0,}"
Sep  4 17:52:59.818069 systemd-networkd[1243]: califd7705198c8: Link UP
Sep  4 17:52:59.819696 systemd-networkd[1243]: califd7705198c8: Gained carrier
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.688 [INFO][4840] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--554d65cfbc--vrghk-eth0 calico-apiserver-554d65cfbc- calico-apiserver  9b13132f-8566-45e5-aae0-c0b256a1c5c7 1008 0 2024-09-04 17:52:57 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:554d65cfbc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  localhost  calico-apiserver-554d65cfbc-vrghk eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califd7705198c8  [] []}} ContainerID="add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" Namespace="calico-apiserver" Pod="calico-apiserver-554d65cfbc-vrghk" WorkloadEndpoint="localhost-k8s-calico--apiserver--554d65cfbc--vrghk-"
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.688 [INFO][4840] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" Namespace="calico-apiserver" Pod="calico-apiserver-554d65cfbc-vrghk" WorkloadEndpoint="localhost-k8s-calico--apiserver--554d65cfbc--vrghk-eth0"
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.714 [INFO][4852] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" HandleID="k8s-pod-network.add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" Workload="localhost-k8s-calico--apiserver--554d65cfbc--vrghk-eth0"
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.790 [INFO][4852] ipam_plugin.go 270: Auto assigning IP ContainerID="add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" HandleID="k8s-pod-network.add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" Workload="localhost-k8s-calico--apiserver--554d65cfbc--vrghk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00069b340), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-554d65cfbc-vrghk", "timestamp":"2024-09-04 17:52:59.714381204 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.791 [INFO][4852] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.791 [INFO][4852] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.791 [INFO][4852] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.792 [INFO][4852] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" host="localhost"
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.795 [INFO][4852] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.800 [INFO][4852] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.801 [INFO][4852] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.803 [INFO][4852] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.803 [INFO][4852] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" host="localhost"
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.804 [INFO][4852] ipam.go 1685: Creating new handle: k8s-pod-network.add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.806 [INFO][4852] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" host="localhost"
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.811 [INFO][4852] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" host="localhost"
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.811 [INFO][4852] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" host="localhost"
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.811 [INFO][4852] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:52:59.830403 containerd[1583]: 2024-09-04 17:52:59.811 [INFO][4852] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" HandleID="k8s-pod-network.add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" Workload="localhost-k8s-calico--apiserver--554d65cfbc--vrghk-eth0"
Sep  4 17:52:59.831119 containerd[1583]: 2024-09-04 17:52:59.814 [INFO][4840] k8s.go 386: Populated endpoint ContainerID="add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" Namespace="calico-apiserver" Pod="calico-apiserver-554d65cfbc-vrghk" WorkloadEndpoint="localhost-k8s-calico--apiserver--554d65cfbc--vrghk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554d65cfbc--vrghk-eth0", GenerateName:"calico-apiserver-554d65cfbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b13132f-8566-45e5-aae0-c0b256a1c5c7", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554d65cfbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-554d65cfbc-vrghk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd7705198c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:52:59.831119 containerd[1583]: 2024-09-04 17:52:59.815 [INFO][4840] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" Namespace="calico-apiserver" Pod="calico-apiserver-554d65cfbc-vrghk" WorkloadEndpoint="localhost-k8s-calico--apiserver--554d65cfbc--vrghk-eth0"
Sep  4 17:52:59.831119 containerd[1583]: 2024-09-04 17:52:59.815 [INFO][4840] dataplane_linux.go 68: Setting the host side veth name to califd7705198c8 ContainerID="add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" Namespace="calico-apiserver" Pod="calico-apiserver-554d65cfbc-vrghk" WorkloadEndpoint="localhost-k8s-calico--apiserver--554d65cfbc--vrghk-eth0"
Sep  4 17:52:59.831119 containerd[1583]: 2024-09-04 17:52:59.819 [INFO][4840] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" Namespace="calico-apiserver" Pod="calico-apiserver-554d65cfbc-vrghk" WorkloadEndpoint="localhost-k8s-calico--apiserver--554d65cfbc--vrghk-eth0"
Sep  4 17:52:59.831119 containerd[1583]: 2024-09-04 17:52:59.820 [INFO][4840] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" Namespace="calico-apiserver" Pod="calico-apiserver-554d65cfbc-vrghk" WorkloadEndpoint="localhost-k8s-calico--apiserver--554d65cfbc--vrghk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554d65cfbc--vrghk-eth0", GenerateName:"calico-apiserver-554d65cfbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b13132f-8566-45e5-aae0-c0b256a1c5c7", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554d65cfbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa", Pod:"calico-apiserver-554d65cfbc-vrghk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd7705198c8", MAC:"3e:2d:8b:2b:84:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:52:59.831119 containerd[1583]: 2024-09-04 17:52:59.826 [INFO][4840] k8s.go 500: Wrote updated endpoint to datastore ContainerID="add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa" Namespace="calico-apiserver" Pod="calico-apiserver-554d65cfbc-vrghk" WorkloadEndpoint="localhost-k8s-calico--apiserver--554d65cfbc--vrghk-eth0"
Sep  4 17:52:59.854926 containerd[1583]: time="2024-09-04T17:52:59.854819764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:52:59.854926 containerd[1583]: time="2024-09-04T17:52:59.854890196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:52:59.854926 containerd[1583]: time="2024-09-04T17:52:59.854902359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:59.855184 containerd[1583]: time="2024-09-04T17:52:59.855005693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:52:59.884136 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep  4 17:52:59.909753 containerd[1583]: time="2024-09-04T17:52:59.909710513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554d65cfbc-vrghk,Uid:9b13132f-8566-45e5-aae0-c0b256a1c5c7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa\""
Sep  4 17:52:59.911511 containerd[1583]: time="2024-09-04T17:52:59.911321538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Sep  4 17:53:00.913899 systemd[1]: Started sshd@14-10.0.0.147:22-10.0.0.1:60236.service - OpenSSH per-connection server daemon (10.0.0.1:60236).
Sep  4 17:53:00.945577 sshd[4919]: Accepted publickey for core from 10.0.0.1 port 60236 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:53:00.947301 sshd[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:53:00.951579 systemd-logind[1557]: New session 15 of user core.
Sep  4 17:53:00.959326 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep  4 17:53:01.079208 sshd[4919]: pam_unix(sshd:session): session closed for user core
Sep  4 17:53:01.082894 systemd[1]: sshd@14-10.0.0.147:22-10.0.0.1:60236.service: Deactivated successfully.
Sep  4 17:53:01.085649 systemd-logind[1557]: Session 15 logged out. Waiting for processes to exit.
Sep  4 17:53:01.085750 systemd[1]: session-15.scope: Deactivated successfully.
Sep  4 17:53:01.086655 systemd-logind[1557]: Removed session 15.
Sep  4 17:53:01.488201 systemd-networkd[1243]: califd7705198c8: Gained IPv6LL
Sep  4 17:53:02.347499 containerd[1583]: time="2024-09-04T17:53:02.347455523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:53:02.348160 containerd[1583]: time="2024-09-04T17:53:02.348098110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Sep  4 17:53:02.349308 containerd[1583]: time="2024-09-04T17:53:02.349268629Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:53:02.351285 containerd[1583]: time="2024-09-04T17:53:02.351257884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep  4 17:53:02.351924 containerd[1583]: time="2024-09-04T17:53:02.351879432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.440531815s"
Sep  4 17:53:02.351924 containerd[1583]: time="2024-09-04T17:53:02.351916912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Sep  4 17:53:02.353648 containerd[1583]: time="2024-09-04T17:53:02.353612356Z" level=info msg="CreateContainer within sandbox \"add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep  4 17:53:02.364304 containerd[1583]: time="2024-09-04T17:53:02.364269049Z" level=info msg="CreateContainer within sandbox \"add2c83bd73d45c943d2dff620deadae1fbb663bfa47edef42e1b8fb83833daa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a814d49fdcca83222b3e6ee733908de49b0f0a75c18ea5b2c1a2c6b1daa1b7c3\""
Sep  4 17:53:02.365308 containerd[1583]: time="2024-09-04T17:53:02.365277964Z" level=info msg="StartContainer for \"a814d49fdcca83222b3e6ee733908de49b0f0a75c18ea5b2c1a2c6b1daa1b7c3\""
Sep  4 17:53:02.426632 containerd[1583]: time="2024-09-04T17:53:02.426594857Z" level=info msg="StartContainer for \"a814d49fdcca83222b3e6ee733908de49b0f0a75c18ea5b2c1a2c6b1daa1b7c3\" returns successfully"
Sep  4 17:53:02.831704 containerd[1583]: time="2024-09-04T17:53:02.831658365Z" level=info msg="StopPodSandbox for \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\""
Sep  4 17:53:02.903858 containerd[1583]: 2024-09-04 17:53:02.870 [WARNING][4994] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0", GenerateName:"calico-kube-controllers-5f5c9f4dcd-", Namespace:"calico-system", SelfLink:"", UID:"97263c85-7a7c-4dd5-bb45-86c5230fb6f6", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f5c9f4dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8", Pod:"calico-kube-controllers-5f5c9f4dcd-zx9vp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib13a4ec6a6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:53:02.903858 containerd[1583]: 2024-09-04 17:53:02.870 [INFO][4994] k8s.go 608: Cleaning up netns ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"
Sep  4 17:53:02.903858 containerd[1583]: 2024-09-04 17:53:02.870 [INFO][4994] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" iface="eth0" netns=""
Sep  4 17:53:02.903858 containerd[1583]: 2024-09-04 17:53:02.870 [INFO][4994] k8s.go 615: Releasing IP address(es) ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"
Sep  4 17:53:02.903858 containerd[1583]: 2024-09-04 17:53:02.870 [INFO][4994] utils.go 188: Calico CNI releasing IP address ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"
Sep  4 17:53:02.903858 containerd[1583]: 2024-09-04 17:53:02.892 [INFO][5004] ipam_plugin.go 417: Releasing address using handleID ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" HandleID="k8s-pod-network.6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" Workload="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:53:02.903858 containerd[1583]: 2024-09-04 17:53:02.893 [INFO][5004] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:53:02.903858 containerd[1583]: 2024-09-04 17:53:02.893 [INFO][5004] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:53:02.903858 containerd[1583]: 2024-09-04 17:53:02.897 [WARNING][5004] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" HandleID="k8s-pod-network.6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" Workload="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:53:02.903858 containerd[1583]: 2024-09-04 17:53:02.897 [INFO][5004] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" HandleID="k8s-pod-network.6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" Workload="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:53:02.903858 containerd[1583]: 2024-09-04 17:53:02.898 [INFO][5004] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:53:02.903858 containerd[1583]: 2024-09-04 17:53:02.901 [INFO][4994] k8s.go 621: Teardown processing complete. ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"
Sep  4 17:53:02.904292 containerd[1583]: time="2024-09-04T17:53:02.903896438Z" level=info msg="TearDown network for sandbox \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\" successfully"
Sep  4 17:53:02.904292 containerd[1583]: time="2024-09-04T17:53:02.903920744Z" level=info msg="StopPodSandbox for \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\" returns successfully"
Sep  4 17:53:02.904541 containerd[1583]: time="2024-09-04T17:53:02.904503869Z" level=info msg="RemovePodSandbox for \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\""
Sep  4 17:53:02.907720 containerd[1583]: time="2024-09-04T17:53:02.907689532Z" level=info msg="Forcibly stopping sandbox \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\""
Sep  4 17:53:02.971108 containerd[1583]: 2024-09-04 17:53:02.941 [WARNING][5027] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0", GenerateName:"calico-kube-controllers-5f5c9f4dcd-", Namespace:"calico-system", SelfLink:"", UID:"97263c85-7a7c-4dd5-bb45-86c5230fb6f6", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f5c9f4dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e404fd58756c03504fbd279b8cd4522f1f5618c4865bc0b1b523edf2b76eccb8", Pod:"calico-kube-controllers-5f5c9f4dcd-zx9vp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib13a4ec6a6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:53:02.971108 containerd[1583]: 2024-09-04 17:53:02.941 [INFO][5027] k8s.go 608: Cleaning up netns ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"
Sep  4 17:53:02.971108 containerd[1583]: 2024-09-04 17:53:02.941 [INFO][5027] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" iface="eth0" netns=""
Sep  4 17:53:02.971108 containerd[1583]: 2024-09-04 17:53:02.941 [INFO][5027] k8s.go 615: Releasing IP address(es) ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"
Sep  4 17:53:02.971108 containerd[1583]: 2024-09-04 17:53:02.941 [INFO][5027] utils.go 188: Calico CNI releasing IP address ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"
Sep  4 17:53:02.971108 containerd[1583]: 2024-09-04 17:53:02.961 [INFO][5035] ipam_plugin.go 417: Releasing address using handleID ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" HandleID="k8s-pod-network.6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" Workload="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:53:02.971108 containerd[1583]: 2024-09-04 17:53:02.961 [INFO][5035] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:53:02.971108 containerd[1583]: 2024-09-04 17:53:02.961 [INFO][5035] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:53:02.971108 containerd[1583]: 2024-09-04 17:53:02.965 [WARNING][5035] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" HandleID="k8s-pod-network.6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" Workload="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:53:02.971108 containerd[1583]: 2024-09-04 17:53:02.965 [INFO][5035] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" HandleID="k8s-pod-network.6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701" Workload="localhost-k8s-calico--kube--controllers--5f5c9f4dcd--zx9vp-eth0"
Sep  4 17:53:02.971108 containerd[1583]: 2024-09-04 17:53:02.966 [INFO][5035] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:53:02.971108 containerd[1583]: 2024-09-04 17:53:02.968 [INFO][5027] k8s.go 621: Teardown processing complete. ContainerID="6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701"
Sep  4 17:53:02.971504 containerd[1583]: time="2024-09-04T17:53:02.971118381Z" level=info msg="TearDown network for sandbox \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\" successfully"
Sep  4 17:53:02.988445 containerd[1583]: time="2024-09-04T17:53:02.988404534Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep  4 17:53:02.988495 containerd[1583]: time="2024-09-04T17:53:02.988483052Z" level=info msg="RemovePodSandbox \"6735b754ecdf6546a0facf6dc9946ef52441c503c8790610b38fdb972e971701\" returns successfully"
Sep  4 17:53:02.988980 containerd[1583]: time="2024-09-04T17:53:02.988960999Z" level=info msg="StopPodSandbox for \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\""
Sep  4 17:53:03.052325 containerd[1583]: 2024-09-04 17:53:03.021 [WARNING][5058] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--4wmwz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"92f82df8-66ef-4892-866f-ff21ef05099e", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94", Pod:"coredns-5dd5756b68-4wmwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0337c27e11", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:53:03.052325 containerd[1583]: 2024-09-04 17:53:03.021 [INFO][5058] k8s.go 608: Cleaning up netns ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"
Sep  4 17:53:03.052325 containerd[1583]: 2024-09-04 17:53:03.021 [INFO][5058] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" iface="eth0" netns=""
Sep  4 17:53:03.052325 containerd[1583]: 2024-09-04 17:53:03.021 [INFO][5058] k8s.go 615: Releasing IP address(es) ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"
Sep  4 17:53:03.052325 containerd[1583]: 2024-09-04 17:53:03.021 [INFO][5058] utils.go 188: Calico CNI releasing IP address ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"
Sep  4 17:53:03.052325 containerd[1583]: 2024-09-04 17:53:03.041 [INFO][5066] ipam_plugin.go 417: Releasing address using handleID ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" HandleID="k8s-pod-network.431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" Workload="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:53:03.052325 containerd[1583]: 2024-09-04 17:53:03.042 [INFO][5066] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:53:03.052325 containerd[1583]: 2024-09-04 17:53:03.042 [INFO][5066] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:53:03.052325 containerd[1583]: 2024-09-04 17:53:03.046 [WARNING][5066] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" HandleID="k8s-pod-network.431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" Workload="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:53:03.052325 containerd[1583]: 2024-09-04 17:53:03.046 [INFO][5066] ipam_plugin.go 445: Releasing address using workloadID ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" HandleID="k8s-pod-network.431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" Workload="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:53:03.052325 containerd[1583]: 2024-09-04 17:53:03.047 [INFO][5066] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:53:03.052325 containerd[1583]: 2024-09-04 17:53:03.049 [INFO][5058] k8s.go 621: Teardown processing complete. ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"
Sep  4 17:53:03.052736 containerd[1583]: time="2024-09-04T17:53:03.052358655Z" level=info msg="TearDown network for sandbox \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\" successfully"
Sep  4 17:53:03.052736 containerd[1583]: time="2024-09-04T17:53:03.052381899Z" level=info msg="StopPodSandbox for \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\" returns successfully"
Sep  4 17:53:03.053154 containerd[1583]: time="2024-09-04T17:53:03.053118533Z" level=info msg="RemovePodSandbox for \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\""
Sep  4 17:53:03.053154 containerd[1583]: time="2024-09-04T17:53:03.053153628Z" level=info msg="Forcibly stopping sandbox \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\""
Sep  4 17:53:03.082381 kubelet[2698]: I0904 17:53:03.082272    2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-554d65cfbc-vrghk" podStartSLOduration=3.640904046 podCreationTimestamp="2024-09-04 17:52:57 +0000 UTC" firstStartedPulling="2024-09-04 17:52:59.910865792 +0000 UTC m=+57.159808341" lastFinishedPulling="2024-09-04 17:53:02.352196166 +0000 UTC m=+59.601138715" observedRunningTime="2024-09-04 17:53:03.081415271 +0000 UTC m=+60.330357830" watchObservedRunningTime="2024-09-04 17:53:03.08223442 +0000 UTC m=+60.331176959"
Sep  4 17:53:03.136010 containerd[1583]: 2024-09-04 17:53:03.103 [WARNING][5088] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--4wmwz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"92f82df8-66ef-4892-866f-ff21ef05099e", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e99a4718cac50cd080d71dd623b9ff5c3b228e53bf530bf27e41d23ac0d47c94", Pod:"coredns-5dd5756b68-4wmwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0337c27e11", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:53:03.136010 containerd[1583]: 2024-09-04 17:53:03.104 [INFO][5088] k8s.go 608: Cleaning up netns ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"
Sep  4 17:53:03.136010 containerd[1583]: 2024-09-04 17:53:03.104 [INFO][5088] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" iface="eth0" netns=""
Sep  4 17:53:03.136010 containerd[1583]: 2024-09-04 17:53:03.104 [INFO][5088] k8s.go 615: Releasing IP address(es) ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"
Sep  4 17:53:03.136010 containerd[1583]: 2024-09-04 17:53:03.104 [INFO][5088] utils.go 188: Calico CNI releasing IP address ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"
Sep  4 17:53:03.136010 containerd[1583]: 2024-09-04 17:53:03.124 [INFO][5098] ipam_plugin.go 417: Releasing address using handleID ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" HandleID="k8s-pod-network.431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" Workload="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:53:03.136010 containerd[1583]: 2024-09-04 17:53:03.124 [INFO][5098] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:53:03.136010 containerd[1583]: 2024-09-04 17:53:03.124 [INFO][5098] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:53:03.136010 containerd[1583]: 2024-09-04 17:53:03.129 [WARNING][5098] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" HandleID="k8s-pod-network.431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" Workload="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:53:03.136010 containerd[1583]: 2024-09-04 17:53:03.129 [INFO][5098] ipam_plugin.go 445: Releasing address using workloadID ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" HandleID="k8s-pod-network.431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a" Workload="localhost-k8s-coredns--5dd5756b68--4wmwz-eth0"
Sep  4 17:53:03.136010 containerd[1583]: 2024-09-04 17:53:03.130 [INFO][5098] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:53:03.136010 containerd[1583]: 2024-09-04 17:53:03.133 [INFO][5088] k8s.go 621: Teardown processing complete. ContainerID="431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a"
Sep  4 17:53:03.136444 containerd[1583]: time="2024-09-04T17:53:03.136062565Z" level=info msg="TearDown network for sandbox \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\" successfully"
Sep  4 17:53:03.139664 containerd[1583]: time="2024-09-04T17:53:03.139640745Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep  4 17:53:03.139724 containerd[1583]: time="2024-09-04T17:53:03.139681892Z" level=info msg="RemovePodSandbox \"431a93e0f0cf5b547f8820a40fdb734fdbf206ef58f96b2633d9d5c63b03a40a\" returns successfully"
Sep  4 17:53:03.140271 containerd[1583]: time="2024-09-04T17:53:03.140191308Z" level=info msg="StopPodSandbox for \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\""
Sep  4 17:53:03.204421 containerd[1583]: 2024-09-04 17:53:03.172 [WARNING][5121] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--wckr9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a094ec3c-d81c-474b-b6c7-8209bd24a732", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968", Pod:"coredns-5dd5756b68-wckr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d30a9aecf1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:53:03.204421 containerd[1583]: 2024-09-04 17:53:03.173 [INFO][5121] k8s.go 608: Cleaning up netns ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"
Sep  4 17:53:03.204421 containerd[1583]: 2024-09-04 17:53:03.173 [INFO][5121] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" iface="eth0" netns=""
Sep  4 17:53:03.204421 containerd[1583]: 2024-09-04 17:53:03.173 [INFO][5121] k8s.go 615: Releasing IP address(es) ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"
Sep  4 17:53:03.204421 containerd[1583]: 2024-09-04 17:53:03.173 [INFO][5121] utils.go 188: Calico CNI releasing IP address ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"
Sep  4 17:53:03.204421 containerd[1583]: 2024-09-04 17:53:03.193 [INFO][5129] ipam_plugin.go 417: Releasing address using handleID ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" HandleID="k8s-pod-network.8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" Workload="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:53:03.204421 containerd[1583]: 2024-09-04 17:53:03.193 [INFO][5129] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:53:03.204421 containerd[1583]: 2024-09-04 17:53:03.193 [INFO][5129] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:53:03.204421 containerd[1583]: 2024-09-04 17:53:03.198 [WARNING][5129] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" HandleID="k8s-pod-network.8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" Workload="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:53:03.204421 containerd[1583]: 2024-09-04 17:53:03.198 [INFO][5129] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" HandleID="k8s-pod-network.8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" Workload="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:53:03.204421 containerd[1583]: 2024-09-04 17:53:03.199 [INFO][5129] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:53:03.204421 containerd[1583]: 2024-09-04 17:53:03.202 [INFO][5121] k8s.go 621: Teardown processing complete. ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"
Sep  4 17:53:03.204835 containerd[1583]: time="2024-09-04T17:53:03.204452846Z" level=info msg="TearDown network for sandbox \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\" successfully"
Sep  4 17:53:03.204835 containerd[1583]: time="2024-09-04T17:53:03.204476952Z" level=info msg="StopPodSandbox for \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\" returns successfully"
Sep  4 17:53:03.204958 containerd[1583]: time="2024-09-04T17:53:03.204931205Z" level=info msg="RemovePodSandbox for \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\""
Sep  4 17:53:03.204996 containerd[1583]: time="2024-09-04T17:53:03.204957555Z" level=info msg="Forcibly stopping sandbox \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\""
Sep  4 17:53:03.265827 containerd[1583]: 2024-09-04 17:53:03.236 [WARNING][5151] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--wckr9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a094ec3c-d81c-474b-b6c7-8209bd24a732", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2e68f766be9c8530161feaf0f3c0831c96b4010d161c2faac110a0e2511a968", Pod:"coredns-5dd5756b68-wckr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d30a9aecf1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:53:03.265827 containerd[1583]: 2024-09-04 17:53:03.236 [INFO][5151] k8s.go 608: Cleaning up netns ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"
Sep  4 17:53:03.265827 containerd[1583]: 2024-09-04 17:53:03.236 [INFO][5151] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" iface="eth0" netns=""
Sep  4 17:53:03.265827 containerd[1583]: 2024-09-04 17:53:03.236 [INFO][5151] k8s.go 615: Releasing IP address(es) ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"
Sep  4 17:53:03.265827 containerd[1583]: 2024-09-04 17:53:03.236 [INFO][5151] utils.go 188: Calico CNI releasing IP address ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"
Sep  4 17:53:03.265827 containerd[1583]: 2024-09-04 17:53:03.255 [INFO][5158] ipam_plugin.go 417: Releasing address using handleID ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" HandleID="k8s-pod-network.8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" Workload="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:53:03.265827 containerd[1583]: 2024-09-04 17:53:03.255 [INFO][5158] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:53:03.265827 containerd[1583]: 2024-09-04 17:53:03.255 [INFO][5158] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:53:03.265827 containerd[1583]: 2024-09-04 17:53:03.259 [WARNING][5158] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" HandleID="k8s-pod-network.8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" Workload="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:53:03.265827 containerd[1583]: 2024-09-04 17:53:03.259 [INFO][5158] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" HandleID="k8s-pod-network.8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c" Workload="localhost-k8s-coredns--5dd5756b68--wckr9-eth0"
Sep  4 17:53:03.265827 containerd[1583]: 2024-09-04 17:53:03.261 [INFO][5158] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:53:03.265827 containerd[1583]: 2024-09-04 17:53:03.263 [INFO][5151] k8s.go 621: Teardown processing complete. ContainerID="8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c"
Sep  4 17:53:03.266254 containerd[1583]: time="2024-09-04T17:53:03.265865314Z" level=info msg="TearDown network for sandbox \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\" successfully"
Sep  4 17:53:03.273259 containerd[1583]: time="2024-09-04T17:53:03.273217411Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep  4 17:53:03.273309 containerd[1583]: time="2024-09-04T17:53:03.273281782Z" level=info msg="RemovePodSandbox \"8d375e90d18ce13f9c74206f9ddc8bf5afba71ce849c944fbd94b3f190441d2c\" returns successfully"
Sep  4 17:53:03.273891 containerd[1583]: time="2024-09-04T17:53:03.273844899Z" level=info msg="StopPodSandbox for \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\""
Sep  4 17:53:03.334886 containerd[1583]: 2024-09-04 17:53:03.305 [WARNING][5182] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--22jch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"520ef3bc-9622-4072-8027-438b0db6b0ef", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568", Pod:"csi-node-driver-22jch", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic6c7d530a3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:53:03.334886 containerd[1583]: 2024-09-04 17:53:03.306 [INFO][5182] k8s.go 608: Cleaning up netns ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"
Sep  4 17:53:03.334886 containerd[1583]: 2024-09-04 17:53:03.306 [INFO][5182] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" iface="eth0" netns=""
Sep  4 17:53:03.334886 containerd[1583]: 2024-09-04 17:53:03.306 [INFO][5182] k8s.go 615: Releasing IP address(es) ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"
Sep  4 17:53:03.334886 containerd[1583]: 2024-09-04 17:53:03.306 [INFO][5182] utils.go 188: Calico CNI releasing IP address ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"
Sep  4 17:53:03.334886 containerd[1583]: 2024-09-04 17:53:03.324 [INFO][5189] ipam_plugin.go 417: Releasing address using handleID ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" HandleID="k8s-pod-network.f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" Workload="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:53:03.334886 containerd[1583]: 2024-09-04 17:53:03.324 [INFO][5189] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:53:03.334886 containerd[1583]: 2024-09-04 17:53:03.324 [INFO][5189] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:53:03.334886 containerd[1583]: 2024-09-04 17:53:03.328 [WARNING][5189] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" HandleID="k8s-pod-network.f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" Workload="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:53:03.334886 containerd[1583]: 2024-09-04 17:53:03.329 [INFO][5189] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" HandleID="k8s-pod-network.f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" Workload="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:53:03.334886 containerd[1583]: 2024-09-04 17:53:03.330 [INFO][5189] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:53:03.334886 containerd[1583]: 2024-09-04 17:53:03.332 [INFO][5182] k8s.go 621: Teardown processing complete. ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"
Sep  4 17:53:03.334886 containerd[1583]: time="2024-09-04T17:53:03.334839533Z" level=info msg="TearDown network for sandbox \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\" successfully"
Sep  4 17:53:03.334886 containerd[1583]: time="2024-09-04T17:53:03.334861925Z" level=info msg="StopPodSandbox for \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\" returns successfully"
Sep  4 17:53:03.335591 containerd[1583]: time="2024-09-04T17:53:03.335380548Z" level=info msg="RemovePodSandbox for \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\""
Sep  4 17:53:03.335591 containerd[1583]: time="2024-09-04T17:53:03.335400456Z" level=info msg="Forcibly stopping sandbox \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\""
Sep  4 17:53:03.361908 systemd[1]: run-containerd-runc-k8s.io-939d91d18f05ce55fbe15909e484b2b1e0484b61951eb09265db0b0f93c58991-runc.qEXn6H.mount: Deactivated successfully.
Sep  4 17:53:03.399863 containerd[1583]: 2024-09-04 17:53:03.371 [WARNING][5230] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--22jch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"520ef3bc-9622-4072-8027-438b0db6b0ef", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eebcb9e84df6fcff7c5e9d76ca8a9eea120d90093d3c9f90b6e0446a4be11568", Pod:"csi-node-driver-22jch", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic6c7d530a3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:53:03.399863 containerd[1583]: 2024-09-04 17:53:03.371 [INFO][5230] k8s.go 608: Cleaning up netns ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"
Sep  4 17:53:03.399863 containerd[1583]: 2024-09-04 17:53:03.371 [INFO][5230] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" iface="eth0" netns=""
Sep  4 17:53:03.399863 containerd[1583]: 2024-09-04 17:53:03.372 [INFO][5230] k8s.go 615: Releasing IP address(es) ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"
Sep  4 17:53:03.399863 containerd[1583]: 2024-09-04 17:53:03.372 [INFO][5230] utils.go 188: Calico CNI releasing IP address ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"
Sep  4 17:53:03.399863 containerd[1583]: 2024-09-04 17:53:03.389 [INFO][5242] ipam_plugin.go 417: Releasing address using handleID ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" HandleID="k8s-pod-network.f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" Workload="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:53:03.399863 containerd[1583]: 2024-09-04 17:53:03.389 [INFO][5242] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:53:03.399863 containerd[1583]: 2024-09-04 17:53:03.389 [INFO][5242] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:53:03.399863 containerd[1583]: 2024-09-04 17:53:03.394 [WARNING][5242] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" HandleID="k8s-pod-network.f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" Workload="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:53:03.399863 containerd[1583]: 2024-09-04 17:53:03.394 [INFO][5242] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" HandleID="k8s-pod-network.f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f" Workload="localhost-k8s-csi--node--driver--22jch-eth0"
Sep  4 17:53:03.399863 containerd[1583]: 2024-09-04 17:53:03.395 [INFO][5242] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:53:03.399863 containerd[1583]: 2024-09-04 17:53:03.397 [INFO][5230] k8s.go 621: Teardown processing complete. ContainerID="f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f"
Sep  4 17:53:03.400580 containerd[1583]: time="2024-09-04T17:53:03.399896314Z" level=info msg="TearDown network for sandbox \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\" successfully"
Sep  4 17:53:03.403394 containerd[1583]: time="2024-09-04T17:53:03.403364117Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep  4 17:53:03.403439 containerd[1583]: time="2024-09-04T17:53:03.403405074Z" level=info msg="RemovePodSandbox \"f45bda278449013be67c18a8eb1c2e5256765520d220690af870cae3c1bc2e3f\" returns successfully"
Sep  4 17:53:04.080449 kubelet[2698]: I0904 17:53:04.080416    2698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep  4 17:53:05.121819 kubelet[2698]: I0904 17:53:05.121776    2698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep  4 17:53:06.093668 systemd[1]: Started sshd@15-10.0.0.147:22-10.0.0.1:60244.service - OpenSSH per-connection server daemon (10.0.0.1:60244).
Sep  4 17:53:06.125515 sshd[5256]: Accepted publickey for core from 10.0.0.1 port 60244 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:53:06.127117 sshd[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:53:06.130935 systemd-logind[1557]: New session 16 of user core.
Sep  4 17:53:06.140292 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep  4 17:53:06.256001 sshd[5256]: pam_unix(sshd:session): session closed for user core
Sep  4 17:53:06.266253 systemd[1]: Started sshd@16-10.0.0.147:22-10.0.0.1:46648.service - OpenSSH per-connection server daemon (10.0.0.1:46648).
Sep  4 17:53:06.266694 systemd[1]: sshd@15-10.0.0.147:22-10.0.0.1:60244.service: Deactivated successfully.
Sep  4 17:53:06.270016 systemd-logind[1557]: Session 16 logged out. Waiting for processes to exit.
Sep  4 17:53:06.270679 systemd[1]: session-16.scope: Deactivated successfully.
Sep  4 17:53:06.271982 systemd-logind[1557]: Removed session 16.
Sep  4 17:53:06.291926 sshd[5269]: Accepted publickey for core from 10.0.0.1 port 46648 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:53:06.293627 sshd[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:53:06.297448 systemd-logind[1557]: New session 17 of user core.
Sep  4 17:53:06.308376 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep  4 17:53:06.489798 sshd[5269]: pam_unix(sshd:session): session closed for user core
Sep  4 17:53:06.496251 systemd[1]: Started sshd@17-10.0.0.147:22-10.0.0.1:46664.service - OpenSSH per-connection server daemon (10.0.0.1:46664).
Sep  4 17:53:06.496704 systemd[1]: sshd@16-10.0.0.147:22-10.0.0.1:46648.service: Deactivated successfully.
Sep  4 17:53:06.500260 systemd[1]: session-17.scope: Deactivated successfully.
Sep  4 17:53:06.500812 systemd-logind[1557]: Session 17 logged out. Waiting for processes to exit.
Sep  4 17:53:06.501800 systemd-logind[1557]: Removed session 17.
Sep  4 17:53:06.523455 sshd[5282]: Accepted publickey for core from 10.0.0.1 port 46664 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:53:06.524897 sshd[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:53:06.528690 systemd-logind[1557]: New session 18 of user core.
Sep  4 17:53:06.536272 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep  4 17:53:07.788310 sshd[5282]: pam_unix(sshd:session): session closed for user core
Sep  4 17:53:07.799165 systemd[1]: Started sshd@18-10.0.0.147:22-10.0.0.1:46670.service - OpenSSH per-connection server daemon (10.0.0.1:46670).
Sep  4 17:53:07.799665 systemd[1]: sshd@17-10.0.0.147:22-10.0.0.1:46664.service: Deactivated successfully.
Sep  4 17:53:07.807628 systemd[1]: session-18.scope: Deactivated successfully.
Sep  4 17:53:07.810564 systemd-logind[1557]: Session 18 logged out. Waiting for processes to exit.
Sep  4 17:53:07.811816 systemd-logind[1557]: Removed session 18.
Sep  4 17:53:07.838884 sshd[5306]: Accepted publickey for core from 10.0.0.1 port 46670 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:53:07.840724 sshd[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:53:07.845509 systemd-logind[1557]: New session 19 of user core.
Sep  4 17:53:07.851684 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep  4 17:53:08.365205 sshd[5306]: pam_unix(sshd:session): session closed for user core
Sep  4 17:53:08.374377 systemd[1]: Started sshd@19-10.0.0.147:22-10.0.0.1:46686.service - OpenSSH per-connection server daemon (10.0.0.1:46686).
Sep  4 17:53:08.374997 systemd[1]: sshd@18-10.0.0.147:22-10.0.0.1:46670.service: Deactivated successfully.
Sep  4 17:53:08.377701 systemd[1]: session-19.scope: Deactivated successfully.
Sep  4 17:53:08.380101 systemd-logind[1557]: Session 19 logged out. Waiting for processes to exit.
Sep  4 17:53:08.381070 systemd-logind[1557]: Removed session 19.
Sep  4 17:53:08.404110 sshd[5322]: Accepted publickey for core from 10.0.0.1 port 46686 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:53:08.405824 sshd[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:53:08.410035 systemd-logind[1557]: New session 20 of user core.
Sep  4 17:53:08.420272 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep  4 17:53:08.531218 sshd[5322]: pam_unix(sshd:session): session closed for user core
Sep  4 17:53:08.535828 systemd[1]: sshd@19-10.0.0.147:22-10.0.0.1:46686.service: Deactivated successfully.
Sep  4 17:53:08.538086 systemd-logind[1557]: Session 20 logged out. Waiting for processes to exit.
Sep  4 17:53:08.538173 systemd[1]: session-20.scope: Deactivated successfully.
Sep  4 17:53:08.539094 systemd-logind[1557]: Removed session 20.
Sep  4 17:53:13.551304 systemd[1]: Started sshd@20-10.0.0.147:22-10.0.0.1:46694.service - OpenSSH per-connection server daemon (10.0.0.1:46694).
Sep  4 17:53:13.576781 sshd[5363]: Accepted publickey for core from 10.0.0.1 port 46694 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:53:13.578260 sshd[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:53:13.581840 systemd-logind[1557]: New session 21 of user core.
Sep  4 17:53:13.596266 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep  4 17:53:13.692984 sshd[5363]: pam_unix(sshd:session): session closed for user core
Sep  4 17:53:13.697623 systemd[1]: sshd@20-10.0.0.147:22-10.0.0.1:46694.service: Deactivated successfully.
Sep  4 17:53:13.700015 systemd[1]: session-21.scope: Deactivated successfully.
Sep  4 17:53:13.700794 systemd-logind[1557]: Session 21 logged out. Waiting for processes to exit.
Sep  4 17:53:13.701711 systemd-logind[1557]: Removed session 21.
Sep  4 17:53:18.705270 systemd[1]: Started sshd@21-10.0.0.147:22-10.0.0.1:38376.service - OpenSSH per-connection server daemon (10.0.0.1:38376).
Sep  4 17:53:18.734298 sshd[5394]: Accepted publickey for core from 10.0.0.1 port 38376 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:53:18.735923 sshd[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:53:18.739977 systemd-logind[1557]: New session 22 of user core.
Sep  4 17:53:18.753304 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep  4 17:53:18.852461 sshd[5394]: pam_unix(sshd:session): session closed for user core
Sep  4 17:53:18.856402 systemd[1]: sshd@21-10.0.0.147:22-10.0.0.1:38376.service: Deactivated successfully.
Sep  4 17:53:18.858868 systemd-logind[1557]: Session 22 logged out. Waiting for processes to exit.
Sep  4 17:53:18.858957 systemd[1]: session-22.scope: Deactivated successfully.
Sep  4 17:53:18.859937 systemd-logind[1557]: Removed session 22.
Sep  4 17:53:23.839065 kubelet[2698]: E0904 17:53:23.838992    2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:53:23.864271 systemd[1]: Started sshd@22-10.0.0.147:22-10.0.0.1:38384.service - OpenSSH per-connection server daemon (10.0.0.1:38384).
Sep  4 17:53:23.889291 sshd[5411]: Accepted publickey for core from 10.0.0.1 port 38384 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:53:23.890843 sshd[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:53:23.894478 systemd-logind[1557]: New session 23 of user core.
Sep  4 17:53:23.902289 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep  4 17:53:24.009525 sshd[5411]: pam_unix(sshd:session): session closed for user core
Sep  4 17:53:24.014180 systemd[1]: sshd@22-10.0.0.147:22-10.0.0.1:38384.service: Deactivated successfully.
Sep  4 17:53:24.017168 systemd[1]: session-23.scope: Deactivated successfully.
Sep  4 17:53:24.017956 systemd-logind[1557]: Session 23 logged out. Waiting for processes to exit.
Sep  4 17:53:24.018840 systemd-logind[1557]: Removed session 23.
Sep  4 17:53:29.020255 systemd[1]: Started sshd@23-10.0.0.147:22-10.0.0.1:36408.service - OpenSSH per-connection server daemon (10.0.0.1:36408).
Sep  4 17:53:29.053659 sshd[5433]: Accepted publickey for core from 10.0.0.1 port 36408 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU
Sep  4 17:53:29.055126 sshd[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep  4 17:53:29.059184 systemd-logind[1557]: New session 24 of user core.
Sep  4 17:53:29.071396 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep  4 17:53:29.175768 sshd[5433]: pam_unix(sshd:session): session closed for user core
Sep  4 17:53:29.180343 systemd[1]: sshd@23-10.0.0.147:22-10.0.0.1:36408.service: Deactivated successfully.
Sep  4 17:53:29.183291 systemd[1]: session-24.scope: Deactivated successfully.
Sep  4 17:53:29.184138 systemd-logind[1557]: Session 24 logged out. Waiting for processes to exit.
Sep  4 17:53:29.185011 systemd-logind[1557]: Removed session 24.