Aug  5 22:31:28.911314 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug  5 20:36:22 -00 2024
Aug  5 22:31:28.911339 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695
Aug  5 22:31:28.911353 kernel: BIOS-provided physical RAM map:
Aug  5 22:31:28.911361 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug  5 22:31:28.911370 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug  5 22:31:28.911378 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug  5 22:31:28.911388 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Aug  5 22:31:28.911397 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Aug  5 22:31:28.911405 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug  5 22:31:28.911416 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug  5 22:31:28.911425 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug  5 22:31:28.911433 kernel: NX (Execute Disable) protection: active
Aug  5 22:31:28.911442 kernel: APIC: Static calls initialized
Aug  5 22:31:28.911451 kernel: SMBIOS 2.8 present.
Aug  5 22:31:28.911461 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Aug  5 22:31:28.911473 kernel: Hypervisor detected: KVM
Aug  5 22:31:28.911482 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug  5 22:31:28.911491 kernel: kvm-clock: using sched offset of 2236315160 cycles
Aug  5 22:31:28.911500 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug  5 22:31:28.911510 kernel: tsc: Detected 2794.748 MHz processor
Aug  5 22:31:28.911520 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug  5 22:31:28.911530 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug  5 22:31:28.911539 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Aug  5 22:31:28.911549 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug  5 22:31:28.911561 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Aug  5 22:31:28.911571 kernel: Using GB pages for direct mapping
Aug  5 22:31:28.911580 kernel: ACPI: Early table checksum verification disabled
Aug  5 22:31:28.911590 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Aug  5 22:31:28.911599 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Aug  5 22:31:28.911609 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Aug  5 22:31:28.911618 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Aug  5 22:31:28.911627 kernel: ACPI: FACS 0x000000009CFE0000 000040
Aug  5 22:31:28.911637 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Aug  5 22:31:28.911649 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Aug  5 22:31:28.911671 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Aug  5 22:31:28.911681 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Aug  5 22:31:28.911690 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Aug  5 22:31:28.911700 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Aug  5 22:31:28.911709 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Aug  5 22:31:28.911719 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Aug  5 22:31:28.911733 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Aug  5 22:31:28.911746 kernel: No NUMA configuration found
Aug  5 22:31:28.911755 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Aug  5 22:31:28.911766 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Aug  5 22:31:28.911775 kernel: Zone ranges:
Aug  5 22:31:28.911785 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Aug  5 22:31:28.911795 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cfdcfff]
Aug  5 22:31:28.911807 kernel:   Normal   empty
Aug  5 22:31:28.911817 kernel: Movable zone start for each node
Aug  5 22:31:28.911827 kernel: Early memory node ranges
Aug  5 22:31:28.911837 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Aug  5 22:31:28.911846 kernel:   node   0: [mem 0x0000000000100000-0x000000009cfdcfff]
Aug  5 22:31:28.911856 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Aug  5 22:31:28.911866 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug  5 22:31:28.911876 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug  5 22:31:28.911886 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Aug  5 22:31:28.911896 kernel: ACPI: PM-Timer IO Port: 0x608
Aug  5 22:31:28.911909 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug  5 22:31:28.911919 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug  5 22:31:28.911929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug  5 22:31:28.911939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug  5 22:31:28.911949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug  5 22:31:28.911958 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug  5 22:31:28.911968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug  5 22:31:28.911978 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug  5 22:31:28.911991 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug  5 22:31:28.912001 kernel: TSC deadline timer available
Aug  5 22:31:28.912011 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug  5 22:31:28.912021 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug  5 22:31:28.912031 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug  5 22:31:28.912041 kernel: kvm-guest: setup PV sched yield
Aug  5 22:31:28.912051 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Aug  5 22:31:28.912061 kernel: Booting paravirtualized kernel on KVM
Aug  5 22:31:28.912071 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug  5 22:31:28.912081 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug  5 22:31:28.912093 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Aug  5 22:31:28.912103 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Aug  5 22:31:28.912113 kernel: pcpu-alloc: [0] 0 1 2 3 
Aug  5 22:31:28.912122 kernel: kvm-guest: PV spinlocks enabled
Aug  5 22:31:28.912132 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug  5 22:31:28.912143 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695
Aug  5 22:31:28.912154 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug  5 22:31:28.912163 kernel: random: crng init done
Aug  5 22:31:28.912176 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug  5 22:31:28.912196 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug  5 22:31:28.912206 kernel: Fallback order for Node 0: 0 
Aug  5 22:31:28.912216 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 632733
Aug  5 22:31:28.912226 kernel: Policy zone: DMA32
Aug  5 22:31:28.912236 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug  5 22:31:28.912246 kernel: Memory: 2428452K/2571756K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49372K init, 1972K bss, 143044K reserved, 0K cma-reserved)
Aug  5 22:31:28.912256 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug  5 22:31:28.912266 kernel: ftrace: allocating 37659 entries in 148 pages
Aug  5 22:31:28.912278 kernel: ftrace: allocated 148 pages with 3 groups
Aug  5 22:31:28.912288 kernel: Dynamic Preempt: voluntary
Aug  5 22:31:28.912298 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug  5 22:31:28.912308 kernel: rcu:         RCU event tracing is enabled.
Aug  5 22:31:28.912319 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug  5 22:31:28.912329 kernel:         Trampoline variant of Tasks RCU enabled.
Aug  5 22:31:28.912339 kernel:         Rude variant of Tasks RCU enabled.
Aug  5 22:31:28.912349 kernel:         Tracing variant of Tasks RCU enabled.
Aug  5 22:31:28.912359 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug  5 22:31:28.912372 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug  5 22:31:28.912382 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug  5 22:31:28.912392 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug  5 22:31:28.912402 kernel: Console: colour VGA+ 80x25
Aug  5 22:31:28.912412 kernel: printk: console [ttyS0] enabled
Aug  5 22:31:28.912421 kernel: ACPI: Core revision 20230628
Aug  5 22:31:28.912432 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug  5 22:31:28.912442 kernel: APIC: Switch to symmetric I/O mode setup
Aug  5 22:31:28.912452 kernel: x2apic enabled
Aug  5 22:31:28.912465 kernel: APIC: Switched APIC routing to: physical x2apic
Aug  5 22:31:28.912476 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug  5 22:31:28.912487 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug  5 22:31:28.912498 kernel: kvm-guest: setup PV IPIs
Aug  5 22:31:28.912508 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug  5 22:31:28.912518 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug  5 22:31:28.912528 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Aug  5 22:31:28.912539 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug  5 22:31:28.912561 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug  5 22:31:28.912572 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug  5 22:31:28.912581 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug  5 22:31:28.912588 kernel: Spectre V2 : Mitigation: Retpolines
Aug  5 22:31:28.912599 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Aug  5 22:31:28.912610 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Aug  5 22:31:28.912620 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug  5 22:31:28.912631 kernel: RETBleed: Mitigation: untrained return thunk
Aug  5 22:31:28.912642 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug  5 22:31:28.912681 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug  5 22:31:28.912689 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug  5 22:31:28.912700 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug  5 22:31:28.912711 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug  5 22:31:28.912722 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug  5 22:31:28.912733 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug  5 22:31:28.912743 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug  5 22:31:28.912754 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Aug  5 22:31:28.912778 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug  5 22:31:28.912790 kernel: Freeing SMP alternatives memory: 32K
Aug  5 22:31:28.912810 kernel: pid_max: default: 32768 minimum: 301
Aug  5 22:31:28.912837 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug  5 22:31:28.912857 kernel: SELinux:  Initializing.
Aug  5 22:31:28.912876 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug  5 22:31:28.912904 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug  5 22:31:28.912930 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug  5 22:31:28.912961 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug  5 22:31:28.912973 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug  5 22:31:28.912983 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug  5 22:31:28.912994 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug  5 22:31:28.913004 kernel: ... version:                0
Aug  5 22:31:28.913015 kernel: ... bit width:              48
Aug  5 22:31:28.913025 kernel: ... generic registers:      6
Aug  5 22:31:28.913036 kernel: ... value mask:             0000ffffffffffff
Aug  5 22:31:28.913047 kernel: ... max period:             00007fffffffffff
Aug  5 22:31:28.913057 kernel: ... fixed-purpose events:   0
Aug  5 22:31:28.913071 kernel: ... event mask:             000000000000003f
Aug  5 22:31:28.913082 kernel: signal: max sigframe size: 1776
Aug  5 22:31:28.913093 kernel: rcu: Hierarchical SRCU implementation.
Aug  5 22:31:28.913104 kernel: rcu:         Max phase no-delay instances is 400.
Aug  5 22:31:28.913114 kernel: smp: Bringing up secondary CPUs ...
Aug  5 22:31:28.913124 kernel: smpboot: x86: Booting SMP configuration:
Aug  5 22:31:28.913134 kernel: .... node  #0, CPUs:      #1 #2 #3
Aug  5 22:31:28.913144 kernel: smp: Brought up 1 node, 4 CPUs
Aug  5 22:31:28.913155 kernel: smpboot: Max logical packages: 1
Aug  5 22:31:28.913169 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Aug  5 22:31:28.913180 kernel: devtmpfs: initialized
Aug  5 22:31:28.913200 kernel: x86/mm: Memory block size: 128MB
Aug  5 22:31:28.913211 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug  5 22:31:28.913222 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug  5 22:31:28.913233 kernel: pinctrl core: initialized pinctrl subsystem
Aug  5 22:31:28.913243 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug  5 22:31:28.913254 kernel: audit: initializing netlink subsys (disabled)
Aug  5 22:31:28.913265 kernel: audit: type=2000 audit(1722897087.934:1): state=initialized audit_enabled=0 res=1
Aug  5 22:31:28.913279 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug  5 22:31:28.913290 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug  5 22:31:28.913301 kernel: cpuidle: using governor menu
Aug  5 22:31:28.913311 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug  5 22:31:28.913322 kernel: dca service started, version 1.12.1
Aug  5 22:31:28.913332 kernel: PCI: Using configuration type 1 for base access
Aug  5 22:31:28.913343 kernel: PCI: Using configuration type 1 for extended access
Aug  5 22:31:28.913354 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug  5 22:31:28.913364 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug  5 22:31:28.913379 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug  5 22:31:28.913390 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug  5 22:31:28.913401 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug  5 22:31:28.913412 kernel: ACPI: Added _OSI(Module Device)
Aug  5 22:31:28.913422 kernel: ACPI: Added _OSI(Processor Device)
Aug  5 22:31:28.913432 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug  5 22:31:28.913442 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug  5 22:31:28.913453 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug  5 22:31:28.913464 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug  5 22:31:28.913478 kernel: ACPI: Interpreter enabled
Aug  5 22:31:28.913489 kernel: ACPI: PM: (supports S0 S3 S5)
Aug  5 22:31:28.913500 kernel: ACPI: Using IOAPIC for interrupt routing
Aug  5 22:31:28.913510 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug  5 22:31:28.913521 kernel: PCI: Using E820 reservations for host bridge windows
Aug  5 22:31:28.913532 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug  5 22:31:28.913543 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug  5 22:31:28.913807 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug  5 22:31:28.913831 kernel: acpiphp: Slot [3] registered
Aug  5 22:31:28.913842 kernel: acpiphp: Slot [4] registered
Aug  5 22:31:28.913853 kernel: acpiphp: Slot [5] registered
Aug  5 22:31:28.913863 kernel: acpiphp: Slot [6] registered
Aug  5 22:31:28.913874 kernel: acpiphp: Slot [7] registered
Aug  5 22:31:28.913884 kernel: acpiphp: Slot [8] registered
Aug  5 22:31:28.913894 kernel: acpiphp: Slot [9] registered
Aug  5 22:31:28.913905 kernel: acpiphp: Slot [10] registered
Aug  5 22:31:28.913915 kernel: acpiphp: Slot [11] registered
Aug  5 22:31:28.913929 kernel: acpiphp: Slot [12] registered
Aug  5 22:31:28.913939 kernel: acpiphp: Slot [13] registered
Aug  5 22:31:28.913950 kernel: acpiphp: Slot [14] registered
Aug  5 22:31:28.913960 kernel: acpiphp: Slot [15] registered
Aug  5 22:31:28.913970 kernel: acpiphp: Slot [16] registered
Aug  5 22:31:28.913980 kernel: acpiphp: Slot [17] registered
Aug  5 22:31:28.913991 kernel: acpiphp: Slot [18] registered
Aug  5 22:31:28.914001 kernel: acpiphp: Slot [19] registered
Aug  5 22:31:28.914011 kernel: acpiphp: Slot [20] registered
Aug  5 22:31:28.914022 kernel: acpiphp: Slot [21] registered
Aug  5 22:31:28.914035 kernel: acpiphp: Slot [22] registered
Aug  5 22:31:28.914045 kernel: acpiphp: Slot [23] registered
Aug  5 22:31:28.914055 kernel: acpiphp: Slot [24] registered
Aug  5 22:31:28.914066 kernel: acpiphp: Slot [25] registered
Aug  5 22:31:28.914076 kernel: acpiphp: Slot [26] registered
Aug  5 22:31:28.914086 kernel: acpiphp: Slot [27] registered
Aug  5 22:31:28.914096 kernel: acpiphp: Slot [28] registered
Aug  5 22:31:28.914107 kernel: acpiphp: Slot [29] registered
Aug  5 22:31:28.914117 kernel: acpiphp: Slot [30] registered
Aug  5 22:31:28.914130 kernel: acpiphp: Slot [31] registered
Aug  5 22:31:28.914145 kernel: PCI host bridge to bus 0000:00
Aug  5 22:31:28.914331 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Aug  5 22:31:28.914474 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Aug  5 22:31:28.914669 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug  5 22:31:28.914826 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Aug  5 22:31:28.914968 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Aug  5 22:31:28.915112 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug  5 22:31:28.915321 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug  5 22:31:28.915496 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug  5 22:31:28.915691 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Aug  5 22:31:28.915902 kernel: pci 0000:00:01.1: reg 0x20: [io  0xc0c0-0xc0cf]
Aug  5 22:31:28.916075 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Aug  5 22:31:28.916273 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Aug  5 22:31:28.916447 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Aug  5 22:31:28.916602 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Aug  5 22:31:28.916801 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug  5 22:31:28.916959 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Aug  5 22:31:28.917117 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Aug  5 22:31:28.917317 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Aug  5 22:31:28.917517 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Aug  5 22:31:28.917777 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Aug  5 22:31:28.917941 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Aug  5 22:31:28.918098 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug  5 22:31:28.918265 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Aug  5 22:31:28.918389 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc080-0xc09f]
Aug  5 22:31:28.918511 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Aug  5 22:31:28.918676 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Aug  5 22:31:28.918830 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Aug  5 22:31:28.918961 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc07f]
Aug  5 22:31:28.919085 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Aug  5 22:31:28.919219 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Aug  5 22:31:28.919353 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Aug  5 22:31:28.919475 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc0a0-0xc0bf]
Aug  5 22:31:28.919603 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Aug  5 22:31:28.919752 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Aug  5 22:31:28.919873 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Aug  5 22:31:28.919883 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug  5 22:31:28.919891 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug  5 22:31:28.919900 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug  5 22:31:28.919907 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug  5 22:31:28.919915 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug  5 22:31:28.919927 kernel: iommu: Default domain type: Translated
Aug  5 22:31:28.919938 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug  5 22:31:28.919949 kernel: PCI: Using ACPI for IRQ routing
Aug  5 22:31:28.919960 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug  5 22:31:28.919968 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug  5 22:31:28.919976 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Aug  5 22:31:28.920101 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Aug  5 22:31:28.920232 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Aug  5 22:31:28.920353 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug  5 22:31:28.920367 kernel: vgaarb: loaded
Aug  5 22:31:28.920375 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug  5 22:31:28.920383 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug  5 22:31:28.920390 kernel: clocksource: Switched to clocksource kvm-clock
Aug  5 22:31:28.920398 kernel: VFS: Disk quotas dquot_6.6.0
Aug  5 22:31:28.920406 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug  5 22:31:28.920414 kernel: pnp: PnP ACPI init
Aug  5 22:31:28.920558 kernel: pnp 00:02: [dma 2]
Aug  5 22:31:28.920574 kernel: pnp: PnP ACPI: found 6 devices
Aug  5 22:31:28.920582 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug  5 22:31:28.920590 kernel: NET: Registered PF_INET protocol family
Aug  5 22:31:28.920598 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug  5 22:31:28.920606 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug  5 22:31:28.920614 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug  5 22:31:28.920622 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug  5 22:31:28.920630 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug  5 22:31:28.920638 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug  5 22:31:28.920648 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug  5 22:31:28.920671 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug  5 22:31:28.920679 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug  5 22:31:28.920687 kernel: NET: Registered PF_XDP protocol family
Aug  5 22:31:28.920801 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Aug  5 22:31:28.920912 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Aug  5 22:31:28.921033 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug  5 22:31:28.921145 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Aug  5 22:31:28.921268 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Aug  5 22:31:28.921390 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Aug  5 22:31:28.921512 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug  5 22:31:28.921522 kernel: PCI: CLS 0 bytes, default 64
Aug  5 22:31:28.921530 kernel: Initialise system trusted keyrings
Aug  5 22:31:28.921538 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug  5 22:31:28.921546 kernel: Key type asymmetric registered
Aug  5 22:31:28.921554 kernel: Asymmetric key parser 'x509' registered
Aug  5 22:31:28.921565 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug  5 22:31:28.921572 kernel: io scheduler mq-deadline registered
Aug  5 22:31:28.921580 kernel: io scheduler kyber registered
Aug  5 22:31:28.921588 kernel: io scheduler bfq registered
Aug  5 22:31:28.921596 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug  5 22:31:28.921604 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug  5 22:31:28.921612 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Aug  5 22:31:28.921620 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug  5 22:31:28.921628 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug  5 22:31:28.921636 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug  5 22:31:28.921646 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug  5 22:31:28.921667 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug  5 22:31:28.921675 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug  5 22:31:28.921813 kernel: rtc_cmos 00:05: RTC can wake from S4
Aug  5 22:31:28.921824 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug  5 22:31:28.921939 kernel: rtc_cmos 00:05: registered as rtc0
Aug  5 22:31:28.922063 kernel: rtc_cmos 00:05: setting system clock to 2024-08-05T22:31:28 UTC (1722897088)
Aug  5 22:31:28.922180 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug  5 22:31:28.922201 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug  5 22:31:28.922209 kernel: NET: Registered PF_INET6 protocol family
Aug  5 22:31:28.922216 kernel: Segment Routing with IPv6
Aug  5 22:31:28.922224 kernel: In-situ OAM (IOAM) with IPv6
Aug  5 22:31:28.922232 kernel: NET: Registered PF_PACKET protocol family
Aug  5 22:31:28.922240 kernel: Key type dns_resolver registered
Aug  5 22:31:28.922248 kernel: IPI shorthand broadcast: enabled
Aug  5 22:31:28.922256 kernel: sched_clock: Marking stable (916003545, 110098111)->(1150228429, -124126773)
Aug  5 22:31:28.922267 kernel: registered taskstats version 1
Aug  5 22:31:28.922275 kernel: Loading compiled-in X.509 certificates
Aug  5 22:31:28.922304 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: d8f193b4a33a492a73da7ce4522bbc835ec39532'
Aug  5 22:31:28.922312 kernel: Key type .fscrypt registered
Aug  5 22:31:28.922319 kernel: Key type fscrypt-provisioning registered
Aug  5 22:31:28.922327 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug  5 22:31:28.922335 kernel: ima: Allocated hash algorithm: sha1
Aug  5 22:31:28.922342 kernel: ima: No architecture policies found
Aug  5 22:31:28.922350 kernel: clk: Disabling unused clocks
Aug  5 22:31:28.922360 kernel: Freeing unused kernel image (initmem) memory: 49372K
Aug  5 22:31:28.922368 kernel: Write protecting the kernel read-only data: 36864k
Aug  5 22:31:28.922376 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Aug  5 22:31:28.922384 kernel: Run /init as init process
Aug  5 22:31:28.922391 kernel:   with arguments:
Aug  5 22:31:28.922399 kernel:     /init
Aug  5 22:31:28.922407 kernel:   with environment:
Aug  5 22:31:28.922414 kernel:     HOME=/
Aug  5 22:31:28.922438 kernel:     TERM=linux
Aug  5 22:31:28.922450 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Aug  5 22:31:28.922460 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug  5 22:31:28.922471 systemd[1]: Detected virtualization kvm.
Aug  5 22:31:28.922479 systemd[1]: Detected architecture x86-64.
Aug  5 22:31:28.922492 systemd[1]: Running in initrd.
Aug  5 22:31:28.922500 systemd[1]: No hostname configured, using default hostname.
Aug  5 22:31:28.922508 systemd[1]: Hostname set to <localhost>.
Aug  5 22:31:28.922520 systemd[1]: Initializing machine ID from VM UUID.
Aug  5 22:31:28.922528 systemd[1]: Queued start job for default target initrd.target.
Aug  5 22:31:28.922537 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug  5 22:31:28.922546 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug  5 22:31:28.922555 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug  5 22:31:28.922563 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug  5 22:31:28.922572 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug  5 22:31:28.922580 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug  5 22:31:28.922593 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug  5 22:31:28.922602 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug  5 22:31:28.922610 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug  5 22:31:28.922619 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug  5 22:31:28.922627 systemd[1]: Reached target paths.target - Path Units.
Aug  5 22:31:28.922636 systemd[1]: Reached target slices.target - Slice Units.
Aug  5 22:31:28.922644 systemd[1]: Reached target swap.target - Swaps.
Aug  5 22:31:28.922667 systemd[1]: Reached target timers.target - Timer Units.
Aug  5 22:31:28.922676 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug  5 22:31:28.922684 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug  5 22:31:28.922693 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug  5 22:31:28.922701 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug  5 22:31:28.922710 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug  5 22:31:28.922718 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug  5 22:31:28.922727 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug  5 22:31:28.922738 systemd[1]: Reached target sockets.target - Socket Units.
Aug  5 22:31:28.922749 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug  5 22:31:28.922757 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug  5 22:31:28.922766 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug  5 22:31:28.922774 systemd[1]: Starting systemd-fsck-usr.service...
Aug  5 22:31:28.922783 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug  5 22:31:28.922794 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug  5 22:31:28.922803 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug  5 22:31:28.922811 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug  5 22:31:28.922820 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug  5 22:31:28.922828 systemd[1]: Finished systemd-fsck-usr.service.
Aug  5 22:31:28.922856 systemd-journald[193]: Collecting audit messages is disabled.
Aug  5 22:31:28.922879 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug  5 22:31:28.922888 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug  5 22:31:28.922899 systemd-journald[193]: Journal started
Aug  5 22:31:28.922917 systemd-journald[193]: Runtime Journal (/run/log/journal/4dff3f2d160f4223a7f34c827e957967) is 6.0M, max 48.4M, 42.3M free.
Aug  5 22:31:28.916216 systemd-modules-load[194]: Inserted module 'overlay'
Aug  5 22:31:28.955359 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug  5 22:31:28.955382 kernel: Bridge firewalling registered
Aug  5 22:31:28.949534 systemd-modules-load[194]: Inserted module 'br_netfilter'
Aug  5 22:31:28.957686 systemd[1]: Started systemd-journald.service - Journal Service.
Aug  5 22:31:28.959112 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug  5 22:31:28.961776 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug  5 22:31:28.972798 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug  5 22:31:28.976575 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug  5 22:31:28.979605 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug  5 22:31:28.982076 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug  5 22:31:28.991944 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug  5 22:31:28.996742 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug  5 22:31:28.997543 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug  5 22:31:29.015857 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug  5 22:31:29.016585 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug  5 22:31:29.021249 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug  5 22:31:29.040221 dracut-cmdline[234]: dracut-dracut-053
Aug  5 22:31:29.043840 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695
Aug  5 22:31:29.055926 systemd-resolved[227]: Positive Trust Anchors:
Aug  5 22:31:29.055944 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug  5 22:31:29.055974 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug  5 22:31:29.058586 systemd-resolved[227]: Defaulting to hostname 'linux'.
Aug  5 22:31:29.059619 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug  5 22:31:29.065006 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug  5 22:31:29.155698 kernel: SCSI subsystem initialized
Aug  5 22:31:29.169688 kernel: Loading iSCSI transport class v2.0-870.
Aug  5 22:31:29.186713 kernel: iscsi: registered transport (tcp)
Aug  5 22:31:29.212703 kernel: iscsi: registered transport (qla4xxx)
Aug  5 22:31:29.212782 kernel: QLogic iSCSI HBA Driver
Aug  5 22:31:29.267042 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug  5 22:31:29.284835 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug  5 22:31:29.311707 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug  5 22:31:29.311773 kernel: device-mapper: uevent: version 1.0.3
Aug  5 22:31:29.311785 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug  5 22:31:29.358706 kernel: raid6: avx2x4   gen() 29884 MB/s
Aug  5 22:31:29.375691 kernel: raid6: avx2x2   gen() 30874 MB/s
Aug  5 22:31:29.392832 kernel: raid6: avx2x1   gen() 25647 MB/s
Aug  5 22:31:29.392875 kernel: raid6: using algorithm avx2x2 gen() 30874 MB/s
Aug  5 22:31:29.411022 kernel: raid6: .... xor() 17095 MB/s, rmw enabled
Aug  5 22:31:29.411097 kernel: raid6: using avx2x2 recovery algorithm
Aug  5 22:31:29.436683 kernel: xor: automatically using best checksumming function   avx       
Aug  5 22:31:29.632697 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug  5 22:31:29.646601 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug  5 22:31:29.662919 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug  5 22:31:29.675892 systemd-udevd[417]: Using default interface naming scheme 'v255'.
Aug  5 22:31:29.680563 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug  5 22:31:29.692839 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug  5 22:31:29.707920 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Aug  5 22:31:29.744538 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug  5 22:31:29.757835 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug  5 22:31:29.846290 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug  5 22:31:29.856898 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug  5 22:31:29.874772 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug  5 22:31:29.876733 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug  5 22:31:29.879675 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug  5 22:31:29.880924 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug  5 22:31:29.890831 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug  5 22:31:29.901680 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug  5 22:31:29.920143 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug  5 22:31:29.920523 kernel: cryptd: max_cpu_qlen set to 1000
Aug  5 22:31:29.920555 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug  5 22:31:29.920587 kernel: GPT:9289727 != 19775487
Aug  5 22:31:29.920613 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug  5 22:31:29.920648 kernel: GPT:9289727 != 19775487
Aug  5 22:31:29.920864 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug  5 22:31:29.920890 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug  5 22:31:29.917913 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug  5 22:31:29.918053 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug  5 22:31:29.921495 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug  5 22:31:29.935452 kernel: libata version 3.00 loaded.
Aug  5 22:31:29.935478 kernel: ata_piix 0000:00:01.1: version 2.13
Aug  5 22:31:29.951519 kernel: AVX2 version of gcm_enc/dec engaged.
Aug  5 22:31:29.951540 kernel: AES CTR mode by8 optimization enabled
Aug  5 22:31:29.951553 kernel: scsi host0: ata_piix
Aug  5 22:31:29.951806 kernel: scsi host1: ata_piix
Aug  5 22:31:29.951972 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Aug  5 22:31:29.955548 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Aug  5 22:31:29.928740 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug  5 22:31:29.928932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug  5 22:31:29.930546 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug  5 22:31:29.940170 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug  5 22:31:29.961086 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462)
Aug  5 22:31:29.945188 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug  5 22:31:29.967708 kernel: BTRFS: device fsid 24d7efdf-5582-42d2-aafd-43221656b08f devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (476)
Aug  5 22:31:29.967942 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug  5 22:31:29.984594 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug  5 22:31:30.014912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug  5 22:31:30.021762 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug  5 22:31:30.027033 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug  5 22:31:30.028344 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug  5 22:31:30.042929 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug  5 22:31:30.045257 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug  5 22:31:30.070942 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug  5 22:31:30.113750 kernel: ata2: found unknown device (class 0)
Aug  5 22:31:30.115691 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug  5 22:31:30.116734 kernel: scsi 1:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Aug  5 22:31:30.161758 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug  5 22:31:30.178602 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug  5 22:31:30.178628 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Aug  5 22:31:30.496412 disk-uuid[547]: Primary Header is updated.
Aug  5 22:31:30.496412 disk-uuid[547]: Secondary Entries is updated.
Aug  5 22:31:30.496412 disk-uuid[547]: Secondary Header is updated.
Aug  5 22:31:30.522991 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug  5 22:31:30.526683 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug  5 22:31:31.577525 disk-uuid[572]: The operation has completed successfully.
Aug  5 22:31:31.578911 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug  5 22:31:31.602430 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug  5 22:31:31.602566 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug  5 22:31:31.636784 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug  5 22:31:31.642250 sh[583]: Success
Aug  5 22:31:31.670692 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug  5 22:31:31.715097 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug  5 22:31:31.735389 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug  5 22:31:31.738352 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug  5 22:31:31.752563 kernel: BTRFS info (device dm-0): first mount of filesystem 24d7efdf-5582-42d2-aafd-43221656b08f
Aug  5 22:31:31.752632 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug  5 22:31:31.752650 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug  5 22:31:31.753825 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug  5 22:31:31.754747 kernel: BTRFS info (device dm-0): using free space tree
Aug  5 22:31:31.762020 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug  5 22:31:31.763768 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug  5 22:31:31.768950 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug  5 22:31:31.772052 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug  5 22:31:31.783228 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug  5 22:31:31.783270 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug  5 22:31:31.783282 kernel: BTRFS info (device vda6): using free space tree
Aug  5 22:31:31.786688 kernel: BTRFS info (device vda6): auto enabling async discard
Aug  5 22:31:31.797076 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug  5 22:31:31.799164 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug  5 22:31:31.814844 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug  5 22:31:31.823044 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug  5 22:31:32.025913 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug  5 22:31:32.035821 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug  5 22:31:32.044220 ignition[687]: Ignition 2.19.0
Aug  5 22:31:32.044232 ignition[687]: Stage: fetch-offline
Aug  5 22:31:32.044305 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Aug  5 22:31:32.044317 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug  5 22:31:32.044488 ignition[687]: parsed url from cmdline: ""
Aug  5 22:31:32.044494 ignition[687]: no config URL provided
Aug  5 22:31:32.044500 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Aug  5 22:31:32.044510 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Aug  5 22:31:32.045167 ignition[687]: op(1): [started]  loading QEMU firmware config module
Aug  5 22:31:32.045173 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug  5 22:31:32.061979 systemd-networkd[770]: lo: Link UP
Aug  5 22:31:32.061989 systemd-networkd[770]: lo: Gained carrier
Aug  5 22:31:32.063707 systemd-networkd[770]: Enumeration completed
Aug  5 22:31:32.064091 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug  5 22:31:32.064094 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug  5 22:31:32.064950 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug  5 22:31:32.064964 systemd-networkd[770]: eth0: Link UP
Aug  5 22:31:32.064968 systemd-networkd[770]: eth0: Gained carrier
Aug  5 22:31:32.064974 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug  5 22:31:32.071297 systemd[1]: Reached target network.target - Network.
Aug  5 22:31:32.076283 ignition[687]: op(1): [finished] loading QEMU firmware config module
Aug  5 22:31:32.080721 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug  5 22:31:32.118535 ignition[687]: parsing config with SHA512: fc4a84191524e0b418f1f3f6daa23bf761468141bdf5dd58535814d0e4a8d3f35b2ca60f594b1a171adb09ac2dd07942f03ef3c8e7afc0937c83c2259e4712b4
Aug  5 22:31:32.123671 unknown[687]: fetched base config from "system"
Aug  5 22:31:32.123684 unknown[687]: fetched user config from "qemu"
Aug  5 22:31:32.124318 ignition[687]: fetch-offline: fetch-offline passed
Aug  5 22:31:32.124382 ignition[687]: Ignition finished successfully
Aug  5 22:31:32.127909 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug  5 22:31:32.129703 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug  5 22:31:32.137956 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug  5 22:31:32.159241 ignition[777]: Ignition 2.19.0
Aug  5 22:31:32.159256 ignition[777]: Stage: kargs
Aug  5 22:31:32.159532 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Aug  5 22:31:32.159544 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug  5 22:31:32.164011 ignition[777]: kargs: kargs passed
Aug  5 22:31:32.164103 ignition[777]: Ignition finished successfully
Aug  5 22:31:32.169405 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug  5 22:31:32.183024 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug  5 22:31:32.212887 ignition[785]: Ignition 2.19.0
Aug  5 22:31:32.212902 ignition[785]: Stage: disks
Aug  5 22:31:32.214557 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Aug  5 22:31:32.214578 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug  5 22:31:32.217489 ignition[785]: disks: disks passed
Aug  5 22:31:32.217546 ignition[785]: Ignition finished successfully
Aug  5 22:31:32.221588 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug  5 22:31:32.223934 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug  5 22:31:32.224459 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug  5 22:31:32.224968 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug  5 22:31:32.225317 systemd[1]: Reached target sysinit.target - System Initialization.
Aug  5 22:31:32.225649 systemd[1]: Reached target basic.target - Basic System.
Aug  5 22:31:32.247975 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug  5 22:31:32.271991 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug  5 22:31:32.606571 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug  5 22:31:32.614781 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug  5 22:31:32.740684 kernel: EXT4-fs (vda9): mounted filesystem b6919f21-4a66-43c1-b816-e6fe5d1b75ef r/w with ordered data mode. Quota mode: none.
Aug  5 22:31:32.740951 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug  5 22:31:32.743330 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug  5 22:31:32.756808 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug  5 22:31:32.759713 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug  5 22:31:32.762824 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug  5 22:31:32.762904 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug  5 22:31:32.764958 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug  5 22:31:32.767677 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804)
Aug  5 22:31:32.770897 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug  5 22:31:32.770919 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug  5 22:31:32.770935 kernel: BTRFS info (device vda6): using free space tree
Aug  5 22:31:32.774788 kernel: BTRFS info (device vda6): auto enabling async discard
Aug  5 22:31:32.775646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug  5 22:31:32.777574 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug  5 22:31:32.781291 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug  5 22:31:32.848308 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
Aug  5 22:31:32.876508 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
Aug  5 22:31:32.881792 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
Aug  5 22:31:32.886584 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
Aug  5 22:31:33.016468 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug  5 22:31:33.038845 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug  5 22:31:33.042719 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug  5 22:31:33.047087 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug  5 22:31:33.048339 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug  5 22:31:33.073445 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug  5 22:31:33.121224 ignition[919]: INFO     : Ignition 2.19.0
Aug  5 22:31:33.121224 ignition[919]: INFO     : Stage: mount
Aug  5 22:31:33.123485 ignition[919]: INFO     : no configs at "/usr/lib/ignition/base.d"
Aug  5 22:31:33.123485 ignition[919]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug  5 22:31:33.123485 ignition[919]: INFO     : mount: mount passed
Aug  5 22:31:33.123485 ignition[919]: INFO     : Ignition finished successfully
Aug  5 22:31:33.124672 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug  5 22:31:33.130757 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug  5 22:31:33.139480 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug  5 22:31:33.154609 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931)
Aug  5 22:31:33.154687 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug  5 22:31:33.154705 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug  5 22:31:33.156221 kernel: BTRFS info (device vda6): using free space tree
Aug  5 22:31:33.158682 kernel: BTRFS info (device vda6): auto enabling async discard
Aug  5 22:31:33.161617 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug  5 22:31:33.187034 ignition[948]: INFO     : Ignition 2.19.0
Aug  5 22:31:33.187034 ignition[948]: INFO     : Stage: files
Aug  5 22:31:33.188949 ignition[948]: INFO     : no configs at "/usr/lib/ignition/base.d"
Aug  5 22:31:33.188949 ignition[948]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug  5 22:31:33.188949 ignition[948]: DEBUG    : files: compiled without relabeling support, skipping
Aug  5 22:31:33.188949 ignition[948]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Aug  5 22:31:33.188949 ignition[948]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug  5 22:31:33.195507 ignition[948]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug  5 22:31:33.196867 ignition[948]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Aug  5 22:31:33.198566 unknown[948]: wrote ssh authorized keys file for user: core
Aug  5 22:31:33.199798 ignition[948]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug  5 22:31:33.202161 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug  5 22:31:33.204015 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug  5 22:31:33.270231 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug  5 22:31:33.352786 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug  5 22:31:33.355259 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Aug  5 22:31:33.357626 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug  5 22:31:33.359819 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Aug  5 22:31:33.362279 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug  5 22:31:33.364481 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Aug  5 22:31:33.367035 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug  5 22:31:33.369339 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug  5 22:31:33.371728 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug  5 22:31:33.374410 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Aug  5 22:31:33.376911 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug  5 22:31:33.379128 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Aug  5 22:31:33.382713 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Aug  5 22:31:33.382713 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Aug  5 22:31:33.389882 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Aug  5 22:31:33.722903 systemd-networkd[770]: eth0: Gained IPv6LL
Aug  5 22:31:33.775065 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug  5 22:31:34.380867 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Aug  5 22:31:34.380867 ignition[948]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Aug  5 22:31:34.385496 ignition[948]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug  5 22:31:34.385496 ignition[948]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug  5 22:31:34.385496 ignition[948]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Aug  5 22:31:34.385496 ignition[948]: INFO     : files: op(d): [started]  processing unit "coreos-metadata.service"
Aug  5 22:31:34.385496 ignition[948]: INFO     : files: op(d): op(e): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug  5 22:31:34.385496 ignition[948]: INFO     : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug  5 22:31:34.385496 ignition[948]: INFO     : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug  5 22:31:34.385496 ignition[948]: INFO     : files: op(f): [started]  setting preset to disabled for "coreos-metadata.service"
Aug  5 22:31:34.439572 ignition[948]: INFO     : files: op(f): op(10): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Aug  5 22:31:34.445477 ignition[948]: INFO     : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug  5 22:31:34.447474 ignition[948]: INFO     : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Aug  5 22:31:34.447474 ignition[948]: INFO     : files: op(11): [started]  setting preset to enabled for "prepare-helm.service"
Aug  5 22:31:34.447474 ignition[948]: INFO     : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug  5 22:31:34.447474 ignition[948]: INFO     : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Aug  5 22:31:34.447474 ignition[948]: INFO     : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug  5 22:31:34.447474 ignition[948]: INFO     : files: files passed
Aug  5 22:31:34.447474 ignition[948]: INFO     : Ignition finished successfully
Aug  5 22:31:34.449606 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug  5 22:31:34.461855 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug  5 22:31:34.469431 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug  5 22:31:34.472338 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug  5 22:31:34.472493 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug  5 22:31:34.481512 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Aug  5 22:31:34.484863 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug  5 22:31:34.486682 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug  5 22:31:34.488264 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug  5 22:31:34.488310 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug  5 22:31:34.491326 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug  5 22:31:34.504888 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug  5 22:31:34.537183 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug  5 22:31:34.537385 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug  5 22:31:34.538511 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug  5 22:31:34.542604 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug  5 22:31:34.543293 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug  5 22:31:34.544496 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug  5 22:31:34.565287 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug  5 22:31:34.574954 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug  5 22:31:34.588197 systemd[1]: Stopped target network.target - Network.
Aug  5 22:31:34.589541 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug  5 22:31:34.591933 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug  5 22:31:34.594568 systemd[1]: Stopped target timers.target - Timer Units.
Aug  5 22:31:34.596884 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug  5 22:31:34.597150 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug  5 22:31:34.599547 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug  5 22:31:34.601566 systemd[1]: Stopped target basic.target - Basic System.
Aug  5 22:31:34.603800 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug  5 22:31:34.606009 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug  5 22:31:34.608187 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug  5 22:31:34.610497 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug  5 22:31:34.612758 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug  5 22:31:34.615220 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug  5 22:31:34.617299 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug  5 22:31:34.619730 systemd[1]: Stopped target swap.target - Swaps.
Aug  5 22:31:34.621683 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug  5 22:31:34.621923 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug  5 22:31:34.624450 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug  5 22:31:34.626228 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug  5 22:31:34.628414 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug  5 22:31:34.628547 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug  5 22:31:34.630772 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug  5 22:31:34.630907 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug  5 22:31:34.633208 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug  5 22:31:34.633342 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug  5 22:31:34.635514 systemd[1]: Stopped target paths.target - Path Units.
Aug  5 22:31:34.637359 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug  5 22:31:34.637557 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug  5 22:31:34.659705 systemd[1]: Stopped target slices.target - Slice Units.
Aug  5 22:31:34.661696 systemd[1]: Stopped target sockets.target - Socket Units.
Aug  5 22:31:34.663872 systemd[1]: iscsid.socket: Deactivated successfully.
Aug  5 22:31:34.664016 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug  5 22:31:34.666159 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug  5 22:31:34.666287 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug  5 22:31:34.668559 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug  5 22:31:34.668739 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug  5 22:31:34.670897 systemd[1]: ignition-files.service: Deactivated successfully.
Aug  5 22:31:34.671057 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug  5 22:31:34.681893 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug  5 22:31:34.684687 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug  5 22:31:34.686203 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug  5 22:31:34.688591 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug  5 22:31:34.690630 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug  5 22:31:34.690934 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug  5 22:31:34.691756 systemd-networkd[770]: eth0: DHCPv6 lease lost
Aug  5 22:31:34.694429 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug  5 22:31:34.708347 ignition[1003]: INFO     : Ignition 2.19.0
Aug  5 22:31:34.708347 ignition[1003]: INFO     : Stage: umount
Aug  5 22:31:34.708347 ignition[1003]: INFO     : no configs at "/usr/lib/ignition/base.d"
Aug  5 22:31:34.708347 ignition[1003]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug  5 22:31:34.708347 ignition[1003]: INFO     : umount: umount passed
Aug  5 22:31:34.708347 ignition[1003]: INFO     : Ignition finished successfully
Aug  5 22:31:34.694648 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug  5 22:31:34.700817 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug  5 22:31:34.701060 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug  5 22:31:34.705679 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug  5 22:31:34.705888 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug  5 22:31:34.708590 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug  5 22:31:34.708792 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug  5 22:31:34.714777 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug  5 22:31:34.714941 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug  5 22:31:34.717788 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug  5 22:31:34.717874 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug  5 22:31:34.720607 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug  5 22:31:34.720738 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug  5 22:31:34.722758 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug  5 22:31:34.722823 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug  5 22:31:34.724947 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug  5 22:31:34.725006 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug  5 22:31:34.726907 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug  5 22:31:34.726967 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug  5 22:31:34.737822 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug  5 22:31:34.740132 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug  5 22:31:34.740269 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug  5 22:31:34.743159 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug  5 22:31:34.743229 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug  5 22:31:34.745758 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug  5 22:31:34.745825 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug  5 22:31:34.747990 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug  5 22:31:34.748065 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug  5 22:31:34.750424 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug  5 22:31:34.754244 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug  5 22:31:34.766205 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug  5 22:31:34.766347 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug  5 22:31:34.771816 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug  5 22:31:34.772093 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug  5 22:31:34.774593 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug  5 22:31:34.774696 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug  5 22:31:34.776730 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug  5 22:31:34.776787 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug  5 22:31:34.778926 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug  5 22:31:34.778997 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug  5 22:31:34.781683 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug  5 22:31:34.781746 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug  5 22:31:34.783519 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug  5 22:31:34.783570 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug  5 22:31:34.796058 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug  5 22:31:34.798431 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug  5 22:31:34.798527 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug  5 22:31:34.800864 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug  5 22:31:34.800934 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug  5 22:31:34.803390 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug  5 22:31:34.803459 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug  5 22:31:34.805786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug  5 22:31:34.805866 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug  5 22:31:34.808998 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug  5 22:31:34.809215 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug  5 22:31:34.985100 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug  5 22:31:34.985247 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug  5 22:31:34.987889 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug  5 22:31:34.989223 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug  5 22:31:34.989300 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug  5 22:31:34.995769 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug  5 22:31:35.004238 systemd[1]: Switching root.
Aug  5 22:31:35.036916 systemd-journald[193]: Journal stopped
Aug  5 22:31:36.833596 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Aug  5 22:31:36.833685 kernel: SELinux:  policy capability network_peer_controls=1
Aug  5 22:31:36.833702 kernel: SELinux:  policy capability open_perms=1
Aug  5 22:31:36.833717 kernel: SELinux:  policy capability extended_socket_class=1
Aug  5 22:31:36.833732 kernel: SELinux:  policy capability always_check_network=0
Aug  5 22:31:36.833751 kernel: SELinux:  policy capability cgroup_seclabel=1
Aug  5 22:31:36.833766 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Aug  5 22:31:36.833780 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Aug  5 22:31:36.833795 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Aug  5 22:31:36.833810 kernel: audit: type=1403 audit(1722897095.780:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug  5 22:31:36.833826 systemd[1]: Successfully loaded SELinux policy in 50.289ms.
Aug  5 22:31:36.833857 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.101ms.
Aug  5 22:31:36.833874 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug  5 22:31:36.833893 systemd[1]: Detected virtualization kvm.
Aug  5 22:31:36.833909 systemd[1]: Detected architecture x86-64.
Aug  5 22:31:36.833924 systemd[1]: Detected first boot.
Aug  5 22:31:36.833940 systemd[1]: Initializing machine ID from VM UUID.
Aug  5 22:31:36.833956 zram_generator::config[1047]: No configuration found.
Aug  5 22:31:36.833983 systemd[1]: Populated /etc with preset unit settings.
Aug  5 22:31:36.833999 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug  5 22:31:36.834015 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug  5 22:31:36.834031 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug  5 22:31:36.834051 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug  5 22:31:36.834067 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug  5 22:31:36.834083 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug  5 22:31:36.834099 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug  5 22:31:36.834115 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug  5 22:31:36.834132 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug  5 22:31:36.834148 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug  5 22:31:36.834163 systemd[1]: Created slice user.slice - User and Session Slice.
Aug  5 22:31:36.834182 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug  5 22:31:36.834198 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug  5 22:31:36.834216 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug  5 22:31:36.834232 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug  5 22:31:36.834248 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug  5 22:31:36.834264 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug  5 22:31:36.834284 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug  5 22:31:36.834300 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug  5 22:31:36.834322 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug  5 22:31:36.834340 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug  5 22:31:36.834356 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug  5 22:31:36.834373 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug  5 22:31:36.834389 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug  5 22:31:36.834404 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug  5 22:31:36.834420 systemd[1]: Reached target slices.target - Slice Units.
Aug  5 22:31:36.834436 systemd[1]: Reached target swap.target - Swaps.
Aug  5 22:31:36.834453 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug  5 22:31:36.834472 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug  5 22:31:36.834488 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug  5 22:31:36.834505 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug  5 22:31:36.834521 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug  5 22:31:36.834536 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug  5 22:31:36.834552 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug  5 22:31:36.834570 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug  5 22:31:36.834586 systemd[1]: Mounting media.mount - External Media Directory...
Aug  5 22:31:36.834602 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug  5 22:31:36.834621 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug  5 22:31:36.834637 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug  5 22:31:36.834679 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug  5 22:31:36.834697 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug  5 22:31:36.834713 systemd[1]: Reached target machines.target - Containers.
Aug  5 22:31:36.834729 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug  5 22:31:36.834746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug  5 22:31:36.834761 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug  5 22:31:36.834781 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug  5 22:31:36.834797 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug  5 22:31:36.834813 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug  5 22:31:36.834829 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug  5 22:31:36.834845 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug  5 22:31:36.834861 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug  5 22:31:36.834877 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug  5 22:31:36.834893 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug  5 22:31:36.834909 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug  5 22:31:36.834927 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug  5 22:31:36.834967 kernel: fuse: init (API version 7.39)
Aug  5 22:31:36.834993 systemd[1]: Stopped systemd-fsck-usr.service.
Aug  5 22:31:36.835008 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug  5 22:31:36.835022 kernel: loop: module loaded
Aug  5 22:31:36.835037 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug  5 22:31:36.835052 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug  5 22:31:36.835069 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug  5 22:31:36.835085 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug  5 22:31:36.835103 systemd[1]: verity-setup.service: Deactivated successfully.
Aug  5 22:31:36.835119 systemd[1]: Stopped verity-setup.service.
Aug  5 22:31:36.835136 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug  5 22:31:36.835152 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug  5 22:31:36.835167 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug  5 22:31:36.835183 systemd[1]: Mounted media.mount - External Media Directory.
Aug  5 22:31:36.835199 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug  5 22:31:36.835218 kernel: ACPI: bus type drm_connector registered
Aug  5 22:31:36.835255 systemd-journald[1130]: Collecting audit messages is disabled.
Aug  5 22:31:36.835289 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug  5 22:31:36.835304 systemd-journald[1130]: Journal started
Aug  5 22:31:36.835335 systemd-journald[1130]: Runtime Journal (/run/log/journal/4dff3f2d160f4223a7f34c827e957967) is 6.0M, max 48.4M, 42.3M free.
Aug  5 22:31:36.515569 systemd[1]: Queued start job for default target multi-user.target.
Aug  5 22:31:36.541106 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug  5 22:31:36.541695 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug  5 22:31:36.837228 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug  5 22:31:36.839162 systemd[1]: Started systemd-journald.service - Journal Service.
Aug  5 22:31:36.840198 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug  5 22:31:36.841752 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug  5 22:31:36.843415 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug  5 22:31:36.843596 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug  5 22:31:36.845100 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug  5 22:31:36.845274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug  5 22:31:36.847066 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug  5 22:31:36.847242 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug  5 22:31:36.848697 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug  5 22:31:36.848875 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug  5 22:31:36.850452 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug  5 22:31:36.850629 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug  5 22:31:36.852336 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug  5 22:31:36.852598 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug  5 22:31:36.854424 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug  5 22:31:36.855914 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug  5 22:31:36.857563 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug  5 22:31:36.876597 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug  5 22:31:36.889874 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug  5 22:31:36.892899 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug  5 22:31:36.921895 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug  5 22:31:36.921960 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug  5 22:31:36.924264 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug  5 22:31:36.927070 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug  5 22:31:36.932848 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug  5 22:31:36.934141 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug  5 22:31:36.950928 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug  5 22:31:36.954908 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug  5 22:31:36.957227 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug  5 22:31:36.963919 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug  5 22:31:36.965356 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug  5 22:31:36.969993 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug  5 22:31:36.977545 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug  5 22:31:36.978899 systemd-journald[1130]: Time spent on flushing to /var/log/journal/4dff3f2d160f4223a7f34c827e957967 is 19.719ms for 946 entries.
Aug  5 22:31:36.978899 systemd-journald[1130]: System Journal (/var/log/journal/4dff3f2d160f4223a7f34c827e957967) is 8.0M, max 195.6M, 187.6M free.
Aug  5 22:31:37.049449 systemd-journald[1130]: Received client request to flush runtime journal.
Aug  5 22:31:37.049484 kernel: loop0: detected capacity change from 0 to 139760
Aug  5 22:31:37.049498 kernel: block loop0: the capability attribute has been deprecated.
Aug  5 22:31:37.008678 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug  5 22:31:37.012082 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug  5 22:31:37.013934 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug  5 22:31:37.015368 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug  5 22:31:37.018168 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug  5 22:31:37.022062 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug  5 22:31:37.032551 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug  5 22:31:37.044913 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug  5 22:31:37.048014 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug  5 22:31:37.051159 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug  5 22:31:37.063176 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug  5 22:31:37.080692 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug  5 22:31:37.080603 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Aug  5 22:31:37.080624 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Aug  5 22:31:37.085186 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug  5 22:31:37.086366 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug  5 22:31:37.093205 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug  5 22:31:37.106066 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug  5 22:31:37.108215 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug  5 22:31:37.109695 kernel: loop1: detected capacity change from 0 to 210664
Aug  5 22:31:37.141217 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug  5 22:31:37.156423 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug  5 22:31:37.167686 kernel: loop2: detected capacity change from 0 to 80568
Aug  5 22:31:37.184381 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Aug  5 22:31:37.184402 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Aug  5 22:31:37.193160 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug  5 22:31:37.283692 kernel: loop3: detected capacity change from 0 to 139760
Aug  5 22:31:37.292679 kernel: loop4: detected capacity change from 0 to 210664
Aug  5 22:31:37.298680 kernel: loop5: detected capacity change from 0 to 80568
Aug  5 22:31:37.303086 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug  5 22:31:37.303691 (sd-merge)[1187]: Merged extensions into '/usr'.
Aug  5 22:31:37.308126 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)...
Aug  5 22:31:37.308143 systemd[1]: Reloading...
Aug  5 22:31:37.373227 zram_generator::config[1214]: No configuration found.
Aug  5 22:31:37.495202 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug  5 22:31:37.546560 systemd[1]: Reloading finished in 237 ms.
Aug  5 22:31:37.585760 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug  5 22:31:37.599414 systemd[1]: Starting ensure-sysext.service...
Aug  5 22:31:37.602251 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug  5 22:31:37.650819 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)...
Aug  5 22:31:37.650857 systemd[1]: Reloading...
Aug  5 22:31:37.682316 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug  5 22:31:37.682711 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug  5 22:31:37.683694 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug  5 22:31:37.684006 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Aug  5 22:31:37.684077 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Aug  5 22:31:37.687293 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Aug  5 22:31:37.687309 systemd-tmpfiles[1248]: Skipping /boot
Aug  5 22:31:37.700645 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Aug  5 22:31:37.700677 systemd-tmpfiles[1248]: Skipping /boot
Aug  5 22:31:37.758724 zram_generator::config[1273]: No configuration found.
Aug  5 22:31:37.805854 ldconfig[1155]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug  5 22:31:37.904246 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug  5 22:31:37.966013 systemd[1]: Reloading finished in 314 ms.
Aug  5 22:31:38.007560 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug  5 22:31:38.037019 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug  5 22:31:38.040340 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug  5 22:31:38.043147 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug  5 22:31:38.049956 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug  5 22:31:38.070509 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug  5 22:31:38.074124 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug  5 22:31:38.100800 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug  5 22:31:38.109150 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug  5 22:31:38.117821 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug  5 22:31:38.118101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug  5 22:31:38.120905 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug  5 22:31:38.126059 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug  5 22:31:38.131736 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug  5 22:31:38.133494 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug  5 22:31:38.133672 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug  5 22:31:38.134854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug  5 22:31:38.135123 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug  5 22:31:38.140506 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug  5 22:31:38.142255 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug  5 22:31:38.151490 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug  5 22:31:38.154339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug  5 22:31:38.154513 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug  5 22:31:38.155539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug  5 22:31:38.155784 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug  5 22:31:38.158619 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug  5 22:31:38.161152 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug  5 22:31:38.161377 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug  5 22:31:38.164036 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug  5 22:31:38.164242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug  5 22:31:38.166619 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug  5 22:31:38.174296 augenrules[1342]: No rules
Aug  5 22:31:38.176889 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug  5 22:31:38.179593 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug  5 22:31:38.180020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug  5 22:31:38.188989 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug  5 22:31:38.191951 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug  5 22:31:38.194968 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug  5 22:31:38.200801 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug  5 22:31:38.202988 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug  5 22:31:38.205373 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug  5 22:31:38.211784 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug  5 22:31:38.213284 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug  5 22:31:38.214734 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug  5 22:31:38.217356 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug  5 22:31:38.220012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug  5 22:31:38.220240 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug  5 22:31:38.222518 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug  5 22:31:38.222759 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug  5 22:31:38.225030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug  5 22:31:38.225251 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug  5 22:31:38.227645 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug  5 22:31:38.227877 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug  5 22:31:38.229808 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug  5 22:31:38.235096 systemd[1]: Finished ensure-sysext.service.
Aug  5 22:31:38.242091 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug  5 22:31:38.242190 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug  5 22:31:38.245060 systemd-udevd[1359]: Using default interface naming scheme 'v255'.
Aug  5 22:31:38.251882 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug  5 22:31:38.253570 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug  5 22:31:38.263796 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug  5 22:31:38.265411 systemd-resolved[1315]: Positive Trust Anchors:
Aug  5 22:31:38.265427 systemd-resolved[1315]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug  5 22:31:38.265469 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug  5 22:31:38.275001 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug  5 22:31:38.278652 systemd-resolved[1315]: Defaulting to hostname 'linux'.
Aug  5 22:31:38.282737 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug  5 22:31:38.285267 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug  5 22:31:38.342716 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1382)
Aug  5 22:31:38.352199 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug  5 22:31:38.416861 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1373)
Aug  5 22:31:38.431710 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug  5 22:31:38.440690 kernel: ACPI: button: Power Button [PWRF]
Aug  5 22:31:38.442687 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug  5 22:31:38.444563 systemd[1]: Reached target time-set.target - System Time Set.
Aug  5 22:31:38.448693 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Aug  5 22:31:38.474686 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug  5 22:31:38.477002 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug  5 22:31:38.486468 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug  5 22:31:38.502639 systemd-networkd[1375]: lo: Link UP
Aug  5 22:31:38.502665 systemd-networkd[1375]: lo: Gained carrier
Aug  5 22:31:38.509127 systemd-networkd[1375]: Enumeration completed
Aug  5 22:31:38.510050 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug  5 22:31:38.510502 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug  5 22:31:38.510506 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug  5 22:31:38.510535 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug  5 22:31:38.511711 systemd[1]: Reached target network.target - Network.
Aug  5 22:31:38.514313 systemd-networkd[1375]: eth0: Link UP
Aug  5 22:31:38.514323 systemd-networkd[1375]: eth0: Gained carrier
Aug  5 22:31:38.514339 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug  5 22:31:38.518026 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug  5 22:31:38.526306 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug  5 22:31:38.530818 kernel: mousedev: PS/2 mouse device common for all mice
Aug  5 22:31:38.532195 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug  5 22:31:38.534142 systemd-timesyncd[1368]: Network configuration changed, trying to establish connection.
Aug  5 22:31:38.948176 systemd-resolved[1315]: Clock change detected. Flushing caches.
Aug  5 22:31:38.948266 systemd-timesyncd[1368]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug  5 22:31:38.948310 systemd-timesyncd[1368]: Initial clock synchronization to Mon 2024-08-05 22:31:38.948146 UTC.
Aug  5 22:31:39.056655 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug  5 22:31:39.057142 kernel: kvm_amd: TSC scaling supported
Aug  5 22:31:39.057195 kernel: kvm_amd: Nested Virtualization enabled
Aug  5 22:31:39.057212 kernel: kvm_amd: Nested Paging enabled
Aug  5 22:31:39.057243 kernel: kvm_amd: LBR virtualization supported
Aug  5 22:31:39.057258 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Aug  5 22:31:39.057273 kernel: kvm_amd: Virtual GIF supported
Aug  5 22:31:39.081139 kernel: EDAC MC: Ver: 3.0.0
Aug  5 22:31:39.128707 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug  5 22:31:39.142530 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug  5 22:31:39.163970 lvm[1414]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug  5 22:31:39.196446 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug  5 22:31:39.198024 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug  5 22:31:39.199188 systemd[1]: Reached target sysinit.target - System Initialization.
Aug  5 22:31:39.200476 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug  5 22:31:39.201716 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug  5 22:31:39.203140 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug  5 22:31:39.204338 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug  5 22:31:39.205751 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug  5 22:31:39.206997 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug  5 22:31:39.207028 systemd[1]: Reached target paths.target - Path Units.
Aug  5 22:31:39.207943 systemd[1]: Reached target timers.target - Timer Units.
Aug  5 22:31:39.209645 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug  5 22:31:39.212412 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug  5 22:31:39.219813 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug  5 22:31:39.222267 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug  5 22:31:39.223848 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug  5 22:31:39.225029 systemd[1]: Reached target sockets.target - Socket Units.
Aug  5 22:31:39.225992 systemd[1]: Reached target basic.target - Basic System.
Aug  5 22:31:39.226958 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug  5 22:31:39.226985 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug  5 22:31:39.228006 systemd[1]: Starting containerd.service - containerd container runtime...
Aug  5 22:31:39.230462 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug  5 22:31:39.233261 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug  5 22:31:39.234299 lvm[1418]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug  5 22:31:39.236269 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug  5 22:31:39.237370 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug  5 22:31:39.241340 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug  5 22:31:39.245931 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug  5 22:31:39.250291 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug  5 22:31:39.253399 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug  5 22:31:39.255045 jq[1421]: false
Aug  5 22:31:39.258416 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug  5 22:31:39.260001 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug  5 22:31:39.260559 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug  5 22:31:39.263389 systemd[1]: Starting update-engine.service - Update Engine...
Aug  5 22:31:39.276566 extend-filesystems[1422]: Found loop3
Aug  5 22:31:39.276566 extend-filesystems[1422]: Found loop4
Aug  5 22:31:39.276566 extend-filesystems[1422]: Found loop5
Aug  5 22:31:39.276566 extend-filesystems[1422]: Found sr0
Aug  5 22:31:39.276566 extend-filesystems[1422]: Found vda
Aug  5 22:31:39.276566 extend-filesystems[1422]: Found vda1
Aug  5 22:31:39.276566 extend-filesystems[1422]: Found vda2
Aug  5 22:31:39.276566 extend-filesystems[1422]: Found vda3
Aug  5 22:31:39.276566 extend-filesystems[1422]: Found usr
Aug  5 22:31:39.276566 extend-filesystems[1422]: Found vda4
Aug  5 22:31:39.276566 extend-filesystems[1422]: Found vda6
Aug  5 22:31:39.276566 extend-filesystems[1422]: Found vda7
Aug  5 22:31:39.276566 extend-filesystems[1422]: Found vda9
Aug  5 22:31:39.276566 extend-filesystems[1422]: Checking size of /dev/vda9
Aug  5 22:31:39.303775 dbus-daemon[1420]: [system] SELinux support is enabled
Aug  5 22:31:39.296314 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug  5 22:31:39.313580 update_engine[1431]: I0805 22:31:39.307926  1431 main.cc:92] Flatcar Update Engine starting
Aug  5 22:31:39.313580 update_engine[1431]: I0805 22:31:39.310176  1431 update_check_scheduler.cc:74] Next update check in 6m13s
Aug  5 22:31:39.303440 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug  5 22:31:39.307452 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug  5 22:31:39.316428 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug  5 22:31:39.316671 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug  5 22:31:39.317031 systemd[1]: motdgen.service: Deactivated successfully.
Aug  5 22:31:39.317261 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug  5 22:31:39.318937 jq[1436]: true
Aug  5 22:31:39.319567 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug  5 22:31:39.319775 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug  5 22:31:39.332372 jq[1443]: true
Aug  5 22:31:39.334818 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug  5 22:31:39.346068 systemd-logind[1428]: Watching system buttons on /dev/input/event1 (Power Button)
Aug  5 22:31:39.346096 systemd-logind[1428]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug  5 22:31:39.349657 extend-filesystems[1422]: Resized partition /dev/vda9
Aug  5 22:31:39.348657 systemd-logind[1428]: New seat seat0.
Aug  5 22:31:39.351594 systemd[1]: Started systemd-logind.service - User Login Management.
Aug  5 22:31:39.356811 extend-filesystems[1458]: resize2fs 1.47.0 (5-Feb-2023)
Aug  5 22:31:39.361815 dbus-daemon[1420]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug  5 22:31:39.358982 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug  5 22:31:39.365431 tar[1442]: linux-amd64/helm
Aug  5 22:31:39.359009 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug  5 22:31:39.360475 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug  5 22:31:39.360492 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug  5 22:31:39.362084 systemd[1]: Started update-engine.service - Update Engine.
Aug  5 22:31:39.415470 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1377)
Aug  5 22:31:39.399091 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug  5 22:31:39.420640 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug  5 22:31:39.422850 sshd_keygen[1440]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug  5 22:31:39.458071 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug  5 22:31:39.467384 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug  5 22:31:39.477226 systemd[1]: issuegen.service: Deactivated successfully.
Aug  5 22:31:39.477533 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug  5 22:31:39.494393 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug  5 22:31:39.589032 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug  5 22:31:39.615734 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug  5 22:31:39.619515 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug  5 22:31:39.621434 systemd[1]: Reached target getty.target - Login Prompts.
Aug  5 22:31:39.661823 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug  5 22:31:39.663804 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug  5 22:31:39.674343 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:37412.service - OpenSSH per-connection server daemon (10.0.0.1:37412).
Aug  5 22:31:39.681145 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug  5 22:31:39.711848 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug  5 22:31:39.711848 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1
Aug  5 22:31:39.711848 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug  5 22:31:39.742945 extend-filesystems[1422]: Resized filesystem in /dev/vda9
Aug  5 22:31:39.714974 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug  5 22:31:39.716591 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug  5 22:31:39.755794 bash[1474]: Updated "/home/core/.ssh/authorized_keys"
Aug  5 22:31:39.757361 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug  5 22:31:39.760225 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug  5 22:31:39.782955 sshd[1497]: Accepted publickey for core from 10.0.0.1 port 37412 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:31:39.784988 sshd[1497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:31:39.794797 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug  5 22:31:39.803837 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug  5 22:31:39.808965 systemd-logind[1428]: New session 1 of user core.
Aug  5 22:31:39.823834 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug  5 22:31:39.833589 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug  5 22:31:39.837793 (systemd)[1509]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:31:39.895436 containerd[1444]: time="2024-08-05T22:31:39.895227828Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Aug  5 22:31:39.927751 containerd[1444]: time="2024-08-05T22:31:39.927678795Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug  5 22:31:39.927751 containerd[1444]: time="2024-08-05T22:31:39.927737145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug  5 22:31:39.929898 containerd[1444]: time="2024-08-05T22:31:39.929860166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug  5 22:31:39.929967 containerd[1444]: time="2024-08-05T22:31:39.929898748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug  5 22:31:39.930234 containerd[1444]: time="2024-08-05T22:31:39.930205925Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug  5 22:31:39.930234 containerd[1444]: time="2024-08-05T22:31:39.930228016Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug  5 22:31:39.930379 containerd[1444]: time="2024-08-05T22:31:39.930345015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug  5 22:31:39.930458 containerd[1444]: time="2024-08-05T22:31:39.930433271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug  5 22:31:39.930506 containerd[1444]: time="2024-08-05T22:31:39.930454240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug  5 22:31:39.930587 containerd[1444]: time="2024-08-05T22:31:39.930556171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug  5 22:31:39.930854 containerd[1444]: time="2024-08-05T22:31:39.930828061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug  5 22:31:39.930899 containerd[1444]: time="2024-08-05T22:31:39.930853699Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug  5 22:31:39.930899 containerd[1444]: time="2024-08-05T22:31:39.930867385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug  5 22:31:39.931063 containerd[1444]: time="2024-08-05T22:31:39.931035761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug  5 22:31:39.931063 containerd[1444]: time="2024-08-05T22:31:39.931057021Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug  5 22:31:39.931167 containerd[1444]: time="2024-08-05T22:31:39.931143383Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug  5 22:31:39.931167 containerd[1444]: time="2024-08-05T22:31:39.931161957Z" level=info msg="metadata content store policy set" policy=shared
Aug  5 22:31:39.938249 containerd[1444]: time="2024-08-05T22:31:39.938211107Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug  5 22:31:39.938303 containerd[1444]: time="2024-08-05T22:31:39.938252094Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug  5 22:31:39.938303 containerd[1444]: time="2024-08-05T22:31:39.938269497Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug  5 22:31:39.938350 containerd[1444]: time="2024-08-05T22:31:39.938306817Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug  5 22:31:39.938350 containerd[1444]: time="2024-08-05T22:31:39.938325782Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug  5 22:31:39.938350 containerd[1444]: time="2024-08-05T22:31:39.938338526Z" level=info msg="NRI interface is disabled by configuration."
Aug  5 22:31:39.938446 containerd[1444]: time="2024-08-05T22:31:39.938354526Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug  5 22:31:39.938530 containerd[1444]: time="2024-08-05T22:31:39.938507383Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug  5 22:31:39.938557 containerd[1444]: time="2024-08-05T22:31:39.938532139Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug  5 22:31:39.938557 containerd[1444]: time="2024-08-05T22:31:39.938550073Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug  5 22:31:39.938603 containerd[1444]: time="2024-08-05T22:31:39.938569149Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug  5 22:31:39.938603 containerd[1444]: time="2024-08-05T22:31:39.938586752Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug  5 22:31:39.938658 containerd[1444]: time="2024-08-05T22:31:39.938608542Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug  5 22:31:39.938658 containerd[1444]: time="2024-08-05T22:31:39.938626165Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug  5 22:31:39.938658 containerd[1444]: time="2024-08-05T22:31:39.938642366Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug  5 22:31:39.938726 containerd[1444]: time="2024-08-05T22:31:39.938660159Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug  5 22:31:39.938726 containerd[1444]: time="2024-08-05T22:31:39.938676861Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug  5 22:31:39.938726 containerd[1444]: time="2024-08-05T22:31:39.938692289Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug  5 22:31:39.938726 containerd[1444]: time="2024-08-05T22:31:39.938707217Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug  5 22:31:39.938865 containerd[1444]: time="2024-08-05T22:31:39.938841920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug  5 22:31:39.939209 containerd[1444]: time="2024-08-05T22:31:39.939176648Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug  5 22:31:39.939261 containerd[1444]: time="2024-08-05T22:31:39.939210411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939261 containerd[1444]: time="2024-08-05T22:31:39.939227774Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug  5 22:31:39.939261 containerd[1444]: time="2024-08-05T22:31:39.939253983Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug  5 22:31:39.939341 containerd[1444]: time="2024-08-05T22:31:39.939311220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939341 containerd[1444]: time="2024-08-05T22:31:39.939334694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939401 containerd[1444]: time="2024-08-05T22:31:39.939353409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939401 containerd[1444]: time="2024-08-05T22:31:39.939380871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939401 containerd[1444]: time="2024-08-05T22:31:39.939397282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939478 containerd[1444]: time="2024-08-05T22:31:39.939413111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939478 containerd[1444]: time="2024-08-05T22:31:39.939428330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939478 containerd[1444]: time="2024-08-05T22:31:39.939444520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939478 containerd[1444]: time="2024-08-05T22:31:39.939461763Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug  5 22:31:39.939663 containerd[1444]: time="2024-08-05T22:31:39.939639616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939699 containerd[1444]: time="2024-08-05T22:31:39.939683508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939725 containerd[1444]: time="2024-08-05T22:31:39.939700811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939725 containerd[1444]: time="2024-08-05T22:31:39.939717252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939772 containerd[1444]: time="2024-08-05T22:31:39.939732520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939772 containerd[1444]: time="2024-08-05T22:31:39.939749472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939772 containerd[1444]: time="2024-08-05T22:31:39.939764250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.939853 containerd[1444]: time="2024-08-05T22:31:39.939777775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug  5 22:31:39.940221 containerd[1444]: time="2024-08-05T22:31:39.940132360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug  5 22:31:39.940221 containerd[1444]: time="2024-08-05T22:31:39.940207762Z" level=info msg="Connect containerd service"
Aug  5 22:31:39.940433 containerd[1444]: time="2024-08-05T22:31:39.940242817Z" level=info msg="using legacy CRI server"
Aug  5 22:31:39.940433 containerd[1444]: time="2024-08-05T22:31:39.940251504Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug  5 22:31:39.940433 containerd[1444]: time="2024-08-05T22:31:39.940367090Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug  5 22:31:39.941516 containerd[1444]: time="2024-08-05T22:31:39.941484326Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug  5 22:31:39.941572 containerd[1444]: time="2024-08-05T22:31:39.941558625Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug  5 22:31:39.941670 containerd[1444]: time="2024-08-05T22:31:39.941582119Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug  5 22:31:39.941721 containerd[1444]: time="2024-08-05T22:31:39.941672368Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug  5 22:31:39.941721 containerd[1444]: time="2024-08-05T22:31:39.941690512Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug  5 22:31:39.941773 containerd[1444]: time="2024-08-05T22:31:39.941628686Z" level=info msg="Start subscribing containerd event"
Aug  5 22:31:39.941798 containerd[1444]: time="2024-08-05T22:31:39.941785160Z" level=info msg="Start recovering state"
Aug  5 22:31:39.941881 containerd[1444]: time="2024-08-05T22:31:39.941857385Z" level=info msg="Start event monitor"
Aug  5 22:31:39.941913 containerd[1444]: time="2024-08-05T22:31:39.941887111Z" level=info msg="Start snapshots syncer"
Aug  5 22:31:39.941913 containerd[1444]: time="2024-08-05T22:31:39.941899925Z" level=info msg="Start cni network conf syncer for default"
Aug  5 22:31:39.942033 containerd[1444]: time="2024-08-05T22:31:39.941910575Z" level=info msg="Start streaming server"
Aug  5 22:31:39.944712 containerd[1444]: time="2024-08-05T22:31:39.944549754Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug  5 22:31:39.944712 containerd[1444]: time="2024-08-05T22:31:39.944624264Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug  5 22:31:39.944712 containerd[1444]: time="2024-08-05T22:31:39.944689426Z" level=info msg="containerd successfully booted in 0.050995s"
Aug  5 22:31:39.944797 systemd[1]: Started containerd.service - containerd container runtime.
Aug  5 22:31:39.972845 systemd[1509]: Queued start job for default target default.target.
Aug  5 22:31:39.983752 systemd[1509]: Created slice app.slice - User Application Slice.
Aug  5 22:31:39.983782 systemd[1509]: Reached target paths.target - Paths.
Aug  5 22:31:39.983799 systemd[1509]: Reached target timers.target - Timers.
Aug  5 22:31:39.985704 systemd[1509]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug  5 22:31:39.998780 systemd[1509]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug  5 22:31:39.998918 systemd[1509]: Reached target sockets.target - Sockets.
Aug  5 22:31:39.998935 systemd[1509]: Reached target basic.target - Basic System.
Aug  5 22:31:39.998972 systemd[1509]: Reached target default.target - Main User Target.
Aug  5 22:31:39.999005 systemd[1509]: Startup finished in 153ms.
Aug  5 22:31:39.999628 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug  5 22:31:40.225459 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug  5 22:31:40.228286 tar[1442]: linux-amd64/LICENSE
Aug  5 22:31:40.228396 tar[1442]: linux-amd64/README.md
Aug  5 22:31:40.244448 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug  5 22:31:40.304639 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:37428.service - OpenSSH per-connection server daemon (10.0.0.1:37428).
Aug  5 22:31:40.354018 sshd[1527]: Accepted publickey for core from 10.0.0.1 port 37428 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:31:40.355922 sshd[1527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:31:40.361582 systemd-logind[1428]: New session 2 of user core.
Aug  5 22:31:40.378280 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug  5 22:31:40.439270 sshd[1527]: pam_unix(sshd:session): session closed for user core
Aug  5 22:31:40.448879 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:37428.service: Deactivated successfully.
Aug  5 22:31:40.450597 systemd[1]: session-2.scope: Deactivated successfully.
Aug  5 22:31:40.452272 systemd-logind[1428]: Session 2 logged out. Waiting for processes to exit.
Aug  5 22:31:40.453423 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:37442.service - OpenSSH per-connection server daemon (10.0.0.1:37442).
Aug  5 22:31:40.455431 systemd-logind[1428]: Removed session 2.
Aug  5 22:31:40.472335 systemd-networkd[1375]: eth0: Gained IPv6LL
Aug  5 22:31:40.475996 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug  5 22:31:40.477913 systemd[1]: Reached target network-online.target - Network is Online.
Aug  5 22:31:40.493403 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug  5 22:31:40.495949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug  5 22:31:40.498246 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug  5 22:31:40.521445 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 37442 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:31:40.522998 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:31:40.525327 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug  5 22:31:40.527243 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug  5 22:31:40.527502 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug  5 22:31:40.530636 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug  5 22:31:40.533503 systemd-logind[1428]: New session 3 of user core.
Aug  5 22:31:40.544288 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug  5 22:31:40.601686 sshd[1534]: pam_unix(sshd:session): session closed for user core
Aug  5 22:31:40.605740 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:37442.service: Deactivated successfully.
Aug  5 22:31:40.607882 systemd[1]: session-3.scope: Deactivated successfully.
Aug  5 22:31:40.608559 systemd-logind[1428]: Session 3 logged out. Waiting for processes to exit.
Aug  5 22:31:40.609568 systemd-logind[1428]: Removed session 3.
Aug  5 22:31:41.879481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug  5 22:31:41.881695 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug  5 22:31:41.883400 systemd[1]: Startup finished in 1.057s (kernel) + 7.063s (initrd) + 5.737s (userspace) = 13.858s.
Aug  5 22:31:41.914211 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug  5 22:31:42.833643 kubelet[1562]: E0805 22:31:42.833555    1562 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug  5 22:31:42.838931 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug  5 22:31:42.839229 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug  5 22:31:42.839596 systemd[1]: kubelet.service: Consumed 2.193s CPU time.
Aug  5 22:31:50.620474 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:56574.service - OpenSSH per-connection server daemon (10.0.0.1:56574).
Aug  5 22:31:50.651557 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 56574 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:31:50.653250 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:31:50.657636 systemd-logind[1428]: New session 4 of user core.
Aug  5 22:31:50.675307 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug  5 22:31:50.732599 sshd[1576]: pam_unix(sshd:session): session closed for user core
Aug  5 22:31:50.743084 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:56574.service: Deactivated successfully.
Aug  5 22:31:50.745001 systemd[1]: session-4.scope: Deactivated successfully.
Aug  5 22:31:50.747060 systemd-logind[1428]: Session 4 logged out. Waiting for processes to exit.
Aug  5 22:31:50.748478 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:56576.service - OpenSSH per-connection server daemon (10.0.0.1:56576).
Aug  5 22:31:50.749568 systemd-logind[1428]: Removed session 4.
Aug  5 22:31:50.799588 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 56576 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:31:50.801389 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:31:50.805480 systemd-logind[1428]: New session 5 of user core.
Aug  5 22:31:50.825335 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug  5 22:31:50.877590 sshd[1583]: pam_unix(sshd:session): session closed for user core
Aug  5 22:31:50.895489 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:56576.service: Deactivated successfully.
Aug  5 22:31:50.897448 systemd[1]: session-5.scope: Deactivated successfully.
Aug  5 22:31:50.899565 systemd-logind[1428]: Session 5 logged out. Waiting for processes to exit.
Aug  5 22:31:50.900824 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:56578.service - OpenSSH per-connection server daemon (10.0.0.1:56578).
Aug  5 22:31:50.901599 systemd-logind[1428]: Removed session 5.
Aug  5 22:31:50.945648 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 56578 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:31:50.947214 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:31:50.951538 systemd-logind[1428]: New session 6 of user core.
Aug  5 22:31:50.965420 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug  5 22:31:51.023520 sshd[1590]: pam_unix(sshd:session): session closed for user core
Aug  5 22:31:51.044571 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:56578.service: Deactivated successfully.
Aug  5 22:31:51.046465 systemd[1]: session-6.scope: Deactivated successfully.
Aug  5 22:31:51.048149 systemd-logind[1428]: Session 6 logged out. Waiting for processes to exit.
Aug  5 22:31:51.057440 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:56594.service - OpenSSH per-connection server daemon (10.0.0.1:56594).
Aug  5 22:31:51.058497 systemd-logind[1428]: Removed session 6.
Aug  5 22:31:51.086007 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 56594 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:31:51.087469 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:31:51.092013 systemd-logind[1428]: New session 7 of user core.
Aug  5 22:31:51.108478 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug  5 22:31:51.275655 sudo[1600]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug  5 22:31:51.276030 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug  5 22:31:51.292270 sudo[1600]: pam_unix(sudo:session): session closed for user root
Aug  5 22:31:51.294395 sshd[1597]: pam_unix(sshd:session): session closed for user core
Aug  5 22:31:51.309913 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:56594.service: Deactivated successfully.
Aug  5 22:31:51.311608 systemd[1]: session-7.scope: Deactivated successfully.
Aug  5 22:31:51.312972 systemd-logind[1428]: Session 7 logged out. Waiting for processes to exit.
Aug  5 22:31:51.328536 systemd[1]: Started sshd@7-10.0.0.112:22-10.0.0.1:56604.service - OpenSSH per-connection server daemon (10.0.0.1:56604).
Aug  5 22:31:51.329874 systemd-logind[1428]: Removed session 7.
Aug  5 22:31:51.355452 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 56604 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:31:51.356986 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:31:51.361069 systemd-logind[1428]: New session 8 of user core.
Aug  5 22:31:51.370262 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug  5 22:31:51.426918 sudo[1609]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug  5 22:31:51.427338 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug  5 22:31:51.432467 sudo[1609]: pam_unix(sudo:session): session closed for user root
Aug  5 22:31:51.439509 sudo[1608]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug  5 22:31:51.439825 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug  5 22:31:51.457385 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug  5 22:31:51.459157 auditctl[1612]: No rules
Aug  5 22:31:51.459670 systemd[1]: audit-rules.service: Deactivated successfully.
Aug  5 22:31:51.459924 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug  5 22:31:51.462791 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug  5 22:31:51.502992 augenrules[1630]: No rules
Aug  5 22:31:51.505135 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug  5 22:31:51.506684 sudo[1608]: pam_unix(sudo:session): session closed for user root
Aug  5 22:31:51.508694 sshd[1605]: pam_unix(sshd:session): session closed for user core
Aug  5 22:31:51.527466 systemd[1]: sshd@7-10.0.0.112:22-10.0.0.1:56604.service: Deactivated successfully.
Aug  5 22:31:51.529619 systemd[1]: session-8.scope: Deactivated successfully.
Aug  5 22:31:51.531323 systemd-logind[1428]: Session 8 logged out. Waiting for processes to exit.
Aug  5 22:31:51.545457 systemd[1]: Started sshd@8-10.0.0.112:22-10.0.0.1:56610.service - OpenSSH per-connection server daemon (10.0.0.1:56610).
Aug  5 22:31:51.546566 systemd-logind[1428]: Removed session 8.
Aug  5 22:31:51.573942 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 56610 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:31:51.575547 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:31:51.579772 systemd-logind[1428]: New session 9 of user core.
Aug  5 22:31:51.589310 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug  5 22:31:51.641776 sudo[1641]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug  5 22:31:51.642075 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug  5 22:31:51.777357 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug  5 22:31:51.777772 (dockerd)[1651]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug  5 22:31:52.394062 dockerd[1651]: time="2024-08-05T22:31:52.393968999Z" level=info msg="Starting up"
Aug  5 22:31:52.491047 dockerd[1651]: time="2024-08-05T22:31:52.490962437Z" level=info msg="Loading containers: start."
Aug  5 22:31:52.649147 kernel: Initializing XFRM netlink socket
Aug  5 22:31:52.737172 systemd-networkd[1375]: docker0: Link UP
Aug  5 22:31:52.898910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug  5 22:31:52.903816 dockerd[1651]: time="2024-08-05T22:31:52.903681833Z" level=info msg="Loading containers: done."
Aug  5 22:31:52.907487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug  5 22:31:53.027651 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4182299906-merged.mount: Deactivated successfully.
Aug  5 22:31:53.138462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug  5 22:31:53.143469 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug  5 22:31:53.329593 dockerd[1651]: time="2024-08-05T22:31:53.329425925Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug  5 22:31:53.329925 dockerd[1651]: time="2024-08-05T22:31:53.329808502Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Aug  5 22:31:53.330180 dockerd[1651]: time="2024-08-05T22:31:53.329992127Z" level=info msg="Daemon has completed initialization"
Aug  5 22:31:53.360377 kubelet[1760]: E0805 22:31:53.360257    1760 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug  5 22:31:53.369377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug  5 22:31:53.369624 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug  5 22:31:53.458104 dockerd[1651]: time="2024-08-05T22:31:53.458021838Z" level=info msg="API listen on /run/docker.sock"
Aug  5 22:31:53.458347 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug  5 22:31:54.319435 containerd[1444]: time="2024-08-05T22:31:54.319377387Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.3\""
Aug  5 22:31:55.125742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383417067.mount: Deactivated successfully.
Aug  5 22:31:56.290841 containerd[1444]: time="2024-08-05T22:31:56.290761686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.3\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:31:56.291484 containerd[1444]: time="2024-08-05T22:31:56.291432865Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.3: active requests=0, bytes read=32773238"
Aug  5 22:31:56.292822 containerd[1444]: time="2024-08-05T22:31:56.292781574Z" level=info msg="ImageCreate event name:\"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:31:56.296397 containerd[1444]: time="2024-08-05T22:31:56.296355517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:31:56.297690 containerd[1444]: time="2024-08-05T22:31:56.297644725Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.3\" with image id \"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c\", size \"32770038\" in 1.978219377s"
Aug  5 22:31:56.297845 containerd[1444]: time="2024-08-05T22:31:56.297687705Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.3\" returns image reference \"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\""
Aug  5 22:31:56.323307 containerd[1444]: time="2024-08-05T22:31:56.323244113Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.3\""
Aug  5 22:31:58.585045 containerd[1444]: time="2024-08-05T22:31:58.584944398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.3\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:31:58.586178 containerd[1444]: time="2024-08-05T22:31:58.586100696Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.3: active requests=0, bytes read=29589535"
Aug  5 22:31:58.587742 containerd[1444]: time="2024-08-05T22:31:58.587689937Z" level=info msg="ImageCreate event name:\"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:31:58.593184 containerd[1444]: time="2024-08-05T22:31:58.593097839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:31:58.594605 containerd[1444]: time="2024-08-05T22:31:58.594534713Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.3\" with image id \"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7\", size \"31139481\" in 2.271247029s"
Aug  5 22:31:58.594605 containerd[1444]: time="2024-08-05T22:31:58.594587773Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.3\" returns image reference \"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\""
Aug  5 22:31:58.622959 containerd[1444]: time="2024-08-05T22:31:58.622882175Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.3\""
Aug  5 22:32:00.191197 containerd[1444]: time="2024-08-05T22:32:00.191103129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.3\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:00.191971 containerd[1444]: time="2024-08-05T22:32:00.191868514Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.3: active requests=0, bytes read=17779544"
Aug  5 22:32:00.193159 containerd[1444]: time="2024-08-05T22:32:00.193106807Z" level=info msg="ImageCreate event name:\"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:00.196541 containerd[1444]: time="2024-08-05T22:32:00.196501062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:00.197959 containerd[1444]: time="2024-08-05T22:32:00.197915565Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.3\" with image id \"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4\", size \"19329508\" in 1.574976774s"
Aug  5 22:32:00.198041 containerd[1444]: time="2024-08-05T22:32:00.197960138Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.3\" returns image reference \"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\""
Aug  5 22:32:00.224907 containerd[1444]: time="2024-08-05T22:32:00.224863912Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.3\""
Aug  5 22:32:03.400482 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug  5 22:32:03.433209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug  5 22:32:03.664291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug  5 22:32:03.676867 (kubelet)[1902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug  5 22:32:03.813430 kubelet[1902]: E0805 22:32:03.813225    1902 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug  5 22:32:03.827827 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug  5 22:32:03.828097 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug  5 22:32:04.251657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3134125518.mount: Deactivated successfully.
Aug  5 22:32:07.552709 containerd[1444]: time="2024-08-05T22:32:07.551674678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.3\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:07.579306 containerd[1444]: time="2024-08-05T22:32:07.578111476Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.3: active requests=0, bytes read=29036435"
Aug  5 22:32:07.603176 containerd[1444]: time="2024-08-05T22:32:07.600244514Z" level=info msg="ImageCreate event name:\"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:07.611978 containerd[1444]: time="2024-08-05T22:32:07.610715931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:07.611978 containerd[1444]: time="2024-08-05T22:32:07.611536430Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.3\" with image id \"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\", repo tag \"registry.k8s.io/kube-proxy:v1.30.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65\", size \"29035454\" in 7.38663143s"
Aug  5 22:32:07.611978 containerd[1444]: time="2024-08-05T22:32:07.611565925Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.3\" returns image reference \"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\""
Aug  5 22:32:07.739976 containerd[1444]: time="2024-08-05T22:32:07.735867886Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Aug  5 22:32:08.816918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1059028231.mount: Deactivated successfully.
Aug  5 22:32:10.137626 containerd[1444]: time="2024-08-05T22:32:10.137532314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:10.138962 containerd[1444]: time="2024-08-05T22:32:10.138906050Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Aug  5 22:32:10.140393 containerd[1444]: time="2024-08-05T22:32:10.140358695Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:10.143451 containerd[1444]: time="2024-08-05T22:32:10.143408724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:10.144841 containerd[1444]: time="2024-08-05T22:32:10.144796777Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.408875451s"
Aug  5 22:32:10.144841 containerd[1444]: time="2024-08-05T22:32:10.144836091Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Aug  5 22:32:10.172718 containerd[1444]: time="2024-08-05T22:32:10.172674298Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug  5 22:32:10.863936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1997158066.mount: Deactivated successfully.
Aug  5 22:32:10.878309 containerd[1444]: time="2024-08-05T22:32:10.878237270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:10.879294 containerd[1444]: time="2024-08-05T22:32:10.879240341Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Aug  5 22:32:10.881201 containerd[1444]: time="2024-08-05T22:32:10.881156465Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:10.884524 containerd[1444]: time="2024-08-05T22:32:10.884467394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:10.885574 containerd[1444]: time="2024-08-05T22:32:10.885521080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 712.800856ms"
Aug  5 22:32:10.885636 containerd[1444]: time="2024-08-05T22:32:10.885571965Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Aug  5 22:32:10.911070 containerd[1444]: time="2024-08-05T22:32:10.911017455Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Aug  5 22:32:11.701052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2928827085.mount: Deactivated successfully.
Aug  5 22:32:13.898773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug  5 22:32:13.909300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug  5 22:32:14.056479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug  5 22:32:14.061212 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug  5 22:32:14.165982 kubelet[2040]: E0805 22:32:14.165834    2040 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug  5 22:32:14.170568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug  5 22:32:14.170792 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug  5 22:32:14.337442 containerd[1444]: time="2024-08-05T22:32:14.337368248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:14.338489 containerd[1444]: time="2024-08-05T22:32:14.338418864Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Aug  5 22:32:14.339915 containerd[1444]: time="2024-08-05T22:32:14.339847127Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:14.343448 containerd[1444]: time="2024-08-05T22:32:14.343405096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:14.344710 containerd[1444]: time="2024-08-05T22:32:14.344660365Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.433591023s"
Aug  5 22:32:14.344760 containerd[1444]: time="2024-08-05T22:32:14.344712185Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Aug  5 22:32:16.922931 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug  5 22:32:16.934348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug  5 22:32:16.952178 systemd[1]: Reloading requested from client PID 2126 ('systemctl') (unit session-9.scope)...
Aug  5 22:32:16.952201 systemd[1]: Reloading...
Aug  5 22:32:17.062167 zram_generator::config[2166]: No configuration found.
Aug  5 22:32:17.612617 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug  5 22:32:17.691505 systemd[1]: Reloading finished in 738 ms.
Aug  5 22:32:17.750147 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug  5 22:32:17.755492 systemd[1]: kubelet.service: Deactivated successfully.
Aug  5 22:32:17.755750 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug  5 22:32:17.757237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug  5 22:32:17.916481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug  5 22:32:17.921580 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug  5 22:32:17.966947 kubelet[2213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug  5 22:32:17.966947 kubelet[2213]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug  5 22:32:17.966947 kubelet[2213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug  5 22:32:17.967379 kubelet[2213]: I0805 22:32:17.966981    2213 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug  5 22:32:18.347901 kubelet[2213]: I0805 22:32:18.347770    2213 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Aug  5 22:32:18.347901 kubelet[2213]: I0805 22:32:18.347808    2213 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug  5 22:32:18.348101 kubelet[2213]: I0805 22:32:18.348080    2213 server.go:927] "Client rotation is on, will bootstrap in background"
Aug  5 22:32:18.419976 kubelet[2213]: I0805 22:32:18.419908    2213 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug  5 22:32:18.459737 kubelet[2213]: E0805 22:32:18.459701    2213 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:18.506456 kubelet[2213]: I0805 22:32:18.506418    2213 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Aug  5 22:32:18.526091 kubelet[2213]: I0805 22:32:18.525721    2213 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug  5 22:32:18.526360 kubelet[2213]: I0805 22:32:18.526089    2213 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug  5 22:32:18.526553 kubelet[2213]: I0805 22:32:18.526373    2213 topology_manager.go:138] "Creating topology manager with none policy"
Aug  5 22:32:18.526553 kubelet[2213]: I0805 22:32:18.526385    2213 container_manager_linux.go:301] "Creating device plugin manager"
Aug  5 22:32:18.526610 kubelet[2213]: I0805 22:32:18.526556    2213 state_mem.go:36] "Initialized new in-memory state store"
Aug  5 22:32:18.529974 kubelet[2213]: I0805 22:32:18.529945    2213 kubelet.go:400] "Attempting to sync node with API server"
Aug  5 22:32:18.529974 kubelet[2213]: I0805 22:32:18.529965    2213 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug  5 22:32:18.530047 kubelet[2213]: I0805 22:32:18.529993    2213 kubelet.go:312] "Adding apiserver pod source"
Aug  5 22:32:18.530047 kubelet[2213]: I0805 22:32:18.530018    2213 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug  5 22:32:18.532536 kubelet[2213]: W0805 22:32:18.531352    2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:18.532536 kubelet[2213]: E0805 22:32:18.531424    2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:18.533387 kubelet[2213]: W0805 22:32:18.533337    2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:18.533387 kubelet[2213]: E0805 22:32:18.533383    2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:18.548767 kubelet[2213]: I0805 22:32:18.548731    2213 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Aug  5 22:32:18.550899 kubelet[2213]: I0805 22:32:18.550864    2213 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug  5 22:32:18.550949 kubelet[2213]: W0805 22:32:18.550936    2213 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug  5 22:32:18.551653 kubelet[2213]: I0805 22:32:18.551635    2213 server.go:1264] "Started kubelet"
Aug  5 22:32:18.551830 kubelet[2213]: I0805 22:32:18.551779    2213 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug  5 22:32:18.551935 kubelet[2213]: I0805 22:32:18.551888    2213 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug  5 22:32:18.552267 kubelet[2213]: I0805 22:32:18.552253    2213 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug  5 22:32:18.553851 kubelet[2213]: I0805 22:32:18.553816    2213 server.go:455] "Adding debug handlers to kubelet server"
Aug  5 22:32:18.554260 kubelet[2213]: I0805 22:32:18.554236    2213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug  5 22:32:18.557792 kubelet[2213]: E0805 22:32:18.557764    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:18.557838 kubelet[2213]: I0805 22:32:18.557815    2213 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug  5 22:32:18.558070 kubelet[2213]: I0805 22:32:18.557914    2213 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Aug  5 22:32:18.558070 kubelet[2213]: I0805 22:32:18.557987    2213 reconciler.go:26] "Reconciler: start to sync state"
Aug  5 22:32:18.558374 kubelet[2213]: W0805 22:32:18.558337    2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:18.558422 kubelet[2213]: E0805 22:32:18.558380    2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:18.558548 kubelet[2213]: E0805 22:32:18.558459    2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="200ms"
Aug  5 22:32:18.558548 kubelet[2213]: E0805 22:32:18.558493    2213 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug  5 22:32:18.559025 kubelet[2213]: I0805 22:32:18.559014    2213 factory.go:221] Registration of the systemd container factory successfully
Aug  5 22:32:18.559109 kubelet[2213]: I0805 22:32:18.559095    2213 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug  5 22:32:18.562135 kubelet[2213]: I0805 22:32:18.559871    2213 factory.go:221] Registration of the containerd container factory successfully
Aug  5 22:32:18.571910 kubelet[2213]: I0805 22:32:18.571848    2213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug  5 22:32:18.573309 kubelet[2213]: I0805 22:32:18.573278    2213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug  5 22:32:18.573363 kubelet[2213]: I0805 22:32:18.573313    2213 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug  5 22:32:18.573363 kubelet[2213]: I0805 22:32:18.573338    2213 kubelet.go:2337] "Starting kubelet main sync loop"
Aug  5 22:32:18.573414 kubelet[2213]: E0805 22:32:18.573383    2213 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug  5 22:32:18.574558 kubelet[2213]: W0805 22:32:18.574506    2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:18.574608 kubelet[2213]: E0805 22:32:18.574564    2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:18.576421 kubelet[2213]: E0805 22:32:18.576309    2213 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17e8f5d1b7e7dcb2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 22:32:18.551610546 +0000 UTC m=+0.625847432,LastTimestamp:2024-08-05 22:32:18.551610546 +0000 UTC m=+0.625847432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug  5 22:32:18.581879 kubelet[2213]: I0805 22:32:18.581811    2213 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug  5 22:32:18.581879 kubelet[2213]: I0805 22:32:18.581830    2213 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug  5 22:32:18.581879 kubelet[2213]: I0805 22:32:18.581872    2213 state_mem.go:36] "Initialized new in-memory state store"
Aug  5 22:32:18.659057 kubelet[2213]: I0805 22:32:18.659014    2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug  5 22:32:18.659378 kubelet[2213]: E0805 22:32:18.659345    2213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost"
Aug  5 22:32:18.673628 kubelet[2213]: E0805 22:32:18.673573    2213 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug  5 22:32:18.759377 kubelet[2213]: E0805 22:32:18.759317    2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="400ms"
Aug  5 22:32:18.860865 kubelet[2213]: I0805 22:32:18.860818    2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug  5 22:32:18.861224 kubelet[2213]: E0805 22:32:18.861193    2213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost"
Aug  5 22:32:18.874471 kubelet[2213]: E0805 22:32:18.874404    2213 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug  5 22:32:19.160202 kubelet[2213]: E0805 22:32:19.160146    2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="800ms"
Aug  5 22:32:19.262833 kubelet[2213]: I0805 22:32:19.262791    2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug  5 22:32:19.263250 kubelet[2213]: E0805 22:32:19.263201    2213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost"
Aug  5 22:32:19.275283 kubelet[2213]: E0805 22:32:19.275243    2213 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug  5 22:32:19.551530 kubelet[2213]: W0805 22:32:19.551340    2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:19.551530 kubelet[2213]: E0805 22:32:19.551426    2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:19.717306 kubelet[2213]: W0805 22:32:19.717216    2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:19.717306 kubelet[2213]: E0805 22:32:19.717299    2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:19.959339 kubelet[2213]: W0805 22:32:19.959256    2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:19.959339 kubelet[2213]: E0805 22:32:19.959336    2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:19.960732 kubelet[2213]: E0805 22:32:19.960686    2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="1.6s"
Aug  5 22:32:20.008619 kubelet[2213]: W0805 22:32:20.008512    2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:20.008619 kubelet[2213]: E0805 22:32:20.008591    2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:20.065835 kubelet[2213]: I0805 22:32:20.065783    2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug  5 22:32:20.066300 kubelet[2213]: E0805 22:32:20.066256    2213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost"
Aug  5 22:32:20.075528 kubelet[2213]: E0805 22:32:20.075475    2213 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug  5 22:32:20.514934 kubelet[2213]: E0805 22:32:20.514881    2213 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:20.939827 kubelet[2213]: I0805 22:32:20.939754    2213 policy_none.go:49] "None policy: Start"
Aug  5 22:32:20.940626 kubelet[2213]: I0805 22:32:20.940576    2213 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug  5 22:32:20.940626 kubelet[2213]: I0805 22:32:20.940600    2213 state_mem.go:35] "Initializing new in-memory state store"
Aug  5 22:32:21.080099 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug  5 22:32:21.104098 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug  5 22:32:21.107606 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug  5 22:32:21.126690 kubelet[2213]: I0805 22:32:21.126643    2213 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug  5 22:32:21.127030 kubelet[2213]: I0805 22:32:21.126972    2213 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug  5 22:32:21.127206 kubelet[2213]: I0805 22:32:21.127184    2213 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug  5 22:32:21.128794 kubelet[2213]: E0805 22:32:21.128754    2213 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug  5 22:32:21.302414 kubelet[2213]: W0805 22:32:21.302234    2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:21.302414 kubelet[2213]: E0805 22:32:21.302302    2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:21.561428 kubelet[2213]: W0805 22:32:21.561253    2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:21.561428 kubelet[2213]: E0805 22:32:21.561363    2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:21.561428 kubelet[2213]: E0805 22:32:21.561285    2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="3.2s"
Aug  5 22:32:21.668645 kubelet[2213]: I0805 22:32:21.668589    2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug  5 22:32:21.669069 kubelet[2213]: E0805 22:32:21.669035    2213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost"
Aug  5 22:32:21.676210 kubelet[2213]: I0805 22:32:21.676161    2213 topology_manager.go:215] "Topology Admit Handler" podUID="4f49d7d0d334c18c22d8b4d9086c9ace" podNamespace="kube-system" podName="kube-apiserver-localhost"
Aug  5 22:32:21.677034 kubelet[2213]: I0805 22:32:21.677002    2213 topology_manager.go:215] "Topology Admit Handler" podUID="471a108742c0b3658d07e3bda7ae5d17" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Aug  5 22:32:21.678171 kubelet[2213]: I0805 22:32:21.677779    2213 topology_manager.go:215] "Topology Admit Handler" podUID="3b0306f30b5bc847ed1d56b34a56bbaf" podNamespace="kube-system" podName="kube-scheduler-localhost"
Aug  5 22:32:21.685837 systemd[1]: Created slice kubepods-burstable-pod4f49d7d0d334c18c22d8b4d9086c9ace.slice - libcontainer container kubepods-burstable-pod4f49d7d0d334c18c22d8b4d9086c9ace.slice.
Aug  5 22:32:21.701628 systemd[1]: Created slice kubepods-burstable-pod471a108742c0b3658d07e3bda7ae5d17.slice - libcontainer container kubepods-burstable-pod471a108742c0b3658d07e3bda7ae5d17.slice.
Aug  5 22:32:21.705198 systemd[1]: Created slice kubepods-burstable-pod3b0306f30b5bc847ed1d56b34a56bbaf.slice - libcontainer container kubepods-burstable-pod3b0306f30b5bc847ed1d56b34a56bbaf.slice.
Aug  5 22:32:21.776442 kubelet[2213]: I0805 22:32:21.776377    2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f49d7d0d334c18c22d8b4d9086c9ace-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f49d7d0d334c18c22d8b4d9086c9ace\") " pod="kube-system/kube-apiserver-localhost"
Aug  5 22:32:21.776442 kubelet[2213]: I0805 22:32:21.776439    2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug  5 22:32:21.776442 kubelet[2213]: I0805 22:32:21.776464    2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug  5 22:32:21.776714 kubelet[2213]: I0805 22:32:21.776489    2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug  5 22:32:21.776714 kubelet[2213]: I0805 22:32:21.776512    2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug  5 22:32:21.776714 kubelet[2213]: I0805 22:32:21.776531    2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b0306f30b5bc847ed1d56b34a56bbaf-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3b0306f30b5bc847ed1d56b34a56bbaf\") " pod="kube-system/kube-scheduler-localhost"
Aug  5 22:32:21.776714 kubelet[2213]: I0805 22:32:21.776550    2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f49d7d0d334c18c22d8b4d9086c9ace-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f49d7d0d334c18c22d8b4d9086c9ace\") " pod="kube-system/kube-apiserver-localhost"
Aug  5 22:32:21.776714 kubelet[2213]: I0805 22:32:21.776576    2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f49d7d0d334c18c22d8b4d9086c9ace-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4f49d7d0d334c18c22d8b4d9086c9ace\") " pod="kube-system/kube-apiserver-localhost"
Aug  5 22:32:21.776822 kubelet[2213]: I0805 22:32:21.776598    2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug  5 22:32:21.981271 kubelet[2213]: W0805 22:32:21.981201    2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:21.981271 kubelet[2213]: E0805 22:32:21.981247    2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:22.001527 kubelet[2213]: E0805 22:32:22.001475    2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:22.002323 containerd[1444]: time="2024-08-05T22:32:22.002267684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4f49d7d0d334c18c22d8b4d9086c9ace,Namespace:kube-system,Attempt:0,}"
Aug  5 22:32:22.004562 kubelet[2213]: E0805 22:32:22.004541    2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:22.005191 containerd[1444]: time="2024-08-05T22:32:22.005081004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:471a108742c0b3658d07e3bda7ae5d17,Namespace:kube-system,Attempt:0,}"
Aug  5 22:32:22.007814 kubelet[2213]: E0805 22:32:22.007787    2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:22.008331 containerd[1444]: time="2024-08-05T22:32:22.008295958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3b0306f30b5bc847ed1d56b34a56bbaf,Namespace:kube-system,Attempt:0,}"
Aug  5 22:32:22.505835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034077215.mount: Deactivated successfully.
Aug  5 22:32:22.516148 containerd[1444]: time="2024-08-05T22:32:22.516049240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Aug  5 22:32:22.522699 containerd[1444]: time="2024-08-05T22:32:22.520434710Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug  5 22:32:22.523999 containerd[1444]: time="2024-08-05T22:32:22.523882667Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Aug  5 22:32:22.525492 containerd[1444]: time="2024-08-05T22:32:22.525441862Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Aug  5 22:32:22.526861 containerd[1444]: time="2024-08-05T22:32:22.526739509Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Aug  5 22:32:22.528146 containerd[1444]: time="2024-08-05T22:32:22.528056955Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Aug  5 22:32:22.530260 containerd[1444]: time="2024-08-05T22:32:22.530208937Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug  5 22:32:22.531324 containerd[1444]: time="2024-08-05T22:32:22.531265746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Aug  5 22:32:22.532138 containerd[1444]: time="2024-08-05T22:32:22.532090064Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.857011ms"
Aug  5 22:32:22.535516 containerd[1444]: time="2024-08-05T22:32:22.535465183Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 533.09282ms"
Aug  5 22:32:22.538345 containerd[1444]: time="2024-08-05T22:32:22.538300314Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 529.903995ms"
Aug  5 22:32:22.690934 containerd[1444]: time="2024-08-05T22:32:22.690812122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug  5 22:32:22.690934 containerd[1444]: time="2024-08-05T22:32:22.690877326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:32:22.690934 containerd[1444]: time="2024-08-05T22:32:22.690901111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug  5 22:32:22.690934 containerd[1444]: time="2024-08-05T22:32:22.690918845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:32:22.691481 containerd[1444]: time="2024-08-05T22:32:22.691392455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug  5 22:32:22.691514 containerd[1444]: time="2024-08-05T22:32:22.691465584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:32:22.691540 containerd[1444]: time="2024-08-05T22:32:22.691506011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug  5 22:32:22.691580 containerd[1444]: time="2024-08-05T22:32:22.691526410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:32:22.692853 containerd[1444]: time="2024-08-05T22:32:22.692738245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug  5 22:32:22.692853 containerd[1444]: time="2024-08-05T22:32:22.692815893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:32:22.693157 containerd[1444]: time="2024-08-05T22:32:22.692840248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug  5 22:32:22.693157 containerd[1444]: time="2024-08-05T22:32:22.692959516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:32:22.725420 systemd[1]: Started cri-containerd-81a96f29a52afafa68f1ae4c49bff88e1cbe1b02aa5759d6bac9e77a703dddb8.scope - libcontainer container 81a96f29a52afafa68f1ae4c49bff88e1cbe1b02aa5759d6bac9e77a703dddb8.
Aug  5 22:32:22.730855 systemd[1]: Started cri-containerd-7f51cb2403ff35f72bc14f04e99521416c1040882dd8fdc0f58f399bcd619ade.scope - libcontainer container 7f51cb2403ff35f72bc14f04e99521416c1040882dd8fdc0f58f399bcd619ade.
Aug  5 22:32:22.734714 systemd[1]: Started cri-containerd-a8fb60db4fa3d7a12f7d15b7dbbf4092607abd6e9e019372cf3819037c4fea1e.scope - libcontainer container a8fb60db4fa3d7a12f7d15b7dbbf4092607abd6e9e019372cf3819037c4fea1e.
Aug  5 22:32:22.771892 containerd[1444]: time="2024-08-05T22:32:22.771591338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3b0306f30b5bc847ed1d56b34a56bbaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"81a96f29a52afafa68f1ae4c49bff88e1cbe1b02aa5759d6bac9e77a703dddb8\""
Aug  5 22:32:22.773458 kubelet[2213]: E0805 22:32:22.773432    2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:22.783660 containerd[1444]: time="2024-08-05T22:32:22.783604954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4f49d7d0d334c18c22d8b4d9086c9ace,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8fb60db4fa3d7a12f7d15b7dbbf4092607abd6e9e019372cf3819037c4fea1e\""
Aug  5 22:32:22.784025 containerd[1444]: time="2024-08-05T22:32:22.783976771Z" level=info msg="CreateContainer within sandbox \"81a96f29a52afafa68f1ae4c49bff88e1cbe1b02aa5759d6bac9e77a703dddb8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug  5 22:32:22.786017 kubelet[2213]: E0805 22:32:22.785991    2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:22.787531 containerd[1444]: time="2024-08-05T22:32:22.787212424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:471a108742c0b3658d07e3bda7ae5d17,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f51cb2403ff35f72bc14f04e99521416c1040882dd8fdc0f58f399bcd619ade\""
Aug  5 22:32:22.787734 containerd[1444]: time="2024-08-05T22:32:22.787709980Z" level=info msg="CreateContainer within sandbox \"a8fb60db4fa3d7a12f7d15b7dbbf4092607abd6e9e019372cf3819037c4fea1e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug  5 22:32:22.788055 kubelet[2213]: E0805 22:32:22.788023    2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:22.797787 containerd[1444]: time="2024-08-05T22:32:22.797736918Z" level=info msg="CreateContainer within sandbox \"7f51cb2403ff35f72bc14f04e99521416c1040882dd8fdc0f58f399bcd619ade\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug  5 22:32:22.840599 kubelet[2213]: W0805 22:32:22.840534    2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:22.840599 kubelet[2213]: E0805 22:32:22.840598    2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Aug  5 22:32:22.894947 containerd[1444]: time="2024-08-05T22:32:22.894865214Z" level=info msg="CreateContainer within sandbox \"81a96f29a52afafa68f1ae4c49bff88e1cbe1b02aa5759d6bac9e77a703dddb8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9d505bf8d479fc563c86d603580f6a5bb8d719ddc24400e252faa5c63047f341\""
Aug  5 22:32:22.895868 containerd[1444]: time="2024-08-05T22:32:22.895576828Z" level=info msg="StartContainer for \"9d505bf8d479fc563c86d603580f6a5bb8d719ddc24400e252faa5c63047f341\""
Aug  5 22:32:22.902093 containerd[1444]: time="2024-08-05T22:32:22.902014350Z" level=info msg="CreateContainer within sandbox \"a8fb60db4fa3d7a12f7d15b7dbbf4092607abd6e9e019372cf3819037c4fea1e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2cf5fc705c8e2927771f835c83d0624ab8c76365ea36cdfa8b8afb6aa015863d\""
Aug  5 22:32:22.902876 containerd[1444]: time="2024-08-05T22:32:22.902829830Z" level=info msg="StartContainer for \"2cf5fc705c8e2927771f835c83d0624ab8c76365ea36cdfa8b8afb6aa015863d\""
Aug  5 22:32:22.905710 containerd[1444]: time="2024-08-05T22:32:22.905656216Z" level=info msg="CreateContainer within sandbox \"7f51cb2403ff35f72bc14f04e99521416c1040882dd8fdc0f58f399bcd619ade\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"565fef8d5810a71db09ee135d831f96bf3f90fc28953a1965551f4061fb0af12\""
Aug  5 22:32:22.906061 containerd[1444]: time="2024-08-05T22:32:22.906030868Z" level=info msg="StartContainer for \"565fef8d5810a71db09ee135d831f96bf3f90fc28953a1965551f4061fb0af12\""
Aug  5 22:32:22.932397 systemd[1]: Started cri-containerd-9d505bf8d479fc563c86d603580f6a5bb8d719ddc24400e252faa5c63047f341.scope - libcontainer container 9d505bf8d479fc563c86d603580f6a5bb8d719ddc24400e252faa5c63047f341.
Aug  5 22:32:22.937729 systemd[1]: Started cri-containerd-2cf5fc705c8e2927771f835c83d0624ab8c76365ea36cdfa8b8afb6aa015863d.scope - libcontainer container 2cf5fc705c8e2927771f835c83d0624ab8c76365ea36cdfa8b8afb6aa015863d.
Aug  5 22:32:22.951747 systemd[1]: Started cri-containerd-565fef8d5810a71db09ee135d831f96bf3f90fc28953a1965551f4061fb0af12.scope - libcontainer container 565fef8d5810a71db09ee135d831f96bf3f90fc28953a1965551f4061fb0af12.
Aug  5 22:32:23.000066 containerd[1444]: time="2024-08-05T22:32:22.999988415Z" level=info msg="StartContainer for \"2cf5fc705c8e2927771f835c83d0624ab8c76365ea36cdfa8b8afb6aa015863d\" returns successfully"
Aug  5 22:32:23.000601 containerd[1444]: time="2024-08-05T22:32:22.999993564Z" level=info msg="StartContainer for \"9d505bf8d479fc563c86d603580f6a5bb8d719ddc24400e252faa5c63047f341\" returns successfully"
Aug  5 22:32:23.009072 containerd[1444]: time="2024-08-05T22:32:23.008986132Z" level=info msg="StartContainer for \"565fef8d5810a71db09ee135d831f96bf3f90fc28953a1965551f4061fb0af12\" returns successfully"
Aug  5 22:32:23.592874 kubelet[2213]: E0805 22:32:23.592831    2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:23.595042 kubelet[2213]: E0805 22:32:23.595014    2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:23.596985 kubelet[2213]: E0805 22:32:23.596954    2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:24.595179 kubelet[2213]: E0805 22:32:24.595095    2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:24.974557 kubelet[2213]: I0805 22:32:24.974480    2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug  5 22:32:25.157046 update_engine[1431]: I0805 22:32:25.156926  1431 update_attempter.cc:509] Updating boot flags...
Aug  5 22:32:25.247870 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2499)
Aug  5 22:32:25.336326 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2503)
Aug  5 22:32:26.013714 kubelet[2213]: E0805 22:32:26.013647    2213 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Aug  5 22:32:26.101435 kubelet[2213]: I0805 22:32:26.101374    2213 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Aug  5 22:32:26.112877 kubelet[2213]: E0805 22:32:26.112824    2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:26.114191 kubelet[2213]: E0805 22:32:26.114165    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:26.148760 kubelet[2213]: E0805 22:32:26.148433    2213 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17e8f5d1b7e7dcb2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 22:32:18.551610546 +0000 UTC m=+0.625847432,LastTimestamp:2024-08-05 22:32:18.551610546 +0000 UTC m=+0.625847432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug  5 22:32:26.206370 kubelet[2213]: E0805 22:32:26.206250    2213 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17e8f5d1b850c481  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 22:32:18.558485633 +0000 UTC m=+0.632722509,LastTimestamp:2024-08-05 22:32:18.558485633 +0000 UTC m=+0.632722509,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug  5 22:32:26.214439 kubelet[2213]: E0805 22:32:26.214285    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:26.315026 kubelet[2213]: E0805 22:32:26.314844    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:26.415030 kubelet[2213]: E0805 22:32:26.414978    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:26.515839 kubelet[2213]: E0805 22:32:26.515767    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:26.616157 kubelet[2213]: E0805 22:32:26.615900    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:26.716727 kubelet[2213]: E0805 22:32:26.716641    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:26.817385 kubelet[2213]: E0805 22:32:26.817301    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:26.918015 kubelet[2213]: E0805 22:32:26.917933    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:27.018248 kubelet[2213]: E0805 22:32:27.018175    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:27.119284 kubelet[2213]: E0805 22:32:27.119221    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:27.220140 kubelet[2213]: E0805 22:32:27.219980    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:27.321254 kubelet[2213]: E0805 22:32:27.321197    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:27.421559 kubelet[2213]: E0805 22:32:27.421503    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:27.522330 kubelet[2213]: E0805 22:32:27.522151    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:27.622920 kubelet[2213]: E0805 22:32:27.622853    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:27.723451 kubelet[2213]: E0805 22:32:27.723388    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:27.823927 kubelet[2213]: E0805 22:32:27.823750    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:27.924608 kubelet[2213]: E0805 22:32:27.924530    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:28.025027 kubelet[2213]: E0805 22:32:28.024973    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:28.125889 kubelet[2213]: E0805 22:32:28.125680    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:28.226477 kubelet[2213]: E0805 22:32:28.226396    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:28.327632 kubelet[2213]: E0805 22:32:28.327535    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:28.378074 systemd[1]: Reloading requested from client PID 2507 ('systemctl') (unit session-9.scope)...
Aug  5 22:32:28.378102 systemd[1]: Reloading...
Aug  5 22:32:28.428564 kubelet[2213]: E0805 22:32:28.428523    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:28.453913 zram_generator::config[2547]: No configuration found.
Aug  5 22:32:28.530469 kubelet[2213]: E0805 22:32:28.530394    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:28.587263 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug  5 22:32:28.631814 kubelet[2213]: E0805 22:32:28.631467    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:28.699187 systemd[1]: Reloading finished in 320 ms.
Aug  5 22:32:28.732302 kubelet[2213]: E0805 22:32:28.732247    2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug  5 22:32:28.749198 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug  5 22:32:28.761146 systemd[1]: kubelet.service: Deactivated successfully.
Aug  5 22:32:28.761527 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug  5 22:32:28.761612 systemd[1]: kubelet.service: Consumed 1.013s CPU time, 118.3M memory peak, 0B memory swap peak.
Aug  5 22:32:28.769774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug  5 22:32:28.978835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug  5 22:32:28.985704 (kubelet)[2589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug  5 22:32:29.056083 kubelet[2589]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug  5 22:32:29.056083 kubelet[2589]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug  5 22:32:29.056083 kubelet[2589]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug  5 22:32:29.056684 kubelet[2589]: I0805 22:32:29.056136    2589 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug  5 22:32:29.062211 kubelet[2589]: I0805 22:32:29.062131    2589 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Aug  5 22:32:29.062211 kubelet[2589]: I0805 22:32:29.062165    2589 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug  5 22:32:29.062738 kubelet[2589]: I0805 22:32:29.062715    2589 server.go:927] "Client rotation is on, will bootstrap in background"
Aug  5 22:32:29.064277 kubelet[2589]: I0805 22:32:29.064248    2589 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug  5 22:32:29.066060 kubelet[2589]: I0805 22:32:29.065561    2589 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug  5 22:32:29.076497 kubelet[2589]: I0805 22:32:29.076431    2589 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Aug  5 22:32:29.076755 kubelet[2589]: I0805 22:32:29.076706    2589 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug  5 22:32:29.076976 kubelet[2589]: I0805 22:32:29.076746    2589 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug  5 22:32:29.077104 kubelet[2589]: I0805 22:32:29.076990    2589 topology_manager.go:138] "Creating topology manager with none policy"
Aug  5 22:32:29.077104 kubelet[2589]: I0805 22:32:29.077003    2589 container_manager_linux.go:301] "Creating device plugin manager"
Aug  5 22:32:29.077104 kubelet[2589]: I0805 22:32:29.077058    2589 state_mem.go:36] "Initialized new in-memory state store"
Aug  5 22:32:29.077223 kubelet[2589]: I0805 22:32:29.077198    2589 kubelet.go:400] "Attempting to sync node with API server"
Aug  5 22:32:29.077223 kubelet[2589]: I0805 22:32:29.077213    2589 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug  5 22:32:29.077281 kubelet[2589]: I0805 22:32:29.077250    2589 kubelet.go:312] "Adding apiserver pod source"
Aug  5 22:32:29.077281 kubelet[2589]: I0805 22:32:29.077274    2589 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug  5 22:32:29.078274 kubelet[2589]: I0805 22:32:29.078188    2589 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Aug  5 22:32:29.078601 kubelet[2589]: I0805 22:32:29.078402    2589 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug  5 22:32:29.078885 kubelet[2589]: I0805 22:32:29.078853    2589 server.go:1264] "Started kubelet"
Aug  5 22:32:29.079367 kubelet[2589]: I0805 22:32:29.079325    2589 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug  5 22:32:29.079414 kubelet[2589]: I0805 22:32:29.079343    2589 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug  5 22:32:29.079809 kubelet[2589]: I0805 22:32:29.079773    2589 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug  5 22:32:29.084110 kubelet[2589]: I0805 22:32:29.084058    2589 server.go:455] "Adding debug handlers to kubelet server"
Aug  5 22:32:29.089136 kubelet[2589]: I0805 22:32:29.085513    2589 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug  5 22:32:29.089136 kubelet[2589]: I0805 22:32:29.085844    2589 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug  5 22:32:29.089136 kubelet[2589]: I0805 22:32:29.087774    2589 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Aug  5 22:32:29.089136 kubelet[2589]: I0805 22:32:29.088022    2589 reconciler.go:26] "Reconciler: start to sync state"
Aug  5 22:32:29.094171 kubelet[2589]: E0805 22:32:29.092566    2589 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug  5 22:32:29.094171 kubelet[2589]: I0805 22:32:29.093683    2589 factory.go:221] Registration of the systemd container factory successfully
Aug  5 22:32:29.094171 kubelet[2589]: I0805 22:32:29.093763    2589 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug  5 22:32:29.098151 kubelet[2589]: I0805 22:32:29.096529    2589 factory.go:221] Registration of the containerd container factory successfully
Aug  5 22:32:29.099984 kubelet[2589]: I0805 22:32:29.099948    2589 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug  5 22:32:29.101840 kubelet[2589]: I0805 22:32:29.101818    2589 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug  5 22:32:29.101893 kubelet[2589]: I0805 22:32:29.101853    2589 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug  5 22:32:29.101893 kubelet[2589]: I0805 22:32:29.101885    2589 kubelet.go:2337] "Starting kubelet main sync loop"
Aug  5 22:32:29.101976 kubelet[2589]: E0805 22:32:29.101930    2589 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug  5 22:32:29.137808 kubelet[2589]: I0805 22:32:29.137760    2589 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug  5 22:32:29.137808 kubelet[2589]: I0805 22:32:29.137785    2589 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug  5 22:32:29.137808 kubelet[2589]: I0805 22:32:29.137811    2589 state_mem.go:36] "Initialized new in-memory state store"
Aug  5 22:32:29.138024 kubelet[2589]: I0805 22:32:29.138009    2589 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug  5 22:32:29.138046 kubelet[2589]: I0805 22:32:29.138022    2589 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug  5 22:32:29.138074 kubelet[2589]: I0805 22:32:29.138047    2589 policy_none.go:49] "None policy: Start"
Aug  5 22:32:29.138506 kubelet[2589]: I0805 22:32:29.138477    2589 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug  5 22:32:29.138506 kubelet[2589]: I0805 22:32:29.138504    2589 state_mem.go:35] "Initializing new in-memory state store"
Aug  5 22:32:29.138680 kubelet[2589]: I0805 22:32:29.138661    2589 state_mem.go:75] "Updated machine memory state"
Aug  5 22:32:29.143647 kubelet[2589]: I0805 22:32:29.143619    2589 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug  5 22:32:29.143895 kubelet[2589]: I0805 22:32:29.143819    2589 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug  5 22:32:29.144062 kubelet[2589]: I0805 22:32:29.143929    2589 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug  5 22:32:29.200259 kubelet[2589]: I0805 22:32:29.200196    2589 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug  5 22:32:29.202667 kubelet[2589]: I0805 22:32:29.202612    2589 topology_manager.go:215] "Topology Admit Handler" podUID="471a108742c0b3658d07e3bda7ae5d17" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Aug  5 22:32:29.202859 kubelet[2589]: I0805 22:32:29.202826    2589 topology_manager.go:215] "Topology Admit Handler" podUID="3b0306f30b5bc847ed1d56b34a56bbaf" podNamespace="kube-system" podName="kube-scheduler-localhost"
Aug  5 22:32:29.202935 kubelet[2589]: I0805 22:32:29.202875    2589 topology_manager.go:215] "Topology Admit Handler" podUID="4f49d7d0d334c18c22d8b4d9086c9ace" podNamespace="kube-system" podName="kube-apiserver-localhost"
Aug  5 22:32:29.208383 kubelet[2589]: I0805 22:32:29.208350    2589 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Aug  5 22:32:29.208522 kubelet[2589]: I0805 22:32:29.208444    2589 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Aug  5 22:32:29.389404 kubelet[2589]: I0805 22:32:29.389215    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f49d7d0d334c18c22d8b4d9086c9ace-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f49d7d0d334c18c22d8b4d9086c9ace\") " pod="kube-system/kube-apiserver-localhost"
Aug  5 22:32:29.389404 kubelet[2589]: I0805 22:32:29.389286    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f49d7d0d334c18c22d8b4d9086c9ace-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f49d7d0d334c18c22d8b4d9086c9ace\") " pod="kube-system/kube-apiserver-localhost"
Aug  5 22:32:29.389404 kubelet[2589]: I0805 22:32:29.389322    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f49d7d0d334c18c22d8b4d9086c9ace-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4f49d7d0d334c18c22d8b4d9086c9ace\") " pod="kube-system/kube-apiserver-localhost"
Aug  5 22:32:29.389404 kubelet[2589]: I0805 22:32:29.389350    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug  5 22:32:29.389811 kubelet[2589]: I0805 22:32:29.389426    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug  5 22:32:29.389811 kubelet[2589]: I0805 22:32:29.389476    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b0306f30b5bc847ed1d56b34a56bbaf-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3b0306f30b5bc847ed1d56b34a56bbaf\") " pod="kube-system/kube-scheduler-localhost"
Aug  5 22:32:29.389811 kubelet[2589]: I0805 22:32:29.389498    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug  5 22:32:29.389811 kubelet[2589]: I0805 22:32:29.389520    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug  5 22:32:29.389811 kubelet[2589]: I0805 22:32:29.389540    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug  5 22:32:29.511971 kubelet[2589]: E0805 22:32:29.511924    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:29.516970 kubelet[2589]: E0805 22:32:29.516926    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:29.517183 kubelet[2589]: E0805 22:32:29.517153    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:30.078654 kubelet[2589]: I0805 22:32:30.078536    2589 apiserver.go:52] "Watching apiserver"
Aug  5 22:32:30.088975 kubelet[2589]: I0805 22:32:30.088901    2589 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Aug  5 22:32:30.122460 kubelet[2589]: E0805 22:32:30.122409    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:30.124603 kubelet[2589]: E0805 22:32:30.124531    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:30.126098 kubelet[2589]: E0805 22:32:30.126072    2589 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Aug  5 22:32:30.129161 kubelet[2589]: E0805 22:32:30.126620    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:30.144478 kubelet[2589]: I0805 22:32:30.144396    2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.1443423369999999 podStartE2EDuration="1.144342337s" podCreationTimestamp="2024-08-05 22:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:32:30.144183487 +0000 UTC m=+1.145054761" watchObservedRunningTime="2024-08-05 22:32:30.144342337 +0000 UTC m=+1.145213611"
Aug  5 22:32:30.160209 kubelet[2589]: I0805 22:32:30.160107    2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.160080449 podStartE2EDuration="1.160080449s" podCreationTimestamp="2024-08-05 22:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:32:30.152362955 +0000 UTC m=+1.153234239" watchObservedRunningTime="2024-08-05 22:32:30.160080449 +0000 UTC m=+1.160951733"
Aug  5 22:32:30.179151 kubelet[2589]: I0805 22:32:30.176371    2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.1763449559999999 podStartE2EDuration="1.176344956s" podCreationTimestamp="2024-08-05 22:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:32:30.160447052 +0000 UTC m=+1.161318336" watchObservedRunningTime="2024-08-05 22:32:30.176344956 +0000 UTC m=+1.177216230"
Aug  5 22:32:31.123804 kubelet[2589]: E0805 22:32:31.123759    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:31.124426 kubelet[2589]: E0805 22:32:31.123909    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:33.252093 kubelet[2589]: E0805 22:32:33.252049    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:34.129017 kubelet[2589]: E0805 22:32:34.128978    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:34.316818 sudo[1641]: pam_unix(sudo:session): session closed for user root
Aug  5 22:32:34.322104 sshd[1638]: pam_unix(sshd:session): session closed for user core
Aug  5 22:32:34.326853 systemd[1]: sshd@8-10.0.0.112:22-10.0.0.1:56610.service: Deactivated successfully.
Aug  5 22:32:34.329471 systemd[1]: session-9.scope: Deactivated successfully.
Aug  5 22:32:34.329714 systemd[1]: session-9.scope: Consumed 5.391s CPU time, 143.3M memory peak, 0B memory swap peak.
Aug  5 22:32:34.330235 systemd-logind[1428]: Session 9 logged out. Waiting for processes to exit.
Aug  5 22:32:34.331261 systemd-logind[1428]: Removed session 9.
Aug  5 22:32:39.283517 kubelet[2589]: E0805 22:32:39.283149    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:40.139542 kubelet[2589]: E0805 22:32:40.138930    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:40.744316 kubelet[2589]: E0805 22:32:40.744272    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:41.960529 kubelet[2589]: I0805 22:32:41.960461    2589 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug  5 22:32:41.961099 containerd[1444]: time="2024-08-05T22:32:41.961021259Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug  5 22:32:41.961717 kubelet[2589]: I0805 22:32:41.961295    2589 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug  5 22:32:42.902747 kubelet[2589]: I0805 22:32:42.902698    2589 topology_manager.go:215] "Topology Admit Handler" podUID="5e0bb9f3-b162-4c70-b6a5-e99d047c3bca" podNamespace="kube-system" podName="kube-proxy-9nrk5"
Aug  5 22:32:42.910419 systemd[1]: Created slice kubepods-besteffort-pod5e0bb9f3_b162_4c70_b6a5_e99d047c3bca.slice - libcontainer container kubepods-besteffort-pod5e0bb9f3_b162_4c70_b6a5_e99d047c3bca.slice.
Aug  5 22:32:42.971699 kubelet[2589]: I0805 22:32:42.971631    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e0bb9f3-b162-4c70-b6a5-e99d047c3bca-kube-proxy\") pod \"kube-proxy-9nrk5\" (UID: \"5e0bb9f3-b162-4c70-b6a5-e99d047c3bca\") " pod="kube-system/kube-proxy-9nrk5"
Aug  5 22:32:42.971699 kubelet[2589]: I0805 22:32:42.971682    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e0bb9f3-b162-4c70-b6a5-e99d047c3bca-xtables-lock\") pod \"kube-proxy-9nrk5\" (UID: \"5e0bb9f3-b162-4c70-b6a5-e99d047c3bca\") " pod="kube-system/kube-proxy-9nrk5"
Aug  5 22:32:42.971699 kubelet[2589]: I0805 22:32:42.971702    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e0bb9f3-b162-4c70-b6a5-e99d047c3bca-lib-modules\") pod \"kube-proxy-9nrk5\" (UID: \"5e0bb9f3-b162-4c70-b6a5-e99d047c3bca\") " pod="kube-system/kube-proxy-9nrk5"
Aug  5 22:32:42.972340 kubelet[2589]: I0805 22:32:42.971726    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6ftw\" (UniqueName: \"kubernetes.io/projected/5e0bb9f3-b162-4c70-b6a5-e99d047c3bca-kube-api-access-b6ftw\") pod \"kube-proxy-9nrk5\" (UID: \"5e0bb9f3-b162-4c70-b6a5-e99d047c3bca\") " pod="kube-system/kube-proxy-9nrk5"
Aug  5 22:32:43.225328 kubelet[2589]: E0805 22:32:43.225156    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:43.226103 containerd[1444]: time="2024-08-05T22:32:43.226045305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9nrk5,Uid:5e0bb9f3-b162-4c70-b6a5-e99d047c3bca,Namespace:kube-system,Attempt:0,}"
Aug  5 22:32:43.508319 kubelet[2589]: I0805 22:32:43.507620    2589 topology_manager.go:215] "Topology Admit Handler" podUID="aa0d1d77-ec06-48f7-9fe0-d7d5619dbb83" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-x7dgj"
Aug  5 22:32:43.516208 systemd[1]: Created slice kubepods-besteffort-podaa0d1d77_ec06_48f7_9fe0_d7d5619dbb83.slice - libcontainer container kubepods-besteffort-podaa0d1d77_ec06_48f7_9fe0_d7d5619dbb83.slice.
Aug  5 22:32:43.575850 kubelet[2589]: I0805 22:32:43.575759    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aa0d1d77-ec06-48f7-9fe0-d7d5619dbb83-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-x7dgj\" (UID: \"aa0d1d77-ec06-48f7-9fe0-d7d5619dbb83\") " pod="tigera-operator/tigera-operator-76ff79f7fd-x7dgj"
Aug  5 22:32:43.575850 kubelet[2589]: I0805 22:32:43.575812    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rfhv\" (UniqueName: \"kubernetes.io/projected/aa0d1d77-ec06-48f7-9fe0-d7d5619dbb83-kube-api-access-2rfhv\") pod \"tigera-operator-76ff79f7fd-x7dgj\" (UID: \"aa0d1d77-ec06-48f7-9fe0-d7d5619dbb83\") " pod="tigera-operator/tigera-operator-76ff79f7fd-x7dgj"
Aug  5 22:32:43.576729 containerd[1444]: time="2024-08-05T22:32:43.576259589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug  5 22:32:43.576729 containerd[1444]: time="2024-08-05T22:32:43.576444638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:32:43.576729 containerd[1444]: time="2024-08-05T22:32:43.576478511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug  5 22:32:43.576729 containerd[1444]: time="2024-08-05T22:32:43.576496956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:32:43.604412 systemd[1]: Started cri-containerd-db5b5f0f72417830b7ca3c7fe7bf785e21bf4c769c7086f0fd9aed026dee0277.scope - libcontainer container db5b5f0f72417830b7ca3c7fe7bf785e21bf4c769c7086f0fd9aed026dee0277.
Aug  5 22:32:43.630243 containerd[1444]: time="2024-08-05T22:32:43.630192158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9nrk5,Uid:5e0bb9f3-b162-4c70-b6a5-e99d047c3bca,Namespace:kube-system,Attempt:0,} returns sandbox id \"db5b5f0f72417830b7ca3c7fe7bf785e21bf4c769c7086f0fd9aed026dee0277\""
Aug  5 22:32:43.631323 kubelet[2589]: E0805 22:32:43.631264    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:43.633584 containerd[1444]: time="2024-08-05T22:32:43.633533447Z" level=info msg="CreateContainer within sandbox \"db5b5f0f72417830b7ca3c7fe7bf785e21bf4c769c7086f0fd9aed026dee0277\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug  5 22:32:43.819867 containerd[1444]: time="2024-08-05T22:32:43.819737079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-x7dgj,Uid:aa0d1d77-ec06-48f7-9fe0-d7d5619dbb83,Namespace:tigera-operator,Attempt:0,}"
Aug  5 22:32:43.983876 containerd[1444]: time="2024-08-05T22:32:43.983779080Z" level=info msg="CreateContainer within sandbox \"db5b5f0f72417830b7ca3c7fe7bf785e21bf4c769c7086f0fd9aed026dee0277\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b708e96dcfbd856f0b8d86e84aa417e55f8bf5a220591847c89577168c45be8d\""
Aug  5 22:32:43.984628 containerd[1444]: time="2024-08-05T22:32:43.984571792Z" level=info msg="StartContainer for \"b708e96dcfbd856f0b8d86e84aa417e55f8bf5a220591847c89577168c45be8d\""
Aug  5 22:32:44.015769 systemd[1]: Started cri-containerd-b708e96dcfbd856f0b8d86e84aa417e55f8bf5a220591847c89577168c45be8d.scope - libcontainer container b708e96dcfbd856f0b8d86e84aa417e55f8bf5a220591847c89577168c45be8d.
Aug  5 22:32:44.023654 containerd[1444]: time="2024-08-05T22:32:44.023497616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug  5 22:32:44.023654 containerd[1444]: time="2024-08-05T22:32:44.023555044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:32:44.023654 containerd[1444]: time="2024-08-05T22:32:44.023574792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug  5 22:32:44.023654 containerd[1444]: time="2024-08-05T22:32:44.023591583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:32:44.050307 systemd[1]: Started cri-containerd-dad3351db75483d0966ef2dcc8323d6ae4c392908edf2945f90d46866b136656.scope - libcontainer container dad3351db75483d0966ef2dcc8323d6ae4c392908edf2945f90d46866b136656.
Aug  5 22:32:44.232174 containerd[1444]: time="2024-08-05T22:32:44.232087603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-x7dgj,Uid:aa0d1d77-ec06-48f7-9fe0-d7d5619dbb83,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dad3351db75483d0966ef2dcc8323d6ae4c392908edf2945f90d46866b136656\""
Aug  5 22:32:44.232660 containerd[1444]: time="2024-08-05T22:32:44.232096250Z" level=info msg="StartContainer for \"b708e96dcfbd856f0b8d86e84aa417e55f8bf5a220591847c89577168c45be8d\" returns successfully"
Aug  5 22:32:44.268898 containerd[1444]: time="2024-08-05T22:32:44.268267283Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Aug  5 22:32:45.237648 kubelet[2589]: E0805 22:32:45.237600    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:45.449544 kubelet[2589]: I0805 22:32:45.449476    2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9nrk5" podStartSLOduration=3.449454479 podStartE2EDuration="3.449454479s" podCreationTimestamp="2024-08-05 22:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:32:45.421820581 +0000 UTC m=+16.422691855" watchObservedRunningTime="2024-08-05 22:32:45.449454479 +0000 UTC m=+16.450325753"
Aug  5 22:32:46.998669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount519556334.mount: Deactivated successfully.
Aug  5 22:32:47.307032 containerd[1444]: time="2024-08-05T22:32:47.306871165Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:47.307923 containerd[1444]: time="2024-08-05T22:32:47.307856828Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076072"
Aug  5 22:32:47.308959 containerd[1444]: time="2024-08-05T22:32:47.308925177Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:47.311675 containerd[1444]: time="2024-08-05T22:32:47.311636517Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:47.312501 containerd[1444]: time="2024-08-05T22:32:47.312461227Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 3.044146415s"
Aug  5 22:32:47.312501 containerd[1444]: time="2024-08-05T22:32:47.312498176Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Aug  5 22:32:47.314525 containerd[1444]: time="2024-08-05T22:32:47.314475906Z" level=info msg="CreateContainer within sandbox \"dad3351db75483d0966ef2dcc8323d6ae4c392908edf2945f90d46866b136656\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug  5 22:32:47.327544 containerd[1444]: time="2024-08-05T22:32:47.327476365Z" level=info msg="CreateContainer within sandbox \"dad3351db75483d0966ef2dcc8323d6ae4c392908edf2945f90d46866b136656\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7844c1ee39ff7615ca85ad5767d1ca8b881d99a0e4a984871f3a4ffc8ef91d1f\""
Aug  5 22:32:47.329078 containerd[1444]: time="2024-08-05T22:32:47.327999799Z" level=info msg="StartContainer for \"7844c1ee39ff7615ca85ad5767d1ca8b881d99a0e4a984871f3a4ffc8ef91d1f\""
Aug  5 22:32:47.363286 systemd[1]: Started cri-containerd-7844c1ee39ff7615ca85ad5767d1ca8b881d99a0e4a984871f3a4ffc8ef91d1f.scope - libcontainer container 7844c1ee39ff7615ca85ad5767d1ca8b881d99a0e4a984871f3a4ffc8ef91d1f.
Aug  5 22:32:47.395713 containerd[1444]: time="2024-08-05T22:32:47.395658358Z" level=info msg="StartContainer for \"7844c1ee39ff7615ca85ad5767d1ca8b881d99a0e4a984871f3a4ffc8ef91d1f\" returns successfully"
Aug  5 22:32:48.252249 kubelet[2589]: I0805 22:32:48.252002    2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-x7dgj" podStartSLOduration=2.20537247 podStartE2EDuration="5.251956871s" podCreationTimestamp="2024-08-05 22:32:43 +0000 UTC" firstStartedPulling="2024-08-05 22:32:44.266719581 +0000 UTC m=+15.267590855" lastFinishedPulling="2024-08-05 22:32:47.313303982 +0000 UTC m=+18.314175256" observedRunningTime="2024-08-05 22:32:48.251714644 +0000 UTC m=+19.252585919" watchObservedRunningTime="2024-08-05 22:32:48.251956871 +0000 UTC m=+19.252828145"
Aug  5 22:32:50.108683 kubelet[2589]: I0805 22:32:50.108616    2589 topology_manager.go:215] "Topology Admit Handler" podUID="8e61adb5-138a-43d9-82e6-a0ecf9539b21" podNamespace="calico-system" podName="calico-typha-5f897f4664-5ppxw"
Aug  5 22:32:50.120229 kubelet[2589]: I0805 22:32:50.120173    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e61adb5-138a-43d9-82e6-a0ecf9539b21-tigera-ca-bundle\") pod \"calico-typha-5f897f4664-5ppxw\" (UID: \"8e61adb5-138a-43d9-82e6-a0ecf9539b21\") " pod="calico-system/calico-typha-5f897f4664-5ppxw"
Aug  5 22:32:50.120229 kubelet[2589]: I0805 22:32:50.120236    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8e61adb5-138a-43d9-82e6-a0ecf9539b21-typha-certs\") pod \"calico-typha-5f897f4664-5ppxw\" (UID: \"8e61adb5-138a-43d9-82e6-a0ecf9539b21\") " pod="calico-system/calico-typha-5f897f4664-5ppxw"
Aug  5 22:32:50.120452 kubelet[2589]: I0805 22:32:50.120262    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p89fw\" (UniqueName: \"kubernetes.io/projected/8e61adb5-138a-43d9-82e6-a0ecf9539b21-kube-api-access-p89fw\") pod \"calico-typha-5f897f4664-5ppxw\" (UID: \"8e61adb5-138a-43d9-82e6-a0ecf9539b21\") " pod="calico-system/calico-typha-5f897f4664-5ppxw"
Aug  5 22:32:50.122621 systemd[1]: Created slice kubepods-besteffort-pod8e61adb5_138a_43d9_82e6_a0ecf9539b21.slice - libcontainer container kubepods-besteffort-pod8e61adb5_138a_43d9_82e6_a0ecf9539b21.slice.
Aug  5 22:32:50.167752 kubelet[2589]: I0805 22:32:50.167689    2589 topology_manager.go:215] "Topology Admit Handler" podUID="218e11a4-8074-4b94-a0a6-7d840f489e2e" podNamespace="calico-system" podName="calico-node-jg7hq"
Aug  5 22:32:50.175858 systemd[1]: Created slice kubepods-besteffort-pod218e11a4_8074_4b94_a0a6_7d840f489e2e.slice - libcontainer container kubepods-besteffort-pod218e11a4_8074_4b94_a0a6_7d840f489e2e.slice.
Aug  5 22:32:50.278369 kubelet[2589]: I0805 22:32:50.278310    2589 topology_manager.go:215] "Topology Admit Handler" podUID="39f5cd8b-f47c-400b-a523-7412e6e8f022" podNamespace="calico-system" podName="csi-node-driver-lnnbx"
Aug  5 22:32:50.278985 kubelet[2589]: E0805 22:32:50.278696    2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnnbx" podUID="39f5cd8b-f47c-400b-a523-7412e6e8f022"
Aug  5 22:32:50.321850 kubelet[2589]: I0805 22:32:50.321373    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39f5cd8b-f47c-400b-a523-7412e6e8f022-kubelet-dir\") pod \"csi-node-driver-lnnbx\" (UID: \"39f5cd8b-f47c-400b-a523-7412e6e8f022\") " pod="calico-system/csi-node-driver-lnnbx"
Aug  5 22:32:50.321850 kubelet[2589]: I0805 22:32:50.321434    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qdxl\" (UniqueName: \"kubernetes.io/projected/39f5cd8b-f47c-400b-a523-7412e6e8f022-kube-api-access-8qdxl\") pod \"csi-node-driver-lnnbx\" (UID: \"39f5cd8b-f47c-400b-a523-7412e6e8f022\") " pod="calico-system/csi-node-driver-lnnbx"
Aug  5 22:32:50.321850 kubelet[2589]: I0805 22:32:50.321459    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/39f5cd8b-f47c-400b-a523-7412e6e8f022-socket-dir\") pod \"csi-node-driver-lnnbx\" (UID: \"39f5cd8b-f47c-400b-a523-7412e6e8f022\") " pod="calico-system/csi-node-driver-lnnbx"
Aug  5 22:32:50.321850 kubelet[2589]: I0805 22:32:50.321481    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/218e11a4-8074-4b94-a0a6-7d840f489e2e-flexvol-driver-host\") pod \"calico-node-jg7hq\" (UID: \"218e11a4-8074-4b94-a0a6-7d840f489e2e\") " pod="calico-system/calico-node-jg7hq"
Aug  5 22:32:50.321850 kubelet[2589]: I0805 22:32:50.321511    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/218e11a4-8074-4b94-a0a6-7d840f489e2e-var-lib-calico\") pod \"calico-node-jg7hq\" (UID: \"218e11a4-8074-4b94-a0a6-7d840f489e2e\") " pod="calico-system/calico-node-jg7hq"
Aug  5 22:32:50.322141 kubelet[2589]: I0805 22:32:50.321531    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/218e11a4-8074-4b94-a0a6-7d840f489e2e-node-certs\") pod \"calico-node-jg7hq\" (UID: \"218e11a4-8074-4b94-a0a6-7d840f489e2e\") " pod="calico-system/calico-node-jg7hq"
Aug  5 22:32:50.322141 kubelet[2589]: I0805 22:32:50.321556    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfrwz\" (UniqueName: \"kubernetes.io/projected/218e11a4-8074-4b94-a0a6-7d840f489e2e-kube-api-access-wfrwz\") pod \"calico-node-jg7hq\" (UID: \"218e11a4-8074-4b94-a0a6-7d840f489e2e\") " pod="calico-system/calico-node-jg7hq"
Aug  5 22:32:50.322141 kubelet[2589]: I0805 22:32:50.321572    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/218e11a4-8074-4b94-a0a6-7d840f489e2e-cni-net-dir\") pod \"calico-node-jg7hq\" (UID: \"218e11a4-8074-4b94-a0a6-7d840f489e2e\") " pod="calico-system/calico-node-jg7hq"
Aug  5 22:32:50.322141 kubelet[2589]: I0805 22:32:50.321588    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/218e11a4-8074-4b94-a0a6-7d840f489e2e-var-run-calico\") pod \"calico-node-jg7hq\" (UID: \"218e11a4-8074-4b94-a0a6-7d840f489e2e\") " pod="calico-system/calico-node-jg7hq"
Aug  5 22:32:50.322141 kubelet[2589]: I0805 22:32:50.321603    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/218e11a4-8074-4b94-a0a6-7d840f489e2e-cni-bin-dir\") pod \"calico-node-jg7hq\" (UID: \"218e11a4-8074-4b94-a0a6-7d840f489e2e\") " pod="calico-system/calico-node-jg7hq"
Aug  5 22:32:50.322333 kubelet[2589]: I0805 22:32:50.321617    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/39f5cd8b-f47c-400b-a523-7412e6e8f022-registration-dir\") pod \"csi-node-driver-lnnbx\" (UID: \"39f5cd8b-f47c-400b-a523-7412e6e8f022\") " pod="calico-system/csi-node-driver-lnnbx"
Aug  5 22:32:50.322333 kubelet[2589]: I0805 22:32:50.321631    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/218e11a4-8074-4b94-a0a6-7d840f489e2e-lib-modules\") pod \"calico-node-jg7hq\" (UID: \"218e11a4-8074-4b94-a0a6-7d840f489e2e\") " pod="calico-system/calico-node-jg7hq"
Aug  5 22:32:50.322333 kubelet[2589]: I0805 22:32:50.321645    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/218e11a4-8074-4b94-a0a6-7d840f489e2e-xtables-lock\") pod \"calico-node-jg7hq\" (UID: \"218e11a4-8074-4b94-a0a6-7d840f489e2e\") " pod="calico-system/calico-node-jg7hq"
Aug  5 22:32:50.322333 kubelet[2589]: I0805 22:32:50.321665    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/218e11a4-8074-4b94-a0a6-7d840f489e2e-cni-log-dir\") pod \"calico-node-jg7hq\" (UID: \"218e11a4-8074-4b94-a0a6-7d840f489e2e\") " pod="calico-system/calico-node-jg7hq"
Aug  5 22:32:50.322333 kubelet[2589]: I0805 22:32:50.321682    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/218e11a4-8074-4b94-a0a6-7d840f489e2e-policysync\") pod \"calico-node-jg7hq\" (UID: \"218e11a4-8074-4b94-a0a6-7d840f489e2e\") " pod="calico-system/calico-node-jg7hq"
Aug  5 22:32:50.322479 kubelet[2589]: I0805 22:32:50.321696    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/218e11a4-8074-4b94-a0a6-7d840f489e2e-tigera-ca-bundle\") pod \"calico-node-jg7hq\" (UID: \"218e11a4-8074-4b94-a0a6-7d840f489e2e\") " pod="calico-system/calico-node-jg7hq"
Aug  5 22:32:50.322479 kubelet[2589]: I0805 22:32:50.321711    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/39f5cd8b-f47c-400b-a523-7412e6e8f022-varrun\") pod \"csi-node-driver-lnnbx\" (UID: \"39f5cd8b-f47c-400b-a523-7412e6e8f022\") " pod="calico-system/csi-node-driver-lnnbx"
Aug  5 22:32:50.428166 kubelet[2589]: E0805 22:32:50.425970    2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug  5 22:32:50.428166 kubelet[2589]: W0805 22:32:50.425996    2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug  5 22:32:50.428166 kubelet[2589]: E0805 22:32:50.426030    2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug  5 22:32:50.428166 kubelet[2589]: E0805 22:32:50.426930    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:50.429832 containerd[1444]: time="2024-08-05T22:32:50.429775370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f897f4664-5ppxw,Uid:8e61adb5-138a-43d9-82e6-a0ecf9539b21,Namespace:calico-system,Attempt:0,}"
Aug  5 22:32:50.439448 kubelet[2589]: E0805 22:32:50.439402    2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug  5 22:32:50.439448 kubelet[2589]: W0805 22:32:50.439436    2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug  5 22:32:50.439613 kubelet[2589]: E0805 22:32:50.439465    2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug  5 22:32:50.451142 kubelet[2589]: E0805 22:32:50.449216    2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug  5 22:32:50.451142 kubelet[2589]: W0805 22:32:50.449241    2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug  5 22:32:50.451142 kubelet[2589]: E0805 22:32:50.449264    2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug  5 22:32:50.456138 kubelet[2589]: E0805 22:32:50.453608    2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug  5 22:32:50.456138 kubelet[2589]: W0805 22:32:50.453629    2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug  5 22:32:50.456138 kubelet[2589]: E0805 22:32:50.453652    2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug  5 22:32:50.479050 kubelet[2589]: E0805 22:32:50.479007    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:50.479746 containerd[1444]: time="2024-08-05T22:32:50.479679253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jg7hq,Uid:218e11a4-8074-4b94-a0a6-7d840f489e2e,Namespace:calico-system,Attempt:0,}"
Aug  5 22:32:50.564642 containerd[1444]: time="2024-08-05T22:32:50.564039292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug  5 22:32:50.564642 containerd[1444]: time="2024-08-05T22:32:50.564105126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:32:50.564642 containerd[1444]: time="2024-08-05T22:32:50.564154889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug  5 22:32:50.564642 containerd[1444]: time="2024-08-05T22:32:50.564168124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:32:50.565210 containerd[1444]: time="2024-08-05T22:32:50.564992193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug  5 22:32:50.565210 containerd[1444]: time="2024-08-05T22:32:50.565071202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:32:50.565210 containerd[1444]: time="2024-08-05T22:32:50.565108722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug  5 22:32:50.565210 containerd[1444]: time="2024-08-05T22:32:50.565145512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:32:50.602429 systemd[1]: Started cri-containerd-2854a116ea0009633822abd3195d5755747da3155dd1f5a263542e65f25d72cb.scope - libcontainer container 2854a116ea0009633822abd3195d5755747da3155dd1f5a263542e65f25d72cb.
Aug  5 22:32:50.606376 systemd[1]: Started cri-containerd-bd98587a22bc5eb42d131a10d78555e3577f9f8d50795dfe1a1014efdea6f084.scope - libcontainer container bd98587a22bc5eb42d131a10d78555e3577f9f8d50795dfe1a1014efdea6f084.
Aug  5 22:32:50.637358 containerd[1444]: time="2024-08-05T22:32:50.637282105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jg7hq,Uid:218e11a4-8074-4b94-a0a6-7d840f489e2e,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd98587a22bc5eb42d131a10d78555e3577f9f8d50795dfe1a1014efdea6f084\""
Aug  5 22:32:50.641245 kubelet[2589]: E0805 22:32:50.641213    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:50.643815 containerd[1444]: time="2024-08-05T22:32:50.643779699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\""
Aug  5 22:32:50.657214 containerd[1444]: time="2024-08-05T22:32:50.657150772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f897f4664-5ppxw,Uid:8e61adb5-138a-43d9-82e6-a0ecf9539b21,Namespace:calico-system,Attempt:0,} returns sandbox id \"2854a116ea0009633822abd3195d5755747da3155dd1f5a263542e65f25d72cb\""
Aug  5 22:32:50.658243 kubelet[2589]: E0805 22:32:50.658134    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:52.105149 kubelet[2589]: E0805 22:32:52.102576    2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnnbx" podUID="39f5cd8b-f47c-400b-a523-7412e6e8f022"
Aug  5 22:32:53.193901 containerd[1444]: time="2024-08-05T22:32:53.193815632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:53.194628 containerd[1444]: time="2024-08-05T22:32:53.194572063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568"
Aug  5 22:32:53.195803 containerd[1444]: time="2024-08-05T22:32:53.195762500Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:53.198159 containerd[1444]: time="2024-08-05T22:32:53.198088580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:53.198783 containerd[1444]: time="2024-08-05T22:32:53.198751856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 2.554932053s"
Aug  5 22:32:53.198821 containerd[1444]: time="2024-08-05T22:32:53.198780069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\""
Aug  5 22:32:53.200223 containerd[1444]: time="2024-08-05T22:32:53.200180531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\""
Aug  5 22:32:53.201356 containerd[1444]: time="2024-08-05T22:32:53.201322246Z" level=info msg="CreateContainer within sandbox \"bd98587a22bc5eb42d131a10d78555e3577f9f8d50795dfe1a1014efdea6f084\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug  5 22:32:53.229308 containerd[1444]: time="2024-08-05T22:32:53.229237446Z" level=info msg="CreateContainer within sandbox \"bd98587a22bc5eb42d131a10d78555e3577f9f8d50795dfe1a1014efdea6f084\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b7c0b108b1347078b00fb9722a85693a572a83e56fb96ff3aa9aed5aaba46c5b\""
Aug  5 22:32:53.230410 containerd[1444]: time="2024-08-05T22:32:53.230360697Z" level=info msg="StartContainer for \"b7c0b108b1347078b00fb9722a85693a572a83e56fb96ff3aa9aed5aaba46c5b\""
Aug  5 22:32:53.271441 systemd[1]: Started cri-containerd-b7c0b108b1347078b00fb9722a85693a572a83e56fb96ff3aa9aed5aaba46c5b.scope - libcontainer container b7c0b108b1347078b00fb9722a85693a572a83e56fb96ff3aa9aed5aaba46c5b.
Aug  5 22:32:53.322481 systemd[1]: cri-containerd-b7c0b108b1347078b00fb9722a85693a572a83e56fb96ff3aa9aed5aaba46c5b.scope: Deactivated successfully.
Aug  5 22:32:53.515755 containerd[1444]: time="2024-08-05T22:32:53.515531689Z" level=info msg="StartContainer for \"b7c0b108b1347078b00fb9722a85693a572a83e56fb96ff3aa9aed5aaba46c5b\" returns successfully"
Aug  5 22:32:53.537348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7c0b108b1347078b00fb9722a85693a572a83e56fb96ff3aa9aed5aaba46c5b-rootfs.mount: Deactivated successfully.
Aug  5 22:32:53.603967 containerd[1444]: time="2024-08-05T22:32:53.603886451Z" level=info msg="shim disconnected" id=b7c0b108b1347078b00fb9722a85693a572a83e56fb96ff3aa9aed5aaba46c5b namespace=k8s.io
Aug  5 22:32:53.603967 containerd[1444]: time="2024-08-05T22:32:53.603949850Z" level=warning msg="cleaning up after shim disconnected" id=b7c0b108b1347078b00fb9722a85693a572a83e56fb96ff3aa9aed5aaba46c5b namespace=k8s.io
Aug  5 22:32:53.603967 containerd[1444]: time="2024-08-05T22:32:53.603960690Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug  5 22:32:54.103070 kubelet[2589]: E0805 22:32:54.103015    2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnnbx" podUID="39f5cd8b-f47c-400b-a523-7412e6e8f022"
Aug  5 22:32:54.261863 kubelet[2589]: E0805 22:32:54.261825    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:55.160015 containerd[1444]: time="2024-08-05T22:32:55.159929024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:55.160995 containerd[1444]: time="2024-08-05T22:32:55.160937568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030"
Aug  5 22:32:55.162251 containerd[1444]: time="2024-08-05T22:32:55.162227261Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:55.167076 containerd[1444]: time="2024-08-05T22:32:55.167016247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:55.167970 containerd[1444]: time="2024-08-05T22:32:55.167904065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 1.967682065s"
Aug  5 22:32:55.167970 containerd[1444]: time="2024-08-05T22:32:55.167959329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\""
Aug  5 22:32:55.170171 containerd[1444]: time="2024-08-05T22:32:55.170080112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Aug  5 22:32:55.191660 containerd[1444]: time="2024-08-05T22:32:55.191597763Z" level=info msg="CreateContainer within sandbox \"2854a116ea0009633822abd3195d5755747da3155dd1f5a263542e65f25d72cb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug  5 22:32:56.103209 kubelet[2589]: E0805 22:32:56.103115    2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnnbx" podUID="39f5cd8b-f47c-400b-a523-7412e6e8f022"
Aug  5 22:32:56.226953 containerd[1444]: time="2024-08-05T22:32:56.226863200Z" level=info msg="CreateContainer within sandbox \"2854a116ea0009633822abd3195d5755747da3155dd1f5a263542e65f25d72cb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1b9b03b467654a3c6a399f777c8b5e78d452cb23760e964d93a5124228a500b6\""
Aug  5 22:32:56.227794 containerd[1444]: time="2024-08-05T22:32:56.227755577Z" level=info msg="StartContainer for \"1b9b03b467654a3c6a399f777c8b5e78d452cb23760e964d93a5124228a500b6\""
Aug  5 22:32:56.267300 systemd[1]: Started cri-containerd-1b9b03b467654a3c6a399f777c8b5e78d452cb23760e964d93a5124228a500b6.scope - libcontainer container 1b9b03b467654a3c6a399f777c8b5e78d452cb23760e964d93a5124228a500b6.
Aug  5 22:32:56.329497 containerd[1444]: time="2024-08-05T22:32:56.329168019Z" level=info msg="StartContainer for \"1b9b03b467654a3c6a399f777c8b5e78d452cb23760e964d93a5124228a500b6\" returns successfully"
Aug  5 22:32:57.277910 kubelet[2589]: E0805 22:32:57.277835    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:57.298678 kubelet[2589]: I0805 22:32:57.298378    2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f897f4664-5ppxw" podStartSLOduration=2.788091868 podStartE2EDuration="7.298353405s" podCreationTimestamp="2024-08-05 22:32:50 +0000 UTC" firstStartedPulling="2024-08-05 22:32:50.658703741 +0000 UTC m=+21.659575015" lastFinishedPulling="2024-08-05 22:32:55.168965278 +0000 UTC m=+26.169836552" observedRunningTime="2024-08-05 22:32:57.298091243 +0000 UTC m=+28.298962527" watchObservedRunningTime="2024-08-05 22:32:57.298353405 +0000 UTC m=+28.299224679"
Aug  5 22:32:58.102833 kubelet[2589]: E0805 22:32:58.102717    2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnnbx" podUID="39f5cd8b-f47c-400b-a523-7412e6e8f022"
Aug  5 22:32:58.578504 kubelet[2589]: I0805 22:32:58.578465    2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug  5 22:32:58.580345 kubelet[2589]: E0805 22:32:58.580326    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:59.580428 kubelet[2589]: E0805 22:32:59.580375    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:32:59.849954 containerd[1444]: time="2024-08-05T22:32:59.849704740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:59.851720 containerd[1444]: time="2024-08-05T22:32:59.851630727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850"
Aug  5 22:32:59.853155 containerd[1444]: time="2024-08-05T22:32:59.853111657Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:59.855910 containerd[1444]: time="2024-08-05T22:32:59.855847174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:32:59.856662 containerd[1444]: time="2024-08-05T22:32:59.856582425Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 4.686397235s"
Aug  5 22:32:59.856662 containerd[1444]: time="2024-08-05T22:32:59.856638420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\""
Aug  5 22:32:59.859646 containerd[1444]: time="2024-08-05T22:32:59.859612695Z" level=info msg="CreateContainer within sandbox \"bd98587a22bc5eb42d131a10d78555e3577f9f8d50795dfe1a1014efdea6f084\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug  5 22:32:59.880437 containerd[1444]: time="2024-08-05T22:32:59.880383908Z" level=info msg="CreateContainer within sandbox \"bd98587a22bc5eb42d131a10d78555e3577f9f8d50795dfe1a1014efdea6f084\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"90453ee3e3302e7d75499b0a91be8e1fef7d6a07e812c5596f2cb3f8943dec7e\""
Aug  5 22:32:59.881064 containerd[1444]: time="2024-08-05T22:32:59.881038107Z" level=info msg="StartContainer for \"90453ee3e3302e7d75499b0a91be8e1fef7d6a07e812c5596f2cb3f8943dec7e\""
Aug  5 22:32:59.914290 systemd[1]: Started cri-containerd-90453ee3e3302e7d75499b0a91be8e1fef7d6a07e812c5596f2cb3f8943dec7e.scope - libcontainer container 90453ee3e3302e7d75499b0a91be8e1fef7d6a07e812c5596f2cb3f8943dec7e.
Aug  5 22:32:59.973434 containerd[1444]: time="2024-08-05T22:32:59.973381817Z" level=info msg="StartContainer for \"90453ee3e3302e7d75499b0a91be8e1fef7d6a07e812c5596f2cb3f8943dec7e\" returns successfully"
Aug  5 22:33:00.103165 kubelet[2589]: E0805 22:33:00.102964    2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnnbx" podUID="39f5cd8b-f47c-400b-a523-7412e6e8f022"
Aug  5 22:33:00.583814 kubelet[2589]: E0805 22:33:00.583775    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:00.584502 kubelet[2589]: E0805 22:33:00.584055    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:01.411603 systemd[1]: cri-containerd-90453ee3e3302e7d75499b0a91be8e1fef7d6a07e812c5596f2cb3f8943dec7e.scope: Deactivated successfully.
Aug  5 22:33:01.434764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90453ee3e3302e7d75499b0a91be8e1fef7d6a07e812c5596f2cb3f8943dec7e-rootfs.mount: Deactivated successfully.
Aug  5 22:33:01.468294 kubelet[2589]: I0805 22:33:01.468243    2589 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Aug  5 22:33:01.585357 kubelet[2589]: E0805 22:33:01.585317    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:01.614727 kubelet[2589]: I0805 22:33:01.613541    2589 topology_manager.go:215] "Topology Admit Handler" podUID="89476694-84bf-42c4-a686-21517cd48dc0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-h9546"
Aug  5 22:33:01.621144 kubelet[2589]: I0805 22:33:01.620500    2589 topology_manager.go:215] "Topology Admit Handler" podUID="57fa67d2-700f-4c57-9da6-ee6cbe4fdfef" podNamespace="calico-system" podName="calico-kube-controllers-6668f8dc88-4t4lk"
Aug  5 22:33:01.621144 kubelet[2589]: I0805 22:33:01.620845    2589 topology_manager.go:215] "Topology Admit Handler" podUID="73a44b70-815b-476a-b6da-63de43927fa6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-clgfp"
Aug  5 22:33:01.623378 systemd[1]: Created slice kubepods-burstable-pod89476694_84bf_42c4_a686_21517cd48dc0.slice - libcontainer container kubepods-burstable-pod89476694_84bf_42c4_a686_21517cd48dc0.slice.
Aug  5 22:33:01.629367 systemd[1]: Created slice kubepods-besteffort-pod57fa67d2_700f_4c57_9da6_ee6cbe4fdfef.slice - libcontainer container kubepods-besteffort-pod57fa67d2_700f_4c57_9da6_ee6cbe4fdfef.slice.
Aug  5 22:33:01.634643 systemd[1]: Created slice kubepods-burstable-pod73a44b70_815b_476a_b6da_63de43927fa6.slice - libcontainer container kubepods-burstable-pod73a44b70_815b_476a_b6da_63de43927fa6.slice.
Aug  5 22:33:01.659251 containerd[1444]: time="2024-08-05T22:33:01.659143091Z" level=info msg="shim disconnected" id=90453ee3e3302e7d75499b0a91be8e1fef7d6a07e812c5596f2cb3f8943dec7e namespace=k8s.io
Aug  5 22:33:01.659251 containerd[1444]: time="2024-08-05T22:33:01.659220296Z" level=warning msg="cleaning up after shim disconnected" id=90453ee3e3302e7d75499b0a91be8e1fef7d6a07e812c5596f2cb3f8943dec7e namespace=k8s.io
Aug  5 22:33:01.659251 containerd[1444]: time="2024-08-05T22:33:01.659236326Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug  5 22:33:01.813815 kubelet[2589]: I0805 22:33:01.813667    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts7wx\" (UniqueName: \"kubernetes.io/projected/57fa67d2-700f-4c57-9da6-ee6cbe4fdfef-kube-api-access-ts7wx\") pod \"calico-kube-controllers-6668f8dc88-4t4lk\" (UID: \"57fa67d2-700f-4c57-9da6-ee6cbe4fdfef\") " pod="calico-system/calico-kube-controllers-6668f8dc88-4t4lk"
Aug  5 22:33:01.813815 kubelet[2589]: I0805 22:33:01.813724    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q9b6\" (UniqueName: \"kubernetes.io/projected/73a44b70-815b-476a-b6da-63de43927fa6-kube-api-access-7q9b6\") pod \"coredns-7db6d8ff4d-clgfp\" (UID: \"73a44b70-815b-476a-b6da-63de43927fa6\") " pod="kube-system/coredns-7db6d8ff4d-clgfp"
Aug  5 22:33:01.813815 kubelet[2589]: I0805 22:33:01.813752    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57fa67d2-700f-4c57-9da6-ee6cbe4fdfef-tigera-ca-bundle\") pod \"calico-kube-controllers-6668f8dc88-4t4lk\" (UID: \"57fa67d2-700f-4c57-9da6-ee6cbe4fdfef\") " pod="calico-system/calico-kube-controllers-6668f8dc88-4t4lk"
Aug  5 22:33:01.813815 kubelet[2589]: I0805 22:33:01.813791    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsjvd\" (UniqueName: \"kubernetes.io/projected/89476694-84bf-42c4-a686-21517cd48dc0-kube-api-access-fsjvd\") pod \"coredns-7db6d8ff4d-h9546\" (UID: \"89476694-84bf-42c4-a686-21517cd48dc0\") " pod="kube-system/coredns-7db6d8ff4d-h9546"
Aug  5 22:33:01.814083 kubelet[2589]: I0805 22:33:01.813819    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89476694-84bf-42c4-a686-21517cd48dc0-config-volume\") pod \"coredns-7db6d8ff4d-h9546\" (UID: \"89476694-84bf-42c4-a686-21517cd48dc0\") " pod="kube-system/coredns-7db6d8ff4d-h9546"
Aug  5 22:33:01.814083 kubelet[2589]: I0805 22:33:01.813841    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73a44b70-815b-476a-b6da-63de43927fa6-config-volume\") pod \"coredns-7db6d8ff4d-clgfp\" (UID: \"73a44b70-815b-476a-b6da-63de43927fa6\") " pod="kube-system/coredns-7db6d8ff4d-clgfp"
Aug  5 22:33:01.926464 kubelet[2589]: E0805 22:33:01.926419    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:01.928855 containerd[1444]: time="2024-08-05T22:33:01.928377544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h9546,Uid:89476694-84bf-42c4-a686-21517cd48dc0,Namespace:kube-system,Attempt:0,}"
Aug  5 22:33:01.932444 containerd[1444]: time="2024-08-05T22:33:01.932413200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6668f8dc88-4t4lk,Uid:57fa67d2-700f-4c57-9da6-ee6cbe4fdfef,Namespace:calico-system,Attempt:0,}"
Aug  5 22:33:01.940908 kubelet[2589]: E0805 22:33:01.940855    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:01.941473 containerd[1444]: time="2024-08-05T22:33:01.941441081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-clgfp,Uid:73a44b70-815b-476a-b6da-63de43927fa6,Namespace:kube-system,Attempt:0,}"
Aug  5 22:33:02.089607 containerd[1444]: time="2024-08-05T22:33:02.089458579Z" level=error msg="Failed to destroy network for sandbox \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.090269 containerd[1444]: time="2024-08-05T22:33:02.090223665Z" level=error msg="encountered an error cleaning up failed sandbox \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.090376 containerd[1444]: time="2024-08-05T22:33:02.090291092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h9546,Uid:89476694-84bf-42c4-a686-21517cd48dc0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.090634 kubelet[2589]: E0805 22:33:02.090570    2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.090704 kubelet[2589]: E0805 22:33:02.090658    2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h9546"
Aug  5 22:33:02.090704 kubelet[2589]: E0805 22:33:02.090684    2589 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h9546"
Aug  5 22:33:02.090784 kubelet[2589]: E0805 22:33:02.090730    2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-h9546_kube-system(89476694-84bf-42c4-a686-21517cd48dc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-h9546_kube-system(89476694-84bf-42c4-a686-21517cd48dc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h9546" podUID="89476694-84bf-42c4-a686-21517cd48dc0"
Aug  5 22:33:02.093763 containerd[1444]: time="2024-08-05T22:33:02.093724447Z" level=error msg="Failed to destroy network for sandbox \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.094099 containerd[1444]: time="2024-08-05T22:33:02.094076418Z" level=error msg="encountered an error cleaning up failed sandbox \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.094165 containerd[1444]: time="2024-08-05T22:33:02.094140849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6668f8dc88-4t4lk,Uid:57fa67d2-700f-4c57-9da6-ee6cbe4fdfef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.094325 kubelet[2589]: E0805 22:33:02.094297    2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.094374 kubelet[2589]: E0805 22:33:02.094344    2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6668f8dc88-4t4lk"
Aug  5 22:33:02.094399 kubelet[2589]: E0805 22:33:02.094365    2589 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6668f8dc88-4t4lk"
Aug  5 22:33:02.094441 kubelet[2589]: E0805 22:33:02.094413    2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6668f8dc88-4t4lk_calico-system(57fa67d2-700f-4c57-9da6-ee6cbe4fdfef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6668f8dc88-4t4lk_calico-system(57fa67d2-700f-4c57-9da6-ee6cbe4fdfef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6668f8dc88-4t4lk" podUID="57fa67d2-700f-4c57-9da6-ee6cbe4fdfef"
Aug  5 22:33:02.108868 systemd[1]: Created slice kubepods-besteffort-pod39f5cd8b_f47c_400b_a523_7412e6e8f022.slice - libcontainer container kubepods-besteffort-pod39f5cd8b_f47c_400b_a523_7412e6e8f022.slice.
Aug  5 22:33:02.110868 containerd[1444]: time="2024-08-05T22:33:02.110835681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lnnbx,Uid:39f5cd8b-f47c-400b-a523-7412e6e8f022,Namespace:calico-system,Attempt:0,}"
Aug  5 22:33:02.304236 containerd[1444]: time="2024-08-05T22:33:02.304091990Z" level=error msg="Failed to destroy network for sandbox \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.304764 containerd[1444]: time="2024-08-05T22:33:02.304723456Z" level=error msg="encountered an error cleaning up failed sandbox \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.304904 containerd[1444]: time="2024-08-05T22:33:02.304784601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-clgfp,Uid:73a44b70-815b-476a-b6da-63de43927fa6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.305156 kubelet[2589]: E0805 22:33:02.305096    2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.305248 kubelet[2589]: E0805 22:33:02.305190    2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-clgfp"
Aug  5 22:33:02.305248 kubelet[2589]: E0805 22:33:02.305217    2589 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-clgfp"
Aug  5 22:33:02.305304 kubelet[2589]: E0805 22:33:02.305276    2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-clgfp_kube-system(73a44b70-815b-476a-b6da-63de43927fa6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-clgfp_kube-system(73a44b70-815b-476a-b6da-63de43927fa6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-clgfp" podUID="73a44b70-815b-476a-b6da-63de43927fa6"
Aug  5 22:33:02.435076 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582-shm.mount: Deactivated successfully.
Aug  5 22:33:02.435220 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214-shm.mount: Deactivated successfully.
Aug  5 22:33:02.588367 kubelet[2589]: I0805 22:33:02.588328    2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"
Aug  5 22:33:02.589144 containerd[1444]: time="2024-08-05T22:33:02.589081805Z" level=info msg="StopPodSandbox for \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\""
Aug  5 22:33:02.589744 containerd[1444]: time="2024-08-05T22:33:02.589365548Z" level=info msg="Ensure that sandbox 756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe in task-service has been cleanup successfully"
Aug  5 22:33:02.591826 kubelet[2589]: E0805 22:33:02.591804    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:02.593507 kubelet[2589]: I0805 22:33:02.593486    2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"
Aug  5 22:33:02.593568 containerd[1444]: time="2024-08-05T22:33:02.593480112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\""
Aug  5 22:33:02.594799 kubelet[2589]: I0805 22:33:02.594558    2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"
Aug  5 22:33:02.595138 containerd[1444]: time="2024-08-05T22:33:02.594105977Z" level=info msg="StopPodSandbox for \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\""
Aug  5 22:33:02.595227 containerd[1444]: time="2024-08-05T22:33:02.595197256Z" level=info msg="StopPodSandbox for \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\""
Aug  5 22:33:02.595385 containerd[1444]: time="2024-08-05T22:33:02.595362546Z" level=info msg="Ensure that sandbox faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582 in task-service has been cleanup successfully"
Aug  5 22:33:02.595480 containerd[1444]: time="2024-08-05T22:33:02.595455120Z" level=info msg="Ensure that sandbox 6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214 in task-service has been cleanup successfully"
Aug  5 22:33:02.613483 containerd[1444]: time="2024-08-05T22:33:02.613428282Z" level=error msg="Failed to destroy network for sandbox \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.614770 containerd[1444]: time="2024-08-05T22:33:02.614726119Z" level=error msg="encountered an error cleaning up failed sandbox \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.614942 containerd[1444]: time="2024-08-05T22:33:02.614788406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lnnbx,Uid:39f5cd8b-f47c-400b-a523-7412e6e8f022,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.615779 kubelet[2589]: E0805 22:33:02.615389    2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.615779 kubelet[2589]: E0805 22:33:02.615454    2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lnnbx"
Aug  5 22:33:02.615779 kubelet[2589]: E0805 22:33:02.615476    2589 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lnnbx"
Aug  5 22:33:02.615905 kubelet[2589]: E0805 22:33:02.615523    2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lnnbx_calico-system(39f5cd8b-f47c-400b-a523-7412e6e8f022)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lnnbx_calico-system(39f5cd8b-f47c-400b-a523-7412e6e8f022)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lnnbx" podUID="39f5cd8b-f47c-400b-a523-7412e6e8f022"
Aug  5 22:33:02.616804 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213-shm.mount: Deactivated successfully.
Aug  5 22:33:02.630844 containerd[1444]: time="2024-08-05T22:33:02.630786831Z" level=error msg="StopPodSandbox for \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\" failed" error="failed to destroy network for sandbox \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.631004 containerd[1444]: time="2024-08-05T22:33:02.630786790Z" level=error msg="StopPodSandbox for \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\" failed" error="failed to destroy network for sandbox \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.631183 kubelet[2589]: E0805 22:33:02.631104    2589 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"
Aug  5 22:33:02.631183 kubelet[2589]: E0805 22:33:02.631142    2589 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"
Aug  5 22:33:02.631339 kubelet[2589]: E0805 22:33:02.631188    2589 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"}
Aug  5 22:33:02.631339 kubelet[2589]: E0805 22:33:02.631215    2589 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"}
Aug  5 22:33:02.631339 kubelet[2589]: E0805 22:33:02.631253    2589 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57fa67d2-700f-4c57-9da6-ee6cbe4fdfef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug  5 22:33:02.631339 kubelet[2589]: E0805 22:33:02.631263    2589 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"73a44b70-815b-476a-b6da-63de43927fa6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug  5 22:33:02.631504 kubelet[2589]: E0805 22:33:02.631282    2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57fa67d2-700f-4c57-9da6-ee6cbe4fdfef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6668f8dc88-4t4lk" podUID="57fa67d2-700f-4c57-9da6-ee6cbe4fdfef"
Aug  5 22:33:02.631504 kubelet[2589]: E0805 22:33:02.631291    2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"73a44b70-815b-476a-b6da-63de43927fa6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-clgfp" podUID="73a44b70-815b-476a-b6da-63de43927fa6"
Aug  5 22:33:02.646025 containerd[1444]: time="2024-08-05T22:33:02.645862773Z" level=error msg="StopPodSandbox for \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\" failed" error="failed to destroy network for sandbox \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:02.646520 kubelet[2589]: E0805 22:33:02.646237    2589 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"
Aug  5 22:33:02.646520 kubelet[2589]: E0805 22:33:02.646289    2589 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"}
Aug  5 22:33:02.646520 kubelet[2589]: E0805 22:33:02.646330    2589 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89476694-84bf-42c4-a686-21517cd48dc0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug  5 22:33:02.646520 kubelet[2589]: E0805 22:33:02.646361    2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89476694-84bf-42c4-a686-21517cd48dc0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h9546" podUID="89476694-84bf-42c4-a686-21517cd48dc0"
Aug  5 22:33:02.795046 systemd[1]: Started sshd@9-10.0.0.112:22-10.0.0.1:53578.service - OpenSSH per-connection server daemon (10.0.0.1:53578).
Aug  5 22:33:02.830961 sshd[3472]: Accepted publickey for core from 10.0.0.1 port 53578 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:02.832750 sshd[3472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:02.837094 systemd-logind[1428]: New session 10 of user core.
Aug  5 22:33:02.846225 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug  5 22:33:02.978830 sshd[3472]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:02.983280 systemd[1]: sshd@9-10.0.0.112:22-10.0.0.1:53578.service: Deactivated successfully.
Aug  5 22:33:02.985679 systemd[1]: session-10.scope: Deactivated successfully.
Aug  5 22:33:02.986474 systemd-logind[1428]: Session 10 logged out. Waiting for processes to exit.
Aug  5 22:33:02.987451 systemd-logind[1428]: Removed session 10.
Aug  5 22:33:03.596866 kubelet[2589]: I0805 22:33:03.596814    2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"
Aug  5 22:33:03.597598 containerd[1444]: time="2024-08-05T22:33:03.597398643Z" level=info msg="StopPodSandbox for \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\""
Aug  5 22:33:03.597598 containerd[1444]: time="2024-08-05T22:33:03.597580835Z" level=info msg="Ensure that sandbox c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213 in task-service has been cleanup successfully"
Aug  5 22:33:03.621665 containerd[1444]: time="2024-08-05T22:33:03.621606176Z" level=error msg="StopPodSandbox for \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\" failed" error="failed to destroy network for sandbox \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug  5 22:33:03.621945 kubelet[2589]: E0805 22:33:03.621884    2589 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"
Aug  5 22:33:03.622026 kubelet[2589]: E0805 22:33:03.621966    2589 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"}
Aug  5 22:33:03.622026 kubelet[2589]: E0805 22:33:03.622013    2589 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39f5cd8b-f47c-400b-a523-7412e6e8f022\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug  5 22:33:03.622174 kubelet[2589]: E0805 22:33:03.622046    2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39f5cd8b-f47c-400b-a523-7412e6e8f022\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lnnbx" podUID="39f5cd8b-f47c-400b-a523-7412e6e8f022"
Aug  5 22:33:06.387283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount952084236.mount: Deactivated successfully.
Aug  5 22:33:07.278902 containerd[1444]: time="2024-08-05T22:33:07.278761825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:07.279701 containerd[1444]: time="2024-08-05T22:33:07.279637419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750"
Aug  5 22:33:07.297835 containerd[1444]: time="2024-08-05T22:33:07.297762816Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:07.301105 containerd[1444]: time="2024-08-05T22:33:07.301055787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:07.301852 containerd[1444]: time="2024-08-05T22:33:07.301781138Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 4.706989182s"
Aug  5 22:33:07.301852 containerd[1444]: time="2024-08-05T22:33:07.301844868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\""
Aug  5 22:33:07.320188 containerd[1444]: time="2024-08-05T22:33:07.319062121Z" level=info msg="CreateContainer within sandbox \"bd98587a22bc5eb42d131a10d78555e3577f9f8d50795dfe1a1014efdea6f084\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Aug  5 22:33:07.348941 containerd[1444]: time="2024-08-05T22:33:07.348867229Z" level=info msg="CreateContainer within sandbox \"bd98587a22bc5eb42d131a10d78555e3577f9f8d50795dfe1a1014efdea6f084\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9a8cf0bc356fdc639e5b8d1519196a3ed9bb66ca96fbe130967cfc55a11100f7\""
Aug  5 22:33:07.349554 containerd[1444]: time="2024-08-05T22:33:07.349498093Z" level=info msg="StartContainer for \"9a8cf0bc356fdc639e5b8d1519196a3ed9bb66ca96fbe130967cfc55a11100f7\""
Aug  5 22:33:07.420773 systemd[1]: run-containerd-runc-k8s.io-9a8cf0bc356fdc639e5b8d1519196a3ed9bb66ca96fbe130967cfc55a11100f7-runc.4RLgeF.mount: Deactivated successfully.
Aug  5 22:33:07.435384 systemd[1]: Started cri-containerd-9a8cf0bc356fdc639e5b8d1519196a3ed9bb66ca96fbe130967cfc55a11100f7.scope - libcontainer container 9a8cf0bc356fdc639e5b8d1519196a3ed9bb66ca96fbe130967cfc55a11100f7.
Aug  5 22:33:07.991054 systemd[1]: Started sshd@10-10.0.0.112:22-10.0.0.1:53580.service - OpenSSH per-connection server daemon (10.0.0.1:53580).
Aug  5 22:33:08.076165 containerd[1444]: time="2024-08-05T22:33:08.074618391Z" level=info msg="StartContainer for \"9a8cf0bc356fdc639e5b8d1519196a3ed9bb66ca96fbe130967cfc55a11100f7\" returns successfully"
Aug  5 22:33:08.089414 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Aug  5 22:33:08.089552 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Aug  5 22:33:08.109099 sshd[3560]: Accepted publickey for core from 10.0.0.1 port 53580 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:08.111383 sshd[3560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:08.116293 systemd-logind[1428]: New session 11 of user core.
Aug  5 22:33:08.122407 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug  5 22:33:08.433075 sshd[3560]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:08.436968 systemd[1]: sshd@10-10.0.0.112:22-10.0.0.1:53580.service: Deactivated successfully.
Aug  5 22:33:08.439628 systemd[1]: session-11.scope: Deactivated successfully.
Aug  5 22:33:08.440535 systemd-logind[1428]: Session 11 logged out. Waiting for processes to exit.
Aug  5 22:33:08.441565 systemd-logind[1428]: Removed session 11.
Aug  5 22:33:09.086968 kubelet[2589]: E0805 22:33:09.086907    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:09.101344 kubelet[2589]: I0805 22:33:09.100012    2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jg7hq" podStartSLOduration=2.439255923 podStartE2EDuration="19.099975038s" podCreationTimestamp="2024-08-05 22:32:50 +0000 UTC" firstStartedPulling="2024-08-05 22:32:50.641972191 +0000 UTC m=+21.642843465" lastFinishedPulling="2024-08-05 22:33:07.302691306 +0000 UTC m=+38.303562580" observedRunningTime="2024-08-05 22:33:09.099806131 +0000 UTC m=+40.100677415" watchObservedRunningTime="2024-08-05 22:33:09.099975038 +0000 UTC m=+40.100846312"
Aug  5 22:33:09.804847 systemd-networkd[1375]: vxlan.calico: Link UP
Aug  5 22:33:09.804857 systemd-networkd[1375]: vxlan.calico: Gained carrier
Aug  5 22:33:10.088909 kubelet[2589]: E0805 22:33:10.088740    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:10.840315 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL
Aug  5 22:33:13.103849 containerd[1444]: time="2024-08-05T22:33:13.103423923Z" level=info msg="StopPodSandbox for \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\""
Aug  5 22:33:13.242483 containerd[1444]: 2024-08-05 22:33:13.165 [INFO][3859] k8s.go 608: Cleaning up netns ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"
Aug  5 22:33:13.242483 containerd[1444]: 2024-08-05 22:33:13.166 [INFO][3859] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" iface="eth0" netns="/var/run/netns/cni-383309e7-77ec-9146-fac7-bb45facaf75a"
Aug  5 22:33:13.242483 containerd[1444]: 2024-08-05 22:33:13.166 [INFO][3859] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" iface="eth0" netns="/var/run/netns/cni-383309e7-77ec-9146-fac7-bb45facaf75a"
Aug  5 22:33:13.242483 containerd[1444]: 2024-08-05 22:33:13.166 [INFO][3859] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" iface="eth0" netns="/var/run/netns/cni-383309e7-77ec-9146-fac7-bb45facaf75a"
Aug  5 22:33:13.242483 containerd[1444]: 2024-08-05 22:33:13.166 [INFO][3859] k8s.go 615: Releasing IP address(es) ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"
Aug  5 22:33:13.242483 containerd[1444]: 2024-08-05 22:33:13.166 [INFO][3859] utils.go 188: Calico CNI releasing IP address ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"
Aug  5 22:33:13.242483 containerd[1444]: 2024-08-05 22:33:13.227 [INFO][3867] ipam_plugin.go 411: Releasing address using handleID ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" HandleID="k8s-pod-network.6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" Workload="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:13.242483 containerd[1444]: 2024-08-05 22:33:13.228 [INFO][3867] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:13.242483 containerd[1444]: 2024-08-05 22:33:13.228 [INFO][3867] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:13.242483 containerd[1444]: 2024-08-05 22:33:13.235 [WARNING][3867] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" HandleID="k8s-pod-network.6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" Workload="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:13.242483 containerd[1444]: 2024-08-05 22:33:13.235 [INFO][3867] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" HandleID="k8s-pod-network.6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" Workload="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:13.242483 containerd[1444]: 2024-08-05 22:33:13.237 [INFO][3867] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:13.242483 containerd[1444]: 2024-08-05 22:33:13.239 [INFO][3859] k8s.go 621: Teardown processing complete. ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"
Aug  5 22:33:13.243062 containerd[1444]: time="2024-08-05T22:33:13.242782747Z" level=info msg="TearDown network for sandbox \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\" successfully"
Aug  5 22:33:13.243062 containerd[1444]: time="2024-08-05T22:33:13.242825888Z" level=info msg="StopPodSandbox for \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\" returns successfully"
Aug  5 22:33:13.246150 kubelet[2589]: E0805 22:33:13.244620    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:13.246453 containerd[1444]: time="2024-08-05T22:33:13.245739876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h9546,Uid:89476694-84bf-42c4-a686-21517cd48dc0,Namespace:kube-system,Attempt:1,}"
Aug  5 22:33:13.245620 systemd[1]: run-netns-cni\x2d383309e7\x2d77ec\x2d9146\x2dfac7\x2dbb45facaf75a.mount: Deactivated successfully.
Aug  5 22:33:13.373461 systemd-networkd[1375]: calie8e02e1f1d8: Link UP
Aug  5 22:33:13.374292 systemd-networkd[1375]: calie8e02e1f1d8: Gained carrier
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.303 [INFO][3882] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--h9546-eth0 coredns-7db6d8ff4d- kube-system  89476694-84bf-42c4-a686-21517cd48dc0 821 0 2024-08-05 22:32:43 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  localhost  coredns-7db6d8ff4d-h9546 eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] calie8e02e1f1d8  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9546" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h9546-"
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.303 [INFO][3882] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9546" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.334 [INFO][3889] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" HandleID="k8s-pod-network.673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" Workload="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.343 [INFO][3889] ipam_plugin.go 264: Auto assigning IP ContainerID="673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" HandleID="k8s-pod-network.673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" Workload="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027ddf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-h9546", "timestamp":"2024-08-05 22:33:13.334842662 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.343 [INFO][3889] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.343 [INFO][3889] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.343 [INFO][3889] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.345 [INFO][3889] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" host="localhost"
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.349 [INFO][3889] ipam.go 372: Looking up existing affinities for host host="localhost"
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.353 [INFO][3889] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.355 [INFO][3889] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.357 [INFO][3889] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.357 [INFO][3889] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" host="localhost"
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.359 [INFO][3889] ipam.go 1685: Creating new handle: k8s-pod-network.673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.361 [INFO][3889] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" host="localhost"
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.365 [INFO][3889] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" host="localhost"
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.366 [INFO][3889] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" host="localhost"
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.366 [INFO][3889] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:13.386362 containerd[1444]: 2024-08-05 22:33:13.366 [INFO][3889] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" HandleID="k8s-pod-network.673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" Workload="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:13.387160 containerd[1444]: 2024-08-05 22:33:13.369 [INFO][3882] k8s.go 386: Populated endpoint ContainerID="673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9546" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--h9546-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"89476694-84bf-42c4-a686-21517cd48dc0", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-h9546", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8e02e1f1d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:13.387160 containerd[1444]: 2024-08-05 22:33:13.371 [INFO][3882] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9546" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:13.387160 containerd[1444]: 2024-08-05 22:33:13.371 [INFO][3882] dataplane_linux.go 68: Setting the host side veth name to calie8e02e1f1d8 ContainerID="673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9546" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:13.387160 containerd[1444]: 2024-08-05 22:33:13.374 [INFO][3882] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9546" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:13.387160 containerd[1444]: 2024-08-05 22:33:13.374 [INFO][3882] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9546" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--h9546-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"89476694-84bf-42c4-a686-21517cd48dc0", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385", Pod:"coredns-7db6d8ff4d-h9546", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8e02e1f1d8", MAC:"92:6f:72:01:96:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:13.387160 containerd[1444]: 2024-08-05 22:33:13.382 [INFO][3882] k8s.go 500: Wrote updated endpoint to datastore ContainerID="673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9546" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:13.418269 containerd[1444]: time="2024-08-05T22:33:13.417805188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug  5 22:33:13.418269 containerd[1444]: time="2024-08-05T22:33:13.417871853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:33:13.418269 containerd[1444]: time="2024-08-05T22:33:13.417913281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug  5 22:33:13.418269 containerd[1444]: time="2024-08-05T22:33:13.417931756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:33:13.449547 systemd[1]: Started cri-containerd-673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385.scope - libcontainer container 673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385.
Aug  5 22:33:13.450943 systemd[1]: Started sshd@11-10.0.0.112:22-10.0.0.1:48610.service - OpenSSH per-connection server daemon (10.0.0.1:48610).
Aug  5 22:33:13.465338 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug  5 22:33:13.490695 sshd[3941]: Accepted publickey for core from 10.0.0.1 port 48610 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:13.492767 sshd[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:13.496498 containerd[1444]: time="2024-08-05T22:33:13.496093563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h9546,Uid:89476694-84bf-42c4-a686-21517cd48dc0,Namespace:kube-system,Attempt:1,} returns sandbox id \"673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385\""
Aug  5 22:33:13.497316 kubelet[2589]: E0805 22:33:13.497266    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:13.499461 systemd-logind[1428]: New session 12 of user core.
Aug  5 22:33:13.508406 containerd[1444]: time="2024-08-05T22:33:13.499993320Z" level=info msg="CreateContainer within sandbox \"673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug  5 22:33:13.508302 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug  5 22:33:13.732084 containerd[1444]: time="2024-08-05T22:33:13.732010954Z" level=info msg="CreateContainer within sandbox \"673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"06e20b94f221089dee208587d8067bbb345a234845720671d479021026974a2a\""
Aug  5 22:33:13.732932 containerd[1444]: time="2024-08-05T22:33:13.732871448Z" level=info msg="StartContainer for \"06e20b94f221089dee208587d8067bbb345a234845720671d479021026974a2a\""
Aug  5 22:33:13.753425 sshd[3941]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:13.757908 systemd[1]: sshd@11-10.0.0.112:22-10.0.0.1:48610.service: Deactivated successfully.
Aug  5 22:33:13.760178 systemd[1]: session-12.scope: Deactivated successfully.
Aug  5 22:33:13.760878 systemd-logind[1428]: Session 12 logged out. Waiting for processes to exit.
Aug  5 22:33:13.768545 systemd[1]: Started cri-containerd-06e20b94f221089dee208587d8067bbb345a234845720671d479021026974a2a.scope - libcontainer container 06e20b94f221089dee208587d8067bbb345a234845720671d479021026974a2a.
Aug  5 22:33:13.769496 systemd-logind[1428]: Removed session 12.
Aug  5 22:33:13.805946 containerd[1444]: time="2024-08-05T22:33:13.805900786Z" level=info msg="StartContainer for \"06e20b94f221089dee208587d8067bbb345a234845720671d479021026974a2a\" returns successfully"
Aug  5 22:33:14.098565 kubelet[2589]: E0805 22:33:14.098147    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:14.103361 containerd[1444]: time="2024-08-05T22:33:14.103303707Z" level=info msg="StopPodSandbox for \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\""
Aug  5 22:33:14.106303 kubelet[2589]: I0805 22:33:14.106252    2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-h9546" podStartSLOduration=31.106232081 podStartE2EDuration="31.106232081s" podCreationTimestamp="2024-08-05 22:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:33:14.105826801 +0000 UTC m=+45.106698075" watchObservedRunningTime="2024-08-05 22:33:14.106232081 +0000 UTC m=+45.107103356"
Aug  5 22:33:14.368479 containerd[1444]: 2024-08-05 22:33:14.305 [INFO][4021] k8s.go 608: Cleaning up netns ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"
Aug  5 22:33:14.368479 containerd[1444]: 2024-08-05 22:33:14.306 [INFO][4021] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" iface="eth0" netns="/var/run/netns/cni-925bc254-9845-d971-e183-7dcbc65d5228"
Aug  5 22:33:14.368479 containerd[1444]: 2024-08-05 22:33:14.306 [INFO][4021] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" iface="eth0" netns="/var/run/netns/cni-925bc254-9845-d971-e183-7dcbc65d5228"
Aug  5 22:33:14.368479 containerd[1444]: 2024-08-05 22:33:14.306 [INFO][4021] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" iface="eth0" netns="/var/run/netns/cni-925bc254-9845-d971-e183-7dcbc65d5228"
Aug  5 22:33:14.368479 containerd[1444]: 2024-08-05 22:33:14.306 [INFO][4021] k8s.go 615: Releasing IP address(es) ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"
Aug  5 22:33:14.368479 containerd[1444]: 2024-08-05 22:33:14.306 [INFO][4021] utils.go 188: Calico CNI releasing IP address ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"
Aug  5 22:33:14.368479 containerd[1444]: 2024-08-05 22:33:14.328 [INFO][4032] ipam_plugin.go 411: Releasing address using handleID ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" HandleID="k8s-pod-network.756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" Workload="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:14.368479 containerd[1444]: 2024-08-05 22:33:14.328 [INFO][4032] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:14.368479 containerd[1444]: 2024-08-05 22:33:14.328 [INFO][4032] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:14.368479 containerd[1444]: 2024-08-05 22:33:14.359 [WARNING][4032] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" HandleID="k8s-pod-network.756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" Workload="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:14.368479 containerd[1444]: 2024-08-05 22:33:14.359 [INFO][4032] ipam_plugin.go 439: Releasing address using workloadID ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" HandleID="k8s-pod-network.756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" Workload="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:14.368479 containerd[1444]: 2024-08-05 22:33:14.362 [INFO][4032] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:14.368479 containerd[1444]: 2024-08-05 22:33:14.365 [INFO][4021] k8s.go 621: Teardown processing complete. ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"
Aug  5 22:33:14.369776 containerd[1444]: time="2024-08-05T22:33:14.368595177Z" level=info msg="TearDown network for sandbox \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\" successfully"
Aug  5 22:33:14.369776 containerd[1444]: time="2024-08-05T22:33:14.368652154Z" level=info msg="StopPodSandbox for \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\" returns successfully"
Aug  5 22:33:14.369886 kubelet[2589]: E0805 22:33:14.369147    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:14.370239 containerd[1444]: time="2024-08-05T22:33:14.370037894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-clgfp,Uid:73a44b70-815b-476a-b6da-63de43927fa6,Namespace:kube-system,Attempt:1,}"
Aug  5 22:33:14.372803 systemd[1]: run-netns-cni\x2d925bc254\x2d9845\x2dd971\x2de183\x2d7dcbc65d5228.mount: Deactivated successfully.
Aug  5 22:33:14.606466 systemd-networkd[1375]: calida249170c2f: Link UP
Aug  5 22:33:14.606972 systemd-networkd[1375]: calida249170c2f: Gained carrier
Aug  5 22:33:14.617545 systemd-networkd[1375]: calie8e02e1f1d8: Gained IPv6LL
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.428 [INFO][4044] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0 coredns-7db6d8ff4d- kube-system  73a44b70-815b-476a-b6da-63de43927fa6 847 0 2024-08-05 22:32:43 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  localhost  coredns-7db6d8ff4d-clgfp eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] calida249170c2f  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-clgfp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--clgfp-"
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.428 [INFO][4044] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-clgfp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.466 [INFO][4057] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" HandleID="k8s-pod-network.f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" Workload="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.546 [INFO][4057] ipam_plugin.go 264: Auto assigning IP ContainerID="f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" HandleID="k8s-pod-network.f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" Workload="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0007157a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-clgfp", "timestamp":"2024-08-05 22:33:14.466070698 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.546 [INFO][4057] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.546 [INFO][4057] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.546 [INFO][4057] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.548 [INFO][4057] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" host="localhost"
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.554 [INFO][4057] ipam.go 372: Looking up existing affinities for host host="localhost"
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.559 [INFO][4057] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.561 [INFO][4057] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.563 [INFO][4057] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.563 [INFO][4057] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" host="localhost"
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.565 [INFO][4057] ipam.go 1685: Creating new handle: k8s-pod-network.f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.569 [INFO][4057] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" host="localhost"
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.599 [INFO][4057] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" host="localhost"
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.600 [INFO][4057] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" host="localhost"
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.600 [INFO][4057] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:14.682918 containerd[1444]: 2024-08-05 22:33:14.600 [INFO][4057] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" HandleID="k8s-pod-network.f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" Workload="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:14.683742 containerd[1444]: 2024-08-05 22:33:14.603 [INFO][4044] k8s.go 386: Populated endpoint ContainerID="f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-clgfp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"73a44b70-815b-476a-b6da-63de43927fa6", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-clgfp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida249170c2f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:14.683742 containerd[1444]: 2024-08-05 22:33:14.603 [INFO][4044] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-clgfp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:14.683742 containerd[1444]: 2024-08-05 22:33:14.603 [INFO][4044] dataplane_linux.go 68: Setting the host side veth name to calida249170c2f ContainerID="f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-clgfp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:14.683742 containerd[1444]: 2024-08-05 22:33:14.607 [INFO][4044] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-clgfp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:14.683742 containerd[1444]: 2024-08-05 22:33:14.608 [INFO][4044] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-clgfp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"73a44b70-815b-476a-b6da-63de43927fa6", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1", Pod:"coredns-7db6d8ff4d-clgfp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida249170c2f", MAC:"1e:70:f4:cc:00:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:14.683742 containerd[1444]: 2024-08-05 22:33:14.679 [INFO][4044] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-clgfp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:14.710379 containerd[1444]: time="2024-08-05T22:33:14.710272117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug  5 22:33:14.710379 containerd[1444]: time="2024-08-05T22:33:14.710334634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:33:14.710670 containerd[1444]: time="2024-08-05T22:33:14.710621532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug  5 22:33:14.710908 containerd[1444]: time="2024-08-05T22:33:14.710659694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:33:14.738342 systemd[1]: Started cri-containerd-f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1.scope - libcontainer container f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1.
Aug  5 22:33:14.752500 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug  5 22:33:14.778718 containerd[1444]: time="2024-08-05T22:33:14.778670961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-clgfp,Uid:73a44b70-815b-476a-b6da-63de43927fa6,Namespace:kube-system,Attempt:1,} returns sandbox id \"f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1\""
Aug  5 22:33:14.779486 kubelet[2589]: E0805 22:33:14.779461    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:14.781908 containerd[1444]: time="2024-08-05T22:33:14.781849505Z" level=info msg="CreateContainer within sandbox \"f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug  5 22:33:14.797930 containerd[1444]: time="2024-08-05T22:33:14.797867960Z" level=info msg="CreateContainer within sandbox \"f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88dda3191d6990392d8fb50ac4a080c7214ca7f53038e9aa4670b16645d7a505\""
Aug  5 22:33:14.798531 containerd[1444]: time="2024-08-05T22:33:14.798479809Z" level=info msg="StartContainer for \"88dda3191d6990392d8fb50ac4a080c7214ca7f53038e9aa4670b16645d7a505\""
Aug  5 22:33:14.830306 systemd[1]: Started cri-containerd-88dda3191d6990392d8fb50ac4a080c7214ca7f53038e9aa4670b16645d7a505.scope - libcontainer container 88dda3191d6990392d8fb50ac4a080c7214ca7f53038e9aa4670b16645d7a505.
Aug  5 22:33:14.861088 containerd[1444]: time="2024-08-05T22:33:14.861035971Z" level=info msg="StartContainer for \"88dda3191d6990392d8fb50ac4a080c7214ca7f53038e9aa4670b16645d7a505\" returns successfully"
Aug  5 22:33:15.102964 kubelet[2589]: E0805 22:33:15.101929    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:15.102964 kubelet[2589]: E0805 22:33:15.102033    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:15.112073 kubelet[2589]: I0805 22:33:15.112003    2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-clgfp" podStartSLOduration=32.111982951 podStartE2EDuration="32.111982951s" podCreationTimestamp="2024-08-05 22:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:33:15.111638114 +0000 UTC m=+46.112509398" watchObservedRunningTime="2024-08-05 22:33:15.111982951 +0000 UTC m=+46.112854225"
Aug  5 22:33:16.103482 kubelet[2589]: E0805 22:33:16.103413    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:16.103988 kubelet[2589]: E0805 22:33:16.103764    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:16.601470 systemd-networkd[1375]: calida249170c2f: Gained IPv6LL
Aug  5 22:33:17.103718 containerd[1444]: time="2024-08-05T22:33:17.103002399Z" level=info msg="StopPodSandbox for \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\""
Aug  5 22:33:17.106083 kubelet[2589]: E0805 22:33:17.106025    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:17.318213 containerd[1444]: 2024-08-05 22:33:17.269 [INFO][4184] k8s.go 608: Cleaning up netns ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"
Aug  5 22:33:17.318213 containerd[1444]: 2024-08-05 22:33:17.269 [INFO][4184] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" iface="eth0" netns="/var/run/netns/cni-9be14b89-fd7c-7b8f-12cc-c3b32cecfd7e"
Aug  5 22:33:17.318213 containerd[1444]: 2024-08-05 22:33:17.270 [INFO][4184] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" iface="eth0" netns="/var/run/netns/cni-9be14b89-fd7c-7b8f-12cc-c3b32cecfd7e"
Aug  5 22:33:17.318213 containerd[1444]: 2024-08-05 22:33:17.270 [INFO][4184] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" iface="eth0" netns="/var/run/netns/cni-9be14b89-fd7c-7b8f-12cc-c3b32cecfd7e"
Aug  5 22:33:17.318213 containerd[1444]: 2024-08-05 22:33:17.270 [INFO][4184] k8s.go 615: Releasing IP address(es) ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"
Aug  5 22:33:17.318213 containerd[1444]: 2024-08-05 22:33:17.270 [INFO][4184] utils.go 188: Calico CNI releasing IP address ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"
Aug  5 22:33:17.318213 containerd[1444]: 2024-08-05 22:33:17.292 [INFO][4191] ipam_plugin.go 411: Releasing address using handleID ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" HandleID="k8s-pod-network.faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" Workload="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:17.318213 containerd[1444]: 2024-08-05 22:33:17.292 [INFO][4191] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:17.318213 containerd[1444]: 2024-08-05 22:33:17.292 [INFO][4191] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:17.318213 containerd[1444]: 2024-08-05 22:33:17.311 [WARNING][4191] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" HandleID="k8s-pod-network.faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" Workload="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:17.318213 containerd[1444]: 2024-08-05 22:33:17.311 [INFO][4191] ipam_plugin.go 439: Releasing address using workloadID ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" HandleID="k8s-pod-network.faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" Workload="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:17.318213 containerd[1444]: 2024-08-05 22:33:17.313 [INFO][4191] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:17.318213 containerd[1444]: 2024-08-05 22:33:17.315 [INFO][4184] k8s.go 621: Teardown processing complete. ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"
Aug  5 22:33:17.318900 containerd[1444]: time="2024-08-05T22:33:17.318453672Z" level=info msg="TearDown network for sandbox \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\" successfully"
Aug  5 22:33:17.318900 containerd[1444]: time="2024-08-05T22:33:17.318489331Z" level=info msg="StopPodSandbox for \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\" returns successfully"
Aug  5 22:33:17.321037 containerd[1444]: time="2024-08-05T22:33:17.320986062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6668f8dc88-4t4lk,Uid:57fa67d2-700f-4c57-9da6-ee6cbe4fdfef,Namespace:calico-system,Attempt:1,}"
Aug  5 22:33:17.321265 systemd[1]: run-netns-cni\x2d9be14b89\x2dfd7c\x2d7b8f\x2d12cc\x2dc3b32cecfd7e.mount: Deactivated successfully.
Aug  5 22:33:17.535730 systemd-networkd[1375]: calic45995488d2: Link UP
Aug  5 22:33:17.535978 systemd-networkd[1375]: calic45995488d2: Gained carrier
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.458 [INFO][4204] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0 calico-kube-controllers-6668f8dc88- calico-system  57fa67d2-700f-4c57-9da6-ee6cbe4fdfef 883 0 2024-08-05 22:32:50 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6668f8dc88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s  localhost  calico-kube-controllers-6668f8dc88-4t4lk eth0 calico-kube-controllers [] []   [kns.calico-system ksa.calico-system.calico-kube-controllers] calic45995488d2  [] []}} ContainerID="bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" Namespace="calico-system" Pod="calico-kube-controllers-6668f8dc88-4t4lk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-"
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.458 [INFO][4204] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" Namespace="calico-system" Pod="calico-kube-controllers-6668f8dc88-4t4lk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.494 [INFO][4213] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" HandleID="k8s-pod-network.bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" Workload="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.502 [INFO][4213] ipam_plugin.go 264: Auto assigning IP ContainerID="bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" HandleID="k8s-pod-network.bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" Workload="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ffb50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6668f8dc88-4t4lk", "timestamp":"2024-08-05 22:33:17.494016868 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.502 [INFO][4213] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.503 [INFO][4213] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.503 [INFO][4213] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.504 [INFO][4213] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" host="localhost"
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.508 [INFO][4213] ipam.go 372: Looking up existing affinities for host host="localhost"
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.512 [INFO][4213] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.513 [INFO][4213] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.516 [INFO][4213] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.516 [INFO][4213] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" host="localhost"
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.517 [INFO][4213] ipam.go 1685: Creating new handle: k8s-pod-network.bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.520 [INFO][4213] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" host="localhost"
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.529 [INFO][4213] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" host="localhost"
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.530 [INFO][4213] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" host="localhost"
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.530 [INFO][4213] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:17.554334 containerd[1444]: 2024-08-05 22:33:17.530 [INFO][4213] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" HandleID="k8s-pod-network.bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" Workload="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:17.555059 containerd[1444]: 2024-08-05 22:33:17.533 [INFO][4204] k8s.go 386: Populated endpoint ContainerID="bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" Namespace="calico-system" Pod="calico-kube-controllers-6668f8dc88-4t4lk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0", GenerateName:"calico-kube-controllers-6668f8dc88-", Namespace:"calico-system", SelfLink:"", UID:"57fa67d2-700f-4c57-9da6-ee6cbe4fdfef", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6668f8dc88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6668f8dc88-4t4lk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic45995488d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:17.555059 containerd[1444]: 2024-08-05 22:33:17.533 [INFO][4204] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" Namespace="calico-system" Pod="calico-kube-controllers-6668f8dc88-4t4lk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:17.555059 containerd[1444]: 2024-08-05 22:33:17.533 [INFO][4204] dataplane_linux.go 68: Setting the host side veth name to calic45995488d2 ContainerID="bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" Namespace="calico-system" Pod="calico-kube-controllers-6668f8dc88-4t4lk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:17.555059 containerd[1444]: 2024-08-05 22:33:17.536 [INFO][4204] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" Namespace="calico-system" Pod="calico-kube-controllers-6668f8dc88-4t4lk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:17.555059 containerd[1444]: 2024-08-05 22:33:17.536 [INFO][4204] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" Namespace="calico-system" Pod="calico-kube-controllers-6668f8dc88-4t4lk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0", GenerateName:"calico-kube-controllers-6668f8dc88-", Namespace:"calico-system", SelfLink:"", UID:"57fa67d2-700f-4c57-9da6-ee6cbe4fdfef", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6668f8dc88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84", Pod:"calico-kube-controllers-6668f8dc88-4t4lk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic45995488d2", MAC:"6e:43:83:64:e1:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:17.555059 containerd[1444]: 2024-08-05 22:33:17.550 [INFO][4204] k8s.go 500: Wrote updated endpoint to datastore ContainerID="bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84" Namespace="calico-system" Pod="calico-kube-controllers-6668f8dc88-4t4lk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:17.627164 containerd[1444]: time="2024-08-05T22:33:17.626450887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug  5 22:33:17.627164 containerd[1444]: time="2024-08-05T22:33:17.627105092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:33:17.627164 containerd[1444]: time="2024-08-05T22:33:17.627148857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug  5 22:33:17.627352 containerd[1444]: time="2024-08-05T22:33:17.627160048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:33:17.654265 systemd[1]: Started cri-containerd-bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84.scope - libcontainer container bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84.
Aug  5 22:33:17.667358 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug  5 22:33:17.693106 containerd[1444]: time="2024-08-05T22:33:17.693043402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6668f8dc88-4t4lk,Uid:57fa67d2-700f-4c57-9da6-ee6cbe4fdfef,Namespace:calico-system,Attempt:1,} returns sandbox id \"bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84\""
Aug  5 22:33:17.694854 containerd[1444]: time="2024-08-05T22:33:17.694631154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\""
Aug  5 22:33:18.102979 containerd[1444]: time="2024-08-05T22:33:18.102925367Z" level=info msg="StopPodSandbox for \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\""
Aug  5 22:33:18.110141 kubelet[2589]: E0805 22:33:18.110069    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:18.197901 containerd[1444]: 2024-08-05 22:33:18.158 [INFO][4291] k8s.go 608: Cleaning up netns ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"
Aug  5 22:33:18.197901 containerd[1444]: 2024-08-05 22:33:18.159 [INFO][4291] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" iface="eth0" netns="/var/run/netns/cni-2e93ee6d-eb92-bf90-e26a-fe5e49a97aa4"
Aug  5 22:33:18.197901 containerd[1444]: 2024-08-05 22:33:18.159 [INFO][4291] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" iface="eth0" netns="/var/run/netns/cni-2e93ee6d-eb92-bf90-e26a-fe5e49a97aa4"
Aug  5 22:33:18.197901 containerd[1444]: 2024-08-05 22:33:18.159 [INFO][4291] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" iface="eth0" netns="/var/run/netns/cni-2e93ee6d-eb92-bf90-e26a-fe5e49a97aa4"
Aug  5 22:33:18.197901 containerd[1444]: 2024-08-05 22:33:18.159 [INFO][4291] k8s.go 615: Releasing IP address(es) ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"
Aug  5 22:33:18.197901 containerd[1444]: 2024-08-05 22:33:18.159 [INFO][4291] utils.go 188: Calico CNI releasing IP address ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"
Aug  5 22:33:18.197901 containerd[1444]: 2024-08-05 22:33:18.184 [INFO][4298] ipam_plugin.go 411: Releasing address using handleID ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" HandleID="k8s-pod-network.c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" Workload="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:18.197901 containerd[1444]: 2024-08-05 22:33:18.184 [INFO][4298] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:18.197901 containerd[1444]: 2024-08-05 22:33:18.185 [INFO][4298] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:18.197901 containerd[1444]: 2024-08-05 22:33:18.191 [WARNING][4298] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" HandleID="k8s-pod-network.c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" Workload="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:18.197901 containerd[1444]: 2024-08-05 22:33:18.191 [INFO][4298] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" HandleID="k8s-pod-network.c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" Workload="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:18.197901 containerd[1444]: 2024-08-05 22:33:18.193 [INFO][4298] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:18.197901 containerd[1444]: 2024-08-05 22:33:18.195 [INFO][4291] k8s.go 621: Teardown processing complete. ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"
Aug  5 22:33:18.199018 containerd[1444]: time="2024-08-05T22:33:18.198140276Z" level=info msg="TearDown network for sandbox \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\" successfully"
Aug  5 22:33:18.199018 containerd[1444]: time="2024-08-05T22:33:18.198178640Z" level=info msg="StopPodSandbox for \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\" returns successfully"
Aug  5 22:33:18.199100 containerd[1444]: time="2024-08-05T22:33:18.199008574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lnnbx,Uid:39f5cd8b-f47c-400b-a523-7412e6e8f022,Namespace:calico-system,Attempt:1,}"
Aug  5 22:33:18.323683 systemd[1]: run-netns-cni\x2d2e93ee6d\x2deb92\x2dbf90\x2de26a\x2dfe5e49a97aa4.mount: Deactivated successfully.
Aug  5 22:33:18.394892 systemd-networkd[1375]: cali4e6791847a8: Link UP
Aug  5 22:33:18.396048 systemd-networkd[1375]: cali4e6791847a8: Gained carrier
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.309 [INFO][4306] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lnnbx-eth0 csi-node-driver- calico-system  39f5cd8b-f47c-400b-a523-7412e6e8f022 892 0 2024-08-05 22:32:50 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  localhost  csi-node-driver-lnnbx eth0 default [] []   [kns.calico-system ksa.calico-system.default] cali4e6791847a8  [] []}} ContainerID="659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" Namespace="calico-system" Pod="csi-node-driver-lnnbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnnbx-"
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.310 [INFO][4306] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" Namespace="calico-system" Pod="csi-node-driver-lnnbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.340 [INFO][4318] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" HandleID="k8s-pod-network.659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" Workload="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.348 [INFO][4318] ipam_plugin.go 264: Auto assigning IP ContainerID="659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" HandleID="k8s-pod-network.659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" Workload="localhost-k8s-csi--node--driver--lnnbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000128700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lnnbx", "timestamp":"2024-08-05 22:33:18.340263237 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.349 [INFO][4318] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.349 [INFO][4318] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.349 [INFO][4318] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.350 [INFO][4318] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" host="localhost"
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.355 [INFO][4318] ipam.go 372: Looking up existing affinities for host host="localhost"
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.369 [INFO][4318] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.371 [INFO][4318] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.373 [INFO][4318] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.373 [INFO][4318] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" host="localhost"
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.375 [INFO][4318] ipam.go 1685: Creating new handle: k8s-pod-network.659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.378 [INFO][4318] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" host="localhost"
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.389 [INFO][4318] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" host="localhost"
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.389 [INFO][4318] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" host="localhost"
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.389 [INFO][4318] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:18.408826 containerd[1444]: 2024-08-05 22:33:18.389 [INFO][4318] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" HandleID="k8s-pod-network.659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" Workload="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:18.409684 containerd[1444]: 2024-08-05 22:33:18.392 [INFO][4306] k8s.go 386: Populated endpoint ContainerID="659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" Namespace="calico-system" Pod="csi-node-driver-lnnbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnnbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lnnbx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39f5cd8b-f47c-400b-a523-7412e6e8f022", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lnnbx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4e6791847a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:18.409684 containerd[1444]: 2024-08-05 22:33:18.392 [INFO][4306] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" Namespace="calico-system" Pod="csi-node-driver-lnnbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:18.409684 containerd[1444]: 2024-08-05 22:33:18.392 [INFO][4306] dataplane_linux.go 68: Setting the host side veth name to cali4e6791847a8 ContainerID="659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" Namespace="calico-system" Pod="csi-node-driver-lnnbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:18.409684 containerd[1444]: 2024-08-05 22:33:18.395 [INFO][4306] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" Namespace="calico-system" Pod="csi-node-driver-lnnbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:18.409684 containerd[1444]: 2024-08-05 22:33:18.396 [INFO][4306] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" Namespace="calico-system" Pod="csi-node-driver-lnnbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnnbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lnnbx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39f5cd8b-f47c-400b-a523-7412e6e8f022", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f", Pod:"csi-node-driver-lnnbx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4e6791847a8", MAC:"22:06:5f:44:e7:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:18.409684 containerd[1444]: 2024-08-05 22:33:18.405 [INFO][4306] k8s.go 500: Wrote updated endpoint to datastore ContainerID="659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f" Namespace="calico-system" Pod="csi-node-driver-lnnbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:18.467419 containerd[1444]: time="2024-08-05T22:33:18.467083074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug  5 22:33:18.467419 containerd[1444]: time="2024-08-05T22:33:18.467181725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:33:18.467419 containerd[1444]: time="2024-08-05T22:33:18.467200631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug  5 22:33:18.467419 containerd[1444]: time="2024-08-05T22:33:18.467227282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:33:18.504399 systemd[1]: Started cri-containerd-659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f.scope - libcontainer container 659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f.
Aug  5 22:33:18.517868 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug  5 22:33:18.532482 containerd[1444]: time="2024-08-05T22:33:18.532415443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lnnbx,Uid:39f5cd8b-f47c-400b-a523-7412e6e8f022,Namespace:calico-system,Attempt:1,} returns sandbox id \"659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f\""
Aug  5 22:33:18.768726 systemd[1]: Started sshd@12-10.0.0.112:22-10.0.0.1:48614.service - OpenSSH per-connection server daemon (10.0.0.1:48614).
Aug  5 22:33:18.808178 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 48614 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:18.810170 sshd[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:18.814827 systemd-logind[1428]: New session 13 of user core.
Aug  5 22:33:18.819305 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug  5 22:33:18.947835 sshd[4382]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:18.960156 systemd[1]: sshd@12-10.0.0.112:22-10.0.0.1:48614.service: Deactivated successfully.
Aug  5 22:33:18.962986 systemd[1]: session-13.scope: Deactivated successfully.
Aug  5 22:33:18.967260 systemd-logind[1428]: Session 13 logged out. Waiting for processes to exit.
Aug  5 22:33:18.967977 systemd[1]: Started sshd@13-10.0.0.112:22-10.0.0.1:48628.service - OpenSSH per-connection server daemon (10.0.0.1:48628).
Aug  5 22:33:18.969722 systemd-logind[1428]: Removed session 13.
Aug  5 22:33:19.018330 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 48628 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:19.020062 sshd[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:19.025776 systemd-logind[1428]: New session 14 of user core.
Aug  5 22:33:19.032347 systemd-networkd[1375]: calic45995488d2: Gained IPv6LL
Aug  5 22:33:19.040383 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug  5 22:33:19.332464 sshd[4397]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:19.346303 systemd[1]: sshd@13-10.0.0.112:22-10.0.0.1:48628.service: Deactivated successfully.
Aug  5 22:33:19.350447 systemd[1]: session-14.scope: Deactivated successfully.
Aug  5 22:33:19.353800 systemd-logind[1428]: Session 14 logged out. Waiting for processes to exit.
Aug  5 22:33:19.370000 systemd[1]: Started sshd@14-10.0.0.112:22-10.0.0.1:48634.service - OpenSSH per-connection server daemon (10.0.0.1:48634).
Aug  5 22:33:19.372862 systemd-logind[1428]: Removed session 14.
Aug  5 22:33:19.416031 sshd[4419]: Accepted publickey for core from 10.0.0.1 port 48634 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:19.418294 sshd[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:19.424424 systemd-logind[1428]: New session 15 of user core.
Aug  5 22:33:19.432313 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug  5 22:33:19.571295 sshd[4419]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:19.575063 systemd[1]: sshd@14-10.0.0.112:22-10.0.0.1:48634.service: Deactivated successfully.
Aug  5 22:33:19.577582 systemd[1]: session-15.scope: Deactivated successfully.
Aug  5 22:33:19.579576 systemd-logind[1428]: Session 15 logged out. Waiting for processes to exit.
Aug  5 22:33:19.580753 systemd-logind[1428]: Removed session 15.
Aug  5 22:33:20.269073 containerd[1444]: time="2024-08-05T22:33:20.269005957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:20.291033 containerd[1444]: time="2024-08-05T22:33:20.290957133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793"
Aug  5 22:33:20.318357 containerd[1444]: time="2024-08-05T22:33:20.318286591Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:20.331269 containerd[1444]: time="2024-08-05T22:33:20.331203481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:20.331899 containerd[1444]: time="2024-08-05T22:33:20.331861912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.637192923s"
Aug  5 22:33:20.331960 containerd[1444]: time="2024-08-05T22:33:20.331906117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\""
Aug  5 22:33:20.333160 containerd[1444]: time="2024-08-05T22:33:20.332913411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\""
Aug  5 22:33:20.342377 containerd[1444]: time="2024-08-05T22:33:20.342331047Z" level=info msg="CreateContainer within sandbox \"bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Aug  5 22:33:20.377288 systemd-networkd[1375]: cali4e6791847a8: Gained IPv6LL
Aug  5 22:33:20.673444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount46042504.mount: Deactivated successfully.
Aug  5 22:33:20.842105 containerd[1444]: time="2024-08-05T22:33:20.842045221Z" level=info msg="CreateContainer within sandbox \"bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"18df67880dec5a8b46410a78e2c5ffc081abaf538f513a78395ea2abc90fe176\""
Aug  5 22:33:20.844202 containerd[1444]: time="2024-08-05T22:33:20.842907575Z" level=info msg="StartContainer for \"18df67880dec5a8b46410a78e2c5ffc081abaf538f513a78395ea2abc90fe176\""
Aug  5 22:33:20.887401 systemd[1]: Started cri-containerd-18df67880dec5a8b46410a78e2c5ffc081abaf538f513a78395ea2abc90fe176.scope - libcontainer container 18df67880dec5a8b46410a78e2c5ffc081abaf538f513a78395ea2abc90fe176.
Aug  5 22:33:21.121707 containerd[1444]: time="2024-08-05T22:33:21.121556361Z" level=info msg="StartContainer for \"18df67880dec5a8b46410a78e2c5ffc081abaf538f513a78395ea2abc90fe176\" returns successfully"
Aug  5 22:33:21.142051 kubelet[2589]: I0805 22:33:21.141232    2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6668f8dc88-4t4lk" podStartSLOduration=28.502517754 podStartE2EDuration="31.141208101s" podCreationTimestamp="2024-08-05 22:32:50 +0000 UTC" firstStartedPulling="2024-08-05 22:33:17.694099115 +0000 UTC m=+48.694970389" lastFinishedPulling="2024-08-05 22:33:20.332789462 +0000 UTC m=+51.333660736" observedRunningTime="2024-08-05 22:33:21.139083453 +0000 UTC m=+52.139954727" watchObservedRunningTime="2024-08-05 22:33:21.141208101 +0000 UTC m=+52.142079385"
Aug  5 22:33:22.147374 containerd[1444]: time="2024-08-05T22:33:22.147171143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:22.149870 containerd[1444]: time="2024-08-05T22:33:22.149824355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062"
Aug  5 22:33:22.155288 containerd[1444]: time="2024-08-05T22:33:22.152451668Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:22.167837 containerd[1444]: time="2024-08-05T22:33:22.167632864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:22.172606 containerd[1444]: time="2024-08-05T22:33:22.172393087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.839425522s"
Aug  5 22:33:22.172606 containerd[1444]: time="2024-08-05T22:33:22.172540722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\""
Aug  5 22:33:22.181576 containerd[1444]: time="2024-08-05T22:33:22.180759120Z" level=info msg="CreateContainer within sandbox \"659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Aug  5 22:33:22.228479 containerd[1444]: time="2024-08-05T22:33:22.228393265Z" level=info msg="CreateContainer within sandbox \"659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"55506fb50c644f4b2d4e66aeb9bd2731505aa592b2b7cb23c6b050bc0c761766\""
Aug  5 22:33:22.230577 containerd[1444]: time="2024-08-05T22:33:22.230489225Z" level=info msg="StartContainer for \"55506fb50c644f4b2d4e66aeb9bd2731505aa592b2b7cb23c6b050bc0c761766\""
Aug  5 22:33:22.309649 systemd[1]: Started cri-containerd-55506fb50c644f4b2d4e66aeb9bd2731505aa592b2b7cb23c6b050bc0c761766.scope - libcontainer container 55506fb50c644f4b2d4e66aeb9bd2731505aa592b2b7cb23c6b050bc0c761766.
Aug  5 22:33:22.386368 containerd[1444]: time="2024-08-05T22:33:22.386279419Z" level=info msg="StartContainer for \"55506fb50c644f4b2d4e66aeb9bd2731505aa592b2b7cb23c6b050bc0c761766\" returns successfully"
Aug  5 22:33:22.387910 containerd[1444]: time="2024-08-05T22:33:22.387860656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\""
Aug  5 22:33:23.904990 containerd[1444]: time="2024-08-05T22:33:23.904902202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:23.905953 containerd[1444]: time="2024-08-05T22:33:23.905893060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655"
Aug  5 22:33:23.907532 containerd[1444]: time="2024-08-05T22:33:23.907477922Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:23.910229 containerd[1444]: time="2024-08-05T22:33:23.910171950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:23.910841 containerd[1444]: time="2024-08-05T22:33:23.910786103Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.522872305s"
Aug  5 22:33:23.910841 containerd[1444]: time="2024-08-05T22:33:23.910837822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\""
Aug  5 22:33:23.913320 containerd[1444]: time="2024-08-05T22:33:23.913271699Z" level=info msg="CreateContainer within sandbox \"659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug  5 22:33:23.946223 containerd[1444]: time="2024-08-05T22:33:23.946153180Z" level=info msg="CreateContainer within sandbox \"659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f8e9a24c343c2062e7ca33ffc3b5e77d246231f0d88006d1fa3041d7db488502\""
Aug  5 22:33:23.947881 containerd[1444]: time="2024-08-05T22:33:23.947835018Z" level=info msg="StartContainer for \"f8e9a24c343c2062e7ca33ffc3b5e77d246231f0d88006d1fa3041d7db488502\""
Aug  5 22:33:24.003475 systemd[1]: Started cri-containerd-f8e9a24c343c2062e7ca33ffc3b5e77d246231f0d88006d1fa3041d7db488502.scope - libcontainer container f8e9a24c343c2062e7ca33ffc3b5e77d246231f0d88006d1fa3041d7db488502.
Aug  5 22:33:24.043798 containerd[1444]: time="2024-08-05T22:33:24.043752829Z" level=info msg="StartContainer for \"f8e9a24c343c2062e7ca33ffc3b5e77d246231f0d88006d1fa3041d7db488502\" returns successfully"
Aug  5 22:33:24.151988 kubelet[2589]: I0805 22:33:24.151880    2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lnnbx" podStartSLOduration=28.774318363 podStartE2EDuration="34.151848671s" podCreationTimestamp="2024-08-05 22:32:50 +0000 UTC" firstStartedPulling="2024-08-05 22:33:18.534193711 +0000 UTC m=+49.535064985" lastFinishedPulling="2024-08-05 22:33:23.911724009 +0000 UTC m=+54.912595293" observedRunningTime="2024-08-05 22:33:24.149865175 +0000 UTC m=+55.150736469" watchObservedRunningTime="2024-08-05 22:33:24.151848671 +0000 UTC m=+55.152719945"
Aug  5 22:33:24.181765 kubelet[2589]: I0805 22:33:24.181572    2589 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug  5 22:33:24.181765 kubelet[2589]: I0805 22:33:24.181638    2589 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug  5 22:33:24.584811 systemd[1]: Started sshd@15-10.0.0.112:22-10.0.0.1:48736.service - OpenSSH per-connection server daemon (10.0.0.1:48736).
Aug  5 22:33:24.620352 sshd[4615]: Accepted publickey for core from 10.0.0.1 port 48736 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:24.622041 sshd[4615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:24.626777 systemd-logind[1428]: New session 16 of user core.
Aug  5 22:33:24.631252 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug  5 22:33:24.757663 sshd[4615]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:24.761458 systemd[1]: sshd@15-10.0.0.112:22-10.0.0.1:48736.service: Deactivated successfully.
Aug  5 22:33:24.763576 systemd[1]: session-16.scope: Deactivated successfully.
Aug  5 22:33:24.764277 systemd-logind[1428]: Session 16 logged out. Waiting for processes to exit.
Aug  5 22:33:24.765335 systemd-logind[1428]: Removed session 16.
Aug  5 22:33:29.093545 containerd[1444]: time="2024-08-05T22:33:29.093483929Z" level=info msg="StopPodSandbox for \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\""
Aug  5 22:33:29.171765 containerd[1444]: 2024-08-05 22:33:29.132 [WARNING][4647] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lnnbx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39f5cd8b-f47c-400b-a523-7412e6e8f022", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f", Pod:"csi-node-driver-lnnbx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4e6791847a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:29.171765 containerd[1444]: 2024-08-05 22:33:29.132 [INFO][4647] k8s.go 608: Cleaning up netns ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"
Aug  5 22:33:29.171765 containerd[1444]: 2024-08-05 22:33:29.132 [INFO][4647] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" iface="eth0" netns=""
Aug  5 22:33:29.171765 containerd[1444]: 2024-08-05 22:33:29.132 [INFO][4647] k8s.go 615: Releasing IP address(es) ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"
Aug  5 22:33:29.171765 containerd[1444]: 2024-08-05 22:33:29.132 [INFO][4647] utils.go 188: Calico CNI releasing IP address ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"
Aug  5 22:33:29.171765 containerd[1444]: 2024-08-05 22:33:29.160 [INFO][4656] ipam_plugin.go 411: Releasing address using handleID ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" HandleID="k8s-pod-network.c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" Workload="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:29.171765 containerd[1444]: 2024-08-05 22:33:29.160 [INFO][4656] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:29.171765 containerd[1444]: 2024-08-05 22:33:29.160 [INFO][4656] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:29.171765 containerd[1444]: 2024-08-05 22:33:29.165 [WARNING][4656] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" HandleID="k8s-pod-network.c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" Workload="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:29.171765 containerd[1444]: 2024-08-05 22:33:29.165 [INFO][4656] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" HandleID="k8s-pod-network.c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" Workload="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:29.171765 containerd[1444]: 2024-08-05 22:33:29.166 [INFO][4656] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:29.171765 containerd[1444]: 2024-08-05 22:33:29.168 [INFO][4647] k8s.go 621: Teardown processing complete. ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"
Aug  5 22:33:29.172335 containerd[1444]: time="2024-08-05T22:33:29.171803088Z" level=info msg="TearDown network for sandbox \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\" successfully"
Aug  5 22:33:29.172335 containerd[1444]: time="2024-08-05T22:33:29.171833946Z" level=info msg="StopPodSandbox for \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\" returns successfully"
Aug  5 22:33:29.178723 containerd[1444]: time="2024-08-05T22:33:29.178662304Z" level=info msg="RemovePodSandbox for \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\""
Aug  5 22:33:29.191332 containerd[1444]: time="2024-08-05T22:33:29.182372110Z" level=info msg="Forcibly stopping sandbox \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\""
Aug  5 22:33:29.259155 containerd[1444]: 2024-08-05 22:33:29.226 [WARNING][4678] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lnnbx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39f5cd8b-f47c-400b-a523-7412e6e8f022", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"659c65eb71f83a168b3eee49fbb020c045167581e792e7513a6377b02c7af00f", Pod:"csi-node-driver-lnnbx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4e6791847a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:29.259155 containerd[1444]: 2024-08-05 22:33:29.226 [INFO][4678] k8s.go 608: Cleaning up netns ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"
Aug  5 22:33:29.259155 containerd[1444]: 2024-08-05 22:33:29.226 [INFO][4678] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" iface="eth0" netns=""
Aug  5 22:33:29.259155 containerd[1444]: 2024-08-05 22:33:29.226 [INFO][4678] k8s.go 615: Releasing IP address(es) ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"
Aug  5 22:33:29.259155 containerd[1444]: 2024-08-05 22:33:29.226 [INFO][4678] utils.go 188: Calico CNI releasing IP address ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"
Aug  5 22:33:29.259155 containerd[1444]: 2024-08-05 22:33:29.247 [INFO][4686] ipam_plugin.go 411: Releasing address using handleID ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" HandleID="k8s-pod-network.c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" Workload="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:29.259155 containerd[1444]: 2024-08-05 22:33:29.247 [INFO][4686] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:29.259155 containerd[1444]: 2024-08-05 22:33:29.248 [INFO][4686] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:29.259155 containerd[1444]: 2024-08-05 22:33:29.252 [WARNING][4686] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" HandleID="k8s-pod-network.c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" Workload="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:29.259155 containerd[1444]: 2024-08-05 22:33:29.252 [INFO][4686] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" HandleID="k8s-pod-network.c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213" Workload="localhost-k8s-csi--node--driver--lnnbx-eth0"
Aug  5 22:33:29.259155 containerd[1444]: 2024-08-05 22:33:29.253 [INFO][4686] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:29.259155 containerd[1444]: 2024-08-05 22:33:29.256 [INFO][4678] k8s.go 621: Teardown processing complete. ContainerID="c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213"
Aug  5 22:33:29.259155 containerd[1444]: time="2024-08-05T22:33:29.258474776Z" level=info msg="TearDown network for sandbox \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\" successfully"
Aug  5 22:33:29.449358 containerd[1444]: time="2024-08-05T22:33:29.449288484Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug  5 22:33:29.449510 containerd[1444]: time="2024-08-05T22:33:29.449393246Z" level=info msg="RemovePodSandbox \"c289467c18c6da52a52a5a9e04700d6854fb044095b08e08f1178c01e74c6213\" returns successfully"
Aug  5 22:33:29.450057 containerd[1444]: time="2024-08-05T22:33:29.450011862Z" level=info msg="StopPodSandbox for \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\""
Aug  5 22:33:29.522683 containerd[1444]: 2024-08-05 22:33:29.481 [WARNING][4710] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--h9546-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"89476694-84bf-42c4-a686-21517cd48dc0", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385", Pod:"coredns-7db6d8ff4d-h9546", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8e02e1f1d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:29.522683 containerd[1444]: 2024-08-05 22:33:29.481 [INFO][4710] k8s.go 608: Cleaning up netns ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"
Aug  5 22:33:29.522683 containerd[1444]: 2024-08-05 22:33:29.481 [INFO][4710] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" iface="eth0" netns=""
Aug  5 22:33:29.522683 containerd[1444]: 2024-08-05 22:33:29.481 [INFO][4710] k8s.go 615: Releasing IP address(es) ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"
Aug  5 22:33:29.522683 containerd[1444]: 2024-08-05 22:33:29.481 [INFO][4710] utils.go 188: Calico CNI releasing IP address ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"
Aug  5 22:33:29.522683 containerd[1444]: 2024-08-05 22:33:29.509 [INFO][4717] ipam_plugin.go 411: Releasing address using handleID ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" HandleID="k8s-pod-network.6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" Workload="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:29.522683 containerd[1444]: 2024-08-05 22:33:29.509 [INFO][4717] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:29.522683 containerd[1444]: 2024-08-05 22:33:29.509 [INFO][4717] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:29.522683 containerd[1444]: 2024-08-05 22:33:29.516 [WARNING][4717] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" HandleID="k8s-pod-network.6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" Workload="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:29.522683 containerd[1444]: 2024-08-05 22:33:29.516 [INFO][4717] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" HandleID="k8s-pod-network.6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" Workload="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:29.522683 containerd[1444]: 2024-08-05 22:33:29.518 [INFO][4717] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:29.522683 containerd[1444]: 2024-08-05 22:33:29.520 [INFO][4710] k8s.go 621: Teardown processing complete. ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"
Aug  5 22:33:29.523291 containerd[1444]: time="2024-08-05T22:33:29.522729596Z" level=info msg="TearDown network for sandbox \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\" successfully"
Aug  5 22:33:29.523291 containerd[1444]: time="2024-08-05T22:33:29.522756497Z" level=info msg="StopPodSandbox for \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\" returns successfully"
Aug  5 22:33:29.523291 containerd[1444]: time="2024-08-05T22:33:29.523242550Z" level=info msg="RemovePodSandbox for \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\""
Aug  5 22:33:29.523291 containerd[1444]: time="2024-08-05T22:33:29.523280001Z" level=info msg="Forcibly stopping sandbox \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\""
Aug  5 22:33:29.624497 containerd[1444]: 2024-08-05 22:33:29.581 [WARNING][4740] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--h9546-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"89476694-84bf-42c4-a686-21517cd48dc0", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"673c1baeeb42c030ac8a874729ce3870d31bc8c8ee0218409ff08e9cc6202385", Pod:"coredns-7db6d8ff4d-h9546", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8e02e1f1d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:29.624497 containerd[1444]: 2024-08-05 22:33:29.581 [INFO][4740] k8s.go 608: Cleaning up netns ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"
Aug  5 22:33:29.624497 containerd[1444]: 2024-08-05 22:33:29.581 [INFO][4740] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" iface="eth0" netns=""
Aug  5 22:33:29.624497 containerd[1444]: 2024-08-05 22:33:29.581 [INFO][4740] k8s.go 615: Releasing IP address(es) ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"
Aug  5 22:33:29.624497 containerd[1444]: 2024-08-05 22:33:29.581 [INFO][4740] utils.go 188: Calico CNI releasing IP address ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"
Aug  5 22:33:29.624497 containerd[1444]: 2024-08-05 22:33:29.612 [INFO][4748] ipam_plugin.go 411: Releasing address using handleID ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" HandleID="k8s-pod-network.6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" Workload="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:29.624497 containerd[1444]: 2024-08-05 22:33:29.612 [INFO][4748] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:29.624497 containerd[1444]: 2024-08-05 22:33:29.612 [INFO][4748] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:29.624497 containerd[1444]: 2024-08-05 22:33:29.617 [WARNING][4748] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" HandleID="k8s-pod-network.6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" Workload="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:29.624497 containerd[1444]: 2024-08-05 22:33:29.617 [INFO][4748] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" HandleID="k8s-pod-network.6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214" Workload="localhost-k8s-coredns--7db6d8ff4d--h9546-eth0"
Aug  5 22:33:29.624497 containerd[1444]: 2024-08-05 22:33:29.619 [INFO][4748] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:29.624497 containerd[1444]: 2024-08-05 22:33:29.622 [INFO][4740] k8s.go 621: Teardown processing complete. ContainerID="6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214"
Aug  5 22:33:29.624953 containerd[1444]: time="2024-08-05T22:33:29.624590319Z" level=info msg="TearDown network for sandbox \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\" successfully"
Aug  5 22:33:29.629135 containerd[1444]: time="2024-08-05T22:33:29.629075582Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug  5 22:33:29.629297 containerd[1444]: time="2024-08-05T22:33:29.629264836Z" level=info msg="RemovePodSandbox \"6bcbec95e2bd3d0d8677ffc40e054a76bf50027a288c6e4882bc268f67a71214\" returns successfully"
Aug  5 22:33:29.629810 containerd[1444]: time="2024-08-05T22:33:29.629776187Z" level=info msg="StopPodSandbox for \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\""
Aug  5 22:33:29.713262 containerd[1444]: 2024-08-05 22:33:29.665 [WARNING][4770] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0", GenerateName:"calico-kube-controllers-6668f8dc88-", Namespace:"calico-system", SelfLink:"", UID:"57fa67d2-700f-4c57-9da6-ee6cbe4fdfef", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6668f8dc88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84", Pod:"calico-kube-controllers-6668f8dc88-4t4lk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic45995488d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:29.713262 containerd[1444]: 2024-08-05 22:33:29.665 [INFO][4770] k8s.go 608: Cleaning up netns ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"
Aug  5 22:33:29.713262 containerd[1444]: 2024-08-05 22:33:29.666 [INFO][4770] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" iface="eth0" netns=""
Aug  5 22:33:29.713262 containerd[1444]: 2024-08-05 22:33:29.666 [INFO][4770] k8s.go 615: Releasing IP address(es) ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"
Aug  5 22:33:29.713262 containerd[1444]: 2024-08-05 22:33:29.666 [INFO][4770] utils.go 188: Calico CNI releasing IP address ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"
Aug  5 22:33:29.713262 containerd[1444]: 2024-08-05 22:33:29.695 [INFO][4778] ipam_plugin.go 411: Releasing address using handleID ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" HandleID="k8s-pod-network.faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" Workload="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:29.713262 containerd[1444]: 2024-08-05 22:33:29.695 [INFO][4778] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:29.713262 containerd[1444]: 2024-08-05 22:33:29.695 [INFO][4778] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:29.713262 containerd[1444]: 2024-08-05 22:33:29.704 [WARNING][4778] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" HandleID="k8s-pod-network.faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" Workload="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:29.713262 containerd[1444]: 2024-08-05 22:33:29.704 [INFO][4778] ipam_plugin.go 439: Releasing address using workloadID ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" HandleID="k8s-pod-network.faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" Workload="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:29.713262 containerd[1444]: 2024-08-05 22:33:29.707 [INFO][4778] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:29.713262 containerd[1444]: 2024-08-05 22:33:29.709 [INFO][4770] k8s.go 621: Teardown processing complete. ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"
Aug  5 22:33:29.713989 containerd[1444]: time="2024-08-05T22:33:29.713274228Z" level=info msg="TearDown network for sandbox \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\" successfully"
Aug  5 22:33:29.713989 containerd[1444]: time="2024-08-05T22:33:29.713307302Z" level=info msg="StopPodSandbox for \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\" returns successfully"
Aug  5 22:33:29.714068 containerd[1444]: time="2024-08-05T22:33:29.714007124Z" level=info msg="RemovePodSandbox for \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\""
Aug  5 22:33:29.714068 containerd[1444]: time="2024-08-05T22:33:29.714037041Z" level=info msg="Forcibly stopping sandbox \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\""
Aug  5 22:33:29.769176 systemd[1]: Started sshd@16-10.0.0.112:22-10.0.0.1:48738.service - OpenSSH per-connection server daemon (10.0.0.1:48738).
Aug  5 22:33:29.807292 containerd[1444]: 2024-08-05 22:33:29.759 [WARNING][4800] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0", GenerateName:"calico-kube-controllers-6668f8dc88-", Namespace:"calico-system", SelfLink:"", UID:"57fa67d2-700f-4c57-9da6-ee6cbe4fdfef", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6668f8dc88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdd168db1c0f661002f62a66ef940403e5c28c4267d431ddb65bdf576be1dd84", Pod:"calico-kube-controllers-6668f8dc88-4t4lk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic45995488d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:29.807292 containerd[1444]: 2024-08-05 22:33:29.760 [INFO][4800] k8s.go 608: Cleaning up netns ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"
Aug  5 22:33:29.807292 containerd[1444]: 2024-08-05 22:33:29.760 [INFO][4800] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" iface="eth0" netns=""
Aug  5 22:33:29.807292 containerd[1444]: 2024-08-05 22:33:29.760 [INFO][4800] k8s.go 615: Releasing IP address(es) ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"
Aug  5 22:33:29.807292 containerd[1444]: 2024-08-05 22:33:29.760 [INFO][4800] utils.go 188: Calico CNI releasing IP address ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"
Aug  5 22:33:29.807292 containerd[1444]: 2024-08-05 22:33:29.793 [INFO][4808] ipam_plugin.go 411: Releasing address using handleID ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" HandleID="k8s-pod-network.faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" Workload="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:29.807292 containerd[1444]: 2024-08-05 22:33:29.793 [INFO][4808] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:29.807292 containerd[1444]: 2024-08-05 22:33:29.794 [INFO][4808] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:29.807292 containerd[1444]: 2024-08-05 22:33:29.798 [WARNING][4808] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" HandleID="k8s-pod-network.faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" Workload="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:29.807292 containerd[1444]: 2024-08-05 22:33:29.798 [INFO][4808] ipam_plugin.go 439: Releasing address using workloadID ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" HandleID="k8s-pod-network.faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582" Workload="localhost-k8s-calico--kube--controllers--6668f8dc88--4t4lk-eth0"
Aug  5 22:33:29.807292 containerd[1444]: 2024-08-05 22:33:29.799 [INFO][4808] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:29.807292 containerd[1444]: 2024-08-05 22:33:29.804 [INFO][4800] k8s.go 621: Teardown processing complete. ContainerID="faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582"
Aug  5 22:33:29.808063 containerd[1444]: time="2024-08-05T22:33:29.807345881Z" level=info msg="TearDown network for sandbox \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\" successfully"
Aug  5 22:33:29.812377 containerd[1444]: time="2024-08-05T22:33:29.811703680Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug  5 22:33:29.812377 containerd[1444]: time="2024-08-05T22:33:29.811781619Z" level=info msg="RemovePodSandbox \"faae0d9ecaa71c1970ca3aa1e042569116f0c0d770d61ab573f409eff28f9582\" returns successfully"
Aug  5 22:33:29.812377 containerd[1444]: time="2024-08-05T22:33:29.812328659Z" level=info msg="StopPodSandbox for \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\""
Aug  5 22:33:29.812563 sshd[4814]: Accepted publickey for core from 10.0.0.1 port 48738 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:29.814622 sshd[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:29.821221 systemd-logind[1428]: New session 17 of user core.
Aug  5 22:33:29.826264 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug  5 22:33:29.899330 containerd[1444]: 2024-08-05 22:33:29.853 [WARNING][4833] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"73a44b70-815b-476a-b6da-63de43927fa6", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1", Pod:"coredns-7db6d8ff4d-clgfp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida249170c2f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:29.899330 containerd[1444]: 2024-08-05 22:33:29.853 [INFO][4833] k8s.go 608: Cleaning up netns ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"
Aug  5 22:33:29.899330 containerd[1444]: 2024-08-05 22:33:29.853 [INFO][4833] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" iface="eth0" netns=""
Aug  5 22:33:29.899330 containerd[1444]: 2024-08-05 22:33:29.853 [INFO][4833] k8s.go 615: Releasing IP address(es) ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"
Aug  5 22:33:29.899330 containerd[1444]: 2024-08-05 22:33:29.853 [INFO][4833] utils.go 188: Calico CNI releasing IP address ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"
Aug  5 22:33:29.899330 containerd[1444]: 2024-08-05 22:33:29.877 [INFO][4842] ipam_plugin.go 411: Releasing address using handleID ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" HandleID="k8s-pod-network.756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" Workload="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:29.899330 containerd[1444]: 2024-08-05 22:33:29.877 [INFO][4842] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:29.899330 containerd[1444]: 2024-08-05 22:33:29.878 [INFO][4842] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:29.899330 containerd[1444]: 2024-08-05 22:33:29.885 [WARNING][4842] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" HandleID="k8s-pod-network.756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" Workload="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:29.899330 containerd[1444]: 2024-08-05 22:33:29.885 [INFO][4842] ipam_plugin.go 439: Releasing address using workloadID ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" HandleID="k8s-pod-network.756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" Workload="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:29.899330 containerd[1444]: 2024-08-05 22:33:29.891 [INFO][4842] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:29.899330 containerd[1444]: 2024-08-05 22:33:29.895 [INFO][4833] k8s.go 621: Teardown processing complete. ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"
Aug  5 22:33:29.899745 containerd[1444]: time="2024-08-05T22:33:29.899391548Z" level=info msg="TearDown network for sandbox \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\" successfully"
Aug  5 22:33:29.899745 containerd[1444]: time="2024-08-05T22:33:29.899429050Z" level=info msg="StopPodSandbox for \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\" returns successfully"
Aug  5 22:33:29.900177 containerd[1444]: time="2024-08-05T22:33:29.900147097Z" level=info msg="RemovePodSandbox for \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\""
Aug  5 22:33:29.900209 containerd[1444]: time="2024-08-05T22:33:29.900187595Z" level=info msg="Forcibly stopping sandbox \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\""
Aug  5 22:33:29.982761 sshd[4814]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:29.992192 systemd[1]: sshd@16-10.0.0.112:22-10.0.0.1:48738.service: Deactivated successfully.
Aug  5 22:33:29.993683 containerd[1444]: 2024-08-05 22:33:29.949 [WARNING][4873] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"73a44b70-815b-476a-b6da-63de43927fa6", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 32, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f157967227dba85af714d47aa733bb5f265d0193884b02088be679aaee4f53e1", Pod:"coredns-7db6d8ff4d-clgfp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida249170c2f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:29.993683 containerd[1444]: 2024-08-05 22:33:29.949 [INFO][4873] k8s.go 608: Cleaning up netns ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"
Aug  5 22:33:29.993683 containerd[1444]: 2024-08-05 22:33:29.949 [INFO][4873] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" iface="eth0" netns=""
Aug  5 22:33:29.993683 containerd[1444]: 2024-08-05 22:33:29.950 [INFO][4873] k8s.go 615: Releasing IP address(es) ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"
Aug  5 22:33:29.993683 containerd[1444]: 2024-08-05 22:33:29.950 [INFO][4873] utils.go 188: Calico CNI releasing IP address ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"
Aug  5 22:33:29.993683 containerd[1444]: 2024-08-05 22:33:29.980 [INFO][4889] ipam_plugin.go 411: Releasing address using handleID ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" HandleID="k8s-pod-network.756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" Workload="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:29.993683 containerd[1444]: 2024-08-05 22:33:29.980 [INFO][4889] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:29.993683 containerd[1444]: 2024-08-05 22:33:29.980 [INFO][4889] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:29.993683 containerd[1444]: 2024-08-05 22:33:29.986 [WARNING][4889] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" HandleID="k8s-pod-network.756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" Workload="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:29.993683 containerd[1444]: 2024-08-05 22:33:29.986 [INFO][4889] ipam_plugin.go 439: Releasing address using workloadID ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" HandleID="k8s-pod-network.756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe" Workload="localhost-k8s-coredns--7db6d8ff4d--clgfp-eth0"
Aug  5 22:33:29.993683 containerd[1444]: 2024-08-05 22:33:29.987 [INFO][4889] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:29.993683 containerd[1444]: 2024-08-05 22:33:29.990 [INFO][4873] k8s.go 621: Teardown processing complete. ContainerID="756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe"
Aug  5 22:33:29.994234 containerd[1444]: time="2024-08-05T22:33:29.993759751Z" level=info msg="TearDown network for sandbox \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\" successfully"
Aug  5 22:33:29.994367 systemd[1]: session-17.scope: Deactivated successfully.
Aug  5 22:33:29.995531 systemd-logind[1428]: Session 17 logged out. Waiting for processes to exit.
Aug  5 22:33:29.998317 containerd[1444]: time="2024-08-05T22:33:29.998272667Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug  5 22:33:29.998481 containerd[1444]: time="2024-08-05T22:33:29.998356808Z" level=info msg="RemovePodSandbox \"756029d0b7c071e62331cf3a60bb57630cdb6db6dc922431da71150eb0f20ffe\" returns successfully"
Aug  5 22:33:30.004810 systemd[1]: Started sshd@17-10.0.0.112:22-10.0.0.1:48750.service - OpenSSH per-connection server daemon (10.0.0.1:48750).
Aug  5 22:33:30.006471 systemd-logind[1428]: Removed session 17.
Aug  5 22:33:30.036741 sshd[4899]: Accepted publickey for core from 10.0.0.1 port 48750 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:30.038471 sshd[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:30.042601 systemd-logind[1428]: New session 18 of user core.
Aug  5 22:33:30.052276 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug  5 22:33:30.258106 sshd[4899]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:30.268555 systemd[1]: sshd@17-10.0.0.112:22-10.0.0.1:48750.service: Deactivated successfully.
Aug  5 22:33:30.270434 systemd[1]: session-18.scope: Deactivated successfully.
Aug  5 22:33:30.272083 systemd-logind[1428]: Session 18 logged out. Waiting for processes to exit.
Aug  5 22:33:30.278420 systemd[1]: Started sshd@18-10.0.0.112:22-10.0.0.1:48758.service - OpenSSH per-connection server daemon (10.0.0.1:48758).
Aug  5 22:33:30.279438 systemd-logind[1428]: Removed session 18.
Aug  5 22:33:30.310955 sshd[4911]: Accepted publickey for core from 10.0.0.1 port 48758 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:30.312595 sshd[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:30.317195 systemd-logind[1428]: New session 19 of user core.
Aug  5 22:33:30.330283 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug  5 22:33:31.962182 sshd[4911]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:31.979431 systemd[1]: sshd@18-10.0.0.112:22-10.0.0.1:48758.service: Deactivated successfully.
Aug  5 22:33:31.981904 systemd[1]: session-19.scope: Deactivated successfully.
Aug  5 22:33:31.985314 systemd-logind[1428]: Session 19 logged out. Waiting for processes to exit.
Aug  5 22:33:31.994663 systemd[1]: Started sshd@19-10.0.0.112:22-10.0.0.1:47818.service - OpenSSH per-connection server daemon (10.0.0.1:47818).
Aug  5 22:33:31.996665 systemd-logind[1428]: Removed session 19.
Aug  5 22:33:32.030272 sshd[4952]: Accepted publickey for core from 10.0.0.1 port 47818 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:32.031977 sshd[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:32.036424 systemd-logind[1428]: New session 20 of user core.
Aug  5 22:33:32.043301 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug  5 22:33:32.451702 sshd[4952]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:32.463672 systemd[1]: sshd@19-10.0.0.112:22-10.0.0.1:47818.service: Deactivated successfully.
Aug  5 22:33:32.465943 systemd[1]: session-20.scope: Deactivated successfully.
Aug  5 22:33:32.468012 systemd-logind[1428]: Session 20 logged out. Waiting for processes to exit.
Aug  5 22:33:32.475576 systemd[1]: Started sshd@20-10.0.0.112:22-10.0.0.1:47834.service - OpenSSH per-connection server daemon (10.0.0.1:47834).
Aug  5 22:33:32.476876 systemd-logind[1428]: Removed session 20.
Aug  5 22:33:32.503480 sshd[4969]: Accepted publickey for core from 10.0.0.1 port 47834 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:32.505484 sshd[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:32.510466 systemd-logind[1428]: New session 21 of user core.
Aug  5 22:33:32.520286 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug  5 22:33:32.642220 sshd[4969]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:32.647943 systemd[1]: sshd@20-10.0.0.112:22-10.0.0.1:47834.service: Deactivated successfully.
Aug  5 22:33:32.651299 systemd[1]: session-21.scope: Deactivated successfully.
Aug  5 22:33:32.652128 systemd-logind[1428]: Session 21 logged out. Waiting for processes to exit.
Aug  5 22:33:32.653641 systemd-logind[1428]: Removed session 21.
Aug  5 22:33:35.516994 kubelet[2589]: E0805 22:33:35.516947    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:37.653581 systemd[1]: Started sshd@21-10.0.0.112:22-10.0.0.1:47836.service - OpenSSH per-connection server daemon (10.0.0.1:47836).
Aug  5 22:33:37.684808 sshd[5004]: Accepted publickey for core from 10.0.0.1 port 47836 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:37.686484 sshd[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:37.690742 systemd-logind[1428]: New session 22 of user core.
Aug  5 22:33:37.704359 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug  5 22:33:37.818558 sshd[5004]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:37.822953 systemd[1]: sshd@21-10.0.0.112:22-10.0.0.1:47836.service: Deactivated successfully.
Aug  5 22:33:37.825151 systemd[1]: session-22.scope: Deactivated successfully.
Aug  5 22:33:37.825960 systemd-logind[1428]: Session 22 logged out. Waiting for processes to exit.
Aug  5 22:33:37.826958 systemd-logind[1428]: Removed session 22.
Aug  5 22:33:40.129268 kubelet[2589]: I0805 22:33:40.129197    2589 topology_manager.go:215] "Topology Admit Handler" podUID="3b08d14b-4089-4293-b6c7-457168019d9a" podNamespace="calico-apiserver" podName="calico-apiserver-86d8687b98-hfznx"
Aug  5 22:33:40.142243 systemd[1]: Created slice kubepods-besteffort-pod3b08d14b_4089_4293_b6c7_457168019d9a.slice - libcontainer container kubepods-besteffort-pod3b08d14b_4089_4293_b6c7_457168019d9a.slice.
Aug  5 22:33:40.251044 kubelet[2589]: I0805 22:33:40.250975    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl9gg\" (UniqueName: \"kubernetes.io/projected/3b08d14b-4089-4293-b6c7-457168019d9a-kube-api-access-fl9gg\") pod \"calico-apiserver-86d8687b98-hfznx\" (UID: \"3b08d14b-4089-4293-b6c7-457168019d9a\") " pod="calico-apiserver/calico-apiserver-86d8687b98-hfznx"
Aug  5 22:33:40.251044 kubelet[2589]: I0805 22:33:40.251044    2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3b08d14b-4089-4293-b6c7-457168019d9a-calico-apiserver-certs\") pod \"calico-apiserver-86d8687b98-hfznx\" (UID: \"3b08d14b-4089-4293-b6c7-457168019d9a\") " pod="calico-apiserver/calico-apiserver-86d8687b98-hfznx"
Aug  5 22:33:40.355275 kubelet[2589]: E0805 22:33:40.355183    2589 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Aug  5 22:33:40.355485 kubelet[2589]: E0805 22:33:40.355315    2589 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b08d14b-4089-4293-b6c7-457168019d9a-calico-apiserver-certs podName:3b08d14b-4089-4293-b6c7-457168019d9a nodeName:}" failed. No retries permitted until 2024-08-05 22:33:40.855280741 +0000 UTC m=+71.856152015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/3b08d14b-4089-4293-b6c7-457168019d9a-calico-apiserver-certs") pod "calico-apiserver-86d8687b98-hfznx" (UID: "3b08d14b-4089-4293-b6c7-457168019d9a") : secret "calico-apiserver-certs" not found
Aug  5 22:33:41.047520 containerd[1444]: time="2024-08-05T22:33:41.047472555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d8687b98-hfznx,Uid:3b08d14b-4089-4293-b6c7-457168019d9a,Namespace:calico-apiserver,Attempt:0,}"
Aug  5 22:33:41.103992 kubelet[2589]: E0805 22:33:41.103180    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:41.169673 systemd-networkd[1375]: calid6b8c00a114: Link UP
Aug  5 22:33:41.170995 systemd-networkd[1375]: calid6b8c00a114: Gained carrier
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.097 [INFO][5031] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86d8687b98--hfznx-eth0 calico-apiserver-86d8687b98- calico-apiserver  3b08d14b-4089-4293-b6c7-457168019d9a 1110 0 2024-08-05 22:33:40 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86d8687b98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  localhost  calico-apiserver-86d8687b98-hfznx eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid6b8c00a114  [] []}} ContainerID="640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" Namespace="calico-apiserver" Pod="calico-apiserver-86d8687b98-hfznx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d8687b98--hfznx-"
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.097 [INFO][5031] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" Namespace="calico-apiserver" Pod="calico-apiserver-86d8687b98-hfznx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d8687b98--hfznx-eth0"
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.129 [INFO][5044] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" HandleID="k8s-pod-network.640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" Workload="localhost-k8s-calico--apiserver--86d8687b98--hfznx-eth0"
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.136 [INFO][5044] ipam_plugin.go 264: Auto assigning IP ContainerID="640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" HandleID="k8s-pod-network.640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" Workload="localhost-k8s-calico--apiserver--86d8687b98--hfznx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000281e80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86d8687b98-hfznx", "timestamp":"2024-08-05 22:33:41.129722214 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.137 [INFO][5044] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.137 [INFO][5044] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.137 [INFO][5044] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.138 [INFO][5044] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" host="localhost"
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.142 [INFO][5044] ipam.go 372: Looking up existing affinities for host host="localhost"
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.147 [INFO][5044] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.149 [INFO][5044] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.151 [INFO][5044] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.152 [INFO][5044] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" host="localhost"
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.154 [INFO][5044] ipam.go 1685: Creating new handle: k8s-pod-network.640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.158 [INFO][5044] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" host="localhost"
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.163 [INFO][5044] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" host="localhost"
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.163 [INFO][5044] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" host="localhost"
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.163 [INFO][5044] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug  5 22:33:41.191238 containerd[1444]: 2024-08-05 22:33:41.163 [INFO][5044] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" HandleID="k8s-pod-network.640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" Workload="localhost-k8s-calico--apiserver--86d8687b98--hfznx-eth0"
Aug  5 22:33:41.191895 containerd[1444]: 2024-08-05 22:33:41.167 [INFO][5031] k8s.go 386: Populated endpoint ContainerID="640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" Namespace="calico-apiserver" Pod="calico-apiserver-86d8687b98-hfznx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d8687b98--hfznx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86d8687b98--hfznx-eth0", GenerateName:"calico-apiserver-86d8687b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b08d14b-4089-4293-b6c7-457168019d9a", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 33, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86d8687b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86d8687b98-hfznx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid6b8c00a114", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:41.191895 containerd[1444]: 2024-08-05 22:33:41.167 [INFO][5031] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" Namespace="calico-apiserver" Pod="calico-apiserver-86d8687b98-hfznx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d8687b98--hfznx-eth0"
Aug  5 22:33:41.191895 containerd[1444]: 2024-08-05 22:33:41.167 [INFO][5031] dataplane_linux.go 68: Setting the host side veth name to calid6b8c00a114 ContainerID="640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" Namespace="calico-apiserver" Pod="calico-apiserver-86d8687b98-hfznx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d8687b98--hfznx-eth0"
Aug  5 22:33:41.191895 containerd[1444]: 2024-08-05 22:33:41.171 [INFO][5031] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" Namespace="calico-apiserver" Pod="calico-apiserver-86d8687b98-hfznx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d8687b98--hfznx-eth0"
Aug  5 22:33:41.191895 containerd[1444]: 2024-08-05 22:33:41.172 [INFO][5031] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" Namespace="calico-apiserver" Pod="calico-apiserver-86d8687b98-hfznx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d8687b98--hfznx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86d8687b98--hfznx-eth0", GenerateName:"calico-apiserver-86d8687b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b08d14b-4089-4293-b6c7-457168019d9a", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 33, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86d8687b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24", Pod:"calico-apiserver-86d8687b98-hfznx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid6b8c00a114", MAC:"1e:75:f4:cf:02:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug  5 22:33:41.191895 containerd[1444]: 2024-08-05 22:33:41.180 [INFO][5031] k8s.go 500: Wrote updated endpoint to datastore ContainerID="640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24" Namespace="calico-apiserver" Pod="calico-apiserver-86d8687b98-hfznx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d8687b98--hfznx-eth0"
Aug  5 22:33:41.220527 containerd[1444]: time="2024-08-05T22:33:41.220314610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug  5 22:33:41.220729 containerd[1444]: time="2024-08-05T22:33:41.220487811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:33:41.220729 containerd[1444]: time="2024-08-05T22:33:41.220525212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug  5 22:33:41.220729 containerd[1444]: time="2024-08-05T22:33:41.220543627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug  5 22:33:41.246526 systemd[1]: Started cri-containerd-640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24.scope - libcontainer container 640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24.
Aug  5 22:33:41.262608 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug  5 22:33:41.307909 containerd[1444]: time="2024-08-05T22:33:41.307690284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d8687b98-hfznx,Uid:3b08d14b-4089-4293-b6c7-457168019d9a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24\""
Aug  5 22:33:41.309945 containerd[1444]: time="2024-08-05T22:33:41.309860311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Aug  5 22:33:42.776362 systemd-networkd[1375]: calid6b8c00a114: Gained IPv6LL
Aug  5 22:33:42.833927 systemd[1]: Started sshd@22-10.0.0.112:22-10.0.0.1:54458.service - OpenSSH per-connection server daemon (10.0.0.1:54458).
Aug  5 22:33:42.880007 sshd[5112]: Accepted publickey for core from 10.0.0.1 port 54458 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:42.881961 sshd[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:42.888043 systemd-logind[1428]: New session 23 of user core.
Aug  5 22:33:42.896411 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug  5 22:33:43.023922 sshd[5112]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:43.028960 systemd[1]: sshd@22-10.0.0.112:22-10.0.0.1:54458.service: Deactivated successfully.
Aug  5 22:33:43.031078 systemd[1]: session-23.scope: Deactivated successfully.
Aug  5 22:33:43.031875 systemd-logind[1428]: Session 23 logged out. Waiting for processes to exit.
Aug  5 22:33:43.032993 systemd-logind[1428]: Removed session 23.
Aug  5 22:33:43.951310 containerd[1444]: time="2024-08-05T22:33:43.951242255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:43.952236 containerd[1444]: time="2024-08-05T22:33:43.952150785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Aug  5 22:33:43.953405 containerd[1444]: time="2024-08-05T22:33:43.953361031Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:43.956182 containerd[1444]: time="2024-08-05T22:33:43.956147700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Aug  5 22:33:43.957104 containerd[1444]: time="2024-08-05T22:33:43.957063736Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 2.647139934s"
Aug  5 22:33:43.957166 containerd[1444]: time="2024-08-05T22:33:43.957105024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Aug  5 22:33:43.960482 containerd[1444]: time="2024-08-05T22:33:43.960404470Z" level=info msg="CreateContainer within sandbox \"640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug  5 22:33:43.977798 containerd[1444]: time="2024-08-05T22:33:43.977731813Z" level=info msg="CreateContainer within sandbox \"640ccc2d42899170710d2dc2bc62c270b8d081832c58072297d6175bb1d74d24\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b3e91bcaf93d5c734784f0ee35d1c8ecbd63bf02acb1a74c3d5a4e9171928da1\""
Aug  5 22:33:43.979300 containerd[1444]: time="2024-08-05T22:33:43.979266316Z" level=info msg="StartContainer for \"b3e91bcaf93d5c734784f0ee35d1c8ecbd63bf02acb1a74c3d5a4e9171928da1\""
Aug  5 22:33:44.032265 systemd[1]: run-containerd-runc-k8s.io-b3e91bcaf93d5c734784f0ee35d1c8ecbd63bf02acb1a74c3d5a4e9171928da1-runc.b0O7IU.mount: Deactivated successfully.
Aug  5 22:33:44.047292 systemd[1]: Started cri-containerd-b3e91bcaf93d5c734784f0ee35d1c8ecbd63bf02acb1a74c3d5a4e9171928da1.scope - libcontainer container b3e91bcaf93d5c734784f0ee35d1c8ecbd63bf02acb1a74c3d5a4e9171928da1.
Aug  5 22:33:44.101354 containerd[1444]: time="2024-08-05T22:33:44.101310377Z" level=info msg="StartContainer for \"b3e91bcaf93d5c734784f0ee35d1c8ecbd63bf02acb1a74c3d5a4e9171928da1\" returns successfully"
Aug  5 22:33:44.233579 kubelet[2589]: I0805 22:33:44.233408    2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86d8687b98-hfznx" podStartSLOduration=1.5848654720000002 podStartE2EDuration="4.233384774s" podCreationTimestamp="2024-08-05 22:33:40 +0000 UTC" firstStartedPulling="2024-08-05 22:33:41.309474806 +0000 UTC m=+72.310346080" lastFinishedPulling="2024-08-05 22:33:43.957994108 +0000 UTC m=+74.958865382" observedRunningTime="2024-08-05 22:33:44.233103778 +0000 UTC m=+75.233975052" watchObservedRunningTime="2024-08-05 22:33:44.233384774 +0000 UTC m=+75.234256058"
Aug  5 22:33:48.041502 systemd[1]: Started sshd@23-10.0.0.112:22-10.0.0.1:54472.service - OpenSSH per-connection server daemon (10.0.0.1:54472).
Aug  5 22:33:48.075363 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 54472 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:48.077401 sshd[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:48.081993 systemd-logind[1428]: New session 24 of user core.
Aug  5 22:33:48.097426 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug  5 22:33:48.103131 kubelet[2589]: E0805 22:33:48.103049    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug  5 22:33:48.222659 sshd[5186]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:48.226737 systemd[1]: sshd@23-10.0.0.112:22-10.0.0.1:54472.service: Deactivated successfully.
Aug  5 22:33:48.229112 systemd[1]: session-24.scope: Deactivated successfully.
Aug  5 22:33:48.229862 systemd-logind[1428]: Session 24 logged out. Waiting for processes to exit.
Aug  5 22:33:48.230926 systemd-logind[1428]: Removed session 24.
Aug  5 22:33:53.236608 systemd[1]: Started sshd@24-10.0.0.112:22-10.0.0.1:39758.service - OpenSSH per-connection server daemon (10.0.0.1:39758).
Aug  5 22:33:53.273600 sshd[5214]: Accepted publickey for core from 10.0.0.1 port 39758 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:53.276021 sshd[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:53.281689 systemd-logind[1428]: New session 25 of user core.
Aug  5 22:33:53.288368 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug  5 22:33:53.416855 sshd[5214]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:53.421628 systemd[1]: sshd@24-10.0.0.112:22-10.0.0.1:39758.service: Deactivated successfully.
Aug  5 22:33:53.423936 systemd[1]: session-25.scope: Deactivated successfully.
Aug  5 22:33:53.424706 systemd-logind[1428]: Session 25 logged out. Waiting for processes to exit.
Aug  5 22:33:53.425564 systemd-logind[1428]: Removed session 25.
Aug  5 22:33:58.434144 systemd[1]: Started sshd@25-10.0.0.112:22-10.0.0.1:39774.service - OpenSSH per-connection server daemon (10.0.0.1:39774).
Aug  5 22:33:58.486989 sshd[5228]: Accepted publickey for core from 10.0.0.1 port 39774 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug  5 22:33:58.489273 sshd[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug  5 22:33:58.494716 systemd-logind[1428]: New session 26 of user core.
Aug  5 22:33:58.502304 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug  5 22:33:58.635293 sshd[5228]: pam_unix(sshd:session): session closed for user core
Aug  5 22:33:58.644580 systemd[1]: sshd@25-10.0.0.112:22-10.0.0.1:39774.service: Deactivated successfully.
Aug  5 22:33:58.647514 systemd[1]: session-26.scope: Deactivated successfully.
Aug  5 22:33:58.648318 systemd-logind[1428]: Session 26 logged out. Waiting for processes to exit.
Aug  5 22:33:58.649403 systemd-logind[1428]: Removed session 26.