Sep  4 20:32:21.893586 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep  4 15:49:08 -00 2024
Sep  4 20:32:21.893618 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep  4 20:32:21.893635 kernel: BIOS-provided physical RAM map:
Sep  4 20:32:21.893642 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep  4 20:32:21.893648 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep  4 20:32:21.893654 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep  4 20:32:21.893662 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep  4 20:32:21.893669 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep  4 20:32:21.893676 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep  4 20:32:21.893685 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep  4 20:32:21.893692 kernel: NX (Execute Disable) protection: active
Sep  4 20:32:21.893698 kernel: APIC: Static calls initialized
Sep  4 20:32:21.893705 kernel: SMBIOS 2.8 present.
Sep  4 20:32:21.893712 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep  4 20:32:21.893720 kernel: Hypervisor detected: KVM
Sep  4 20:32:21.893731 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep  4 20:32:21.893738 kernel: kvm-clock: using sched offset of 2766469408 cycles
Sep  4 20:32:21.893750 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep  4 20:32:21.893758 kernel: tsc: Detected 2494.140 MHz processor
Sep  4 20:32:21.893766 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep  4 20:32:21.893777 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep  4 20:32:21.893786 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep  4 20:32:21.893798 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep  4 20:32:21.893810 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Sep  4 20:32:21.893824 kernel: ACPI: Early table checksum verification disabled
Sep  4 20:32:21.893836 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep  4 20:32:21.893848 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 20:32:21.893860 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 20:32:21.893868 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 20:32:21.893875 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep  4 20:32:21.893884 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 20:32:21.893895 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 20:32:21.893903 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 20:32:21.893914 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 20:32:21.893922 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep  4 20:32:21.893929 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep  4 20:32:21.893937 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep  4 20:32:21.893944 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep  4 20:32:21.893952 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep  4 20:32:21.893960 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep  4 20:32:21.893974 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep  4 20:32:21.893982 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep  4 20:32:21.893989 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep  4 20:32:21.893997 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep  4 20:32:21.894005 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep  4 20:32:21.894014 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep  4 20:32:21.894022 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep  4 20:32:21.894033 kernel: Zone ranges:
Sep  4 20:32:21.894041 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Sep  4 20:32:21.894049 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffdafff]
Sep  4 20:32:21.894057 kernel:   Normal   empty
Sep  4 20:32:21.894065 kernel: Movable zone start for each node
Sep  4 20:32:21.894073 kernel: Early memory node ranges
Sep  4 20:32:21.894081 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Sep  4 20:32:21.894089 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep  4 20:32:21.894098 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep  4 20:32:21.894108 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep  4 20:32:21.894116 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep  4 20:32:21.894124 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep  4 20:32:21.894132 kernel: ACPI: PM-Timer IO Port: 0x608
Sep  4 20:32:21.894578 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep  4 20:32:21.894595 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep  4 20:32:21.894603 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep  4 20:32:21.894612 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep  4 20:32:21.894620 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep  4 20:32:21.894633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep  4 20:32:21.894644 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep  4 20:32:21.894658 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep  4 20:32:21.894666 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep  4 20:32:21.894674 kernel: TSC deadline timer available
Sep  4 20:32:21.894683 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep  4 20:32:21.894691 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep  4 20:32:21.894699 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep  4 20:32:21.894707 kernel: Booting paravirtualized kernel on KVM
Sep  4 20:32:21.894718 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep  4 20:32:21.894730 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep  4 20:32:21.894738 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Sep  4 20:32:21.894746 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Sep  4 20:32:21.894754 kernel: pcpu-alloc: [0] 0 1 
Sep  4 20:32:21.894762 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep  4 20:32:21.894771 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep  4 20:32:21.894780 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep  4 20:32:21.894791 kernel: random: crng init done
Sep  4 20:32:21.894799 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep  4 20:32:21.894808 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep  4 20:32:21.894816 kernel: Fallback order for Node 0: 0 
Sep  4 20:32:21.894824 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 515803
Sep  4 20:32:21.894832 kernel: Policy zone: DMA32
Sep  4 20:32:21.894840 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep  4 20:32:21.894848 kernel: Memory: 1965056K/2096612K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 131296K reserved, 0K cma-reserved)
Sep  4 20:32:21.894857 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep  4 20:32:21.894868 kernel: Kernel/User page tables isolation: enabled
Sep  4 20:32:21.894876 kernel: ftrace: allocating 37670 entries in 148 pages
Sep  4 20:32:21.894884 kernel: ftrace: allocated 148 pages with 3 groups
Sep  4 20:32:21.894892 kernel: Dynamic Preempt: voluntary
Sep  4 20:32:21.894900 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep  4 20:32:21.894909 kernel: rcu:         RCU event tracing is enabled.
Sep  4 20:32:21.894918 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep  4 20:32:21.894926 kernel:         Trampoline variant of Tasks RCU enabled.
Sep  4 20:32:21.894934 kernel:         Rude variant of Tasks RCU enabled.
Sep  4 20:32:21.894942 kernel:         Tracing variant of Tasks RCU enabled.
Sep  4 20:32:21.894953 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep  4 20:32:21.894961 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep  4 20:32:21.894969 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep  4 20:32:21.894977 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep  4 20:32:21.894985 kernel: Console: colour VGA+ 80x25
Sep  4 20:32:21.894994 kernel: printk: console [tty0] enabled
Sep  4 20:32:21.895002 kernel: printk: console [ttyS0] enabled
Sep  4 20:32:21.895010 kernel: ACPI: Core revision 20230628
Sep  4 20:32:21.895018 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep  4 20:32:21.895029 kernel: APIC: Switch to symmetric I/O mode setup
Sep  4 20:32:21.895037 kernel: x2apic enabled
Sep  4 20:32:21.895045 kernel: APIC: Switched APIC routing to: physical x2apic
Sep  4 20:32:21.895056 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep  4 20:32:21.895064 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep  4 20:32:21.895073 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Sep  4 20:32:21.895081 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep  4 20:32:21.895089 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep  4 20:32:21.895108 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep  4 20:32:21.895117 kernel: Spectre V2 : Mitigation: Retpolines
Sep  4 20:32:21.895126 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep  4 20:32:21.895137 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep  4 20:32:21.895165 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep  4 20:32:21.895174 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep  4 20:32:21.895183 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep  4 20:32:21.895192 kernel: MDS: Mitigation: Clear CPU buffers
Sep  4 20:32:21.895200 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep  4 20:32:21.895215 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep  4 20:32:21.895224 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep  4 20:32:21.895232 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep  4 20:32:21.895244 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Sep  4 20:32:21.895258 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep  4 20:32:21.895272 kernel: Freeing SMP alternatives memory: 32K
Sep  4 20:32:21.895285 kernel: pid_max: default: 32768 minimum: 301
Sep  4 20:32:21.895299 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep  4 20:32:21.895313 kernel: SELinux:  Initializing.
Sep  4 20:32:21.895321 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep  4 20:32:21.895330 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep  4 20:32:21.895339 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep  4 20:32:21.895348 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep  4 20:32:21.895357 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep  4 20:32:21.895365 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep  4 20:32:21.895374 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep  4 20:32:21.895383 kernel: signal: max sigframe size: 1776
Sep  4 20:32:21.895395 kernel: rcu: Hierarchical SRCU implementation.
Sep  4 20:32:21.895404 kernel: rcu:         Max phase no-delay instances is 400.
Sep  4 20:32:21.895412 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep  4 20:32:21.895421 kernel: smp: Bringing up secondary CPUs ...
Sep  4 20:32:21.895429 kernel: smpboot: x86: Booting SMP configuration:
Sep  4 20:32:21.895438 kernel: .... node  #0, CPUs:      #1
Sep  4 20:32:21.895447 kernel: smp: Brought up 1 node, 2 CPUs
Sep  4 20:32:21.895455 kernel: smpboot: Max logical packages: 1
Sep  4 20:32:21.895464 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Sep  4 20:32:21.895476 kernel: devtmpfs: initialized
Sep  4 20:32:21.895501 kernel: x86/mm: Memory block size: 128MB
Sep  4 20:32:21.895514 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep  4 20:32:21.895523 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep  4 20:32:21.895532 kernel: pinctrl core: initialized pinctrl subsystem
Sep  4 20:32:21.895541 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep  4 20:32:21.895550 kernel: audit: initializing netlink subsys (disabled)
Sep  4 20:32:21.895562 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep  4 20:32:21.895576 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep  4 20:32:21.895595 kernel: audit: type=2000 audit(1725481940.270:1): state=initialized audit_enabled=0 res=1
Sep  4 20:32:21.895606 kernel: cpuidle: using governor menu
Sep  4 20:32:21.895615 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep  4 20:32:21.895624 kernel: dca service started, version 1.12.1
Sep  4 20:32:21.895633 kernel: PCI: Using configuration type 1 for base access
Sep  4 20:32:21.895641 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep  4 20:32:21.895650 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep  4 20:32:21.895659 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep  4 20:32:21.895673 kernel: ACPI: Added _OSI(Module Device)
Sep  4 20:32:21.895688 kernel: ACPI: Added _OSI(Processor Device)
Sep  4 20:32:21.895698 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep  4 20:32:21.895707 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep  4 20:32:21.895715 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep  4 20:32:21.895724 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep  4 20:32:21.895732 kernel: ACPI: Interpreter enabled
Sep  4 20:32:21.895741 kernel: ACPI: PM: (supports S0 S5)
Sep  4 20:32:21.895750 kernel: ACPI: Using IOAPIC for interrupt routing
Sep  4 20:32:21.895759 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep  4 20:32:21.895771 kernel: PCI: Using E820 reservations for host bridge windows
Sep  4 20:32:21.895780 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep  4 20:32:21.895789 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep  4 20:32:21.896051 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep  4 20:32:21.896238 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep  4 20:32:21.896341 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep  4 20:32:21.896353 kernel: acpiphp: Slot [3] registered
Sep  4 20:32:21.896370 kernel: acpiphp: Slot [4] registered
Sep  4 20:32:21.896379 kernel: acpiphp: Slot [5] registered
Sep  4 20:32:21.896388 kernel: acpiphp: Slot [6] registered
Sep  4 20:32:21.896397 kernel: acpiphp: Slot [7] registered
Sep  4 20:32:21.896405 kernel: acpiphp: Slot [8] registered
Sep  4 20:32:21.896414 kernel: acpiphp: Slot [9] registered
Sep  4 20:32:21.896423 kernel: acpiphp: Slot [10] registered
Sep  4 20:32:21.896432 kernel: acpiphp: Slot [11] registered
Sep  4 20:32:21.896441 kernel: acpiphp: Slot [12] registered
Sep  4 20:32:21.896450 kernel: acpiphp: Slot [13] registered
Sep  4 20:32:21.896461 kernel: acpiphp: Slot [14] registered
Sep  4 20:32:21.896470 kernel: acpiphp: Slot [15] registered
Sep  4 20:32:21.896479 kernel: acpiphp: Slot [16] registered
Sep  4 20:32:21.896488 kernel: acpiphp: Slot [17] registered
Sep  4 20:32:21.896496 kernel: acpiphp: Slot [18] registered
Sep  4 20:32:21.896505 kernel: acpiphp: Slot [19] registered
Sep  4 20:32:21.896513 kernel: acpiphp: Slot [20] registered
Sep  4 20:32:21.896522 kernel: acpiphp: Slot [21] registered
Sep  4 20:32:21.896530 kernel: acpiphp: Slot [22] registered
Sep  4 20:32:21.896542 kernel: acpiphp: Slot [23] registered
Sep  4 20:32:21.896550 kernel: acpiphp: Slot [24] registered
Sep  4 20:32:21.896561 kernel: acpiphp: Slot [25] registered
Sep  4 20:32:21.896570 kernel: acpiphp: Slot [26] registered
Sep  4 20:32:21.896579 kernel: acpiphp: Slot [27] registered
Sep  4 20:32:21.896587 kernel: acpiphp: Slot [28] registered
Sep  4 20:32:21.896596 kernel: acpiphp: Slot [29] registered
Sep  4 20:32:21.896604 kernel: acpiphp: Slot [30] registered
Sep  4 20:32:21.896613 kernel: acpiphp: Slot [31] registered
Sep  4 20:32:21.896622 kernel: PCI host bridge to bus 0000:00
Sep  4 20:32:21.896733 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Sep  4 20:32:21.896820 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Sep  4 20:32:21.896904 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep  4 20:32:21.896992 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep  4 20:32:21.897076 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep  4 20:32:21.897175 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep  4 20:32:21.897311 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep  4 20:32:21.897412 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep  4 20:32:21.897520 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep  4 20:32:21.897614 kernel: pci 0000:00:01.1: reg 0x20: [io  0xc1e0-0xc1ef]
Sep  4 20:32:21.897713 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Sep  4 20:32:21.897811 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Sep  4 20:32:21.897909 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Sep  4 20:32:21.898007 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Sep  4 20:32:21.898109 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep  4 20:32:21.898215 kernel: pci 0000:00:01.2: reg 0x20: [io  0xc180-0xc19f]
Sep  4 20:32:21.898322 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep  4 20:32:21.898417 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Sep  4 20:32:21.898511 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Sep  4 20:32:21.898625 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep  4 20:32:21.898720 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep  4 20:32:21.898854 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep  4 20:32:21.899002 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep  4 20:32:21.899136 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep  4 20:32:21.899270 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep  4 20:32:21.899384 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep  4 20:32:21.899505 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc1a0-0xc1bf]
Sep  4 20:32:21.899605 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep  4 20:32:21.899700 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep  4 20:32:21.899810 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep  4 20:32:21.899908 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc1c0-0xc1df]
Sep  4 20:32:21.900004 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep  4 20:32:21.900098 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep  4 20:32:21.900249 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep  4 20:32:21.900363 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc100-0xc13f]
Sep  4 20:32:21.900490 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep  4 20:32:21.900597 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep  4 20:32:21.900717 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep  4 20:32:21.900838 kernel: pci 0000:00:06.0: reg 0x10: [io  0xc000-0xc07f]
Sep  4 20:32:21.901047 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep  4 20:32:21.902020 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep  4 20:32:21.902443 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep  4 20:32:21.902552 kernel: pci 0000:00:07.0: reg 0x10: [io  0xc080-0xc0ff]
Sep  4 20:32:21.902657 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep  4 20:32:21.902755 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep  4 20:32:21.902873 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep  4 20:32:21.905192 kernel: pci 0000:00:08.0: reg 0x10: [io  0xc140-0xc17f]
Sep  4 20:32:21.905355 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep  4 20:32:21.905370 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep  4 20:32:21.905380 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep  4 20:32:21.905389 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep  4 20:32:21.905398 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep  4 20:32:21.905407 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep  4 20:32:21.905416 kernel: iommu: Default domain type: Translated
Sep  4 20:32:21.905429 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep  4 20:32:21.905438 kernel: PCI: Using ACPI for IRQ routing
Sep  4 20:32:21.905447 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep  4 20:32:21.905457 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep  4 20:32:21.905470 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep  4 20:32:21.905574 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep  4 20:32:21.905671 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep  4 20:32:21.905766 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep  4 20:32:21.905782 kernel: vgaarb: loaded
Sep  4 20:32:21.905791 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep  4 20:32:21.905800 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep  4 20:32:21.905808 kernel: clocksource: Switched to clocksource kvm-clock
Sep  4 20:32:21.905817 kernel: VFS: Disk quotas dquot_6.6.0
Sep  4 20:32:21.905826 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep  4 20:32:21.905835 kernel: pnp: PnP ACPI init
Sep  4 20:32:21.905844 kernel: pnp: PnP ACPI: found 4 devices
Sep  4 20:32:21.905852 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep  4 20:32:21.905863 kernel: NET: Registered PF_INET protocol family
Sep  4 20:32:21.905872 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep  4 20:32:21.905886 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep  4 20:32:21.905895 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep  4 20:32:21.905904 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep  4 20:32:21.905915 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep  4 20:32:21.905929 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep  4 20:32:21.905942 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep  4 20:32:21.905954 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep  4 20:32:21.905970 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep  4 20:32:21.905979 kernel: NET: Registered PF_XDP protocol family
Sep  4 20:32:21.906078 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Sep  4 20:32:21.906206 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Sep  4 20:32:21.906294 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep  4 20:32:21.906382 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep  4 20:32:21.906491 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep  4 20:32:21.906600 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep  4 20:32:21.906720 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep  4 20:32:21.906740 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep  4 20:32:21.906852 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 30706 usecs
Sep  4 20:32:21.906865 kernel: PCI: CLS 0 bytes, default 64
Sep  4 20:32:21.906874 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep  4 20:32:21.906883 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep  4 20:32:21.906893 kernel: Initialise system trusted keyrings
Sep  4 20:32:21.906902 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep  4 20:32:21.906917 kernel: Key type asymmetric registered
Sep  4 20:32:21.906925 kernel: Asymmetric key parser 'x509' registered
Sep  4 20:32:21.906934 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep  4 20:32:21.906943 kernel: io scheduler mq-deadline registered
Sep  4 20:32:21.906952 kernel: io scheduler kyber registered
Sep  4 20:32:21.906961 kernel: io scheduler bfq registered
Sep  4 20:32:21.906969 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep  4 20:32:21.906979 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep  4 20:32:21.906987 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep  4 20:32:21.906996 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep  4 20:32:21.907008 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep  4 20:32:21.907017 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep  4 20:32:21.907026 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep  4 20:32:21.907035 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep  4 20:32:21.907044 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep  4 20:32:21.908272 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep  4 20:32:21.908304 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep  4 20:32:21.908412 kernel: rtc_cmos 00:03: registered as rtc0
Sep  4 20:32:21.908510 kernel: rtc_cmos 00:03: setting system clock to 2024-09-04T20:32:21 UTC (1725481941)
Sep  4 20:32:21.908609 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep  4 20:32:21.908621 kernel: intel_pstate: CPU model not supported
Sep  4 20:32:21.908631 kernel: NET: Registered PF_INET6 protocol family
Sep  4 20:32:21.908640 kernel: Segment Routing with IPv6
Sep  4 20:32:21.908649 kernel: In-situ OAM (IOAM) with IPv6
Sep  4 20:32:21.908657 kernel: NET: Registered PF_PACKET protocol family
Sep  4 20:32:21.908669 kernel: Key type dns_resolver registered
Sep  4 20:32:21.908688 kernel: IPI shorthand broadcast: enabled
Sep  4 20:32:21.908701 kernel: sched_clock: Marking stable (995004084, 101702511)->(1202651155, -105944560)
Sep  4 20:32:21.908715 kernel: registered taskstats version 1
Sep  4 20:32:21.908726 kernel: Loading compiled-in X.509 certificates
Sep  4 20:32:21.908738 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02'
Sep  4 20:32:21.908752 kernel: Key type .fscrypt registered
Sep  4 20:32:21.908764 kernel: Key type fscrypt-provisioning registered
Sep  4 20:32:21.908773 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep  4 20:32:21.908782 kernel: ima: Allocated hash algorithm: sha1
Sep  4 20:32:21.908794 kernel: ima: No architecture policies found
Sep  4 20:32:21.908803 kernel: clk: Disabling unused clocks
Sep  4 20:32:21.908812 kernel: Freeing unused kernel image (initmem) memory: 49336K
Sep  4 20:32:21.908821 kernel: Write protecting the kernel read-only data: 36864k
Sep  4 20:32:21.908829 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Sep  4 20:32:21.908861 kernel: Run /init as init process
Sep  4 20:32:21.908873 kernel:   with arguments:
Sep  4 20:32:21.908882 kernel:     /init
Sep  4 20:32:21.908891 kernel:   with environment:
Sep  4 20:32:21.908903 kernel:     HOME=/
Sep  4 20:32:21.908912 kernel:     TERM=linux
Sep  4 20:32:21.908921 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep  4 20:32:21.908933 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep  4 20:32:21.908945 systemd[1]: Detected virtualization kvm.
Sep  4 20:32:21.908958 systemd[1]: Detected architecture x86-64.
Sep  4 20:32:21.908967 systemd[1]: Running in initrd.
Sep  4 20:32:21.908976 systemd[1]: No hostname configured, using default hostname.
Sep  4 20:32:21.908988 systemd[1]: Hostname set to <localhost>.
Sep  4 20:32:21.908998 systemd[1]: Initializing machine ID from VM UUID.
Sep  4 20:32:21.909007 systemd[1]: Queued start job for default target initrd.target.
Sep  4 20:32:21.909017 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep  4 20:32:21.909026 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep  4 20:32:21.909037 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep  4 20:32:21.909047 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep  4 20:32:21.909056 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep  4 20:32:21.909069 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep  4 20:32:21.909079 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep  4 20:32:21.909089 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep  4 20:32:21.909099 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep  4 20:32:21.909108 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep  4 20:32:21.909118 systemd[1]: Reached target paths.target - Path Units.
Sep  4 20:32:21.909130 systemd[1]: Reached target slices.target - Slice Units.
Sep  4 20:32:21.910173 systemd[1]: Reached target swap.target - Swaps.
Sep  4 20:32:21.910203 systemd[1]: Reached target timers.target - Timer Units.
Sep  4 20:32:21.910225 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep  4 20:32:21.910235 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep  4 20:32:21.910245 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep  4 20:32:21.910257 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep  4 20:32:21.910267 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep  4 20:32:21.910277 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep  4 20:32:21.910287 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep  4 20:32:21.910297 systemd[1]: Reached target sockets.target - Socket Units.
Sep  4 20:32:21.910306 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep  4 20:32:21.910316 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep  4 20:32:21.910326 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep  4 20:32:21.910338 systemd[1]: Starting systemd-fsck-usr.service...
Sep  4 20:32:21.910348 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep  4 20:32:21.910358 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep  4 20:32:21.910367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 20:32:21.910377 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep  4 20:32:21.910424 systemd-journald[183]: Collecting audit messages is disabled.
Sep  4 20:32:21.910451 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep  4 20:32:21.910461 systemd[1]: Finished systemd-fsck-usr.service.
Sep  4 20:32:21.910473 systemd-journald[183]: Journal started
Sep  4 20:32:21.910497 systemd-journald[183]: Runtime Journal (/run/log/journal/56dcaae656294ba88b9d047e779c5f38) is 4.9M, max 39.3M, 34.4M free.
Sep  4 20:32:21.906192 systemd-modules-load[184]: Inserted module 'overlay'
Sep  4 20:32:21.918289 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep  4 20:32:21.920181 systemd[1]: Started systemd-journald.service - Journal Service.
Sep  4 20:32:21.924561 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep  4 20:32:21.953961 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep  4 20:32:21.953996 kernel: Bridge firewalling registered
Sep  4 20:32:21.949058 systemd-modules-load[184]: Inserted module 'br_netfilter'
Sep  4 20:32:21.954774 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep  4 20:32:21.960006 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 20:32:21.966388 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep  4 20:32:21.973387 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep  4 20:32:21.978042 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep  4 20:32:21.989271 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep  4 20:32:21.997526 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 20:32:22.000241 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep  4 20:32:22.008406 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep  4 20:32:22.010054 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep  4 20:32:22.011595 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep  4 20:32:22.016151 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep  4 20:32:22.024002 dracut-cmdline[216]: dracut-dracut-053
Sep  4 20:32:22.028553 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep  4 20:32:22.058856 systemd-resolved[222]: Positive Trust Anchors:
Sep  4 20:32:22.059616 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep  4 20:32:22.059658 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep  4 20:32:22.065199 systemd-resolved[222]: Defaulting to hostname 'linux'.
Sep  4 20:32:22.066530 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep  4 20:32:22.067014 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep  4 20:32:22.119200 kernel: SCSI subsystem initialized
Sep  4 20:32:22.134196 kernel: Loading iSCSI transport class v2.0-870.
Sep  4 20:32:22.153182 kernel: iscsi: registered transport (tcp)
Sep  4 20:32:22.189228 kernel: iscsi: registered transport (qla4xxx)
Sep  4 20:32:22.189330 kernel: QLogic iSCSI HBA Driver
Sep  4 20:32:22.246818 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep  4 20:32:22.253382 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep  4 20:32:22.283398 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep  4 20:32:22.283477 kernel: device-mapper: uevent: version 1.0.3
Sep  4 20:32:22.283533 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep  4 20:32:22.332215 kernel: raid6: avx2x4   gen() 17037 MB/s
Sep  4 20:32:22.349172 kernel: raid6: avx2x2   gen() 17638 MB/s
Sep  4 20:32:22.366577 kernel: raid6: avx2x1   gen() 13609 MB/s
Sep  4 20:32:22.366672 kernel: raid6: using algorithm avx2x2 gen() 17638 MB/s
Sep  4 20:32:22.384398 kernel: raid6: .... xor() 20037 MB/s, rmw enabled
Sep  4 20:32:22.384496 kernel: raid6: using avx2x2 recovery algorithm
Sep  4 20:32:22.411205 kernel: xor: automatically using best checksumming function   avx       
Sep  4 20:32:22.613176 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep  4 20:32:22.627638 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep  4 20:32:22.634469 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep  4 20:32:22.652314 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Sep  4 20:32:22.657820 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep  4 20:32:22.669071 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep  4 20:32:22.687757 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Sep  4 20:32:22.730537 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep  4 20:32:22.736406 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep  4 20:32:22.820524 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep  4 20:32:22.828546 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep  4 20:32:22.851818 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep  4 20:32:22.856208 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep  4 20:32:22.857241 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep  4 20:32:22.858605 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep  4 20:32:22.865728 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep  4 20:32:22.894819 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep  4 20:32:22.934174 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Sep  4 20:32:22.939280 kernel: scsi host0: Virtio SCSI HBA
Sep  4 20:32:22.939382 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep  4 20:32:22.950176 kernel: cryptd: max_cpu_qlen set to 1000
Sep  4 20:32:22.959746 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep  4 20:32:22.959806 kernel: GPT:9289727 != 125829119
Sep  4 20:32:22.959819 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep  4 20:32:22.959831 kernel: GPT:9289727 != 125829119
Sep  4 20:32:22.961318 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep  4 20:32:22.961386 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep  4 20:32:22.984294 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Sep  4 20:32:22.988595 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Sep  4 20:32:22.991644 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep  4 20:32:22.991805 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 20:32:22.992453 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep  4 20:32:22.994199 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep  4 20:32:22.994383 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 20:32:22.994997 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 20:32:22.998167 kernel: libata version 3.00 loaded.
Sep  4 20:32:23.001559 kernel: ata_piix 0000:00:01.1: version 2.13
Sep  4 20:32:23.003162 kernel: scsi host1: ata_piix
Sep  4 20:32:23.007623 kernel: scsi host2: ata_piix
Sep  4 20:32:23.007890 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Sep  4 20:32:23.007905 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Sep  4 20:32:23.011529 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 20:32:23.028182 kernel: AVX2 version of gcm_enc/dec engaged.
Sep  4 20:32:23.031168 kernel: AES CTR mode by8 optimization enabled
Sep  4 20:32:23.047179 kernel: ACPI: bus type USB registered
Sep  4 20:32:23.048168 kernel: usbcore: registered new interface driver usbfs
Sep  4 20:32:23.048217 kernel: usbcore: registered new interface driver hub
Sep  4 20:32:23.048230 kernel: usbcore: registered new device driver usb
Sep  4 20:32:23.064205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 20:32:23.068369 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep  4 20:32:23.090952 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 20:32:23.200224 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (455)
Sep  4 20:32:23.207165 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (459)
Sep  4 20:32:23.215709 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep  4 20:32:23.225153 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep  4 20:32:23.234694 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep  4 20:32:23.234944 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep  4 20:32:23.234719 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep  4 20:32:23.238531 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep  4 20:32:23.238779 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Sep  4 20:32:23.238952 kernel: hub 1-0:1.0: USB hub found
Sep  4 20:32:23.239088 kernel: hub 1-0:1.0: 2 ports detected
Sep  4 20:32:23.241512 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep  4 20:32:23.242000 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep  4 20:32:23.246415 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep  4 20:32:23.255918 disk-uuid[548]: Primary Header is updated.
Sep  4 20:32:23.255918 disk-uuid[548]: Secondary Entries is updated.
Sep  4 20:32:23.255918 disk-uuid[548]: Secondary Header is updated.
Sep  4 20:32:23.263183 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep  4 20:32:23.268238 kernel: GPT:disk_guids don't match.
Sep  4 20:32:23.268326 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep  4 20:32:23.269223 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep  4 20:32:23.288188 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep  4 20:32:24.277227 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep  4 20:32:24.277871 disk-uuid[549]: The operation has completed successfully.
Sep  4 20:32:24.319086 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep  4 20:32:24.319221 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep  4 20:32:24.342412 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep  4 20:32:24.347588 sh[562]: Success
Sep  4 20:32:24.364167 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep  4 20:32:24.444503 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep  4 20:32:24.447329 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep  4 20:32:24.448061 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep  4 20:32:24.482522 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602
Sep  4 20:32:24.482595 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep  4 20:32:24.482613 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep  4 20:32:24.482986 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep  4 20:32:24.484263 kernel: BTRFS info (device dm-0): using free space tree
Sep  4 20:32:24.493045 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep  4 20:32:24.494233 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep  4 20:32:24.507025 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep  4 20:32:24.510329 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep  4 20:32:24.520491 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep  4 20:32:24.520558 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep  4 20:32:24.520573 kernel: BTRFS info (device vda6): using free space tree
Sep  4 20:32:24.525214 kernel: BTRFS info (device vda6): auto enabling async discard
Sep  4 20:32:24.538675 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep  4 20:32:24.539598 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep  4 20:32:24.548022 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep  4 20:32:24.556421 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep  4 20:32:24.663343 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep  4 20:32:24.671151 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep  4 20:32:24.702100 ignition[654]: Ignition 2.18.0
Sep  4 20:32:24.702116 ignition[654]: Stage: fetch-offline
Sep  4 20:32:24.704442 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep  4 20:32:24.702218 ignition[654]: no configs at "/usr/lib/ignition/base.d"
Sep  4 20:32:24.702236 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep  4 20:32:24.702484 ignition[654]: parsed url from cmdline: ""
Sep  4 20:32:24.708481 systemd-networkd[750]: lo: Link UP
Sep  4 20:32:24.702491 ignition[654]: no config URL provided
Sep  4 20:32:24.708486 systemd-networkd[750]: lo: Gained carrier
Sep  4 20:32:24.702501 ignition[654]: reading system config file "/usr/lib/ignition/user.ign"
Sep  4 20:32:24.702517 ignition[654]: no config at "/usr/lib/ignition/user.ign"
Sep  4 20:32:24.702526 ignition[654]: failed to fetch config: resource requires networking
Sep  4 20:32:24.702832 ignition[654]: Ignition finished successfully
Sep  4 20:32:24.712967 systemd-networkd[750]: Enumeration completed
Sep  4 20:32:24.713559 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep  4 20:32:24.713571 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep  4 20:32:24.714404 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep  4 20:32:24.714845 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 20:32:24.714849 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep  4 20:32:24.715456 systemd-networkd[750]: eth0: Link UP
Sep  4 20:32:24.715461 systemd-networkd[750]: eth0: Gained carrier
Sep  4 20:32:24.715473 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep  4 20:32:24.716219 systemd[1]: Reached target network.target - Network.
Sep  4 20:32:24.718443 systemd-networkd[750]: eth1: Link UP
Sep  4 20:32:24.718447 systemd-networkd[750]: eth1: Gained carrier
Sep  4 20:32:24.718460 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 20:32:24.722376 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep  4 20:32:24.732273 systemd-networkd[750]: eth0: DHCPv4 address 209.38.64.58/20, gateway 209.38.64.1 acquired from 169.254.169.253
Sep  4 20:32:24.739267 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.21/20 acquired from 169.254.169.253
Sep  4 20:32:24.751819 ignition[756]: Ignition 2.18.0
Sep  4 20:32:24.751831 ignition[756]: Stage: fetch
Sep  4 20:32:24.752006 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Sep  4 20:32:24.752015 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep  4 20:32:24.752212 ignition[756]: parsed url from cmdline: ""
Sep  4 20:32:24.752218 ignition[756]: no config URL provided
Sep  4 20:32:24.752231 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Sep  4 20:32:24.752241 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Sep  4 20:32:24.752261 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep  4 20:32:24.766922 ignition[756]: GET result: OK
Sep  4 20:32:24.767620 ignition[756]: parsing config with SHA512: 32f2c8b36f6bd6a7b1cde7c9679a657ffd03ad3a69596af4898db61120d87c90933831be54527678406c2e6403359ef218a6033368067f514fe8c2e1e6976c1b
Sep  4 20:32:24.778177 unknown[756]: fetched base config from "system"
Sep  4 20:32:24.778232 unknown[756]: fetched base config from "system"
Sep  4 20:32:24.778705 ignition[756]: fetch: fetch complete
Sep  4 20:32:24.778240 unknown[756]: fetched user config from "digitalocean"
Sep  4 20:32:24.778712 ignition[756]: fetch: fetch passed
Sep  4 20:32:24.781193 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep  4 20:32:24.778765 ignition[756]: Ignition finished successfully
Sep  4 20:32:24.787347 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep  4 20:32:24.806717 ignition[764]: Ignition 2.18.0
Sep  4 20:32:24.806731 ignition[764]: Stage: kargs
Sep  4 20:32:24.806924 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Sep  4 20:32:24.806938 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep  4 20:32:24.808026 ignition[764]: kargs: kargs passed
Sep  4 20:32:24.808089 ignition[764]: Ignition finished successfully
Sep  4 20:32:24.809318 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep  4 20:32:24.815381 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep  4 20:32:24.839216 ignition[771]: Ignition 2.18.0
Sep  4 20:32:24.839233 ignition[771]: Stage: disks
Sep  4 20:32:24.839632 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Sep  4 20:32:24.839652 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep  4 20:32:24.841124 ignition[771]: disks: disks passed
Sep  4 20:32:24.841214 ignition[771]: Ignition finished successfully
Sep  4 20:32:24.842248 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep  4 20:32:24.846354 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep  4 20:32:24.846763 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep  4 20:32:24.847737 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep  4 20:32:24.848608 systemd[1]: Reached target sysinit.target - System Initialization.
Sep  4 20:32:24.849378 systemd[1]: Reached target basic.target - Basic System.
Sep  4 20:32:24.858409 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep  4 20:32:24.874844 systemd-fsck[780]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep  4 20:32:24.878798 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep  4 20:32:24.885355 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep  4 20:32:24.996988 kernel: EXT4-fs (vda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none.
Sep  4 20:32:24.995981 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep  4 20:32:24.996894 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep  4 20:32:25.008322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep  4 20:32:25.011400 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep  4 20:32:25.013340 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Sep  4 20:32:25.021201 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (788)
Sep  4 20:32:25.022130 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep  4 20:32:25.024255 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep  4 20:32:25.024289 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep  4 20:32:25.024309 kernel: BTRFS info (device vda6): using free space tree
Sep  4 20:32:25.029966 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep  4 20:32:25.030771 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep  4 20:32:25.032917 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep  4 20:32:25.040169 kernel: BTRFS info (device vda6): auto enabling async discard
Sep  4 20:32:25.041409 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep  4 20:32:25.043768 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep  4 20:32:25.098774 coreos-metadata[790]: Sep 04 20:32:25.098 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep  4 20:32:25.109879 coreos-metadata[790]: Sep 04 20:32:25.109 INFO Fetch successful
Sep  4 20:32:25.110376 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
Sep  4 20:32:25.114316 coreos-metadata[791]: Sep 04 20:32:25.114 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep  4 20:32:25.118607 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Sep  4 20:32:25.119419 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Sep  4 20:32:25.121530 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory
Sep  4 20:32:25.124007 coreos-metadata[791]: Sep 04 20:32:25.123 INFO Fetch successful
Sep  4 20:32:25.128428 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory
Sep  4 20:32:25.130634 coreos-metadata[791]: Sep 04 20:32:25.130 INFO wrote hostname ci-3975.2.1-b-0d33e4c091 to /sysroot/etc/hostname
Sep  4 20:32:25.132956 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
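The metadata hostname agent above fetches http://169.254.169.254/metadata/v1.json and writes the hostname it reports into /sysroot/etc/hostname. Below is a minimal sketch of the same read in Python; it assumes the returned JSON exposes a "hostname" field and only works on a droplet where the link-local metadata address is reachable.

    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # URL taken from the log above

    def fetch_hostname(url=METADATA_URL, timeout=5):
        """Fetch droplet metadata and return its hostname field (assumed key name)."""
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            metadata = json.load(resp)
        return metadata.get("hostname")

    if __name__ == "__main__":
        print(fetch_hostname())  # e.g. ci-3975.2.1-b-0d33e4c091 in the log above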
Sep  4 20:32:25.136138 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Sep  4 20:32:25.237553 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep  4 20:32:25.242268 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep  4 20:32:25.244344 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep  4 20:32:25.258185 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep  4 20:32:25.279437 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep  4 20:32:25.287588 ignition[908]: INFO     : Ignition 2.18.0
Sep  4 20:32:25.288331 ignition[908]: INFO     : Stage: mount
Sep  4 20:32:25.288650 ignition[908]: INFO     : no configs at "/usr/lib/ignition/base.d"
Sep  4 20:32:25.289056 ignition[908]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep  4 20:32:25.289953 ignition[908]: INFO     : mount: mount passed
Sep  4 20:32:25.290367 ignition[908]: INFO     : Ignition finished successfully
Sep  4 20:32:25.291241 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep  4 20:32:25.295374 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep  4 20:32:25.480224 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep  4 20:32:25.491688 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep  4 20:32:25.501216 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (921)
Sep  4 20:32:25.504238 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep  4 20:32:25.504331 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep  4 20:32:25.504354 kernel: BTRFS info (device vda6): using free space tree
Sep  4 20:32:25.507189 kernel: BTRFS info (device vda6): auto enabling async discard
Sep  4 20:32:25.509817 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep  4 20:32:25.534612 ignition[937]: INFO     : Ignition 2.18.0
Sep  4 20:32:25.534612 ignition[937]: INFO     : Stage: files
Sep  4 20:32:25.536071 ignition[937]: INFO     : no configs at "/usr/lib/ignition/base.d"
Sep  4 20:32:25.536071 ignition[937]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep  4 20:32:25.537501 ignition[937]: DEBUG    : files: compiled without relabeling support, skipping
Sep  4 20:32:25.537501 ignition[937]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Sep  4 20:32:25.537501 ignition[937]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep  4 20:32:25.540571 ignition[937]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep  4 20:32:25.541310 ignition[937]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Sep  4 20:32:25.541310 ignition[937]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep  4 20:32:25.541022 unknown[937]: wrote ssh authorized keys file for user: core
Sep  4 20:32:25.543736 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep  4 20:32:25.543736 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep  4 20:32:25.570357 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep  4 20:32:25.618214 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep  4 20:32:25.618214 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep  4 20:32:25.618214 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep  4 20:32:26.032537 systemd-networkd[750]: eth1: Gained IPv6LL
Sep  4 20:32:26.083300 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep  4 20:32:26.154210 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep  4 20:32:26.154210 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Sep  4 20:32:26.156701 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep  4 20:32:26.156701 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Sep  4 20:32:26.158173 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep  4 20:32:26.158173 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Sep  4 20:32:26.158173 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep  4 20:32:26.158173 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep  4 20:32:26.158173 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep  4 20:32:26.158173 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Sep  4 20:32:26.158173 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep  4 20:32:26.158173 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep  4 20:32:26.158173 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep  4 20:32:26.158173 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep  4 20:32:26.158173 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Sep  4 20:32:26.537789 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep  4 20:32:26.608965 systemd-networkd[750]: eth0: Gained IPv6LL
Sep  4 20:32:26.755181 ignition[937]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep  4 20:32:26.755181 ignition[937]: INFO     : files: op(c): [started]  processing unit "prepare-helm.service"
Sep  4 20:32:26.756803 ignition[937]: INFO     : files: op(c): op(d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep  4 20:32:26.756803 ignition[937]: INFO     : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep  4 20:32:26.756803 ignition[937]: INFO     : files: op(c): [finished] processing unit "prepare-helm.service"
Sep  4 20:32:26.756803 ignition[937]: INFO     : files: op(e): [started]  setting preset to enabled for "prepare-helm.service"
Sep  4 20:32:26.756803 ignition[937]: INFO     : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep  4 20:32:26.756803 ignition[937]: INFO     : files: createResultFile: createFiles: op(f): [started]  writing file "/sysroot/etc/.ignition-result.json"
Sep  4 20:32:26.756803 ignition[937]: INFO     : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep  4 20:32:26.756803 ignition[937]: INFO     : files: files passed
Sep  4 20:32:26.756803 ignition[937]: INFO     : Ignition finished successfully
Sep  4 20:32:26.758783 systemd[1]: Finished ignition-files.service - Ignition (files).
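The files stage above writes files, a symlink and a systemd unit that were declared in the Ignition config supplied to this droplet. That config itself is not in the log, so the following is only a hedged reconstruction of the general shape such a config takes, using the Ignition v3 schema: the paths and URLs are copied from the log lines, while the spec version, the SSH key and the unit body are placeholders.

    import json

    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version; the log only shows the binary version 2.18.0
        "passwd": {"users": [{
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"],
        }]},
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
            }],
        },
        "systemd": {"units": [{
            "name": "prepare-helm.service",
            "enabled": True,  # matches the "setting preset to enabled" line above
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",  # placeholder body
        }]},
    }

    print(json.dumps(config, indent=2))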
Sep  4 20:32:26.771858 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep  4 20:32:26.773314 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep  4 20:32:26.776626 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep  4 20:32:26.777111 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep  4 20:32:26.790703 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep  4 20:32:26.790703 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep  4 20:32:26.792109 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep  4 20:32:26.794254 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep  4 20:32:26.794872 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep  4 20:32:26.798345 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep  4 20:32:26.833007 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep  4 20:32:26.833155 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep  4 20:32:26.834055 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep  4 20:32:26.834578 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep  4 20:32:26.835317 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep  4 20:32:26.836313 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep  4 20:32:26.855909 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep  4 20:32:26.860340 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep  4 20:32:26.880304 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep  4 20:32:26.880810 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep  4 20:32:26.881352 systemd[1]: Stopped target timers.target - Timer Units.
Sep  4 20:32:26.882204 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep  4 20:32:26.882335 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep  4 20:32:26.883197 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep  4 20:32:26.883725 systemd[1]: Stopped target basic.target - Basic System.
Sep  4 20:32:26.884551 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep  4 20:32:26.885375 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep  4 20:32:26.886309 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep  4 20:32:26.887163 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep  4 20:32:26.887942 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep  4 20:32:26.888988 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep  4 20:32:26.889885 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep  4 20:32:26.890759 systemd[1]: Stopped target swap.target - Swaps.
Sep  4 20:32:26.891431 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep  4 20:32:26.891634 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep  4 20:32:26.892699 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep  4 20:32:26.893410 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep  4 20:32:26.894039 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep  4 20:32:26.894157 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep  4 20:32:26.894731 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep  4 20:32:26.894947 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep  4 20:32:26.895734 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep  4 20:32:26.895950 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep  4 20:32:26.896603 systemd[1]: ignition-files.service: Deactivated successfully.
Sep  4 20:32:26.896733 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep  4 20:32:26.897298 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep  4 20:32:26.897423 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep  4 20:32:26.906433 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep  4 20:32:26.907384 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep  4 20:32:26.907603 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep  4 20:32:26.912457 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep  4 20:32:26.912813 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep  4 20:32:26.912936 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep  4 20:32:26.913462 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep  4 20:32:26.913599 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep  4 20:32:26.920449 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep  4 20:32:26.920939 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep  4 20:32:26.925955 ignition[991]: INFO     : Ignition 2.18.0
Sep  4 20:32:26.925955 ignition[991]: INFO     : Stage: umount
Sep  4 20:32:26.933515 ignition[991]: INFO     : no configs at "/usr/lib/ignition/base.d"
Sep  4 20:32:26.933515 ignition[991]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep  4 20:32:26.933515 ignition[991]: INFO     : umount: umount passed
Sep  4 20:32:26.933515 ignition[991]: INFO     : Ignition finished successfully
Sep  4 20:32:26.936897 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep  4 20:32:26.937007 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep  4 20:32:26.938536 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep  4 20:32:26.938691 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep  4 20:32:26.940119 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep  4 20:32:26.940189 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep  4 20:32:26.940545 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep  4 20:32:26.940580 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep  4 20:32:26.941881 systemd[1]: Stopped target network.target - Network.
Sep  4 20:32:26.942197 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep  4 20:32:26.942248 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep  4 20:32:26.943032 systemd[1]: Stopped target paths.target - Path Units.
Sep  4 20:32:26.944037 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep  4 20:32:26.947405 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep  4 20:32:26.948040 systemd[1]: Stopped target slices.target - Slice Units.
Sep  4 20:32:26.948328 systemd[1]: Stopped target sockets.target - Socket Units.
Sep  4 20:32:26.948645 systemd[1]: iscsid.socket: Deactivated successfully.
Sep  4 20:32:26.948704 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep  4 20:32:26.949102 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep  4 20:32:26.949154 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep  4 20:32:26.949530 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep  4 20:32:26.949580 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep  4 20:32:26.950122 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep  4 20:32:26.952875 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep  4 20:32:26.953832 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep  4 20:32:26.954322 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep  4 20:32:26.956077 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep  4 20:32:26.956685 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep  4 20:32:26.956799 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep  4 20:32:26.957802 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep  4 20:32:26.957894 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep  4 20:32:26.960904 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep  4 20:32:26.961011 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep  4 20:32:26.963274 systemd-networkd[750]: eth0: DHCPv6 lease lost
Sep  4 20:32:26.964455 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep  4 20:32:26.964550 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep  4 20:32:26.966205 systemd-networkd[750]: eth1: DHCPv6 lease lost
Sep  4 20:32:26.967984 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep  4 20:32:26.968115 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep  4 20:32:26.969039 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep  4 20:32:26.969075 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep  4 20:32:26.981359 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep  4 20:32:26.981747 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep  4 20:32:26.981819 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep  4 20:32:26.982324 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep  4 20:32:26.982371 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep  4 20:32:26.982750 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep  4 20:32:26.982789 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep  4 20:32:26.983371 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep  4 20:32:26.999641 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep  4 20:32:27.000344 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep  4 20:32:27.001103 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep  4 20:32:27.001199 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep  4 20:32:27.002526 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep  4 20:32:27.002610 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep  4 20:32:27.003206 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep  4 20:32:27.003238 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep  4 20:32:27.004125 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep  4 20:32:27.004192 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep  4 20:32:27.005117 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep  4 20:32:27.005351 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep  4 20:32:27.006392 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep  4 20:32:27.006441 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 20:32:27.011344 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep  4 20:32:27.011879 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep  4 20:32:27.011972 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep  4 20:32:27.012457 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep  4 20:32:27.012501 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 20:32:27.020694 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep  4 20:32:27.020799 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep  4 20:32:27.022393 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep  4 20:32:27.024032 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep  4 20:32:27.038020 systemd[1]: Switching root.
Sep  4 20:32:27.079997 systemd-journald[183]: Journal stopped
Sep  4 20:32:28.026208 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Sep  4 20:32:28.026601 kernel: SELinux:  policy capability network_peer_controls=1
Sep  4 20:32:28.026634 kernel: SELinux:  policy capability open_perms=1
Sep  4 20:32:28.026658 kernel: SELinux:  policy capability extended_socket_class=1
Sep  4 20:32:28.026676 kernel: SELinux:  policy capability always_check_network=0
Sep  4 20:32:28.026701 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep  4 20:32:28.026724 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep  4 20:32:28.026741 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Sep  4 20:32:28.026763 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Sep  4 20:32:28.026779 kernel: audit: type=1403 audit(1725481947.232:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep  4 20:32:28.026805 systemd[1]: Successfully loaded SELinux policy in 36.557ms.
Sep  4 20:32:28.026828 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.506ms.
Sep  4 20:32:28.026848 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep  4 20:32:28.026871 systemd[1]: Detected virtualization kvm.
Sep  4 20:32:28.026893 systemd[1]: Detected architecture x86-64.
Sep  4 20:32:28.026916 systemd[1]: Detected first boot.
Sep  4 20:32:28.026933 systemd[1]: Hostname set to <ci-3975.2.1-b-0d33e4c091>.
Sep  4 20:32:28.026950 systemd[1]: Initializing machine ID from VM UUID.
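"Initializing machine ID from VM UUID" above refers to deriving /etc/machine-id from the hypervisor-provided DMI product UUID. The sketch below reads both and checks whether the machine ID is the UUID with dashes stripped and lower-cased, which is the assumed derivation; it needs root on a KVM guest.

    from pathlib import Path

    def read(path):
        return Path(path).read_text().strip()

    vm_uuid = read("/sys/class/dmi/id/product_uuid")  # usually readable only by root
    machine_id = read("/etc/machine-id")

    print("VM UUID:   ", vm_uuid)
    print("machine-id:", machine_id)
    print("derived?   ", machine_id == vm_uuid.replace("-", "").lower())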
Sep  4 20:32:28.026965 zram_generator::config[1033]: No configuration found.
Sep  4 20:32:28.026984 systemd[1]: Populated /etc with preset unit settings.
Sep  4 20:32:28.027001 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep  4 20:32:28.027023 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep  4 20:32:28.027042 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep  4 20:32:28.027061 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep  4 20:32:28.027079 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep  4 20:32:28.027098 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep  4 20:32:28.027115 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep  4 20:32:28.027132 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep  4 20:32:28.027530 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep  4 20:32:28.027557 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep  4 20:32:28.027584 systemd[1]: Created slice user.slice - User and Session Slice.
Sep  4 20:32:28.027602 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep  4 20:32:28.027622 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep  4 20:32:28.027640 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep  4 20:32:28.027656 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep  4 20:32:28.027672 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep  4 20:32:28.027688 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep  4 20:32:28.027706 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep  4 20:32:28.027730 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep  4 20:32:28.027749 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep  4 20:32:28.027769 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep  4 20:32:28.027789 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep  4 20:32:28.027807 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep  4 20:32:28.027825 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep  4 20:32:28.027845 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep  4 20:32:28.027867 systemd[1]: Reached target slices.target - Slice Units.
Sep  4 20:32:28.027884 systemd[1]: Reached target swap.target - Swaps.
Sep  4 20:32:28.027902 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep  4 20:32:28.027921 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep  4 20:32:28.027938 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep  4 20:32:28.027958 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep  4 20:32:28.027978 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep  4 20:32:28.027996 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep  4 20:32:28.028015 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep  4 20:32:28.028037 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep  4 20:32:28.028055 systemd[1]: Mounting media.mount - External Media Directory...
Sep  4 20:32:28.028072 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 20:32:28.028090 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep  4 20:32:28.028106 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep  4 20:32:28.028123 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep  4 20:32:28.028677 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep  4 20:32:28.028718 systemd[1]: Reached target machines.target - Containers.
Sep  4 20:32:28.028746 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep  4 20:32:28.028766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 20:32:28.028784 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep  4 20:32:28.028802 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep  4 20:32:28.028822 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep  4 20:32:28.028868 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep  4 20:32:28.028889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep  4 20:32:28.028909 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep  4 20:32:28.028928 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep  4 20:32:28.028955 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep  4 20:32:28.028975 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep  4 20:32:28.028995 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep  4 20:32:28.029015 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep  4 20:32:28.029036 systemd[1]: Stopped systemd-fsck-usr.service.
Sep  4 20:32:28.029055 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep  4 20:32:28.029080 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep  4 20:32:28.029100 kernel: fuse: init (API version 7.39)
Sep  4 20:32:28.029122 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep  4 20:32:28.029447 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep  4 20:32:28.029476 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep  4 20:32:28.029495 systemd[1]: verity-setup.service: Deactivated successfully.
Sep  4 20:32:28.029513 systemd[1]: Stopped verity-setup.service.
Sep  4 20:32:28.029534 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 20:32:28.029551 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep  4 20:32:28.029610 systemd-journald[1105]: Collecting audit messages is disabled.
Sep  4 20:32:28.029657 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep  4 20:32:28.029677 systemd[1]: Mounted media.mount - External Media Directory.
Sep  4 20:32:28.029698 systemd-journald[1105]: Journal started
Sep  4 20:32:28.029737 systemd-journald[1105]: Runtime Journal (/run/log/journal/56dcaae656294ba88b9d047e779c5f38) is 4.9M, max 39.3M, 34.4M free.
Sep  4 20:32:27.790983 systemd[1]: Queued start job for default target multi-user.target.
Sep  4 20:32:27.809018 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep  4 20:32:27.809449 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep  4 20:32:28.032257 kernel: loop: module loaded
Sep  4 20:32:28.036210 systemd[1]: Started systemd-journald.service - Journal Service.
Sep  4 20:32:28.038297 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep  4 20:32:28.038769 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep  4 20:32:28.039308 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep  4 20:32:28.040480 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep  4 20:32:28.045935 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep  4 20:32:28.046105 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep  4 20:32:28.047659 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep  4 20:32:28.047799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep  4 20:32:28.048517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep  4 20:32:28.048656 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep  4 20:32:28.049256 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep  4 20:32:28.049369 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep  4 20:32:28.050723 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep  4 20:32:28.050872 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep  4 20:32:28.054127 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep  4 20:32:28.064565 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep  4 20:32:28.071281 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep  4 20:32:28.076629 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep  4 20:32:28.089279 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep  4 20:32:28.092994 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep  4 20:32:28.093467 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep  4 20:32:28.093505 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep  4 20:32:28.094894 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep  4 20:32:28.106312 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep  4 20:32:28.111046 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep  4 20:32:28.111584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 20:32:28.115344 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep  4 20:32:28.118294 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep  4 20:32:28.118719 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep  4 20:32:28.126945 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep  4 20:32:28.127400 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep  4 20:32:28.138483 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep  4 20:32:28.142359 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep  4 20:32:28.145551 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep  4 20:32:28.146133 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep  4 20:32:28.146735 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep  4 20:32:28.178337 systemd-journald[1105]: Time spent on flushing to /var/log/journal/56dcaae656294ba88b9d047e779c5f38 is 46.253ms for 987 entries.
Sep  4 20:32:28.178337 systemd-journald[1105]: System Journal (/var/log/journal/56dcaae656294ba88b9d047e779c5f38) is 8.0M, max 195.6M, 187.6M free.
Sep  4 20:32:28.279303 systemd-journald[1105]: Received client request to flush runtime journal.
Sep  4 20:32:28.279372 kernel: loop0: detected capacity change from 0 to 139904
Sep  4 20:32:28.279396 kernel: block loop0: the capability attribute has been deprecated.
Sep  4 20:32:28.186644 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep  4 20:32:28.187199 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep  4 20:32:28.194578 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep  4 20:32:28.250543 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep  4 20:32:28.272800 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep  4 20:32:28.293424 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
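journald above reports a 4.9M runtime journal under /run/log/journal and an 8.0M system journal under /var/log/journal, then flushes the former into the latter. The sketch below approximates those figures by summing journal file sizes on disk; it is only an approximation of what journalctl --disk-usage reports and normally needs root or membership in the systemd-journal group.

    from pathlib import Path

    def journal_usage(root):
        """Sum the sizes of journal files below root, in bytes."""
        root = Path(root)
        if not root.is_dir():
            return 0
        return sum(p.stat().st_size for p in root.rglob("*.journal*"))

    for root in ("/run/log/journal", "/var/log/journal"):
        print(f"{root}: {journal_usage(root) / (1024 * 1024):.1f} MiB")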
Sep  4 20:32:28.297299 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep  4 20:32:28.306342 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep  4 20:32:28.308680 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep  4 20:32:28.329153 kernel: ACPI: bus type drm_connector registered
Sep  4 20:32:28.329843 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep  4 20:32:28.331597 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep  4 20:32:28.332171 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep  4 20:32:28.358681 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep  4 20:32:28.362182 kernel: loop1: detected capacity change from 0 to 80568
Sep  4 20:32:28.364501 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep  4 20:32:28.381509 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep  4 20:32:28.393403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep  4 20:32:28.416313 kernel: loop2: detected capacity change from 0 to 8
Sep  4 20:32:28.426564 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep  4 20:32:28.441829 kernel: loop3: detected capacity change from 0 to 211296
Sep  4 20:32:28.477269 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Sep  4 20:32:28.477296 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Sep  4 20:32:28.486487 kernel: loop4: detected capacity change from 0 to 139904
Sep  4 20:32:28.494594 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep  4 20:32:28.510177 kernel: loop5: detected capacity change from 0 to 80568
Sep  4 20:32:28.524517 kernel: loop6: detected capacity change from 0 to 8
Sep  4 20:32:28.528316 kernel: loop7: detected capacity change from 0 to 211296
Sep  4 20:32:28.547305 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Sep  4 20:32:28.548770 (sd-merge)[1178]: Merged extensions into '/usr'.
Sep  4 20:32:28.557837 systemd[1]: Reloading requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)...
Sep  4 20:32:28.557851 systemd[1]: Reloading...
Sep  4 20:32:28.662896 zram_generator::config[1201]: No configuration found.
Sep  4 20:32:28.807126 ldconfig[1139]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep  4 20:32:28.910779 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 20:32:28.976442 systemd[1]: Reloading finished in 418 ms.
Sep  4 20:32:28.998462 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep  4 20:32:29.002052 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
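systemd-sysext above merged the extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-digitalocean' into /usr; the kubernetes image was linked into /etc/extensions by the Ignition files stage earlier in this log. The sketch below lists the images found in systemd-sysext's documented search directories.

    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SEARCH_DIRS:
        path = Path(d)
        if not path.is_dir():
            continue
        for image in sorted(path.iterdir()):
            print(f"{d}: {image.name}")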
Sep  4 20:32:29.011778 systemd[1]: Starting ensure-sysext.service...
Sep  4 20:32:29.014867 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep  4 20:32:29.022317 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)...
Sep  4 20:32:29.022335 systemd[1]: Reloading...
Sep  4 20:32:29.085547 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep  4 20:32:29.085893 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep  4 20:32:29.086775 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep  4 20:32:29.087065 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Sep  4 20:32:29.087133 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Sep  4 20:32:29.096240 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Sep  4 20:32:29.096255 systemd-tmpfiles[1247]: Skipping /boot
Sep  4 20:32:29.127268 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Sep  4 20:32:29.127281 systemd-tmpfiles[1247]: Skipping /boot
Sep  4 20:32:29.153175 zram_generator::config[1271]: No configuration found.
Sep  4 20:32:29.290497 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 20:32:29.346571 systemd[1]: Reloading finished in 323 ms.
Sep  4 20:32:29.365423 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep  4 20:32:29.373663 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep  4 20:32:29.385377 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep  4 20:32:29.390351 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep  4 20:32:29.392333 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep  4 20:32:29.397362 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep  4 20:32:29.400407 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep  4 20:32:29.404501 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep  4 20:32:29.413438 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 20:32:29.413635 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 20:32:29.422637 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep  4 20:32:29.428495 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep  4 20:32:29.433809 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep  4 20:32:29.435403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 20:32:29.435616 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 20:32:29.446608 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep  4 20:32:29.448553 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep  4 20:32:29.449225 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep  4 20:32:29.455988 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 20:32:29.457256 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 20:32:29.464614 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep  4 20:32:29.465210 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 20:32:29.465386 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 20:32:29.471187 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 20:32:29.471412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 20:32:29.480504 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep  4 20:32:29.481366 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 20:32:29.481533 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 20:32:29.483321 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep  4 20:32:29.483944 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
Sep  4 20:32:29.491648 systemd[1]: Finished ensure-sysext.service.
Sep  4 20:32:29.505409 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep  4 20:32:29.505915 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep  4 20:32:29.506448 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep  4 20:32:29.512982 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep  4 20:32:29.524415 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep  4 20:32:29.565682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep  4 20:32:29.567199 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep  4 20:32:29.569068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep  4 20:32:29.570824 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep  4 20:32:29.577993 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep  4 20:32:29.591572 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep  4 20:32:29.591851 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep  4 20:32:29.594025 augenrules[1367]: No rules
Sep  4 20:32:29.596256 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep  4 20:32:29.609444 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep  4 20:32:29.609691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep  4 20:32:29.618863 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep  4 20:32:29.634916 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Sep  4 20:32:29.635495 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 20:32:29.635739 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 20:32:29.646449 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep  4 20:32:29.657446 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep  4 20:32:29.659165 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1354)
Sep  4 20:32:29.659618 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 20:32:29.668422 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep  4 20:32:29.670259 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep  4 20:32:29.670314 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 20:32:29.672171 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1351)
Sep  4 20:32:29.673726 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep  4 20:32:29.695267 kernel: ISO 9660 Extensions: RRIP_1991A
Sep  4 20:32:29.697070 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Sep  4 20:32:29.714248 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep  4 20:32:29.715797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep  4 20:32:29.716825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep  4 20:32:29.720030 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep  4 20:32:29.725882 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep  4 20:32:29.727321 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep  4 20:32:29.728473 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep  4 20:32:29.730918 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep  4 20:32:29.798179 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep  4 20:32:29.810214 kernel: ACPI: button: Power Button [PWRF]
Sep  4 20:32:29.846209 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep  4 20:32:30.014642 systemd-networkd[1346]: lo: Link UP
Sep  4 20:32:30.014654 systemd-networkd[1346]: lo: Gained carrier
Sep  4 20:32:30.018563 systemd-networkd[1346]: Enumeration completed
Sep  4 20:32:30.018917 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep  4 20:32:30.021010 systemd-networkd[1346]: eth0: Configuring with /run/systemd/network/10-f6:37:51:63:67:e6.network.
Sep  4 20:32:30.027436 systemd-networkd[1346]: eth1: Configuring with /run/systemd/network/10-56:5f:4a:2f:a3:4e.network.
Sep  4 20:32:30.027681 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep  4 20:32:30.030124 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep  4 20:32:30.030813 systemd[1]: Reached target time-set.target - System Time Set.
Sep  4 20:32:30.032315 systemd-networkd[1346]: eth0: Link UP
Sep  4 20:32:30.032321 systemd-networkd[1346]: eth0: Gained carrier
Sep  4 20:32:30.036344 systemd-networkd[1346]: eth1: Link UP
Sep  4 20:32:30.036490 systemd-networkd[1346]: eth1: Gained carrier
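systemd-networkd above configures eth0 and eth1 from generated units named 10-<MAC>.network under /run/systemd/network. The sketch below pairs each local interface with the unit that naming pattern would point at; the pattern is taken from these log lines rather than from any specification.

    from pathlib import Path

    NET_DIR = Path("/run/systemd/network")

    for iface in sorted(Path("/sys/class/net").iterdir()):
        address = iface / "address"
        if not address.is_file():
            continue
        mac = address.read_text().strip()
        unit = NET_DIR / f"10-{mac}.network"
        state = "present" if unit.exists() else "missing"
        print(f"{iface.name}: {unit} ({state})")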
Sep  4 20:32:30.041743 systemd-resolved[1320]: Positive Trust Anchors:
Sep  4 20:32:30.041766 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep  4 20:32:30.041803 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep  4 20:32:30.051127 systemd-resolved[1320]: Using system hostname 'ci-3975.2.1-b-0d33e4c091'.
Sep  4 20:32:30.083203 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Sep  4 20:32:30.095951 systemd-timesyncd[1338]: Contacted time server 204.2.134.163:123 (0.flatcar.pool.ntp.org).
Sep  4 20:32:30.096013 systemd-timesyncd[1338]: Initial clock synchronization to Wed 2024-09-04 20:32:29.946991 UTC.
Sep  4 20:32:30.096961 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep  4 20:32:30.099837 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep  4 20:32:30.103299 systemd[1]: Reached target network.target - Network.
Sep  4 20:32:30.103974 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep  4 20:32:30.107161 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep  4 20:32:30.111798 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep  4 20:32:30.115573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 20:32:30.117160 kernel: mousedev: PS/2 mouse device common for all mice
Sep  4 20:32:30.165715 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep  4 20:32:30.190375 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep  4 20:32:30.190451 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep  4 20:32:30.194180 kernel: Console: switching to colour dummy device 80x25
Sep  4 20:32:30.194267 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep  4 20:32:30.194285 kernel: [drm] features: -context_init
Sep  4 20:32:30.198177 kernel: [drm] number of scanouts: 1
Sep  4 20:32:30.198312 kernel: [drm] number of cap sets: 0
Sep  4 20:32:30.202230 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Sep  4 20:32:30.212243 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep  4 20:32:30.212340 kernel: Console: switching to colour frame buffer device 128x48
Sep  4 20:32:30.228168 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep  4 20:32:30.234707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep  4 20:32:30.235182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 20:32:30.253578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 20:32:30.264511 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep  4 20:32:30.265097 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 20:32:30.282684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 20:32:30.296677 kernel: EDAC MC: Ver: 3.0.0
Sep  4 20:32:30.337462 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep  4 20:32:30.339211 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 20:32:30.348546 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep  4 20:32:30.368175 lvm[1428]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep  4 20:32:30.411085 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep  4 20:32:30.412916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep  4 20:32:30.413163 systemd[1]: Reached target sysinit.target - System Initialization.
Sep  4 20:32:30.413474 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep  4 20:32:30.413673 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep  4 20:32:30.414713 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep  4 20:32:30.415133 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep  4 20:32:30.415495 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep  4 20:32:30.415803 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep  4 20:32:30.415967 systemd[1]: Reached target paths.target - Path Units.
Sep  4 20:32:30.416250 systemd[1]: Reached target timers.target - Timer Units.
Sep  4 20:32:30.419448 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep  4 20:32:30.424032 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep  4 20:32:30.433294 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep  4 20:32:30.438369 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep  4 20:32:30.442276 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep  4 20:32:30.443035 systemd[1]: Reached target sockets.target - Socket Units.
Sep  4 20:32:30.443641 systemd[1]: Reached target basic.target - Basic System.
Sep  4 20:32:30.445310 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep  4 20:32:30.445456 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep  4 20:32:30.453234 systemd[1]: Starting containerd.service - containerd container runtime...
Sep  4 20:32:30.458010 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep  4 20:32:30.469120 lvm[1432]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep  4 20:32:30.466347 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep  4 20:32:30.479375 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep  4 20:32:30.492616 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep  4 20:32:30.496928 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep  4 20:32:30.509559 jq[1436]: false
Sep  4 20:32:30.500496 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep  4 20:32:30.512452 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep  4 20:32:30.526504 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep  4 20:32:30.543589 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep  4 20:32:30.557497 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep  4 20:32:30.562362 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep  4 20:32:30.563401 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep  4 20:32:30.576441 systemd[1]: Starting update-engine.service - Update Engine...
Sep  4 20:32:30.585346 dbus-daemon[1435]: [system] SELinux support is enabled
Sep  4 20:32:30.592505 coreos-metadata[1434]: Sep 04 20:32:30.588 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep  4 20:32:30.592505 coreos-metadata[1434]: Sep 04 20:32:30.588 INFO Fetch successful
Sep  4 20:32:30.602351 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep  4 20:32:30.613103 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep  4 20:32:30.626395 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep  4 20:32:30.638832 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep  4 20:32:30.639111 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep  4 20:32:30.653554 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep  4 20:32:30.653943 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep  4 20:32:30.679046 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep  4 20:32:30.679130 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep  4 20:32:30.681433 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep  4 20:32:30.681578 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Sep  4 20:32:30.681617 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep  4 20:32:30.696186 update_engine[1445]: I0904 20:32:30.689111  1445 main.cc:92] Flatcar Update Engine starting
Sep  4 20:32:30.700268 jq[1446]: true
Sep  4 20:32:30.700740 update_engine[1445]: I0904 20:32:30.700194  1445 update_check_scheduler.cc:74] Next update check in 4m32s
Sep  4 20:32:30.712972 systemd[1]: motdgen.service: Deactivated successfully.
Sep  4 20:32:30.715414 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep  4 20:32:30.720726 systemd-logind[1444]: New seat seat0.
Sep  4 20:32:30.724873 systemd[1]: Started update-engine.service - Update Engine.
Sep  4 20:32:30.727135 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button)
Sep  4 20:32:30.728349 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep  4 20:32:30.731706 systemd[1]: Started systemd-logind.service - User Login Management.
Sep  4 20:32:30.744438 extend-filesystems[1437]: Found loop4
Sep  4 20:32:30.757279 extend-filesystems[1437]: Found loop5
Sep  4 20:32:30.757279 extend-filesystems[1437]: Found loop6
Sep  4 20:32:30.757279 extend-filesystems[1437]: Found loop7
Sep  4 20:32:30.757279 extend-filesystems[1437]: Found vda
Sep  4 20:32:30.757279 extend-filesystems[1437]: Found vda1
Sep  4 20:32:30.757279 extend-filesystems[1437]: Found vda2
Sep  4 20:32:30.757279 extend-filesystems[1437]: Found vda3
Sep  4 20:32:30.757279 extend-filesystems[1437]: Found usr
Sep  4 20:32:30.757279 extend-filesystems[1437]: Found vda4
Sep  4 20:32:30.757279 extend-filesystems[1437]: Found vda6
Sep  4 20:32:30.757279 extend-filesystems[1437]: Found vda7
Sep  4 20:32:30.757279 extend-filesystems[1437]: Found vda9
Sep  4 20:32:30.757279 extend-filesystems[1437]: Checking size of /dev/vda9
Sep  4 20:32:30.841880 extend-filesystems[1437]: Resized partition /dev/vda9
Sep  4 20:32:30.758788 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep  4 20:32:30.857496 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep  4 20:32:30.859688 extend-filesystems[1479]: resize2fs 1.47.0 (5-Feb-2023)
Sep  4 20:32:30.784754 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep  4 20:32:30.866368 tar[1451]: linux-amd64/helm
Sep  4 20:32:30.869729 jq[1465]: true
Sep  4 20:32:30.872607 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep  4 20:32:30.882825 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep  4 20:32:30.968342 bash[1497]: Updated "/home/core/.ssh/authorized_keys"
Sep  4 20:32:30.974053 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep  4 20:32:30.987852 systemd[1]: Starting sshkeys.service...
Sep  4 20:32:30.998203 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1352)
Sep  4 20:32:31.031075 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep  4 20:32:31.042706 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep  4 20:32:31.116443 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep  4 20:32:31.132437 extend-filesystems[1479]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep  4 20:32:31.132437 extend-filesystems[1479]: old_desc_blocks = 1, new_desc_blocks = 8
Sep  4 20:32:31.132437 extend-filesystems[1479]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep  4 20:32:31.136241 coreos-metadata[1500]: Sep 04 20:32:31.132 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep  4 20:32:31.130574 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep  4 20:32:31.136566 extend-filesystems[1437]: Resized filesystem in /dev/vda9
Sep  4 20:32:31.136566 extend-filesystems[1437]: Found vdb
Sep  4 20:32:31.130751 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep  4 20:32:31.149284 coreos-metadata[1500]: Sep 04 20:32:31.149 INFO Fetch successful
Sep  4 20:32:31.162924 unknown[1500]: wrote ssh authorized keys file for user: core
Sep  4 20:32:31.193372 update-ssh-keys[1508]: Updated "/home/core/.ssh/authorized_keys"
Sep  4 20:32:31.195576 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep  4 20:32:31.199555 systemd[1]: Finished sshkeys.service.
Sep  4 20:32:31.251873 locksmithd[1472]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep  4 20:32:31.425846 containerd[1467]: time="2024-09-04T20:32:31.425686768Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Sep  4 20:32:31.456578 containerd[1467]: time="2024-09-04T20:32:31.456516703Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep  4 20:32:31.456755 containerd[1467]: time="2024-09-04T20:32:31.456732227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep  4 20:32:31.460799 containerd[1467]: time="2024-09-04T20:32:31.460741849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep  4 20:32:31.460947 containerd[1467]: time="2024-09-04T20:32:31.460930297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep  4 20:32:31.461287 containerd[1467]: time="2024-09-04T20:32:31.461261297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep  4 20:32:31.461353 containerd[1467]: time="2024-09-04T20:32:31.461343507Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep  4 20:32:31.461490 containerd[1467]: time="2024-09-04T20:32:31.461477154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep  4 20:32:31.461585 containerd[1467]: time="2024-09-04T20:32:31.461571429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep  4 20:32:31.462271 containerd[1467]: time="2024-09-04T20:32:31.461620221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep  4 20:32:31.462271 containerd[1467]: time="2024-09-04T20:32:31.461683572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep  4 20:32:31.462271 containerd[1467]: time="2024-09-04T20:32:31.461874691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep  4 20:32:31.462271 containerd[1467]: time="2024-09-04T20:32:31.461891349Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep  4 20:32:31.462271 containerd[1467]: time="2024-09-04T20:32:31.461900946Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep  4 20:32:31.462271 containerd[1467]: time="2024-09-04T20:32:31.462014868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep  4 20:32:31.462271 containerd[1467]: time="2024-09-04T20:32:31.462030318Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep  4 20:32:31.462271 containerd[1467]: time="2024-09-04T20:32:31.462079265Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep  4 20:32:31.462271 containerd[1467]: time="2024-09-04T20:32:31.462090212Z" level=info msg="metadata content store policy set" policy=shared
Sep  4 20:32:31.466374 containerd[1467]: time="2024-09-04T20:32:31.466316468Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep  4 20:32:31.466786 containerd[1467]: time="2024-09-04T20:32:31.466648388Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep  4 20:32:31.466995 containerd[1467]: time="2024-09-04T20:32:31.466969552Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep  4 20:32:31.467194 containerd[1467]: time="2024-09-04T20:32:31.467177141Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep  4 20:32:31.467287 containerd[1467]: time="2024-09-04T20:32:31.467276256Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep  4 20:32:31.467355 containerd[1467]: time="2024-09-04T20:32:31.467345369Z" level=info msg="NRI interface is disabled by configuration."
Sep  4 20:32:31.467400 containerd[1467]: time="2024-09-04T20:32:31.467391908Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep  4 20:32:31.467668 containerd[1467]: time="2024-09-04T20:32:31.467640606Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep  4 20:32:31.467774 containerd[1467]: time="2024-09-04T20:32:31.467756464Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep  4 20:32:31.467854 containerd[1467]: time="2024-09-04T20:32:31.467836661Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep  4 20:32:31.467923 containerd[1467]: time="2024-09-04T20:32:31.467907919Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep  4 20:32:31.467978 containerd[1467]: time="2024-09-04T20:32:31.467967553Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep  4 20:32:31.469159 containerd[1467]: time="2024-09-04T20:32:31.468172082Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep  4 20:32:31.469159 containerd[1467]: time="2024-09-04T20:32:31.468192318Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep  4 20:32:31.469159 containerd[1467]: time="2024-09-04T20:32:31.468208440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep  4 20:32:31.469159 containerd[1467]: time="2024-09-04T20:32:31.468224894Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep  4 20:32:31.469159 containerd[1467]: time="2024-09-04T20:32:31.468238371Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep  4 20:32:31.469159 containerd[1467]: time="2024-09-04T20:32:31.468253199Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep  4 20:32:31.469159 containerd[1467]: time="2024-09-04T20:32:31.468265564Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep  4 20:32:31.469159 containerd[1467]: time="2024-09-04T20:32:31.468420147Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep  4 20:32:31.469159 containerd[1467]: time="2024-09-04T20:32:31.468670842Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep  4 20:32:31.469159 containerd[1467]: time="2024-09-04T20:32:31.468699482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.469159 containerd[1467]: time="2024-09-04T20:32:31.468746144Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep  4 20:32:31.469159 containerd[1467]: time="2024-09-04T20:32:31.468776104Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep  4 20:32:31.469159 containerd[1467]: time="2024-09-04T20:32:31.468839416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.469159 containerd[1467]: time="2024-09-04T20:32:31.468854673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.469493 containerd[1467]: time="2024-09-04T20:32:31.468867471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.469493 containerd[1467]: time="2024-09-04T20:32:31.468878446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.469493 containerd[1467]: time="2024-09-04T20:32:31.468898013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.469493 containerd[1467]: time="2024-09-04T20:32:31.468917294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.469493 containerd[1467]: time="2024-09-04T20:32:31.468934628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.469493 containerd[1467]: time="2024-09-04T20:32:31.468951859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.469493 containerd[1467]: time="2024-09-04T20:32:31.468967279Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep  4 20:32:31.469493 containerd[1467]: time="2024-09-04T20:32:31.469120954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.469866 containerd[1467]: time="2024-09-04T20:32:31.469845893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.470131 containerd[1467]: time="2024-09-04T20:32:31.470017980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.470224 containerd[1467]: time="2024-09-04T20:32:31.470211990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.470284 containerd[1467]: time="2024-09-04T20:32:31.470271140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.470354 containerd[1467]: time="2024-09-04T20:32:31.470338612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.470424 containerd[1467]: time="2024-09-04T20:32:31.470409096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.470494 containerd[1467]: time="2024-09-04T20:32:31.470475297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep  4 20:32:31.470970 containerd[1467]: time="2024-09-04T20:32:31.470899823Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep  4 20:32:31.471267 containerd[1467]: time="2024-09-04T20:32:31.471246834Z" level=info msg="Connect containerd service"
Sep  4 20:32:31.471890 containerd[1467]: time="2024-09-04T20:32:31.471354466Z" level=info msg="using legacy CRI server"
Sep  4 20:32:31.471890 containerd[1467]: time="2024-09-04T20:32:31.471370371Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep  4 20:32:31.471890 containerd[1467]: time="2024-09-04T20:32:31.471495697Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep  4 20:32:31.472474 containerd[1467]: time="2024-09-04T20:32:31.472447038Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep  4 20:32:31.472671 containerd[1467]: time="2024-09-04T20:32:31.472654489Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep  4 20:32:31.472812 containerd[1467]: time="2024-09-04T20:32:31.472795691Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep  4 20:32:31.472884 containerd[1467]: time="2024-09-04T20:32:31.472873456Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep  4 20:32:31.473085 containerd[1467]: time="2024-09-04T20:32:31.473064791Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep  4 20:32:31.473256 containerd[1467]: time="2024-09-04T20:32:31.472754680Z" level=info msg="Start subscribing containerd event"
Sep  4 20:32:31.473344 containerd[1467]: time="2024-09-04T20:32:31.473330126Z" level=info msg="Start recovering state"
Sep  4 20:32:31.473489 containerd[1467]: time="2024-09-04T20:32:31.473472361Z" level=info msg="Start event monitor"
Sep  4 20:32:31.473561 containerd[1467]: time="2024-09-04T20:32:31.473548650Z" level=info msg="Start snapshots syncer"
Sep  4 20:32:31.473942 containerd[1467]: time="2024-09-04T20:32:31.473623381Z" level=info msg="Start cni network conf syncer for default"
Sep  4 20:32:31.473942 containerd[1467]: time="2024-09-04T20:32:31.473639196Z" level=info msg="Start streaming server"
Sep  4 20:32:31.474395 containerd[1467]: time="2024-09-04T20:32:31.474377057Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep  4 20:32:31.474498 containerd[1467]: time="2024-09-04T20:32:31.474485936Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep  4 20:32:31.474644 containerd[1467]: time="2024-09-04T20:32:31.474631168Z" level=info msg="containerd successfully booted in 0.051063s"
Sep  4 20:32:31.474811 systemd[1]: Started containerd.service - containerd container runtime.
Sep  4 20:32:31.555231 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep  4 20:32:31.582630 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep  4 20:32:31.592624 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep  4 20:32:31.600342 systemd-networkd[1346]: eth0: Gained IPv6LL
Sep  4 20:32:31.604670 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep  4 20:32:31.608453 systemd[1]: issuegen.service: Deactivated successfully.
Sep  4 20:32:31.609474 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep  4 20:32:31.614657 systemd[1]: Reached target network-online.target - Network is Online.
Sep  4 20:32:31.624602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 20:32:31.635438 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep  4 20:32:31.641315 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep  4 20:32:31.674969 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep  4 20:32:31.685949 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep  4 20:32:31.698690 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep  4 20:32:31.701538 systemd[1]: Reached target getty.target - Login Prompts.
Sep  4 20:32:31.704078 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep  4 20:32:31.728386 systemd-networkd[1346]: eth1: Gained IPv6LL
Sep  4 20:32:31.848861 tar[1451]: linux-amd64/LICENSE
Sep  4 20:32:31.849074 tar[1451]: linux-amd64/README.md
Sep  4 20:32:31.862802 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep  4 20:32:32.494702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 20:32:32.497230 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep  4 20:32:32.498615 systemd[1]: Startup finished in 1.125s (kernel) + 5.544s (initrd) + 5.301s (userspace) = 11.971s.
Sep  4 20:32:32.506270 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep  4 20:32:33.171730 kubelet[1557]: E0904 20:32:33.171647    1557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep  4 20:32:33.174825 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep  4 20:32:33.174976 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep  4 20:32:33.175300 systemd[1]: kubelet.service: Consumed 1.170s CPU time.
Sep  4 20:32:35.186008 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep  4 20:32:35.187324 systemd[1]: Started sshd@0-209.38.64.58:22-139.178.68.195:48166.service - OpenSSH per-connection server daemon (139.178.68.195:48166).
Sep  4 20:32:35.259213 sshd[1571]: Accepted publickey for core from 139.178.68.195 port 48166 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:32:35.261388 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:32:35.272381 systemd-logind[1444]: New session 1 of user core.
Sep  4 20:32:35.273342 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep  4 20:32:35.281490 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep  4 20:32:35.296866 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep  4 20:32:35.304553 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep  4 20:32:35.315472 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:32:35.449105 systemd[1575]: Queued start job for default target default.target.
Sep  4 20:32:35.458799 systemd[1575]: Created slice app.slice - User Application Slice.
Sep  4 20:32:35.458870 systemd[1575]: Reached target paths.target - Paths.
Sep  4 20:32:35.458897 systemd[1575]: Reached target timers.target - Timers.
Sep  4 20:32:35.461666 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep  4 20:32:35.490505 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep  4 20:32:35.490735 systemd[1575]: Reached target sockets.target - Sockets.
Sep  4 20:32:35.490753 systemd[1575]: Reached target basic.target - Basic System.
Sep  4 20:32:35.490815 systemd[1575]: Reached target default.target - Main User Target.
Sep  4 20:32:35.490853 systemd[1575]: Startup finished in 164ms.
Sep  4 20:32:35.491391 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep  4 20:32:35.504590 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep  4 20:32:35.580057 systemd[1]: Started sshd@1-209.38.64.58:22-139.178.68.195:48172.service - OpenSSH per-connection server daemon (139.178.68.195:48172).
Sep  4 20:32:35.633335 sshd[1586]: Accepted publickey for core from 139.178.68.195 port 48172 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:32:35.634469 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:32:35.640846 systemd-logind[1444]: New session 2 of user core.
Sep  4 20:32:35.656512 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep  4 20:32:35.719793 sshd[1586]: pam_unix(sshd:session): session closed for user core
Sep  4 20:32:35.732271 systemd[1]: sshd@1-209.38.64.58:22-139.178.68.195:48172.service: Deactivated successfully.
Sep  4 20:32:35.734873 systemd[1]: session-2.scope: Deactivated successfully.
Sep  4 20:32:35.738496 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit.
Sep  4 20:32:35.742781 systemd[1]: Started sshd@2-209.38.64.58:22-139.178.68.195:48186.service - OpenSSH per-connection server daemon (139.178.68.195:48186).
Sep  4 20:32:35.745283 systemd-logind[1444]: Removed session 2.
Sep  4 20:32:35.793033 sshd[1593]: Accepted publickey for core from 139.178.68.195 port 48186 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:32:35.794993 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:32:35.800852 systemd-logind[1444]: New session 3 of user core.
Sep  4 20:32:35.811546 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep  4 20:32:35.870401 sshd[1593]: pam_unix(sshd:session): session closed for user core
Sep  4 20:32:35.883993 systemd[1]: sshd@2-209.38.64.58:22-139.178.68.195:48186.service: Deactivated successfully.
Sep  4 20:32:35.885934 systemd[1]: session-3.scope: Deactivated successfully.
Sep  4 20:32:35.890286 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit.
Sep  4 20:32:35.894571 systemd[1]: Started sshd@3-209.38.64.58:22-139.178.68.195:48198.service - OpenSSH per-connection server daemon (139.178.68.195:48198).
Sep  4 20:32:35.896970 systemd-logind[1444]: Removed session 3.
Sep  4 20:32:35.934488 sshd[1600]: Accepted publickey for core from 139.178.68.195 port 48198 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:32:35.936119 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:32:35.941698 systemd-logind[1444]: New session 4 of user core.
Sep  4 20:32:35.947420 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep  4 20:32:36.009879 sshd[1600]: pam_unix(sshd:session): session closed for user core
Sep  4 20:32:36.022630 systemd[1]: sshd@3-209.38.64.58:22-139.178.68.195:48198.service: Deactivated successfully.
Sep  4 20:32:36.024626 systemd[1]: session-4.scope: Deactivated successfully.
Sep  4 20:32:36.025237 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit.
Sep  4 20:32:36.031549 systemd[1]: Started sshd@4-209.38.64.58:22-139.178.68.195:48212.service - OpenSSH per-connection server daemon (139.178.68.195:48212).
Sep  4 20:32:36.033559 systemd-logind[1444]: Removed session 4.
Sep  4 20:32:36.083247 sshd[1607]: Accepted publickey for core from 139.178.68.195 port 48212 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:32:36.084928 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:32:36.090701 systemd-logind[1444]: New session 5 of user core.
Sep  4 20:32:36.096387 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep  4 20:32:36.164150 sudo[1610]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep  4 20:32:36.164485 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep  4 20:32:36.182805 sudo[1610]: pam_unix(sudo:session): session closed for user root
Sep  4 20:32:36.186479 sshd[1607]: pam_unix(sshd:session): session closed for user core
Sep  4 20:32:36.195186 systemd[1]: sshd@4-209.38.64.58:22-139.178.68.195:48212.service: Deactivated successfully.
Sep  4 20:32:36.197348 systemd[1]: session-5.scope: Deactivated successfully.
Sep  4 20:32:36.199377 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit.
Sep  4 20:32:36.205538 systemd[1]: Started sshd@5-209.38.64.58:22-139.178.68.195:48222.service - OpenSSH per-connection server daemon (139.178.68.195:48222).
Sep  4 20:32:36.207556 systemd-logind[1444]: Removed session 5.
Sep  4 20:32:36.244567 sshd[1615]: Accepted publickey for core from 139.178.68.195 port 48222 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:32:36.246399 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:32:36.252268 systemd-logind[1444]: New session 6 of user core.
Sep  4 20:32:36.263453 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep  4 20:32:36.323055 sudo[1619]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep  4 20:32:36.323788 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep  4 20:32:36.328340 sudo[1619]: pam_unix(sudo:session): session closed for user root
Sep  4 20:32:36.335051 sudo[1618]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep  4 20:32:36.335500 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep  4 20:32:36.358619 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep  4 20:32:36.360880 auditctl[1622]: No rules
Sep  4 20:32:36.361259 systemd[1]: audit-rules.service: Deactivated successfully.
Sep  4 20:32:36.361452 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep  4 20:32:36.368593 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep  4 20:32:36.396764 augenrules[1640]: No rules
Sep  4 20:32:36.398290 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep  4 20:32:36.399929 sudo[1618]: pam_unix(sudo:session): session closed for user root
Sep  4 20:32:36.403407 sshd[1615]: pam_unix(sshd:session): session closed for user core
Sep  4 20:32:36.415559 systemd[1]: sshd@5-209.38.64.58:22-139.178.68.195:48222.service: Deactivated successfully.
Sep  4 20:32:36.418296 systemd[1]: session-6.scope: Deactivated successfully.
Sep  4 20:32:36.418911 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit.
Sep  4 20:32:36.424489 systemd[1]: Started sshd@6-209.38.64.58:22-139.178.68.195:56288.service - OpenSSH per-connection server daemon (139.178.68.195:56288).
Sep  4 20:32:36.425951 systemd-logind[1444]: Removed session 6.
Sep  4 20:32:36.473805 sshd[1648]: Accepted publickey for core from 139.178.68.195 port 56288 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:32:36.475406 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:32:36.480198 systemd-logind[1444]: New session 7 of user core.
Sep  4 20:32:36.483358 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep  4 20:32:36.540564 sudo[1651]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep  4 20:32:36.541532 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep  4 20:32:36.658545 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep  4 20:32:36.659012 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep  4 20:32:37.058348 dockerd[1660]: time="2024-09-04T20:32:37.058263872Z" level=info msg="Starting up"
Sep  4 20:32:37.078885 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4138301037-merged.mount: Deactivated successfully.
Sep  4 20:32:37.085548 systemd[1]: var-lib-docker-metacopy\x2dcheck3515869951-merged.mount: Deactivated successfully.
Sep  4 20:32:37.110957 dockerd[1660]: time="2024-09-04T20:32:37.110739939Z" level=info msg="Loading containers: start."
Sep  4 20:32:37.274428 kernel: Initializing XFRM netlink socket
Sep  4 20:32:37.394625 systemd-networkd[1346]: docker0: Link UP
Sep  4 20:32:37.412187 dockerd[1660]: time="2024-09-04T20:32:37.411982008Z" level=info msg="Loading containers: done."
Sep  4 20:32:37.505487 dockerd[1660]: time="2024-09-04T20:32:37.505270860Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep  4 20:32:37.505776 dockerd[1660]: time="2024-09-04T20:32:37.505740819Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Sep  4 20:32:37.505926 dockerd[1660]: time="2024-09-04T20:32:37.505902520Z" level=info msg="Daemon has completed initialization"
Sep  4 20:32:37.548383 dockerd[1660]: time="2024-09-04T20:32:37.547634141Z" level=info msg="API listen on /run/docker.sock"
Sep  4 20:32:37.548215 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep  4 20:32:38.441134 containerd[1467]: time="2024-09-04T20:32:38.440957834Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\""
Sep  4 20:32:38.990620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount833357541.mount: Deactivated successfully.
Sep  4 20:32:41.015575 containerd[1467]: time="2024-09-04T20:32:41.015468948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:41.017304 containerd[1467]: time="2024-09-04T20:32:41.017230561Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=35232949"
Sep  4 20:32:41.017469 containerd[1467]: time="2024-09-04T20:32:41.017286135Z" level=info msg="ImageCreate event name:\"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:41.021680 containerd[1467]: time="2024-09-04T20:32:41.021541400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:41.023905 containerd[1467]: time="2024-09-04T20:32:41.023549113Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"35229749\" in 2.58253204s"
Sep  4 20:32:41.023905 containerd[1467]: time="2024-09-04T20:32:41.023637829Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\""
Sep  4 20:32:41.071839 containerd[1467]: time="2024-09-04T20:32:41.071711233Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\""
Sep  4 20:32:42.987507 containerd[1467]: time="2024-09-04T20:32:42.987451039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:42.989712 containerd[1467]: time="2024-09-04T20:32:42.989626425Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=32206206"
Sep  4 20:32:42.991198 containerd[1467]: time="2024-09-04T20:32:42.990879415Z" level=info msg="ImageCreate event name:\"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:42.993715 containerd[1467]: time="2024-09-04T20:32:42.993646249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:42.995054 containerd[1467]: time="2024-09-04T20:32:42.994834731Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"33756152\" in 1.923074545s"
Sep  4 20:32:42.995054 containerd[1467]: time="2024-09-04T20:32:42.994884688Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\""
Sep  4 20:32:43.030651 containerd[1467]: time="2024-09-04T20:32:43.030612655Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\""
Sep  4 20:32:43.425442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep  4 20:32:43.433427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 20:32:43.559727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 20:32:43.570689 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep  4 20:32:43.642573 kubelet[1877]: E0904 20:32:43.642394    1877 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep  4 20:32:43.647814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep  4 20:32:43.647989 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep  4 20:32:44.389914 containerd[1467]: time="2024-09-04T20:32:44.388756561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:44.389914 containerd[1467]: time="2024-09-04T20:32:44.389622003Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=17321507"
Sep  4 20:32:44.389914 containerd[1467]: time="2024-09-04T20:32:44.389848888Z" level=info msg="ImageCreate event name:\"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:44.393495 containerd[1467]: time="2024-09-04T20:32:44.393448926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:44.395168 containerd[1467]: time="2024-09-04T20:32:44.395088172Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"18871471\" in 1.364435655s"
Sep  4 20:32:44.395363 containerd[1467]: time="2024-09-04T20:32:44.395336937Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\""
Sep  4 20:32:44.426762 containerd[1467]: time="2024-09-04T20:32:44.426716806Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\""
Sep  4 20:32:45.562989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3922360291.mount: Deactivated successfully.
Sep  4 20:32:46.038165 containerd[1467]: time="2024-09-04T20:32:46.037410519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:46.038557 containerd[1467]: time="2024-09-04T20:32:46.038277806Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=28600380"
Sep  4 20:32:46.039054 containerd[1467]: time="2024-09-04T20:32:46.039017075Z" level=info msg="ImageCreate event name:\"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:46.041619 containerd[1467]: time="2024-09-04T20:32:46.041571256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:46.042621 containerd[1467]: time="2024-09-04T20:32:46.042451020Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"28599399\" in 1.615494885s"
Sep  4 20:32:46.042621 containerd[1467]: time="2024-09-04T20:32:46.042504366Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\""
Sep  4 20:32:46.076922 containerd[1467]: time="2024-09-04T20:32:46.076884541Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Sep  4 20:32:46.619296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3041547430.mount: Deactivated successfully.
Sep  4 20:32:47.518224 containerd[1467]: time="2024-09-04T20:32:47.518131356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:47.519888 containerd[1467]: time="2024-09-04T20:32:47.519827867Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Sep  4 20:32:47.520096 containerd[1467]: time="2024-09-04T20:32:47.519880159Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:47.524562 containerd[1467]: time="2024-09-04T20:32:47.524386442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:47.527471 containerd[1467]: time="2024-09-04T20:32:47.526256585Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.449168485s"
Sep  4 20:32:47.527471 containerd[1467]: time="2024-09-04T20:32:47.526315245Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Sep  4 20:32:47.576206 containerd[1467]: time="2024-09-04T20:32:47.576135964Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep  4 20:32:48.091171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617086982.mount: Deactivated successfully.
Sep  4 20:32:48.095510 containerd[1467]: time="2024-09-04T20:32:48.094572294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:48.095510 containerd[1467]: time="2024-09-04T20:32:48.095408417Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Sep  4 20:32:48.095510 containerd[1467]: time="2024-09-04T20:32:48.095460170Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:48.097917 containerd[1467]: time="2024-09-04T20:32:48.097872090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:48.099128 containerd[1467]: time="2024-09-04T20:32:48.098966777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 522.770679ms"
Sep  4 20:32:48.099128 containerd[1467]: time="2024-09-04T20:32:48.099001366Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Sep  4 20:32:48.136187 containerd[1467]: time="2024-09-04T20:32:48.135894454Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Sep  4 20:32:48.674257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2980188766.mount: Deactivated successfully.
Sep  4 20:32:50.345376 containerd[1467]: time="2024-09-04T20:32:50.345009967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:50.346175 containerd[1467]: time="2024-09-04T20:32:50.346099755Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Sep  4 20:32:50.346912 containerd[1467]: time="2024-09-04T20:32:50.346578023Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:50.349930 containerd[1467]: time="2024-09-04T20:32:50.349864286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:32:50.352441 containerd[1467]: time="2024-09-04T20:32:50.351809814Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.215869237s"
Sep  4 20:32:50.352441 containerd[1467]: time="2024-09-04T20:32:50.351873881Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Sep  4 20:32:53.847029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep  4 20:32:53.857637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 20:32:53.873348 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep  4 20:32:53.873430 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep  4 20:32:53.873911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 20:32:53.879393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 20:32:53.914196 systemd[1]: Reloading requested from client PID 2072 ('systemctl') (unit session-7.scope)...
Sep  4 20:32:53.914219 systemd[1]: Reloading...
Sep  4 20:32:54.019408 zram_generator::config[2106]: No configuration found.
Sep  4 20:32:54.203182 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 20:32:54.326841 systemd[1]: Reloading finished in 412 ms.
Sep  4 20:32:54.402869 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep  4 20:32:54.403214 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep  4 20:32:54.403695 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 20:32:54.411560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 20:32:54.559114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 20:32:54.570644 (kubelet)[2164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep  4 20:32:54.629534 kubelet[2164]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 20:32:54.629534 kubelet[2164]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep  4 20:32:54.629534 kubelet[2164]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 20:32:54.630776 kubelet[2164]: I0904 20:32:54.630716    2164 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep  4 20:32:55.490205 kubelet[2164]: I0904 20:32:55.489677    2164 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Sep  4 20:32:55.490205 kubelet[2164]: I0904 20:32:55.489723    2164 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep  4 20:32:55.490205 kubelet[2164]: I0904 20:32:55.490032    2164 server.go:919] "Client rotation is on, will bootstrap in background"
Sep  4 20:32:55.511919 kubelet[2164]: E0904 20:32:55.511871    2164 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://209.38.64.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:55.513681 kubelet[2164]: I0904 20:32:55.513525    2164 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep  4 20:32:55.528603 kubelet[2164]: I0904 20:32:55.528574    2164 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Sep  4 20:32:55.529748 kubelet[2164]: I0904 20:32:55.529546    2164 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep  4 20:32:55.530600 kubelet[2164]: I0904 20:32:55.530504    2164 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep  4 20:32:55.531773 kubelet[2164]: I0904 20:32:55.531259    2164 topology_manager.go:138] "Creating topology manager with none policy"
Sep  4 20:32:55.531773 kubelet[2164]: I0904 20:32:55.531293    2164 container_manager_linux.go:301] "Creating device plugin manager"
Sep  4 20:32:55.531773 kubelet[2164]: I0904 20:32:55.531426    2164 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 20:32:55.531773 kubelet[2164]: I0904 20:32:55.531552    2164 kubelet.go:396] "Attempting to sync node with API server"
Sep  4 20:32:55.531773 kubelet[2164]: I0904 20:32:55.531574    2164 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep  4 20:32:55.531773 kubelet[2164]: I0904 20:32:55.531611    2164 kubelet.go:312] "Adding apiserver pod source"
Sep  4 20:32:55.531773 kubelet[2164]: I0904 20:32:55.531643    2164 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep  4 20:32:55.535962 kubelet[2164]: W0904 20:32:55.535897    2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://209.38.64.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:55.536680 kubelet[2164]: E0904 20:32:55.536197    2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://209.38.64.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:55.536680 kubelet[2164]: W0904 20:32:55.536596    2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://209.38.64.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-b-0d33e4c091&limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:55.536680 kubelet[2164]: E0904 20:32:55.536650    2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://209.38.64.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-b-0d33e4c091&limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:55.537795 kubelet[2164]: I0904 20:32:55.537456    2164 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep  4 20:32:55.544240 kubelet[2164]: I0904 20:32:55.544199    2164 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep  4 20:32:55.546003 kubelet[2164]: W0904 20:32:55.545647    2164 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep  4 20:32:55.546787 kubelet[2164]: I0904 20:32:55.546758    2164 server.go:1256] "Started kubelet"
Sep  4 20:32:55.553173 kubelet[2164]: I0904 20:32:55.552234    2164 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep  4 20:32:55.553950 kubelet[2164]: E0904 20:32:55.553918    2164 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://209.38.64.58:6443/api/v1/namespaces/default/events\": dial tcp 209.38.64.58:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.2.1-b-0d33e4c091.17f224b6facd4457  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.2.1-b-0d33e4c091,UID:ci-3975.2.1-b-0d33e4c091,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.2.1-b-0d33e4c091,},FirstTimestamp:2024-09-04 20:32:55.546725463 +0000 UTC m=+0.968568980,LastTimestamp:2024-09-04 20:32:55.546725463 +0000 UTC m=+0.968568980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.2.1-b-0d33e4c091,}"
Sep  4 20:32:55.554208 kubelet[2164]: I0904 20:32:55.554184    2164 server.go:461] "Adding debug handlers to kubelet server"
Sep  4 20:32:55.559806 kubelet[2164]: I0904 20:32:55.559770    2164 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep  4 20:32:55.560417 kubelet[2164]: I0904 20:32:55.560393    2164 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep  4 20:32:55.562166 kubelet[2164]: I0904 20:32:55.561811    2164 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep  4 20:32:55.563031 kubelet[2164]: E0904 20:32:55.562997    2164 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep  4 20:32:55.563811 kubelet[2164]: I0904 20:32:55.563437    2164 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep  4 20:32:55.565092 kubelet[2164]: I0904 20:32:55.565060    2164 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep  4 20:32:55.566212 kubelet[2164]: I0904 20:32:55.565882    2164 reconciler_new.go:29] "Reconciler: start to sync state"
Sep  4 20:32:55.573222 kubelet[2164]: E0904 20:32:55.573115    2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.64.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-b-0d33e4c091?timeout=10s\": dial tcp 209.38.64.58:6443: connect: connection refused" interval="200ms"
Sep  4 20:32:55.574531 kubelet[2164]: W0904 20:32:55.574441    2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://209.38.64.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:55.574531 kubelet[2164]: E0904 20:32:55.574525    2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://209.38.64.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:55.574733 kubelet[2164]: I0904 20:32:55.574713    2164 factory.go:221] Registration of the systemd container factory successfully
Sep  4 20:32:55.574911 kubelet[2164]: I0904 20:32:55.574826    2164 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep  4 20:32:55.577237 kubelet[2164]: I0904 20:32:55.576824    2164 factory.go:221] Registration of the containerd container factory successfully
Sep  4 20:32:55.594309 kubelet[2164]: I0904 20:32:55.593252    2164 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep  4 20:32:55.596206 kubelet[2164]: I0904 20:32:55.594512    2164 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep  4 20:32:55.596206 kubelet[2164]: I0904 20:32:55.594550    2164 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep  4 20:32:55.596206 kubelet[2164]: I0904 20:32:55.594576    2164 kubelet.go:2329] "Starting kubelet main sync loop"
Sep  4 20:32:55.596206 kubelet[2164]: E0904 20:32:55.594630    2164 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep  4 20:32:55.604297 kubelet[2164]: W0904 20:32:55.604233    2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://209.38.64.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:55.604544 kubelet[2164]: E0904 20:32:55.604524    2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://209.38.64.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:55.612576 kubelet[2164]: I0904 20:32:55.612515    2164 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep  4 20:32:55.612576 kubelet[2164]: I0904 20:32:55.612565    2164 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep  4 20:32:55.612576 kubelet[2164]: I0904 20:32:55.612586    2164 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 20:32:55.613950 kubelet[2164]: I0904 20:32:55.613910    2164 policy_none.go:49] "None policy: Start"
Sep  4 20:32:55.614610 kubelet[2164]: I0904 20:32:55.614590    2164 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep  4 20:32:55.615154 kubelet[2164]: I0904 20:32:55.614729    2164 state_mem.go:35] "Initializing new in-memory state store"
Sep  4 20:32:55.622763 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep  4 20:32:55.635930 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep  4 20:32:55.639002 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep  4 20:32:55.651473 kubelet[2164]: I0904 20:32:55.650565    2164 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep  4 20:32:55.651473 kubelet[2164]: I0904 20:32:55.651120    2164 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep  4 20:32:55.654470 kubelet[2164]: E0904 20:32:55.654446    2164 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.2.1-b-0d33e4c091\" not found"
Sep  4 20:32:55.666279 kubelet[2164]: I0904 20:32:55.666251    2164 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.667073 kubelet[2164]: E0904 20:32:55.667051    2164 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.64.58:6443/api/v1/nodes\": dial tcp 209.38.64.58:6443: connect: connection refused" node="ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.695421 kubelet[2164]: I0904 20:32:55.695366    2164 topology_manager.go:215] "Topology Admit Handler" podUID="ea666bc13bddc8cc4081dc946bc695ff" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.697228 kubelet[2164]: I0904 20:32:55.696733    2164 topology_manager.go:215] "Topology Admit Handler" podUID="e7486b68429c05c1ae6888b11a25f11d" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.697919 kubelet[2164]: I0904 20:32:55.697896    2164 topology_manager.go:215] "Topology Admit Handler" podUID="4bb314d7358da70175ca25f6faa41dee" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.707531 systemd[1]: Created slice kubepods-burstable-podea666bc13bddc8cc4081dc946bc695ff.slice - libcontainer container kubepods-burstable-podea666bc13bddc8cc4081dc946bc695ff.slice.
Sep  4 20:32:55.732630 systemd[1]: Created slice kubepods-burstable-pode7486b68429c05c1ae6888b11a25f11d.slice - libcontainer container kubepods-burstable-pode7486b68429c05c1ae6888b11a25f11d.slice.
Sep  4 20:32:55.745340 systemd[1]: Created slice kubepods-burstable-pod4bb314d7358da70175ca25f6faa41dee.slice - libcontainer container kubepods-burstable-pod4bb314d7358da70175ca25f6faa41dee.slice.
Sep  4 20:32:55.775640 kubelet[2164]: E0904 20:32:55.775600    2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.64.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-b-0d33e4c091?timeout=10s\": dial tcp 209.38.64.58:6443: connect: connection refused" interval="400ms"
Sep  4 20:32:55.867139 kubelet[2164]: I0904 20:32:55.866808    2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e7486b68429c05c1ae6888b11a25f11d-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.1-b-0d33e4c091\" (UID: \"e7486b68429c05c1ae6888b11a25f11d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.867139 kubelet[2164]: I0904 20:32:55.866859    2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7486b68429c05c1ae6888b11a25f11d-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.1-b-0d33e4c091\" (UID: \"e7486b68429c05c1ae6888b11a25f11d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.867139 kubelet[2164]: I0904 20:32:55.866880    2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7486b68429c05c1ae6888b11a25f11d-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.1-b-0d33e4c091\" (UID: \"e7486b68429c05c1ae6888b11a25f11d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.867139 kubelet[2164]: I0904 20:32:55.866903    2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7486b68429c05c1ae6888b11a25f11d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.1-b-0d33e4c091\" (UID: \"e7486b68429c05c1ae6888b11a25f11d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.867139 kubelet[2164]: I0904 20:32:55.866925    2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4bb314d7358da70175ca25f6faa41dee-kubeconfig\") pod \"kube-scheduler-ci-3975.2.1-b-0d33e4c091\" (UID: \"4bb314d7358da70175ca25f6faa41dee\") " pod="kube-system/kube-scheduler-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.867490 kubelet[2164]: I0904 20:32:55.866945    2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea666bc13bddc8cc4081dc946bc695ff-ca-certs\") pod \"kube-apiserver-ci-3975.2.1-b-0d33e4c091\" (UID: \"ea666bc13bddc8cc4081dc946bc695ff\") " pod="kube-system/kube-apiserver-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.867490 kubelet[2164]: I0904 20:32:55.866964    2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea666bc13bddc8cc4081dc946bc695ff-k8s-certs\") pod \"kube-apiserver-ci-3975.2.1-b-0d33e4c091\" (UID: \"ea666bc13bddc8cc4081dc946bc695ff\") " pod="kube-system/kube-apiserver-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.867490 kubelet[2164]: I0904 20:32:55.866982    2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea666bc13bddc8cc4081dc946bc695ff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.1-b-0d33e4c091\" (UID: \"ea666bc13bddc8cc4081dc946bc695ff\") " pod="kube-system/kube-apiserver-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.867490 kubelet[2164]: I0904 20:32:55.867000    2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7486b68429c05c1ae6888b11a25f11d-ca-certs\") pod \"kube-controller-manager-ci-3975.2.1-b-0d33e4c091\" (UID: \"e7486b68429c05c1ae6888b11a25f11d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.868275 kubelet[2164]: I0904 20:32:55.868229    2164 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:55.868654 kubelet[2164]: E0904 20:32:55.868632    2164 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.64.58:6443/api/v1/nodes\": dial tcp 209.38.64.58:6443: connect: connection refused" node="ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:56.028091 kubelet[2164]: E0904 20:32:56.027827    2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:32:56.028755 containerd[1467]: time="2024-09-04T20:32:56.028591012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.1-b-0d33e4c091,Uid:ea666bc13bddc8cc4081dc946bc695ff,Namespace:kube-system,Attempt:0,}"
Sep  4 20:32:56.043255 kubelet[2164]: E0904 20:32:56.043204    2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:32:56.047789 containerd[1467]: time="2024-09-04T20:32:56.047540013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.1-b-0d33e4c091,Uid:e7486b68429c05c1ae6888b11a25f11d,Namespace:kube-system,Attempt:0,}"
Sep  4 20:32:56.050628 kubelet[2164]: E0904 20:32:56.050559    2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:32:56.051451 containerd[1467]: time="2024-09-04T20:32:56.051412599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.1-b-0d33e4c091,Uid:4bb314d7358da70175ca25f6faa41dee,Namespace:kube-system,Attempt:0,}"
Sep  4 20:32:56.176453 kubelet[2164]: E0904 20:32:56.176405    2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.64.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-b-0d33e4c091?timeout=10s\": dial tcp 209.38.64.58:6443: connect: connection refused" interval="800ms"
Sep  4 20:32:56.269938 kubelet[2164]: I0904 20:32:56.269904    2164 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:56.270429 kubelet[2164]: E0904 20:32:56.270349    2164 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.64.58:6443/api/v1/nodes\": dial tcp 209.38.64.58:6443: connect: connection refused" node="ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:56.494514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2923459151.mount: Deactivated successfully.
Sep  4 20:32:56.498783 containerd[1467]: time="2024-09-04T20:32:56.498735832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 20:32:56.499646 containerd[1467]: time="2024-09-04T20:32:56.499608028Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep  4 20:32:56.500683 containerd[1467]: time="2024-09-04T20:32:56.500638144Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 20:32:56.501259 containerd[1467]: time="2024-09-04T20:32:56.501132424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep  4 20:32:56.502184 containerd[1467]: time="2024-09-04T20:32:56.502050325Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep  4 20:32:56.504108 containerd[1467]: time="2024-09-04T20:32:56.503111362Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 20:32:56.505597 containerd[1467]: time="2024-09-04T20:32:56.505279063Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 457.629121ms"
Sep  4 20:32:56.506673 containerd[1467]: time="2024-09-04T20:32:56.506628545Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 20:32:56.508041 containerd[1467]: time="2024-09-04T20:32:56.507942285Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 476.756785ms"
Sep  4 20:32:56.511836 containerd[1467]: time="2024-09-04T20:32:56.511429208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 20:32:56.511836 containerd[1467]: time="2024-09-04T20:32:56.511806331Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 460.261367ms"
Sep  4 20:32:56.539422 kubelet[2164]: W0904 20:32:56.539362    2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://209.38.64.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:56.539422 kubelet[2164]: E0904 20:32:56.539401    2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://209.38.64.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:56.629416 kubelet[2164]: W0904 20:32:56.629337    2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://209.38.64.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:56.630024 kubelet[2164]: E0904 20:32:56.629744    2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://209.38.64.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:56.658836 containerd[1467]: time="2024-09-04T20:32:56.657533103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 20:32:56.658836 containerd[1467]: time="2024-09-04T20:32:56.657581047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:32:56.658836 containerd[1467]: time="2024-09-04T20:32:56.657609443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 20:32:56.658836 containerd[1467]: time="2024-09-04T20:32:56.657623642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:32:56.658836 containerd[1467]: time="2024-09-04T20:32:56.657077744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 20:32:56.658836 containerd[1467]: time="2024-09-04T20:32:56.657180439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:32:56.658836 containerd[1467]: time="2024-09-04T20:32:56.657211022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 20:32:56.658836 containerd[1467]: time="2024-09-04T20:32:56.657225699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:32:56.666753 kubelet[2164]: W0904 20:32:56.666676    2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://209.38.64.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:56.666753 kubelet[2164]: E0904 20:32:56.666723    2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://209.38.64.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:56.679881 containerd[1467]: time="2024-09-04T20:32:56.679727004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 20:32:56.679881 containerd[1467]: time="2024-09-04T20:32:56.679841094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:32:56.679881 containerd[1467]: time="2024-09-04T20:32:56.679869896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 20:32:56.679881 containerd[1467]: time="2024-09-04T20:32:56.679890999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:32:56.699433 systemd[1]: Started cri-containerd-a9ca4814c4c48f1d0345d63a232a96a799db0f4d2e33bd9370a6031b4ca765e5.scope - libcontainer container a9ca4814c4c48f1d0345d63a232a96a799db0f4d2e33bd9370a6031b4ca765e5.
Sep  4 20:32:56.706729 systemd[1]: Started cri-containerd-5e1688f2499c641d586a181f298b0a6146eccca74a21fbe5bb0a45f8f92b8112.scope - libcontainer container 5e1688f2499c641d586a181f298b0a6146eccca74a21fbe5bb0a45f8f92b8112.
Sep  4 20:32:56.718212 systemd[1]: Started cri-containerd-131049661ca0e7a4ca616fb10b9fdb0fe7a70d3954b3b6ec0f93bf7ebe4228a2.scope - libcontainer container 131049661ca0e7a4ca616fb10b9fdb0fe7a70d3954b3b6ec0f93bf7ebe4228a2.
Sep  4 20:32:56.816111 containerd[1467]: time="2024-09-04T20:32:56.815940057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.1-b-0d33e4c091,Uid:ea666bc13bddc8cc4081dc946bc695ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9ca4814c4c48f1d0345d63a232a96a799db0f4d2e33bd9370a6031b4ca765e5\""
Sep  4 20:32:56.824836 kubelet[2164]: E0904 20:32:56.824516    2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:32:56.830718 containerd[1467]: time="2024-09-04T20:32:56.830632497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.1-b-0d33e4c091,Uid:4bb314d7358da70175ca25f6faa41dee,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e1688f2499c641d586a181f298b0a6146eccca74a21fbe5bb0a45f8f92b8112\""
Sep  4 20:32:56.832641 kubelet[2164]: E0904 20:32:56.831993    2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:32:56.832807 containerd[1467]: time="2024-09-04T20:32:56.832576118Z" level=info msg="CreateContainer within sandbox \"a9ca4814c4c48f1d0345d63a232a96a799db0f4d2e33bd9370a6031b4ca765e5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep  4 20:32:56.850749 containerd[1467]: time="2024-09-04T20:32:56.850701211Z" level=info msg="CreateContainer within sandbox \"5e1688f2499c641d586a181f298b0a6146eccca74a21fbe5bb0a45f8f92b8112\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep  4 20:32:56.851566 containerd[1467]: time="2024-09-04T20:32:56.851529324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.1-b-0d33e4c091,Uid:e7486b68429c05c1ae6888b11a25f11d,Namespace:kube-system,Attempt:0,} returns sandbox id \"131049661ca0e7a4ca616fb10b9fdb0fe7a70d3954b3b6ec0f93bf7ebe4228a2\""
Sep  4 20:32:56.853203 kubelet[2164]: E0904 20:32:56.853038    2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:32:56.856114 containerd[1467]: time="2024-09-04T20:32:56.855942730Z" level=info msg="CreateContainer within sandbox \"a9ca4814c4c48f1d0345d63a232a96a799db0f4d2e33bd9370a6031b4ca765e5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"df1cec6a643eeb17d5b97ab62b1fac4533c7dc803dc4f0d6b13d17cddb229d1d\""
Sep  4 20:32:56.856723 containerd[1467]: time="2024-09-04T20:32:56.856694176Z" level=info msg="StartContainer for \"df1cec6a643eeb17d5b97ab62b1fac4533c7dc803dc4f0d6b13d17cddb229d1d\""
Sep  4 20:32:56.859461 containerd[1467]: time="2024-09-04T20:32:56.858849649Z" level=info msg="CreateContainer within sandbox \"131049661ca0e7a4ca616fb10b9fdb0fe7a70d3954b3b6ec0f93bf7ebe4228a2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep  4 20:32:56.868769 containerd[1467]: time="2024-09-04T20:32:56.868713625Z" level=info msg="CreateContainer within sandbox \"5e1688f2499c641d586a181f298b0a6146eccca74a21fbe5bb0a45f8f92b8112\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6d5f43ae289e85e197415e9150dca819be1dfb7cb11d96f90f0f84ecbf84d687\""
Sep  4 20:32:56.869630 containerd[1467]: time="2024-09-04T20:32:56.869602012Z" level=info msg="StartContainer for \"6d5f43ae289e85e197415e9150dca819be1dfb7cb11d96f90f0f84ecbf84d687\""
Sep  4 20:32:56.875550 containerd[1467]: time="2024-09-04T20:32:56.875503015Z" level=info msg="CreateContainer within sandbox \"131049661ca0e7a4ca616fb10b9fdb0fe7a70d3954b3b6ec0f93bf7ebe4228a2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"584b042131eb7ddc452a2e536ec7fc19f7b785092df1c585b7a64cbe6fd19aaa\""
Sep  4 20:32:56.878349 containerd[1467]: time="2024-09-04T20:32:56.878308209Z" level=info msg="StartContainer for \"584b042131eb7ddc452a2e536ec7fc19f7b785092df1c585b7a64cbe6fd19aaa\""
Sep  4 20:32:56.896396 systemd[1]: Started cri-containerd-df1cec6a643eeb17d5b97ab62b1fac4533c7dc803dc4f0d6b13d17cddb229d1d.scope - libcontainer container df1cec6a643eeb17d5b97ab62b1fac4533c7dc803dc4f0d6b13d17cddb229d1d.
Sep  4 20:32:56.928381 systemd[1]: Started cri-containerd-6d5f43ae289e85e197415e9150dca819be1dfb7cb11d96f90f0f84ecbf84d687.scope - libcontainer container 6d5f43ae289e85e197415e9150dca819be1dfb7cb11d96f90f0f84ecbf84d687.
Sep  4 20:32:56.946594 systemd[1]: Started cri-containerd-584b042131eb7ddc452a2e536ec7fc19f7b785092df1c585b7a64cbe6fd19aaa.scope - libcontainer container 584b042131eb7ddc452a2e536ec7fc19f7b785092df1c585b7a64cbe6fd19aaa.
Sep  4 20:32:56.978607 kubelet[2164]: E0904 20:32:56.978498    2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.64.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-b-0d33e4c091?timeout=10s\": dial tcp 209.38.64.58:6443: connect: connection refused" interval="1.6s"
Sep  4 20:32:56.980075 containerd[1467]: time="2024-09-04T20:32:56.979922130Z" level=info msg="StartContainer for \"df1cec6a643eeb17d5b97ab62b1fac4533c7dc803dc4f0d6b13d17cddb229d1d\" returns successfully"
Sep  4 20:32:57.026168 containerd[1467]: time="2024-09-04T20:32:57.024806046Z" level=info msg="StartContainer for \"584b042131eb7ddc452a2e536ec7fc19f7b785092df1c585b7a64cbe6fd19aaa\" returns successfully"
Sep  4 20:32:57.026168 containerd[1467]: time="2024-09-04T20:32:57.024806254Z" level=info msg="StartContainer for \"6d5f43ae289e85e197415e9150dca819be1dfb7cb11d96f90f0f84ecbf84d687\" returns successfully"
Sep  4 20:32:57.028809 kubelet[2164]: W0904 20:32:57.028712    2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://209.38.64.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-b-0d33e4c091&limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:57.028809 kubelet[2164]: E0904 20:32:57.028784    2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://209.38.64.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-b-0d33e4c091&limit=500&resourceVersion=0": dial tcp 209.38.64.58:6443: connect: connection refused
Sep  4 20:32:57.072861 kubelet[2164]: I0904 20:32:57.072513    2164 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:57.086350 kubelet[2164]: E0904 20:32:57.085685    2164 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.64.58:6443/api/v1/nodes\": dial tcp 209.38.64.58:6443: connect: connection refused" node="ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:57.616532 kubelet[2164]: E0904 20:32:57.616390    2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:32:57.620939 kubelet[2164]: E0904 20:32:57.620409    2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:32:57.624421 kubelet[2164]: E0904 20:32:57.624332    2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:32:58.626931 kubelet[2164]: E0904 20:32:58.626900    2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:32:58.687477 kubelet[2164]: I0904 20:32:58.687446    2164 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:58.731022 kubelet[2164]: I0904 20:32:58.730945    2164 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.2.1-b-0d33e4c091"
Sep  4 20:32:58.755628 kubelet[2164]: E0904 20:32:58.755569    2164 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-b-0d33e4c091\" not found"
Sep  4 20:32:58.855869 kubelet[2164]: E0904 20:32:58.855812    2164 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-b-0d33e4c091\" not found"
Sep  4 20:32:59.537940 kubelet[2164]: I0904 20:32:59.537889    2164 apiserver.go:52] "Watching apiserver"
Sep  4 20:32:59.566711 kubelet[2164]: I0904 20:32:59.566660    2164 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep  4 20:33:01.789415 systemd[1]: Reloading requested from client PID 2439 ('systemctl') (unit session-7.scope)...
Sep  4 20:33:01.789438 systemd[1]: Reloading...
Sep  4 20:33:01.913277 zram_generator::config[2476]: No configuration found.
Sep  4 20:33:02.103176 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 20:33:02.233224 systemd[1]: Reloading finished in 443 ms.
Sep  4 20:33:02.287939 kubelet[2164]: I0904 20:33:02.287883    2164 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep  4 20:33:02.288197 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 20:33:02.300226 systemd[1]: kubelet.service: Deactivated successfully.
Sep  4 20:33:02.300528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 20:33:02.300601 systemd[1]: kubelet.service: Consumed 1.436s CPU time, 111.3M memory peak, 0B memory swap peak.
Sep  4 20:33:02.311775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 20:33:02.445959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 20:33:02.459119 (kubelet)[2527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep  4 20:33:02.563178 kubelet[2527]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 20:33:02.563178 kubelet[2527]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep  4 20:33:02.563178 kubelet[2527]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 20:33:02.563961 kubelet[2527]: I0904 20:33:02.563883    2527 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep  4 20:33:02.574370 kubelet[2527]: I0904 20:33:02.574307    2527 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Sep  4 20:33:02.574370 kubelet[2527]: I0904 20:33:02.574344    2527 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep  4 20:33:02.574632 kubelet[2527]: I0904 20:33:02.574593    2527 server.go:919] "Client rotation is on, will bootstrap in background"
Sep  4 20:33:02.577985 kubelet[2527]: I0904 20:33:02.577937    2527 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep  4 20:33:02.582057 kubelet[2527]: I0904 20:33:02.581744    2527 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep  4 20:33:02.584661 sudo[2540]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep  4 20:33:02.584957 sudo[2540]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep  4 20:33:02.603100 kubelet[2527]: I0904 20:33:02.602080    2527 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Sep  4 20:33:02.603100 kubelet[2527]: I0904 20:33:02.602540    2527 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep  4 20:33:02.603100 kubelet[2527]: I0904 20:33:02.602813    2527 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep  4 20:33:02.603100 kubelet[2527]: I0904 20:33:02.602859    2527 topology_manager.go:138] "Creating topology manager with none policy"
Sep  4 20:33:02.603100 kubelet[2527]: I0904 20:33:02.602875    2527 container_manager_linux.go:301] "Creating device plugin manager"
Sep  4 20:33:02.603100 kubelet[2527]: I0904 20:33:02.602935    2527 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 20:33:02.603852 kubelet[2527]: I0904 20:33:02.603792    2527 kubelet.go:396] "Attempting to sync node with API server"
Sep  4 20:33:02.604009 kubelet[2527]: I0904 20:33:02.603990    2527 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep  4 20:33:02.604188 kubelet[2527]: I0904 20:33:02.604124    2527 kubelet.go:312] "Adding apiserver pod source"
Sep  4 20:33:02.604318 kubelet[2527]: I0904 20:33:02.604302    2527 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep  4 20:33:02.607875 kubelet[2527]: I0904 20:33:02.607819    2527 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep  4 20:33:02.610319 kubelet[2527]: I0904 20:33:02.608084    2527 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep  4 20:33:02.610319 kubelet[2527]: I0904 20:33:02.608625    2527 server.go:1256] "Started kubelet"
Sep  4 20:33:02.621202 kubelet[2527]: I0904 20:33:02.618689    2527 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep  4 20:33:02.635369 kubelet[2527]: I0904 20:33:02.635309    2527 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep  4 20:33:02.640577 kubelet[2527]: I0904 20:33:02.640527    2527 server.go:461] "Adding debug handlers to kubelet server"
Sep  4 20:33:02.641610 kubelet[2527]: I0904 20:33:02.641562    2527 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep  4 20:33:02.645041 kubelet[2527]: I0904 20:33:02.644988    2527 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep  4 20:33:02.645458 kubelet[2527]: I0904 20:33:02.645421    2527 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep  4 20:33:02.646026 kubelet[2527]: I0904 20:33:02.645992    2527 reconciler_new.go:29] "Reconciler: start to sync state"
Sep  4 20:33:02.646579 kubelet[2527]: I0904 20:33:02.646416    2527 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep  4 20:33:02.651040 kubelet[2527]: I0904 20:33:02.650863    2527 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep  4 20:33:02.653864 kubelet[2527]: I0904 20:33:02.653820    2527 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep  4 20:33:02.654241 kubelet[2527]: I0904 20:33:02.654214    2527 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep  4 20:33:02.654508 kubelet[2527]: I0904 20:33:02.654485    2527 kubelet.go:2329] "Starting kubelet main sync loop"
Sep  4 20:33:02.654908 kubelet[2527]: E0904 20:33:02.654819    2527 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep  4 20:33:02.679525 kubelet[2527]: I0904 20:33:02.679474    2527 factory.go:221] Registration of the systemd container factory successfully
Sep  4 20:33:02.680402 kubelet[2527]: I0904 20:33:02.679939    2527 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep  4 20:33:02.688372 kubelet[2527]: E0904 20:33:02.687717    2527 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep  4 20:33:02.688963 kubelet[2527]: I0904 20:33:02.688941    2527 factory.go:221] Registration of the containerd container factory successfully
Sep  4 20:33:02.748914 kubelet[2527]: I0904 20:33:02.748261    2527 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:02.757189 kubelet[2527]: E0904 20:33:02.756139    2527 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep  4 20:33:02.785470 kubelet[2527]: I0904 20:33:02.785432    2527 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:02.787884 kubelet[2527]: I0904 20:33:02.787428    2527 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:02.792810 kubelet[2527]: I0904 20:33:02.792494    2527 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep  4 20:33:02.792810 kubelet[2527]: I0904 20:33:02.792523    2527 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep  4 20:33:02.792810 kubelet[2527]: I0904 20:33:02.792556    2527 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 20:33:02.793342 kubelet[2527]: I0904 20:33:02.793325    2527 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep  4 20:33:02.793528 kubelet[2527]: I0904 20:33:02.793516    2527 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep  4 20:33:02.793708 kubelet[2527]: I0904 20:33:02.793626    2527 policy_none.go:49] "None policy: Start"
Sep  4 20:33:02.794930 kubelet[2527]: I0904 20:33:02.794875    2527 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep  4 20:33:02.795283 kubelet[2527]: I0904 20:33:02.795171    2527 state_mem.go:35] "Initializing new in-memory state store"
Sep  4 20:33:02.795808 kubelet[2527]: I0904 20:33:02.795643    2527 state_mem.go:75] "Updated machine memory state"
Sep  4 20:33:02.813603 kubelet[2527]: I0904 20:33:02.813531    2527 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep  4 20:33:02.821129 kubelet[2527]: I0904 20:33:02.820993    2527 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep  4 20:33:02.957363 kubelet[2527]: I0904 20:33:02.957310    2527 topology_manager.go:215] "Topology Admit Handler" podUID="ea666bc13bddc8cc4081dc946bc695ff" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:02.958096 kubelet[2527]: I0904 20:33:02.958033    2527 topology_manager.go:215] "Topology Admit Handler" podUID="e7486b68429c05c1ae6888b11a25f11d" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:02.959466 kubelet[2527]: I0904 20:33:02.959356    2527 topology_manager.go:215] "Topology Admit Handler" podUID="4bb314d7358da70175ca25f6faa41dee" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:02.974756 kubelet[2527]: W0904 20:33:02.974578    2527 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep  4 20:33:02.977703 kubelet[2527]: W0904 20:33:02.977271    2527 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep  4 20:33:02.978288 kubelet[2527]: W0904 20:33:02.978074    2527 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep  4 20:33:03.048955 kubelet[2527]: I0904 20:33:03.048065    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea666bc13bddc8cc4081dc946bc695ff-ca-certs\") pod \"kube-apiserver-ci-3975.2.1-b-0d33e4c091\" (UID: \"ea666bc13bddc8cc4081dc946bc695ff\") " pod="kube-system/kube-apiserver-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:03.048955 kubelet[2527]: I0904 20:33:03.048115    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e7486b68429c05c1ae6888b11a25f11d-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.1-b-0d33e4c091\" (UID: \"e7486b68429c05c1ae6888b11a25f11d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:03.048955 kubelet[2527]: I0904 20:33:03.048157    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7486b68429c05c1ae6888b11a25f11d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.1-b-0d33e4c091\" (UID: \"e7486b68429c05c1ae6888b11a25f11d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:03.048955 kubelet[2527]: I0904 20:33:03.048178    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4bb314d7358da70175ca25f6faa41dee-kubeconfig\") pod \"kube-scheduler-ci-3975.2.1-b-0d33e4c091\" (UID: \"4bb314d7358da70175ca25f6faa41dee\") " pod="kube-system/kube-scheduler-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:03.048955 kubelet[2527]: I0904 20:33:03.048198    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea666bc13bddc8cc4081dc946bc695ff-k8s-certs\") pod \"kube-apiserver-ci-3975.2.1-b-0d33e4c091\" (UID: \"ea666bc13bddc8cc4081dc946bc695ff\") " pod="kube-system/kube-apiserver-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:03.049210 kubelet[2527]: I0904 20:33:03.048839    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea666bc13bddc8cc4081dc946bc695ff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.1-b-0d33e4c091\" (UID: \"ea666bc13bddc8cc4081dc946bc695ff\") " pod="kube-system/kube-apiserver-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:03.049210 kubelet[2527]: I0904 20:33:03.048869    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7486b68429c05c1ae6888b11a25f11d-ca-certs\") pod \"kube-controller-manager-ci-3975.2.1-b-0d33e4c091\" (UID: \"e7486b68429c05c1ae6888b11a25f11d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:03.049210 kubelet[2527]: I0904 20:33:03.048890    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7486b68429c05c1ae6888b11a25f11d-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.1-b-0d33e4c091\" (UID: \"e7486b68429c05c1ae6888b11a25f11d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:03.049210 kubelet[2527]: I0904 20:33:03.048912    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7486b68429c05c1ae6888b11a25f11d-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.1-b-0d33e4c091\" (UID: \"e7486b68429c05c1ae6888b11a25f11d\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-b-0d33e4c091"
Sep  4 20:33:03.277034 kubelet[2527]: E0904 20:33:03.276990    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:03.279709 kubelet[2527]: E0904 20:33:03.279661    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:03.280202 kubelet[2527]: E0904 20:33:03.280100    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:03.458513 sudo[2540]: pam_unix(sudo:session): session closed for user root
Sep  4 20:33:03.606010 kubelet[2527]: I0904 20:33:03.605949    2527 apiserver.go:52] "Watching apiserver"
Sep  4 20:33:03.646843 kubelet[2527]: I0904 20:33:03.646779    2527 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep  4 20:33:03.729467 kubelet[2527]: E0904 20:33:03.729326    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:03.730891 kubelet[2527]: E0904 20:33:03.730844    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:03.735173 kubelet[2527]: E0904 20:33:03.733272    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:03.791042 kubelet[2527]: I0904 20:33:03.790942    2527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.2.1-b-0d33e4c091" podStartSLOduration=1.790878941 podStartE2EDuration="1.790878941s" podCreationTimestamp="2024-09-04 20:33:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:33:03.78154296 +0000 UTC m=+1.315666001" watchObservedRunningTime="2024-09-04 20:33:03.790878941 +0000 UTC m=+1.325001962"
Sep  4 20:33:03.792516 kubelet[2527]: I0904 20:33:03.792431    2527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.2.1-b-0d33e4c091" podStartSLOduration=1.792377798 podStartE2EDuration="1.792377798s" podCreationTimestamp="2024-09-04 20:33:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:33:03.791923478 +0000 UTC m=+1.326046502" watchObservedRunningTime="2024-09-04 20:33:03.792377798 +0000 UTC m=+1.326500830"
Sep  4 20:33:03.806676 kubelet[2527]: I0904 20:33:03.805803    2527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.2.1-b-0d33e4c091" podStartSLOduration=1.805754404 podStartE2EDuration="1.805754404s" podCreationTimestamp="2024-09-04 20:33:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:33:03.804609637 +0000 UTC m=+1.338732676" watchObservedRunningTime="2024-09-04 20:33:03.805754404 +0000 UTC m=+1.339877445"
Sep  4 20:33:04.731066 kubelet[2527]: E0904 20:33:04.731025    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:04.732825 kubelet[2527]: E0904 20:33:04.732726    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:04.927828 sudo[1651]: pam_unix(sudo:session): session closed for user root
Sep  4 20:33:04.932454 sshd[1648]: pam_unix(sshd:session): session closed for user core
Sep  4 20:33:04.938195 systemd[1]: sshd@6-209.38.64.58:22-139.178.68.195:56288.service: Deactivated successfully.
Sep  4 20:33:04.940952 systemd[1]: session-7.scope: Deactivated successfully.
Sep  4 20:33:04.941258 systemd[1]: session-7.scope: Consumed 5.889s CPU time, 136.6M memory peak, 0B memory swap peak.
Sep  4 20:33:04.943401 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit.
Sep  4 20:33:04.945712 systemd-logind[1444]: Removed session 7.
Sep  4 20:33:09.697171 kubelet[2527]: E0904 20:33:09.695276    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:09.740707 kubelet[2527]: E0904 20:33:09.740670    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:10.742470 kubelet[2527]: E0904 20:33:10.742440    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:11.820736 kubelet[2527]: E0904 20:33:11.820397    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:12.745594 kubelet[2527]: E0904 20:33:12.745320    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:14.084704 kubelet[2527]: E0904 20:33:14.084653    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:15.536763 update_engine[1445]: I0904 20:33:15.536427  1445 update_attempter.cc:509] Updating boot flags...
Sep  4 20:33:15.576256 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2604)
Sep  4 20:33:15.628172 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2603)
Sep  4 20:33:15.695177 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2603)
Sep  4 20:33:16.369505 kubelet[2527]: I0904 20:33:16.369393    2527 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep  4 20:33:16.370930 containerd[1467]: time="2024-09-04T20:33:16.370291602Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep  4 20:33:16.371307 kubelet[2527]: I0904 20:33:16.370529    2527 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
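
At this point the node has been assigned its pod CIDR: the kubelet pushes 192.168.0.0/24 to containerd through CRI and then waits for a CNI configuration, which Cilium drops in later. As a small aside on what that prefix provides, here is a stdlib-only Go sketch that parses it; this is illustrative only, not kubelet or containerd code.

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Pod CIDR taken from the log line above.
        prefix := netip.MustParsePrefix("192.168.0.0/24")

        fmt.Println("network:", prefix.Masked().Addr())          // 192.168.0.0
        fmt.Println("prefix length:", prefix.Bits())             // /24, i.e. 256 addresses for pod IPs
        fmt.Println("first host address:", prefix.Addr().Next()) // 192.168.0.1
    }
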
Sep  4 20:33:16.823179 kubelet[2527]: I0904 20:33:16.823127    2527 topology_manager.go:215] "Topology Admit Handler" podUID="783ac130-edab-4bd6-9e7d-eb0f17073f71" podNamespace="kube-system" podName="kube-proxy-qvcdx"
Sep  4 20:33:16.832884 kubelet[2527]: I0904 20:33:16.832837    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxsw5\" (UniqueName: \"kubernetes.io/projected/783ac130-edab-4bd6-9e7d-eb0f17073f71-kube-api-access-xxsw5\") pod \"kube-proxy-qvcdx\" (UID: \"783ac130-edab-4bd6-9e7d-eb0f17073f71\") " pod="kube-system/kube-proxy-qvcdx"
Sep  4 20:33:16.833010 kubelet[2527]: I0904 20:33:16.832901    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/783ac130-edab-4bd6-9e7d-eb0f17073f71-kube-proxy\") pod \"kube-proxy-qvcdx\" (UID: \"783ac130-edab-4bd6-9e7d-eb0f17073f71\") " pod="kube-system/kube-proxy-qvcdx"
Sep  4 20:33:16.833010 kubelet[2527]: I0904 20:33:16.832923    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/783ac130-edab-4bd6-9e7d-eb0f17073f71-xtables-lock\") pod \"kube-proxy-qvcdx\" (UID: \"783ac130-edab-4bd6-9e7d-eb0f17073f71\") " pod="kube-system/kube-proxy-qvcdx"
Sep  4 20:33:16.833010 kubelet[2527]: I0904 20:33:16.832942    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/783ac130-edab-4bd6-9e7d-eb0f17073f71-lib-modules\") pod \"kube-proxy-qvcdx\" (UID: \"783ac130-edab-4bd6-9e7d-eb0f17073f71\") " pod="kube-system/kube-proxy-qvcdx"
Sep  4 20:33:16.836807 systemd[1]: Created slice kubepods-besteffort-pod783ac130_edab_4bd6_9e7d_eb0f17073f71.slice - libcontainer container kubepods-besteffort-pod783ac130_edab_4bd6_9e7d_eb0f17073f71.slice.
Sep  4 20:33:16.841045 kubelet[2527]: I0904 20:33:16.839814    2527 topology_manager.go:215] "Topology Admit Handler" podUID="7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" podNamespace="kube-system" podName="cilium-6v22j"
Sep  4 20:33:16.859235 systemd[1]: Created slice kubepods-burstable-pod7f337ba1_ab65_48e5_9f50_a2cf1e60a92a.slice - libcontainer container kubepods-burstable-pod7f337ba1_ab65_48e5_9f50_a2cf1e60a92a.slice.
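
The two "Created slice" lines show the kubelet's systemd cgroup driver naming pods: the QoS class (besteffort for kube-proxy, burstable for the Cilium agent) and the pod UID are folded into a kubepods slice, with the dashes of the UID escaped to underscores because systemd treats "-" as a path separator in unit names. A short Go sketch reproducing the convention exactly as it appears above; podSliceName is a hypothetical helper, not a real kubelet function.

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName builds a kubepods slice name the way the log shows it:
    // dashes in the pod UID become underscores so systemd does not treat
    // them as path separators.
    func podSliceName(qosClass, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("besteffort", "783ac130-edab-4bd6-9e7d-eb0f17073f71"))
        // kubepods-besteffort-pod783ac130_edab_4bd6_9e7d_eb0f17073f71.slice
        fmt.Println(podSliceName("burstable", "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"))
        // kubepods-burstable-pod7f337ba1_ab65_48e5_9f50_a2cf1e60a92a.slice
    }
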
Sep  4 20:33:16.933448 kubelet[2527]: I0904 20:33:16.933396    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-hubble-tls\") pod \"cilium-6v22j\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") " pod="kube-system/cilium-6v22j"
Sep  4 20:33:16.933814 kubelet[2527]: I0904 20:33:16.933798    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-lib-modules\") pod \"cilium-6v22j\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") " pod="kube-system/cilium-6v22j"
Sep  4 20:33:16.933924 kubelet[2527]: I0904 20:33:16.933914    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cilium-config-path\") pod \"cilium-6v22j\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") " pod="kube-system/cilium-6v22j"
Sep  4 20:33:16.934004 kubelet[2527]: I0904 20:33:16.933996    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-xtables-lock\") pod \"cilium-6v22j\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") " pod="kube-system/cilium-6v22j"
Sep  4 20:33:16.934095 kubelet[2527]: I0904 20:33:16.934087    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-hostproc\") pod \"cilium-6v22j\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") " pod="kube-system/cilium-6v22j"
Sep  4 20:33:16.934197 kubelet[2527]: I0904 20:33:16.934189    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cni-path\") pod \"cilium-6v22j\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") " pod="kube-system/cilium-6v22j"
Sep  4 20:33:16.934308 kubelet[2527]: I0904 20:33:16.934295    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-host-proc-sys-net\") pod \"cilium-6v22j\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") " pod="kube-system/cilium-6v22j"
Sep  4 20:33:16.934416 kubelet[2527]: I0904 20:33:16.934407    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cilium-run\") pod \"cilium-6v22j\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") " pod="kube-system/cilium-6v22j"
Sep  4 20:33:16.934496 kubelet[2527]: I0904 20:33:16.934487    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-bpf-maps\") pod \"cilium-6v22j\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") " pod="kube-system/cilium-6v22j"
Sep  4 20:33:16.934573 kubelet[2527]: I0904 20:33:16.934566    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cilium-cgroup\") pod \"cilium-6v22j\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") " pod="kube-system/cilium-6v22j"
Sep  4 20:33:16.934648 kubelet[2527]: I0904 20:33:16.934642    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-host-proc-sys-kernel\") pod \"cilium-6v22j\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") " pod="kube-system/cilium-6v22j"
Sep  4 20:33:16.934722 kubelet[2527]: I0904 20:33:16.934715    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-etc-cni-netd\") pod \"cilium-6v22j\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") " pod="kube-system/cilium-6v22j"
Sep  4 20:33:16.934796 kubelet[2527]: I0904 20:33:16.934790    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-clustermesh-secrets\") pod \"cilium-6v22j\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") " pod="kube-system/cilium-6v22j"
Sep  4 20:33:16.934924 kubelet[2527]: I0904 20:33:16.934914    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnc5v\" (UniqueName: \"kubernetes.io/projected/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-kube-api-access-dnc5v\") pod \"cilium-6v22j\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") " pod="kube-system/cilium-6v22j"
Sep  4 20:33:17.149202 kubelet[2527]: E0904 20:33:17.149069    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:17.152418 containerd[1467]: time="2024-09-04T20:33:17.152174218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qvcdx,Uid:783ac130-edab-4bd6-9e7d-eb0f17073f71,Namespace:kube-system,Attempt:0,}"
Sep  4 20:33:17.165977 kubelet[2527]: E0904 20:33:17.165939    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:17.171717 containerd[1467]: time="2024-09-04T20:33:17.171652763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6v22j,Uid:7f337ba1-ab65-48e5-9f50-a2cf1e60a92a,Namespace:kube-system,Attempt:0,}"
Sep  4 20:33:17.195417 containerd[1467]: time="2024-09-04T20:33:17.195198231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 20:33:17.195417 containerd[1467]: time="2024-09-04T20:33:17.195256788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:33:17.195417 containerd[1467]: time="2024-09-04T20:33:17.195272238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 20:33:17.195417 containerd[1467]: time="2024-09-04T20:33:17.195298285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:33:17.208051 containerd[1467]: time="2024-09-04T20:33:17.207410258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 20:33:17.208051 containerd[1467]: time="2024-09-04T20:33:17.207485120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:33:17.208051 containerd[1467]: time="2024-09-04T20:33:17.207501891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 20:33:17.208051 containerd[1467]: time="2024-09-04T20:33:17.207511352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:33:17.223527 systemd[1]: Started cri-containerd-da910b7df0c0d101d15955b3867b27db098a85b46985fa7d7d2b54e7e5c84746.scope - libcontainer container da910b7df0c0d101d15955b3867b27db098a85b46985fa7d7d2b54e7e5c84746.
Sep  4 20:33:17.241569 systemd[1]: Started cri-containerd-b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd.scope - libcontainer container b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd.
Sep  4 20:33:17.271387 containerd[1467]: time="2024-09-04T20:33:17.271204559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qvcdx,Uid:783ac130-edab-4bd6-9e7d-eb0f17073f71,Namespace:kube-system,Attempt:0,} returns sandbox id \"da910b7df0c0d101d15955b3867b27db098a85b46985fa7d7d2b54e7e5c84746\""
Sep  4 20:33:17.272217 kubelet[2527]: E0904 20:33:17.272114    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:17.277296 containerd[1467]: time="2024-09-04T20:33:17.277062503Z" level=info msg="CreateContainer within sandbox \"da910b7df0c0d101d15955b3867b27db098a85b46985fa7d7d2b54e7e5c84746\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep  4 20:33:17.281884 containerd[1467]: time="2024-09-04T20:33:17.281551918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6v22j,Uid:7f337ba1-ab65-48e5-9f50-a2cf1e60a92a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd\""
Sep  4 20:33:17.282578 kubelet[2527]: E0904 20:33:17.282553    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:17.283851 containerd[1467]: time="2024-09-04T20:33:17.283815090Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep  4 20:33:17.295020 containerd[1467]: time="2024-09-04T20:33:17.294913886Z" level=info msg="CreateContainer within sandbox \"da910b7df0c0d101d15955b3867b27db098a85b46985fa7d7d2b54e7e5c84746\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"184156dcd472e460a918303dc765c55e96c846fddd07fe22b1792ef18f3181f7\""
Sep  4 20:33:17.296322 containerd[1467]: time="2024-09-04T20:33:17.296284573Z" level=info msg="StartContainer for \"184156dcd472e460a918303dc765c55e96c846fddd07fe22b1792ef18f3181f7\""
Sep  4 20:33:17.332552 systemd[1]: Started cri-containerd-184156dcd472e460a918303dc765c55e96c846fddd07fe22b1792ef18f3181f7.scope - libcontainer container 184156dcd472e460a918303dc765c55e96c846fddd07fe22b1792ef18f3181f7.
Sep  4 20:33:17.369796 containerd[1467]: time="2024-09-04T20:33:17.369751787Z" level=info msg="StartContainer for \"184156dcd472e460a918303dc765c55e96c846fddd07fe22b1792ef18f3181f7\" returns successfully"
Sep  4 20:33:17.381503 kubelet[2527]: I0904 20:33:17.381457    2527 topology_manager.go:215] "Topology Admit Handler" podUID="76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0" podNamespace="kube-system" podName="cilium-operator-5cc964979-lx7c4"
Sep  4 20:33:17.392684 systemd[1]: Created slice kubepods-besteffort-pod76ffbe5a_d9ad_4b35_bc49_fad5a9558bb0.slice - libcontainer container kubepods-besteffort-pod76ffbe5a_d9ad_4b35_bc49_fad5a9558bb0.slice.
Sep  4 20:33:17.438422 kubelet[2527]: I0904 20:33:17.438235    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0-cilium-config-path\") pod \"cilium-operator-5cc964979-lx7c4\" (UID: \"76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0\") " pod="kube-system/cilium-operator-5cc964979-lx7c4"
Sep  4 20:33:17.438422 kubelet[2527]: I0904 20:33:17.438388    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66cs6\" (UniqueName: \"kubernetes.io/projected/76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0-kube-api-access-66cs6\") pod \"cilium-operator-5cc964979-lx7c4\" (UID: \"76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0\") " pod="kube-system/cilium-operator-5cc964979-lx7c4"
Sep  4 20:33:17.696938 kubelet[2527]: E0904 20:33:17.696340    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:17.697688 containerd[1467]: time="2024-09-04T20:33:17.697647366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-lx7c4,Uid:76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0,Namespace:kube-system,Attempt:0,}"
Sep  4 20:33:17.728303 containerd[1467]: time="2024-09-04T20:33:17.728127322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 20:33:17.728994 containerd[1467]: time="2024-09-04T20:33:17.728705290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:33:17.728994 containerd[1467]: time="2024-09-04T20:33:17.728782946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 20:33:17.728994 containerd[1467]: time="2024-09-04T20:33:17.728812520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:33:17.754401 systemd[1]: Started cri-containerd-60c84b0d0921e0bb6b68e684278a3e837546a9509049b89ad6b3ce9b54dce34c.scope - libcontainer container 60c84b0d0921e0bb6b68e684278a3e837546a9509049b89ad6b3ce9b54dce34c.
Sep  4 20:33:17.761534 kubelet[2527]: E0904 20:33:17.761500    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:17.816069 containerd[1467]: time="2024-09-04T20:33:17.816027818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-lx7c4,Uid:76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"60c84b0d0921e0bb6b68e684278a3e837546a9509049b89ad6b3ce9b54dce34c\""
Sep  4 20:33:17.817389 kubelet[2527]: E0904 20:33:17.816907    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:22.682423 kubelet[2527]: I0904 20:33:22.682360    2527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qvcdx" podStartSLOduration=6.682270623 podStartE2EDuration="6.682270623s" podCreationTimestamp="2024-09-04 20:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:33:17.776404701 +0000 UTC m=+15.310527741" watchObservedRunningTime="2024-09-04 20:33:22.682270623 +0000 UTC m=+20.216393666"
Sep  4 20:33:25.527901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608365859.mount: Deactivated successfully.
Sep  4 20:33:27.824387 containerd[1467]: time="2024-09-04T20:33:27.824258236Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:33:27.825497 containerd[1467]: time="2024-09-04T20:33:27.825422343Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735351"
Sep  4 20:33:27.827287 containerd[1467]: time="2024-09-04T20:33:27.826945698Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:33:27.829342 containerd[1467]: time="2024-09-04T20:33:27.829137806Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.545279893s"
Sep  4 20:33:27.829342 containerd[1467]: time="2024-09-04T20:33:27.829192630Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
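
The pull that finishes here was requested by digest (name:tag@sha256:...), which is why containerd reports an empty repo tag but a repo digest: when a digest is pinned, it, rather than the tag, is what gets resolved. A stdlib-only Go sketch that splits such a reference into name, tag and digest; real code would use a proper reference-parsing library, this only illustrates the format seen in the log.

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef breaks "repo[:tag][@digest]" into its parts. Hypothetical helper,
    // shown only to illustrate the image reference format in the log above.
    func splitRef(ref string) (name, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        // A ":" that comes after the last "/" separates the tag from the repository name.
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            ref, tag = ref[:i], ref[i+1:]
        }
        return ref, tag, digest
    }

    func main() {
        name, tag, digest := splitRef("quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
        fmt.Println("name:  ", name)   // quay.io/cilium/cilium
        fmt.Println("tag:   ", tag)    // v1.12.5
        fmt.Println("digest:", digest) // sha256:06ce2b0a...
    }
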
Sep  4 20:33:27.831467 containerd[1467]: time="2024-09-04T20:33:27.831340818Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep  4 20:33:27.847479 containerd[1467]: time="2024-09-04T20:33:27.846967308Z" level=info msg="CreateContainer within sandbox \"b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep  4 20:33:27.931753 containerd[1467]: time="2024-09-04T20:33:27.931676835Z" level=info msg="CreateContainer within sandbox \"b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360\""
Sep  4 20:33:27.932544 containerd[1467]: time="2024-09-04T20:33:27.932501689Z" level=info msg="StartContainer for \"17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360\""
Sep  4 20:33:28.017596 systemd[1]: run-containerd-runc-k8s.io-17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360-runc.xBunc4.mount: Deactivated successfully.
Sep  4 20:33:28.027665 systemd[1]: Started cri-containerd-17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360.scope - libcontainer container 17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360.
Sep  4 20:33:28.068079 containerd[1467]: time="2024-09-04T20:33:28.067831044Z" level=info msg="StartContainer for \"17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360\" returns successfully"
Sep  4 20:33:28.086175 systemd[1]: cri-containerd-17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360.scope: Deactivated successfully.
Sep  4 20:33:28.178325 containerd[1467]: time="2024-09-04T20:33:28.154526004Z" level=info msg="shim disconnected" id=17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360 namespace=k8s.io
Sep  4 20:33:28.178325 containerd[1467]: time="2024-09-04T20:33:28.178320250Z" level=warning msg="cleaning up after shim disconnected" id=17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360 namespace=k8s.io
Sep  4 20:33:28.178325 containerd[1467]: time="2024-09-04T20:33:28.178348828Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 20:33:28.785099 kubelet[2527]: E0904 20:33:28.784787    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:28.789880 containerd[1467]: time="2024-09-04T20:33:28.789826450Z" level=info msg="CreateContainer within sandbox \"b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep  4 20:33:28.807556 containerd[1467]: time="2024-09-04T20:33:28.806240821Z" level=info msg="CreateContainer within sandbox \"b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799\""
Sep  4 20:33:28.808164 containerd[1467]: time="2024-09-04T20:33:28.808043730Z" level=info msg="StartContainer for \"a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799\""
Sep  4 20:33:28.856401 systemd[1]: Started cri-containerd-a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799.scope - libcontainer container a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799.
Sep  4 20:33:28.892496 containerd[1467]: time="2024-09-04T20:33:28.892216890Z" level=info msg="StartContainer for \"a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799\" returns successfully"
Sep  4 20:33:28.917131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360-rootfs.mount: Deactivated successfully.
Sep  4 20:33:28.922701 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep  4 20:33:28.922945 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep  4 20:33:28.923024 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep  4 20:33:28.930311 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep  4 20:33:28.933445 systemd[1]: cri-containerd-a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799.scope: Deactivated successfully.
Sep  4 20:33:28.955276 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep  4 20:33:28.970415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799-rootfs.mount: Deactivated successfully.
Sep  4 20:33:28.971953 containerd[1467]: time="2024-09-04T20:33:28.971674188Z" level=info msg="shim disconnected" id=a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799 namespace=k8s.io
Sep  4 20:33:28.971953 containerd[1467]: time="2024-09-04T20:33:28.971730854Z" level=warning msg="cleaning up after shim disconnected" id=a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799 namespace=k8s.io
Sep  4 20:33:28.971953 containerd[1467]: time="2024-09-04T20:33:28.971741427Z" level=info msg="cleaning up dead shim" namespace=k8s.io
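
apply-sysctl-overwrites is the Cilium init step that adjusts kernel parameters the datapath depends on; like mount-cgroup before it, it runs, exits, and has its shim cleaned up within the same second. The log does not say which sysctls it touched, so the sketch below only shows reading one plausible candidate (rp_filter, used purely as an assumed example) through /proc/sys, with the Go standard library.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // readSysctl reads a kernel parameter via /proc/sys,
    // e.g. "net.ipv4.conf.all.rp_filter".
    func readSysctl(name string) (string, error) {
        p := filepath.Join("/proc/sys", strings.ReplaceAll(name, ".", "/"))
        b, err := os.ReadFile(p)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        // rp_filter is an illustrative assumption; the log above does not
        // name the parameters this init container actually set.
        v, err := readSysctl("net.ipv4.conf.all.rp_filter")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("net.ipv4.conf.all.rp_filter =", v)
    }
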
Sep  4 20:33:29.209765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2507891569.mount: Deactivated successfully.
Sep  4 20:33:29.753002 containerd[1467]: time="2024-09-04T20:33:29.752937493Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:33:29.753851 containerd[1467]: time="2024-09-04T20:33:29.753790531Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907233"
Sep  4 20:33:29.754564 containerd[1467]: time="2024-09-04T20:33:29.754451632Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 20:33:29.756789 containerd[1467]: time="2024-09-04T20:33:29.756656333Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.925277392s"
Sep  4 20:33:29.756789 containerd[1467]: time="2024-09-04T20:33:29.756698801Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep  4 20:33:29.760101 containerd[1467]: time="2024-09-04T20:33:29.759923660Z" level=info msg="CreateContainer within sandbox \"60c84b0d0921e0bb6b68e684278a3e837546a9509049b89ad6b3ce9b54dce34c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep  4 20:33:29.784374 containerd[1467]: time="2024-09-04T20:33:29.784304040Z" level=info msg="CreateContainer within sandbox \"60c84b0d0921e0bb6b68e684278a3e837546a9509049b89ad6b3ce9b54dce34c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1\""
Sep  4 20:33:29.784986 containerd[1467]: time="2024-09-04T20:33:29.784960071Z" level=info msg="StartContainer for \"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1\""
Sep  4 20:33:29.794253 kubelet[2527]: E0904 20:33:29.791691    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:29.799787 containerd[1467]: time="2024-09-04T20:33:29.795278932Z" level=info msg="CreateContainer within sandbox \"b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep  4 20:33:29.872403 systemd[1]: Started cri-containerd-61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1.scope - libcontainer container 61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1.
Sep  4 20:33:29.883174 containerd[1467]: time="2024-09-04T20:33:29.880968366Z" level=info msg="CreateContainer within sandbox \"b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de\""
Sep  4 20:33:29.908494 containerd[1467]: time="2024-09-04T20:33:29.908447635Z" level=info msg="StartContainer for \"20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de\""
Sep  4 20:33:29.978348 systemd[1]: Started cri-containerd-20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de.scope - libcontainer container 20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de.
Sep  4 20:33:29.989043 containerd[1467]: time="2024-09-04T20:33:29.989003009Z" level=info msg="StartContainer for \"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1\" returns successfully"
Sep  4 20:33:30.031745 containerd[1467]: time="2024-09-04T20:33:30.030280159Z" level=info msg="StartContainer for \"20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de\" returns successfully"
Sep  4 20:33:30.036813 systemd[1]: cri-containerd-20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de.scope: Deactivated successfully.
Sep  4 20:33:30.086416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de-rootfs.mount: Deactivated successfully.
Sep  4 20:33:30.087321 containerd[1467]: time="2024-09-04T20:33:30.086551625Z" level=info msg="shim disconnected" id=20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de namespace=k8s.io
Sep  4 20:33:30.087321 containerd[1467]: time="2024-09-04T20:33:30.086649942Z" level=warning msg="cleaning up after shim disconnected" id=20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de namespace=k8s.io
Sep  4 20:33:30.087321 containerd[1467]: time="2024-09-04T20:33:30.086667090Z" level=info msg="cleaning up dead shim" namespace=k8s.io
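
mount-bpf-fs, the init container whose shim has just been cleaned up, exists to make sure a BPF filesystem is mounted before the agent starts; /sys/fs/bpf is the conventional mount point and is an assumption here, since the log itself does not name the path. A stdlib-only Go sketch of that kind of check against /proc/mounts:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/mounts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        mounted := false
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // /proc/mounts fields: device mountpoint fstype options dump pass
            fields := strings.Fields(sc.Text())
            if len(fields) >= 3 && fields[2] == "bpf" && fields[1] == "/sys/fs/bpf" {
                mounted = true
                break
            }
        }

        if mounted {
            fmt.Println("bpffs already mounted at /sys/fs/bpf")
        } else {
            fmt.Println("bpffs not mounted; an init step like mount-bpf-fs would mount it")
        }
    }
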
Sep  4 20:33:30.799271 kubelet[2527]: E0904 20:33:30.798308    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:30.806183 kubelet[2527]: E0904 20:33:30.805848    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:30.820497 containerd[1467]: time="2024-09-04T20:33:30.820441178Z" level=info msg="CreateContainer within sandbox \"b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep  4 20:33:30.841183 containerd[1467]: time="2024-09-04T20:33:30.840788746Z" level=info msg="CreateContainer within sandbox \"b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29\""
Sep  4 20:33:30.844269 containerd[1467]: time="2024-09-04T20:33:30.844229838Z" level=info msg="StartContainer for \"878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29\""
Sep  4 20:33:30.911445 systemd[1]: Started cri-containerd-878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29.scope - libcontainer container 878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29.
Sep  4 20:33:30.965138 kubelet[2527]: I0904 20:33:30.965102    2527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-lx7c4" podStartSLOduration=2.025707798 podStartE2EDuration="13.965052809s" podCreationTimestamp="2024-09-04 20:33:17 +0000 UTC" firstStartedPulling="2024-09-04 20:33:17.817651351 +0000 UTC m=+15.351774370" lastFinishedPulling="2024-09-04 20:33:29.756996345 +0000 UTC m=+27.291119381" observedRunningTime="2024-09-04 20:33:30.872875448 +0000 UTC m=+28.406998490" watchObservedRunningTime="2024-09-04 20:33:30.965052809 +0000 UTC m=+28.499175850"
Sep  4 20:33:30.991817 containerd[1467]: time="2024-09-04T20:33:30.991642080Z" level=info msg="StartContainer for \"878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29\" returns successfully"
Sep  4 20:33:30.995421 systemd[1]: cri-containerd-878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29.scope: Deactivated successfully.
Sep  4 20:33:31.030290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29-rootfs.mount: Deactivated successfully.
Sep  4 20:33:31.032567 containerd[1467]: time="2024-09-04T20:33:31.032153866Z" level=info msg="shim disconnected" id=878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29 namespace=k8s.io
Sep  4 20:33:31.032567 containerd[1467]: time="2024-09-04T20:33:31.032211189Z" level=warning msg="cleaning up after shim disconnected" id=878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29 namespace=k8s.io
Sep  4 20:33:31.032567 containerd[1467]: time="2024-09-04T20:33:31.032221071Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 20:33:31.808661 kubelet[2527]: E0904 20:33:31.808043    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:31.808661 kubelet[2527]: E0904 20:33:31.808509    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:31.823333 containerd[1467]: time="2024-09-04T20:33:31.822647812Z" level=info msg="CreateContainer within sandbox \"b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep  4 20:33:31.856440 containerd[1467]: time="2024-09-04T20:33:31.856390782Z" level=info msg="CreateContainer within sandbox \"b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d\""
Sep  4 20:33:31.857119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4224922269.mount: Deactivated successfully.
Sep  4 20:33:31.860001 containerd[1467]: time="2024-09-04T20:33:31.857592542Z" level=info msg="StartContainer for \"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d\""
Sep  4 20:33:31.891390 systemd[1]: Started cri-containerd-236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d.scope - libcontainer container 236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d.
Sep  4 20:33:31.930077 containerd[1467]: time="2024-09-04T20:33:31.930025482Z" level=info msg="StartContainer for \"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d\" returns successfully"
Sep  4 20:33:32.192616 kubelet[2527]: I0904 20:33:32.191485    2527 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Sep  4 20:33:32.226230 kubelet[2527]: I0904 20:33:32.226114    2527 topology_manager.go:215] "Topology Admit Handler" podUID="cdf69e3f-3949-449a-b61b-b7c717cdad7c" podNamespace="kube-system" podName="coredns-76f75df574-rmrhn"
Sep  4 20:33:32.231644 kubelet[2527]: I0904 20:33:32.231575    2527 topology_manager.go:215] "Topology Admit Handler" podUID="0e9f4c9e-e5f8-46ca-91a0-73545b1d313b" podNamespace="kube-system" podName="coredns-76f75df574-pwqkk"
Sep  4 20:33:32.248253 systemd[1]: Created slice kubepods-burstable-podcdf69e3f_3949_449a_b61b_b7c717cdad7c.slice - libcontainer container kubepods-burstable-podcdf69e3f_3949_449a_b61b_b7c717cdad7c.slice.
Sep  4 20:33:32.263926 systemd[1]: Created slice kubepods-burstable-pod0e9f4c9e_e5f8_46ca_91a0_73545b1d313b.slice - libcontainer container kubepods-burstable-pod0e9f4c9e_e5f8_46ca_91a0_73545b1d313b.slice.
Sep  4 20:33:32.339577 kubelet[2527]: I0904 20:33:32.339521    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdf69e3f-3949-449a-b61b-b7c717cdad7c-config-volume\") pod \"coredns-76f75df574-rmrhn\" (UID: \"cdf69e3f-3949-449a-b61b-b7c717cdad7c\") " pod="kube-system/coredns-76f75df574-rmrhn"
Sep  4 20:33:32.339577 kubelet[2527]: I0904 20:33:32.339571    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt5hx\" (UniqueName: \"kubernetes.io/projected/0e9f4c9e-e5f8-46ca-91a0-73545b1d313b-kube-api-access-nt5hx\") pod \"coredns-76f75df574-pwqkk\" (UID: \"0e9f4c9e-e5f8-46ca-91a0-73545b1d313b\") " pod="kube-system/coredns-76f75df574-pwqkk"
Sep  4 20:33:32.339577 kubelet[2527]: I0904 20:33:32.339595    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e9f4c9e-e5f8-46ca-91a0-73545b1d313b-config-volume\") pod \"coredns-76f75df574-pwqkk\" (UID: \"0e9f4c9e-e5f8-46ca-91a0-73545b1d313b\") " pod="kube-system/coredns-76f75df574-pwqkk"
Sep  4 20:33:32.339932 kubelet[2527]: I0904 20:33:32.339620    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-474ld\" (UniqueName: \"kubernetes.io/projected/cdf69e3f-3949-449a-b61b-b7c717cdad7c-kube-api-access-474ld\") pod \"coredns-76f75df574-rmrhn\" (UID: \"cdf69e3f-3949-449a-b61b-b7c717cdad7c\") " pod="kube-system/coredns-76f75df574-rmrhn"
Sep  4 20:33:32.559107 kubelet[2527]: E0904 20:33:32.558749    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:32.560116 containerd[1467]: time="2024-09-04T20:33:32.560069011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rmrhn,Uid:cdf69e3f-3949-449a-b61b-b7c717cdad7c,Namespace:kube-system,Attempt:0,}"
Sep  4 20:33:32.573226 kubelet[2527]: E0904 20:33:32.570990    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:32.579401 containerd[1467]: time="2024-09-04T20:33:32.578437717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pwqkk,Uid:0e9f4c9e-e5f8-46ca-91a0-73545b1d313b,Namespace:kube-system,Attempt:0,}"
Sep  4 20:33:32.817477 kubelet[2527]: E0904 20:33:32.815611    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:33.818797 kubelet[2527]: E0904 20:33:33.818762    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:34.390972 systemd-networkd[1346]: cilium_host: Link UP
Sep  4 20:33:34.391344 systemd-networkd[1346]: cilium_net: Link UP
Sep  4 20:33:34.393385 systemd-networkd[1346]: cilium_net: Gained carrier
Sep  4 20:33:34.394207 systemd-networkd[1346]: cilium_host: Gained carrier
Sep  4 20:33:34.394417 systemd-networkd[1346]: cilium_net: Gained IPv6LL
Sep  4 20:33:34.395416 systemd-networkd[1346]: cilium_host: Gained IPv6LL
Sep  4 20:33:34.548794 systemd-networkd[1346]: cilium_vxlan: Link UP
Sep  4 20:33:34.548806 systemd-networkd[1346]: cilium_vxlan: Gained carrier
Sep  4 20:33:34.820125 kubelet[2527]: E0904 20:33:34.819644    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:34.943189 kernel: NET: Registered PF_ALG protocol family
Sep  4 20:33:35.843756 systemd-networkd[1346]: lxc_health: Link UP
Sep  4 20:33:35.849361 systemd-networkd[1346]: lxc_health: Gained carrier
Sep  4 20:33:36.178203 systemd-networkd[1346]: lxcf73f69b135dc: Link UP
Sep  4 20:33:36.186377 kernel: eth0: renamed from tmpd47b7
Sep  4 20:33:36.196013 systemd-networkd[1346]: lxcf73f69b135dc: Gained carrier
Sep  4 20:33:36.225089 systemd-networkd[1346]: lxc7c4291380df0: Link UP
Sep  4 20:33:36.230656 kernel: eth0: renamed from tmp293bb
Sep  4 20:33:36.238636 systemd-networkd[1346]: lxc7c4291380df0: Gained carrier
Sep  4 20:33:36.266768 kubelet[2527]: E0904 20:33:36.265667    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:36.368372 systemd-networkd[1346]: cilium_vxlan: Gained IPv6LL
Sep  4 20:33:37.171166 kubelet[2527]: E0904 20:33:37.171116    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:37.191442 kubelet[2527]: I0904 20:33:37.191393    2527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6v22j" podStartSLOduration=10.645169166 podStartE2EDuration="21.191343833s" podCreationTimestamp="2024-09-04 20:33:16 +0000 UTC" firstStartedPulling="2024-09-04 20:33:17.283384696 +0000 UTC m=+14.817507716" lastFinishedPulling="2024-09-04 20:33:27.829559362 +0000 UTC m=+25.363682383" observedRunningTime="2024-09-04 20:33:32.866030277 +0000 UTC m=+30.400153313" watchObservedRunningTime="2024-09-04 20:33:37.191343833 +0000 UTC m=+34.725466886"
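
The cilium-6v22j startup record shows how the two durations reported by pod_startup_latency_tracker relate: the E2E duration is the observed-running time minus the pod creation timestamp, while the SLO duration additionally subtracts the time spent pulling images (that exclusion is how the tracker's SLO metric is defined, not something invented here). A small Go check of the arithmetic, using numbers copied from the line above:

    package main

    import "fmt"

    func main() {
        // Seconds past 20:33:00, copied from the "Observed pod startup duration"
        // line for cilium-6v22j above.
        const (
            created             = 16.0         // podCreationTimestamp 20:33:16
            firstStartedPulling = 17.283384696 // image pull began
            lastFinishedPulling = 27.829559362 // image pull ended
            observedRunning     = 37.191343833 // watchObservedRunningTime
        )

        e2e := observedRunning - created                  // total startup time
        pull := lastFinishedPulling - firstStartedPulling // time spent pulling the image
        slo := e2e - pull                                 // SLO duration excludes image pulls

        fmt.Printf("E2E %.9fs  pull %.9fs  SLO %.9fs\n", e2e, pull, slo)
        // E2E comes out as 21.191343833s and SLO within a nanosecond of the logged
        // 10.645169166s (the logged timestamps are themselves rounded).
    }
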
Sep  4 20:33:37.264366 systemd-networkd[1346]: lxc_health: Gained IPv6LL
Sep  4 20:33:37.827877 kubelet[2527]: E0904 20:33:37.826740    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:37.904367 systemd-networkd[1346]: lxcf73f69b135dc: Gained IPv6LL
Sep  4 20:33:38.224429 systemd-networkd[1346]: lxc7c4291380df0: Gained IPv6LL
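
Between 20:33:34 and 20:33:38 the node gains the full set of Cilium-managed links: cilium_host and cilium_net (a veth pair bridging the host into the datapath), cilium_vxlan (the overlay device), lxc_health, and one lxc* device per pod endpoint, the host side of a veth pair whose container side becomes the pod's eth0 (hence the "eth0: renamed from tmp..." kernel lines). A stdlib-only Go sketch that simply enumerates those interfaces once they exist; it lists links by name prefix and does nothing Cilium-specific.

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, ifc := range ifaces {
            // Cilium's datapath devices on this node start with "cilium_" or "lxc".
            if strings.HasPrefix(ifc.Name, "cilium_") || strings.HasPrefix(ifc.Name, "lxc") {
                fmt.Printf("%-20s up=%v mtu=%d\n", ifc.Name, ifc.Flags&net.FlagUp != 0, ifc.MTU)
            }
        }
    }
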
Sep  4 20:33:40.314715 containerd[1467]: time="2024-09-04T20:33:40.314433892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 20:33:40.314715 containerd[1467]: time="2024-09-04T20:33:40.314531529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:33:40.314715 containerd[1467]: time="2024-09-04T20:33:40.314560660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 20:33:40.314715 containerd[1467]: time="2024-09-04T20:33:40.314576694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:33:40.347652 systemd[1]: run-containerd-runc-k8s.io-d47b720790c6514973aff0c5fb965c23bb9a189bfe80295e5c2526710f150297-runc.RrWK7V.mount: Deactivated successfully.
Sep  4 20:33:40.358453 systemd[1]: Started cri-containerd-d47b720790c6514973aff0c5fb965c23bb9a189bfe80295e5c2526710f150297.scope - libcontainer container d47b720790c6514973aff0c5fb965c23bb9a189bfe80295e5c2526710f150297.
Sep  4 20:33:40.434344 containerd[1467]: time="2024-09-04T20:33:40.434203366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pwqkk,Uid:0e9f4c9e-e5f8-46ca-91a0-73545b1d313b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d47b720790c6514973aff0c5fb965c23bb9a189bfe80295e5c2526710f150297\""
Sep  4 20:33:40.435703 kubelet[2527]: E0904 20:33:40.435130    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:40.439604 containerd[1467]: time="2024-09-04T20:33:40.439204532Z" level=info msg="CreateContainer within sandbox \"d47b720790c6514973aff0c5fb965c23bb9a189bfe80295e5c2526710f150297\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep  4 20:33:40.454202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196862210.mount: Deactivated successfully.
Sep  4 20:33:40.458317 containerd[1467]: time="2024-09-04T20:33:40.458267337Z" level=info msg="CreateContainer within sandbox \"d47b720790c6514973aff0c5fb965c23bb9a189bfe80295e5c2526710f150297\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"909a6e9c69e84a39f222be419fd9eb7faa523120da4c6e7a7c567a0e2ee0b0b6\""
Sep  4 20:33:40.460175 containerd[1467]: time="2024-09-04T20:33:40.458846830Z" level=info msg="StartContainer for \"909a6e9c69e84a39f222be419fd9eb7faa523120da4c6e7a7c567a0e2ee0b0b6\""
Sep  4 20:33:40.500436 systemd[1]: Started cri-containerd-909a6e9c69e84a39f222be419fd9eb7faa523120da4c6e7a7c567a0e2ee0b0b6.scope - libcontainer container 909a6e9c69e84a39f222be419fd9eb7faa523120da4c6e7a7c567a0e2ee0b0b6.
Sep  4 20:33:40.519687 containerd[1467]: time="2024-09-04T20:33:40.518418213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 20:33:40.519687 containerd[1467]: time="2024-09-04T20:33:40.519095726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:33:40.519687 containerd[1467]: time="2024-09-04T20:33:40.519117387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 20:33:40.519687 containerd[1467]: time="2024-09-04T20:33:40.519127835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:33:40.542782 containerd[1467]: time="2024-09-04T20:33:40.542690025Z" level=info msg="StartContainer for \"909a6e9c69e84a39f222be419fd9eb7faa523120da4c6e7a7c567a0e2ee0b0b6\" returns successfully"
Sep  4 20:33:40.545415 systemd[1]: Started cri-containerd-293bb62a3768f48e4f663d03a104cefba229e60bd7bf812e60321a8a88934744.scope - libcontainer container 293bb62a3768f48e4f663d03a104cefba229e60bd7bf812e60321a8a88934744.
Sep  4 20:33:40.612910 containerd[1467]: time="2024-09-04T20:33:40.612856882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rmrhn,Uid:cdf69e3f-3949-449a-b61b-b7c717cdad7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"293bb62a3768f48e4f663d03a104cefba229e60bd7bf812e60321a8a88934744\""
Sep  4 20:33:40.614861 kubelet[2527]: E0904 20:33:40.614821    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:40.619094 containerd[1467]: time="2024-09-04T20:33:40.618667713Z" level=info msg="CreateContainer within sandbox \"293bb62a3768f48e4f663d03a104cefba229e60bd7bf812e60321a8a88934744\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep  4 20:33:40.630261 containerd[1467]: time="2024-09-04T20:33:40.630201692Z" level=info msg="CreateContainer within sandbox \"293bb62a3768f48e4f663d03a104cefba229e60bd7bf812e60321a8a88934744\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"608fafd488a7af45bac290e3c93e9d4cb8ef97f2639af0845acb029111aabff6\""
Sep  4 20:33:40.631225 containerd[1467]: time="2024-09-04T20:33:40.630834418Z" level=info msg="StartContainer for \"608fafd488a7af45bac290e3c93e9d4cb8ef97f2639af0845acb029111aabff6\""
Sep  4 20:33:40.668364 systemd[1]: Started cri-containerd-608fafd488a7af45bac290e3c93e9d4cb8ef97f2639af0845acb029111aabff6.scope - libcontainer container 608fafd488a7af45bac290e3c93e9d4cb8ef97f2639af0845acb029111aabff6.
Sep  4 20:33:40.705874 containerd[1467]: time="2024-09-04T20:33:40.705799048Z" level=info msg="StartContainer for \"608fafd488a7af45bac290e3c93e9d4cb8ef97f2639af0845acb029111aabff6\" returns successfully"
Sep  4 20:33:40.836175 kubelet[2527]: E0904 20:33:40.836099    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:40.839565 kubelet[2527]: E0904 20:33:40.839388    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:40.852034 kubelet[2527]: I0904 20:33:40.851997    2527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rmrhn" podStartSLOduration=23.851951313 podStartE2EDuration="23.851951313s" podCreationTimestamp="2024-09-04 20:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:33:40.85084077 +0000 UTC m=+38.384963811" watchObservedRunningTime="2024-09-04 20:33:40.851951313 +0000 UTC m=+38.386074346"
Sep  4 20:33:40.887136 kubelet[2527]: I0904 20:33:40.886999    2527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pwqkk" podStartSLOduration=23.886953458 podStartE2EDuration="23.886953458s" podCreationTimestamp="2024-09-04 20:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:33:40.885367268 +0000 UTC m=+38.419490309" watchObservedRunningTime="2024-09-04 20:33:40.886953458 +0000 UTC m=+38.421076499"
Sep  4 20:33:41.843219 kubelet[2527]: E0904 20:33:41.841656    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:41.843219 kubelet[2527]: E0904 20:33:41.841948    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:42.844043 kubelet[2527]: E0904 20:33:42.843961    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:33:42.845655 kubelet[2527]: E0904 20:33:42.844279    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
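
The recurring dns.go:153 errors (they repeat through the rest of this log) are warnings rather than failures: the droplet's resolv.conf carries more nameserver entries than the three the resolver and kubelet will use, so the surplus is dropped and the applied line, 67.207.67.3 67.207.67.2 67.207.67.3 with one duplicate, is logged each time a pod's resolv.conf is assembled. A small sketch of the same check, assuming a resolv.conf-style file and the usual limit of three:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // the resolver limit that kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d configured, only the first %d are applied: %s\n",
			len(servers), maxNameservers, strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Println("nameservers:", strings.Join(servers, " "))
	}
}
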
Sep  4 20:33:52.421538 systemd[1]: Started sshd@7-209.38.64.58:22-139.178.68.195:53886.service - OpenSSH per-connection server daemon (139.178.68.195:53886).
Sep  4 20:33:52.512815 sshd[3908]: Accepted publickey for core from 139.178.68.195 port 53886 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:33:52.513791 sshd[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:33:52.519445 systemd-logind[1444]: New session 8 of user core.
Sep  4 20:33:52.531472 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep  4 20:33:53.081466 sshd[3908]: pam_unix(sshd:session): session closed for user core
Sep  4 20:33:53.086601 systemd[1]: sshd@7-209.38.64.58:22-139.178.68.195:53886.service: Deactivated successfully.
Sep  4 20:33:53.089475 systemd[1]: session-8.scope: Deactivated successfully.
Sep  4 20:33:53.093454 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit.
Sep  4 20:33:53.094815 systemd-logind[1444]: Removed session 8.
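
Every SSH login in the remainder of this log follows the same four-step shape seen above: the socket-activated per-connection unit (sshd@N-LOCAL:22-REMOTE:PORT.service) starts, sshd accepts the public key and pam_unix opens a session for core, systemd-logind allocates session N while systemd starts session-N.scope, and on logout both the scope and the per-connection service are deactivated. A small parser over journal lines like these, pulling out the session number and the remote endpoint; the regular expressions are a best-effort match for this journal format, not an official one:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	acceptRe  = regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+)`)
	sessionRe = regexp.MustCompile(`New session (\d+) of user (\w+)`)
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	var lastAddr, lastPort string
	for sc.Scan() {
		line := sc.Text()
		if m := acceptRe.FindStringSubmatch(line); m != nil {
			lastAddr, lastPort = m[2], m[3]
			continue
		}
		if m := sessionRe.FindStringSubmatch(line); m != nil {
			fmt.Printf("session %s for %s from %s:%s\n", m[1], m[2], lastAddr, lastPort)
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

Fed the excerpt above on stdin, it prints one line per login, e.g. "session 8 for core from 139.178.68.195:53886".
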
Sep  4 20:33:58.094978 systemd[1]: Started sshd@8-209.38.64.58:22-139.178.68.195:47976.service - OpenSSH per-connection server daemon (139.178.68.195:47976).
Sep  4 20:33:58.153086 sshd[3923]: Accepted publickey for core from 139.178.68.195 port 47976 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:33:58.155288 sshd[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:33:58.162535 systemd-logind[1444]: New session 9 of user core.
Sep  4 20:33:58.167498 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep  4 20:33:58.314803 sshd[3923]: pam_unix(sshd:session): session closed for user core
Sep  4 20:33:58.317895 systemd[1]: sshd@8-209.38.64.58:22-139.178.68.195:47976.service: Deactivated successfully.
Sep  4 20:33:58.320092 systemd[1]: session-9.scope: Deactivated successfully.
Sep  4 20:33:58.323863 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit.
Sep  4 20:33:58.325168 systemd-logind[1444]: Removed session 9.
Sep  4 20:34:03.334606 systemd[1]: Started sshd@9-209.38.64.58:22-139.178.68.195:47988.service - OpenSSH per-connection server daemon (139.178.68.195:47988).
Sep  4 20:34:03.381727 sshd[3939]: Accepted publickey for core from 139.178.68.195 port 47988 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:03.383391 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:03.388680 systemd-logind[1444]: New session 10 of user core.
Sep  4 20:34:03.392475 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep  4 20:34:03.526556 sshd[3939]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:03.530911 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit.
Sep  4 20:34:03.531748 systemd[1]: sshd@9-209.38.64.58:22-139.178.68.195:47988.service: Deactivated successfully.
Sep  4 20:34:03.534471 systemd[1]: session-10.scope: Deactivated successfully.
Sep  4 20:34:03.535832 systemd-logind[1444]: Removed session 10.
Sep  4 20:34:08.539891 systemd[1]: Started sshd@10-209.38.64.58:22-139.178.68.195:50634.service - OpenSSH per-connection server daemon (139.178.68.195:50634).
Sep  4 20:34:08.599261 sshd[3953]: Accepted publickey for core from 139.178.68.195 port 50634 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:08.601240 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:08.607680 systemd-logind[1444]: New session 11 of user core.
Sep  4 20:34:08.613391 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep  4 20:34:08.754086 sshd[3953]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:08.759176 systemd[1]: sshd@10-209.38.64.58:22-139.178.68.195:50634.service: Deactivated successfully.
Sep  4 20:34:08.761810 systemd[1]: session-11.scope: Deactivated successfully.
Sep  4 20:34:08.763201 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit.
Sep  4 20:34:08.764794 systemd-logind[1444]: Removed session 11.
Sep  4 20:34:13.773792 systemd[1]: Started sshd@11-209.38.64.58:22-139.178.68.195:50646.service - OpenSSH per-connection server daemon (139.178.68.195:50646).
Sep  4 20:34:13.837991 sshd[3967]: Accepted publickey for core from 139.178.68.195 port 50646 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:13.840261 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:13.847464 systemd-logind[1444]: New session 12 of user core.
Sep  4 20:34:13.858515 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep  4 20:34:14.010591 sshd[3967]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:14.027057 systemd[1]: sshd@11-209.38.64.58:22-139.178.68.195:50646.service: Deactivated successfully.
Sep  4 20:34:14.030062 systemd[1]: session-12.scope: Deactivated successfully.
Sep  4 20:34:14.031173 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit.
Sep  4 20:34:14.039705 systemd[1]: Started sshd@12-209.38.64.58:22-139.178.68.195:50660.service - OpenSSH per-connection server daemon (139.178.68.195:50660).
Sep  4 20:34:14.041281 systemd-logind[1444]: Removed session 12.
Sep  4 20:34:14.103599 sshd[3981]: Accepted publickey for core from 139.178.68.195 port 50660 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:14.105043 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:14.110603 systemd-logind[1444]: New session 13 of user core.
Sep  4 20:34:14.117488 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep  4 20:34:14.343872 sshd[3981]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:14.359624 systemd[1]: sshd@12-209.38.64.58:22-139.178.68.195:50660.service: Deactivated successfully.
Sep  4 20:34:14.361931 systemd[1]: session-13.scope: Deactivated successfully.
Sep  4 20:34:14.365928 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit.
Sep  4 20:34:14.374815 systemd[1]: Started sshd@13-209.38.64.58:22-139.178.68.195:50670.service - OpenSSH per-connection server daemon (139.178.68.195:50670).
Sep  4 20:34:14.380960 systemd-logind[1444]: Removed session 13.
Sep  4 20:34:14.433083 sshd[3992]: Accepted publickey for core from 139.178.68.195 port 50670 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:14.434872 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:14.440970 systemd-logind[1444]: New session 14 of user core.
Sep  4 20:34:14.447538 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep  4 20:34:14.581563 sshd[3992]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:14.586257 systemd[1]: sshd@13-209.38.64.58:22-139.178.68.195:50670.service: Deactivated successfully.
Sep  4 20:34:14.588656 systemd[1]: session-14.scope: Deactivated successfully.
Sep  4 20:34:14.590121 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit.
Sep  4 20:34:14.591218 systemd-logind[1444]: Removed session 14.
Sep  4 20:34:19.602569 systemd[1]: Started sshd@14-209.38.64.58:22-139.178.68.195:53028.service - OpenSSH per-connection server daemon (139.178.68.195:53028).
Sep  4 20:34:19.652070 sshd[4007]: Accepted publickey for core from 139.178.68.195 port 53028 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:19.653633 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:19.658263 systemd-logind[1444]: New session 15 of user core.
Sep  4 20:34:19.663339 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep  4 20:34:19.786180 sshd[4007]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:19.790136 systemd[1]: sshd@14-209.38.64.58:22-139.178.68.195:53028.service: Deactivated successfully.
Sep  4 20:34:19.792202 systemd[1]: session-15.scope: Deactivated successfully.
Sep  4 20:34:19.792907 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit.
Sep  4 20:34:19.793756 systemd-logind[1444]: Removed session 15.
Sep  4 20:34:24.808685 systemd[1]: Started sshd@15-209.38.64.58:22-139.178.68.195:53040.service - OpenSSH per-connection server daemon (139.178.68.195:53040).
Sep  4 20:34:24.854133 sshd[4019]: Accepted publickey for core from 139.178.68.195 port 53040 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:24.857175 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:24.863295 systemd-logind[1444]: New session 16 of user core.
Sep  4 20:34:24.867384 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep  4 20:34:25.029582 sshd[4019]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:25.032810 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit.
Sep  4 20:34:25.033580 systemd[1]: sshd@15-209.38.64.58:22-139.178.68.195:53040.service: Deactivated successfully.
Sep  4 20:34:25.035940 systemd[1]: session-16.scope: Deactivated successfully.
Sep  4 20:34:25.038822 systemd-logind[1444]: Removed session 16.
Sep  4 20:34:25.656262 kubelet[2527]: E0904 20:34:25.655955    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:34:28.656702 kubelet[2527]: E0904 20:34:28.656203    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:34:29.656538 kubelet[2527]: E0904 20:34:29.656418    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:34:30.050812 systemd[1]: Started sshd@16-209.38.64.58:22-139.178.68.195:50100.service - OpenSSH per-connection server daemon (139.178.68.195:50100).
Sep  4 20:34:30.109634 sshd[4032]: Accepted publickey for core from 139.178.68.195 port 50100 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:30.111930 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:30.118010 systemd-logind[1444]: New session 17 of user core.
Sep  4 20:34:30.122506 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep  4 20:34:30.270207 sshd[4032]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:30.274002 systemd[1]: sshd@16-209.38.64.58:22-139.178.68.195:50100.service: Deactivated successfully.
Sep  4 20:34:30.277822 systemd[1]: session-17.scope: Deactivated successfully.
Sep  4 20:34:30.281401 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit.
Sep  4 20:34:30.283629 systemd-logind[1444]: Removed session 17.
Sep  4 20:34:35.289724 systemd[1]: Started sshd@17-209.38.64.58:22-139.178.68.195:50112.service - OpenSSH per-connection server daemon (139.178.68.195:50112).
Sep  4 20:34:35.338617 sshd[4045]: Accepted publickey for core from 139.178.68.195 port 50112 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:35.341065 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:35.346850 systemd-logind[1444]: New session 18 of user core.
Sep  4 20:34:35.350454 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep  4 20:34:35.494410 sshd[4045]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:35.504238 systemd[1]: sshd@17-209.38.64.58:22-139.178.68.195:50112.service: Deactivated successfully.
Sep  4 20:34:35.507261 systemd[1]: session-18.scope: Deactivated successfully.
Sep  4 20:34:35.510803 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit.
Sep  4 20:34:35.516679 systemd[1]: Started sshd@18-209.38.64.58:22-139.178.68.195:50114.service - OpenSSH per-connection server daemon (139.178.68.195:50114).
Sep  4 20:34:35.518803 systemd-logind[1444]: Removed session 18.
Sep  4 20:34:35.561492 sshd[4058]: Accepted publickey for core from 139.178.68.195 port 50114 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:35.563640 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:35.569400 systemd-logind[1444]: New session 19 of user core.
Sep  4 20:34:35.573368 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep  4 20:34:35.882077 sshd[4058]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:35.892737 systemd[1]: sshd@18-209.38.64.58:22-139.178.68.195:50114.service: Deactivated successfully.
Sep  4 20:34:35.895827 systemd[1]: session-19.scope: Deactivated successfully.
Sep  4 20:34:35.898029 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit.
Sep  4 20:34:35.902510 systemd[1]: Started sshd@19-209.38.64.58:22-139.178.68.195:50124.service - OpenSSH per-connection server daemon (139.178.68.195:50124).
Sep  4 20:34:35.904648 systemd-logind[1444]: Removed session 19.
Sep  4 20:34:35.953709 sshd[4069]: Accepted publickey for core from 139.178.68.195 port 50124 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:35.955563 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:35.961478 systemd-logind[1444]: New session 20 of user core.
Sep  4 20:34:35.967395 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep  4 20:34:37.916725 sshd[4069]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:37.936735 systemd[1]: sshd@19-209.38.64.58:22-139.178.68.195:50124.service: Deactivated successfully.
Sep  4 20:34:37.940189 systemd[1]: session-20.scope: Deactivated successfully.
Sep  4 20:34:37.948769 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit.
Sep  4 20:34:37.959743 systemd[1]: Started sshd@20-209.38.64.58:22-139.178.68.195:40526.service - OpenSSH per-connection server daemon (139.178.68.195:40526).
Sep  4 20:34:37.968828 systemd-logind[1444]: Removed session 20.
Sep  4 20:34:38.040982 sshd[4086]: Accepted publickey for core from 139.178.68.195 port 40526 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:38.043325 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:38.050075 systemd-logind[1444]: New session 21 of user core.
Sep  4 20:34:38.054371 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep  4 20:34:38.411878 sshd[4086]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:38.422959 systemd[1]: sshd@20-209.38.64.58:22-139.178.68.195:40526.service: Deactivated successfully.
Sep  4 20:34:38.425903 systemd[1]: session-21.scope: Deactivated successfully.
Sep  4 20:34:38.428015 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit.
Sep  4 20:34:38.433546 systemd[1]: Started sshd@21-209.38.64.58:22-139.178.68.195:40530.service - OpenSSH per-connection server daemon (139.178.68.195:40530).
Sep  4 20:34:38.435410 systemd-logind[1444]: Removed session 21.
Sep  4 20:34:38.476709 sshd[4098]: Accepted publickey for core from 139.178.68.195 port 40530 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:38.478588 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:38.483987 systemd-logind[1444]: New session 22 of user core.
Sep  4 20:34:38.491447 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep  4 20:34:38.623243 sshd[4098]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:38.626771 systemd[1]: sshd@21-209.38.64.58:22-139.178.68.195:40530.service: Deactivated successfully.
Sep  4 20:34:38.629036 systemd[1]: session-22.scope: Deactivated successfully.
Sep  4 20:34:38.631710 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit.
Sep  4 20:34:38.632867 systemd-logind[1444]: Removed session 22.
Sep  4 20:34:40.658000 kubelet[2527]: E0904 20:34:40.656492    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:34:42.657364 kubelet[2527]: E0904 20:34:42.657240    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:34:43.644625 systemd[1]: Started sshd@22-209.38.64.58:22-139.178.68.195:40542.service - OpenSSH per-connection server daemon (139.178.68.195:40542).
Sep  4 20:34:43.686071 sshd[4111]: Accepted publickey for core from 139.178.68.195 port 40542 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:43.688574 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:43.695439 systemd-logind[1444]: New session 23 of user core.
Sep  4 20:34:43.702523 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep  4 20:34:43.854058 sshd[4111]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:43.860584 systemd[1]: sshd@22-209.38.64.58:22-139.178.68.195:40542.service: Deactivated successfully.
Sep  4 20:34:43.864685 systemd[1]: session-23.scope: Deactivated successfully.
Sep  4 20:34:43.866194 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit.
Sep  4 20:34:43.867462 systemd-logind[1444]: Removed session 23.
Sep  4 20:34:48.875684 systemd[1]: Started sshd@23-209.38.64.58:22-139.178.68.195:52386.service - OpenSSH per-connection server daemon (139.178.68.195:52386).
Sep  4 20:34:48.920854 sshd[4129]: Accepted publickey for core from 139.178.68.195 port 52386 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:48.923093 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:48.928377 systemd-logind[1444]: New session 24 of user core.
Sep  4 20:34:48.935657 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep  4 20:34:49.074459 sshd[4129]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:49.081149 systemd[1]: sshd@23-209.38.64.58:22-139.178.68.195:52386.service: Deactivated successfully.
Sep  4 20:34:49.085621 systemd[1]: session-24.scope: Deactivated successfully.
Sep  4 20:34:49.087721 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit.
Sep  4 20:34:49.089859 systemd-logind[1444]: Removed session 24.
Sep  4 20:34:52.657363 kubelet[2527]: E0904 20:34:52.656006    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:34:54.094552 systemd[1]: Started sshd@24-209.38.64.58:22-139.178.68.195:52392.service - OpenSSH per-connection server daemon (139.178.68.195:52392).
Sep  4 20:34:54.133651 sshd[4141]: Accepted publickey for core from 139.178.68.195 port 52392 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:54.135257 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:54.141265 systemd-logind[1444]: New session 25 of user core.
Sep  4 20:34:54.146376 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep  4 20:34:54.299323 sshd[4141]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:54.305380 systemd[1]: sshd@24-209.38.64.58:22-139.178.68.195:52392.service: Deactivated successfully.
Sep  4 20:34:54.308287 systemd[1]: session-25.scope: Deactivated successfully.
Sep  4 20:34:54.309796 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit.
Sep  4 20:34:54.310950 systemd-logind[1444]: Removed session 25.
Sep  4 20:34:58.657196 kubelet[2527]: E0904 20:34:58.656308    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:34:59.320540 systemd[1]: Started sshd@25-209.38.64.58:22-139.178.68.195:54240.service - OpenSSH per-connection server daemon (139.178.68.195:54240).
Sep  4 20:34:59.360669 sshd[4153]: Accepted publickey for core from 139.178.68.195 port 54240 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:34:59.362261 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:34:59.366884 systemd-logind[1444]: New session 26 of user core.
Sep  4 20:34:59.372398 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep  4 20:34:59.508125 sshd[4153]: pam_unix(sshd:session): session closed for user core
Sep  4 20:34:59.512490 systemd[1]: sshd@25-209.38.64.58:22-139.178.68.195:54240.service: Deactivated successfully.
Sep  4 20:34:59.514854 systemd[1]: session-26.scope: Deactivated successfully.
Sep  4 20:34:59.515562 systemd-logind[1444]: Session 26 logged out. Waiting for processes to exit.
Sep  4 20:34:59.516704 systemd-logind[1444]: Removed session 26.
Sep  4 20:35:00.656513 kubelet[2527]: E0904 20:35:00.656283    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:35:04.529624 systemd[1]: Started sshd@26-209.38.64.58:22-139.178.68.195:54244.service - OpenSSH per-connection server daemon (139.178.68.195:54244).
Sep  4 20:35:04.571987 sshd[4168]: Accepted publickey for core from 139.178.68.195 port 54244 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:35:04.573781 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:35:04.580394 systemd-logind[1444]: New session 27 of user core.
Sep  4 20:35:04.584435 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep  4 20:35:04.737342 sshd[4168]: pam_unix(sshd:session): session closed for user core
Sep  4 20:35:04.742871 systemd[1]: sshd@26-209.38.64.58:22-139.178.68.195:54244.service: Deactivated successfully.
Sep  4 20:35:04.745690 systemd[1]: session-27.scope: Deactivated successfully.
Sep  4 20:35:04.747025 systemd-logind[1444]: Session 27 logged out. Waiting for processes to exit.
Sep  4 20:35:04.748775 systemd-logind[1444]: Removed session 27.
Sep  4 20:35:09.765546 systemd[1]: Started sshd@27-209.38.64.58:22-139.178.68.195:57050.service - OpenSSH per-connection server daemon (139.178.68.195:57050).
Sep  4 20:35:09.808625 sshd[4181]: Accepted publickey for core from 139.178.68.195 port 57050 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:35:09.810338 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:35:09.815401 systemd-logind[1444]: New session 28 of user core.
Sep  4 20:35:09.823428 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep  4 20:35:09.948512 sshd[4181]: pam_unix(sshd:session): session closed for user core
Sep  4 20:35:09.952600 systemd-logind[1444]: Session 28 logged out. Waiting for processes to exit.
Sep  4 20:35:09.953375 systemd[1]: sshd@27-209.38.64.58:22-139.178.68.195:57050.service: Deactivated successfully.
Sep  4 20:35:09.955941 systemd[1]: session-28.scope: Deactivated successfully.
Sep  4 20:35:09.957449 systemd-logind[1444]: Removed session 28.
Sep  4 20:35:14.970482 systemd[1]: Started sshd@28-209.38.64.58:22-139.178.68.195:57060.service - OpenSSH per-connection server daemon (139.178.68.195:57060).
Sep  4 20:35:15.023703 sshd[4195]: Accepted publickey for core from 139.178.68.195 port 57060 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:35:15.025337 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:35:15.030030 systemd-logind[1444]: New session 29 of user core.
Sep  4 20:35:15.036395 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep  4 20:35:15.164659 sshd[4195]: pam_unix(sshd:session): session closed for user core
Sep  4 20:35:15.176660 systemd[1]: sshd@28-209.38.64.58:22-139.178.68.195:57060.service: Deactivated successfully.
Sep  4 20:35:15.179696 systemd[1]: session-29.scope: Deactivated successfully.
Sep  4 20:35:15.181247 systemd-logind[1444]: Session 29 logged out. Waiting for processes to exit.
Sep  4 20:35:15.189670 systemd[1]: Started sshd@29-209.38.64.58:22-139.178.68.195:57068.service - OpenSSH per-connection server daemon (139.178.68.195:57068).
Sep  4 20:35:15.192461 systemd-logind[1444]: Removed session 29.
Sep  4 20:35:15.237479 sshd[4208]: Accepted publickey for core from 139.178.68.195 port 57068 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:35:15.239604 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:35:15.244838 systemd-logind[1444]: New session 30 of user core.
Sep  4 20:35:15.253407 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep  4 20:35:16.754228 systemd[1]: run-containerd-runc-k8s.io-236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d-runc.ii7dCR.mount: Deactivated successfully.
Sep  4 20:35:16.759642 containerd[1467]: time="2024-09-04T20:35:16.759584491Z" level=info msg="StopContainer for \"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1\" with timeout 30 (s)"
Sep  4 20:35:16.771586 containerd[1467]: time="2024-09-04T20:35:16.770456999Z" level=info msg="Stop container \"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1\" with signal terminated"
Sep  4 20:35:16.793442 systemd[1]: cri-containerd-61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1.scope: Deactivated successfully.
Sep  4 20:35:16.809936 containerd[1467]: time="2024-09-04T20:35:16.809742635Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
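
The "failed to reload cni configuration" error above is containerd reacting to the REMOVE event for /etc/cni/net.d/05-cilium.conf: it watches that directory and reloads the CNI config on every filesystem change, and with the cilium config gone there is nothing left to load, hence "cni plugin not initialized" until a config file reappears. A minimal sketch of watching the same directory for remove events, assuming the commonly used github.com/fsnotify/fsnotify package (containerd's own watcher is internal to it):

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Watch the CNI config directory, as containerd does for its config reload.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev := <-w.Events:
			if ev.Op&fsnotify.Remove != 0 {
				log.Printf("cni config removed: %s; a reload now would find no network config", ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
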
Sep  4 20:35:16.815979 containerd[1467]: time="2024-09-04T20:35:16.815677233Z" level=info msg="StopContainer for \"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d\" with timeout 2 (s)"
Sep  4 20:35:16.816477 containerd[1467]: time="2024-09-04T20:35:16.816349405Z" level=info msg="Stop container \"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d\" with signal terminated"
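
The StopContainer entries above spell out the usual graceful-stop contract: SIGTERM first ("with signal terminated"), then the stated grace period (30 s for the first container, 2 s for the second), and SIGKILL only if the process is still running when the timeout expires. The same pattern in miniature against an ordinary child process; the sleep command and the 2-second grace period are only for illustration:

package main

import (
	"log"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for the container's main process
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	// Ask nicely first, as "Stop container ... with signal terminated" does.
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}

	select {
	case err := <-done:
		log.Println("exited within the grace period:", err)
	case <-time.After(2 * time.Second): // the StopContainer timeout
		log.Println("grace period expired, sending SIGKILL")
		_ = cmd.Process.Kill()
		<-done
	}
}
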
Sep  4 20:35:16.836128 systemd-networkd[1346]: lxc_health: Link DOWN
Sep  4 20:35:16.836189 systemd-networkd[1346]: lxc_health: Lost carrier
Sep  4 20:35:16.870720 systemd[1]: cri-containerd-236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d.scope: Deactivated successfully.
Sep  4 20:35:16.872132 systemd[1]: cri-containerd-236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d.scope: Consumed 8.186s CPU time.
Sep  4 20:35:16.880340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1-rootfs.mount: Deactivated successfully.
Sep  4 20:35:16.887284 containerd[1467]: time="2024-09-04T20:35:16.887022354Z" level=info msg="shim disconnected" id=61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1 namespace=k8s.io
Sep  4 20:35:16.887532 containerd[1467]: time="2024-09-04T20:35:16.887214237Z" level=warning msg="cleaning up after shim disconnected" id=61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1 namespace=k8s.io
Sep  4 20:35:16.887532 containerd[1467]: time="2024-09-04T20:35:16.887425691Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 20:35:16.920083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d-rootfs.mount: Deactivated successfully.
Sep  4 20:35:16.922751 containerd[1467]: time="2024-09-04T20:35:16.922552422Z" level=info msg="shim disconnected" id=236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d namespace=k8s.io
Sep  4 20:35:16.922751 containerd[1467]: time="2024-09-04T20:35:16.922745728Z" level=warning msg="cleaning up after shim disconnected" id=236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d namespace=k8s.io
Sep  4 20:35:16.923225 containerd[1467]: time="2024-09-04T20:35:16.922763960Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 20:35:16.940833 containerd[1467]: time="2024-09-04T20:35:16.940367564Z" level=info msg="StopContainer for \"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1\" returns successfully"
Sep  4 20:35:16.941553 containerd[1467]: time="2024-09-04T20:35:16.941353455Z" level=info msg="StopPodSandbox for \"60c84b0d0921e0bb6b68e684278a3e837546a9509049b89ad6b3ce9b54dce34c\""
Sep  4 20:35:16.941553 containerd[1467]: time="2024-09-04T20:35:16.941406457Z" level=info msg="Container to stop \"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep  4 20:35:16.945690 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60c84b0d0921e0bb6b68e684278a3e837546a9509049b89ad6b3ce9b54dce34c-shm.mount: Deactivated successfully.
Sep  4 20:35:16.951232 containerd[1467]: time="2024-09-04T20:35:16.950240554Z" level=warning msg="cleanup warnings time=\"2024-09-04T20:35:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep  4 20:35:16.953846 containerd[1467]: time="2024-09-04T20:35:16.953802621Z" level=info msg="StopContainer for \"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d\" returns successfully"
Sep  4 20:35:16.954707 containerd[1467]: time="2024-09-04T20:35:16.954671921Z" level=info msg="StopPodSandbox for \"b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd\""
Sep  4 20:35:16.954961 containerd[1467]: time="2024-09-04T20:35:16.954902410Z" level=info msg="Container to stop \"17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep  4 20:35:16.955113 containerd[1467]: time="2024-09-04T20:35:16.955098450Z" level=info msg="Container to stop \"20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep  4 20:35:16.955244 containerd[1467]: time="2024-09-04T20:35:16.955229628Z" level=info msg="Container to stop \"878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep  4 20:35:16.955318 containerd[1467]: time="2024-09-04T20:35:16.955299085Z" level=info msg="Container to stop \"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep  4 20:35:16.955372 containerd[1467]: time="2024-09-04T20:35:16.955360626Z" level=info msg="Container to stop \"a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep  4 20:35:16.963083 systemd[1]: cri-containerd-60c84b0d0921e0bb6b68e684278a3e837546a9509049b89ad6b3ce9b54dce34c.scope: Deactivated successfully.
Sep  4 20:35:16.973398 systemd[1]: cri-containerd-b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd.scope: Deactivated successfully.
Sep  4 20:35:17.006600 containerd[1467]: time="2024-09-04T20:35:17.006246163Z" level=info msg="shim disconnected" id=60c84b0d0921e0bb6b68e684278a3e837546a9509049b89ad6b3ce9b54dce34c namespace=k8s.io
Sep  4 20:35:17.006600 containerd[1467]: time="2024-09-04T20:35:17.006322365Z" level=warning msg="cleaning up after shim disconnected" id=60c84b0d0921e0bb6b68e684278a3e837546a9509049b89ad6b3ce9b54dce34c namespace=k8s.io
Sep  4 20:35:17.006600 containerd[1467]: time="2024-09-04T20:35:17.006332864Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 20:35:17.009617 containerd[1467]: time="2024-09-04T20:35:17.009058916Z" level=info msg="shim disconnected" id=b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd namespace=k8s.io
Sep  4 20:35:17.009617 containerd[1467]: time="2024-09-04T20:35:17.009122280Z" level=warning msg="cleaning up after shim disconnected" id=b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd namespace=k8s.io
Sep  4 20:35:17.010034 containerd[1467]: time="2024-09-04T20:35:17.009136045Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 20:35:17.031305 containerd[1467]: time="2024-09-04T20:35:17.031246617Z" level=warning msg="cleanup warnings time=\"2024-09-04T20:35:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep  4 20:35:17.033721 containerd[1467]: time="2024-09-04T20:35:17.033630469Z" level=info msg="TearDown network for sandbox \"60c84b0d0921e0bb6b68e684278a3e837546a9509049b89ad6b3ce9b54dce34c\" successfully"
Sep  4 20:35:17.035068 containerd[1467]: time="2024-09-04T20:35:17.034985463Z" level=info msg="StopPodSandbox for \"60c84b0d0921e0bb6b68e684278a3e837546a9509049b89ad6b3ce9b54dce34c\" returns successfully"
Sep  4 20:35:17.035068 containerd[1467]: time="2024-09-04T20:35:17.034954771Z" level=info msg="TearDown network for sandbox \"b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd\" successfully"
Sep  4 20:35:17.035068 containerd[1467]: time="2024-09-04T20:35:17.035047422Z" level=info msg="StopPodSandbox for \"b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd\" returns successfully"
Sep  4 20:35:17.058874 kubelet[2527]: I0904 20:35:17.058469    2527 scope.go:117] "RemoveContainer" containerID="236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d"
Sep  4 20:35:17.063431 containerd[1467]: time="2024-09-04T20:35:17.062250825Z" level=info msg="RemoveContainer for \"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d\""
Sep  4 20:35:17.088434 containerd[1467]: time="2024-09-04T20:35:17.088354975Z" level=info msg="RemoveContainer for \"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d\" returns successfully"
Sep  4 20:35:17.090123 kubelet[2527]: I0904 20:35:17.089229    2527 scope.go:117] "RemoveContainer" containerID="878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29"
Sep  4 20:35:17.090736 containerd[1467]: time="2024-09-04T20:35:17.090702322Z" level=info msg="RemoveContainer for \"878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29\""
Sep  4 20:35:17.093073 containerd[1467]: time="2024-09-04T20:35:17.093027326Z" level=info msg="RemoveContainer for \"878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29\" returns successfully"
Sep  4 20:35:17.093546 kubelet[2527]: I0904 20:35:17.093333    2527 scope.go:117] "RemoveContainer" containerID="20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de"
Sep  4 20:35:17.095189 containerd[1467]: time="2024-09-04T20:35:17.094824237Z" level=info msg="RemoveContainer for \"20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de\""
Sep  4 20:35:17.097810 containerd[1467]: time="2024-09-04T20:35:17.097764353Z" level=info msg="RemoveContainer for \"20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de\" returns successfully"
Sep  4 20:35:17.098405 kubelet[2527]: I0904 20:35:17.098269    2527 scope.go:117] "RemoveContainer" containerID="a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799"
Sep  4 20:35:17.100185 containerd[1467]: time="2024-09-04T20:35:17.099923907Z" level=info msg="RemoveContainer for \"a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799\""
Sep  4 20:35:17.101813 containerd[1467]: time="2024-09-04T20:35:17.101776100Z" level=info msg="RemoveContainer for \"a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799\" returns successfully"
Sep  4 20:35:17.102169 kubelet[2527]: I0904 20:35:17.102117    2527 scope.go:117] "RemoveContainer" containerID="17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360"
Sep  4 20:35:17.103397 containerd[1467]: time="2024-09-04T20:35:17.103323989Z" level=info msg="RemoveContainer for \"17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360\""
Sep  4 20:35:17.105354 containerd[1467]: time="2024-09-04T20:35:17.105310515Z" level=info msg="RemoveContainer for \"17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360\" returns successfully"
Sep  4 20:35:17.105549 kubelet[2527]: I0904 20:35:17.105514    2527 scope.go:117] "RemoveContainer" containerID="236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d"
Sep  4 20:35:17.105740 containerd[1467]: time="2024-09-04T20:35:17.105698658Z" level=error msg="ContainerStatus for \"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d\": not found"
Sep  4 20:35:17.106038 kubelet[2527]: E0904 20:35:17.105920    2527 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d\": not found" containerID="236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d"
Sep  4 20:35:17.111785 kubelet[2527]: I0904 20:35:17.111729    2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d"} err="failed to get container status \"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d\": rpc error: code = NotFound desc = an error occurred when try to find container \"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d\": not found"
Sep  4 20:35:17.111785 kubelet[2527]: I0904 20:35:17.111780    2527 scope.go:117] "RemoveContainer" containerID="878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29"
Sep  4 20:35:17.112172 containerd[1467]: time="2024-09-04T20:35:17.112105370Z" level=error msg="ContainerStatus for \"878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29\": not found"
Sep  4 20:35:17.112308 kubelet[2527]: E0904 20:35:17.112290    2527 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29\": not found" containerID="878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29"
Sep  4 20:35:17.112379 kubelet[2527]: I0904 20:35:17.112328    2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29"} err="failed to get container status \"878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29\": rpc error: code = NotFound desc = an error occurred when try to find container \"878fc915b3d7fe4ab984be5d1e46f35912c8a4af83f595af3c0ef820300f2b29\": not found"
Sep  4 20:35:17.112379 kubelet[2527]: I0904 20:35:17.112342    2527 scope.go:117] "RemoveContainer" containerID="20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de"
Sep  4 20:35:17.112632 containerd[1467]: time="2024-09-04T20:35:17.112602256Z" level=error msg="ContainerStatus for \"20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de\": not found"
Sep  4 20:35:17.112841 kubelet[2527]: E0904 20:35:17.112823    2527 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de\": not found" containerID="20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de"
Sep  4 20:35:17.112902 kubelet[2527]: I0904 20:35:17.112852    2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de"} err="failed to get container status \"20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de\": rpc error: code = NotFound desc = an error occurred when try to find container \"20200b5d631c041e3ced4b1e648f332a64fbd8eaca0afbd34ea3f21641e441de\": not found"
Sep  4 20:35:17.112902 kubelet[2527]: I0904 20:35:17.112863    2527 scope.go:117] "RemoveContainer" containerID="a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799"
Sep  4 20:35:17.113273 kubelet[2527]: E0904 20:35:17.113137    2527 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799\": not found" containerID="a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799"
Sep  4 20:35:17.113273 kubelet[2527]: I0904 20:35:17.113196    2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799"} err="failed to get container status \"a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799\": not found"
Sep  4 20:35:17.113273 kubelet[2527]: I0904 20:35:17.113206    2527 scope.go:117] "RemoveContainer" containerID="17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360"
Sep  4 20:35:17.113431 containerd[1467]: time="2024-09-04T20:35:17.112995777Z" level=error msg="ContainerStatus for \"a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9a1408777518d02e58f07af78a95a03914d471dacb8be5418d79c25c58f5799\": not found"
Sep  4 20:35:17.113467 containerd[1467]: time="2024-09-04T20:35:17.113421554Z" level=error msg="ContainerStatus for \"17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360\": not found"
Sep  4 20:35:17.113563 kubelet[2527]: E0904 20:35:17.113547    2527 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360\": not found" containerID="17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360"
Sep  4 20:35:17.113632 kubelet[2527]: I0904 20:35:17.113573    2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360"} err="failed to get container status \"17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360\": rpc error: code = NotFound desc = an error occurred when try to find container \"17f7b3ce409395f8038cadd241f0fb2b3b2e93d18539e6e7cc52875f6c7ea360\": not found"
Sep  4 20:35:17.113632 kubelet[2527]: I0904 20:35:17.113583    2527 scope.go:117] "RemoveContainer" containerID="61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1"
Sep  4 20:35:17.115250 containerd[1467]: time="2024-09-04T20:35:17.115179146Z" level=info msg="RemoveContainer for \"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1\""
Sep  4 20:35:17.117655 containerd[1467]: time="2024-09-04T20:35:17.117557110Z" level=info msg="RemoveContainer for \"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1\" returns successfully"
Sep  4 20:35:17.117994 kubelet[2527]: I0904 20:35:17.117806    2527 scope.go:117] "RemoveContainer" containerID="61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1"
Sep  4 20:35:17.118039 containerd[1467]: time="2024-09-04T20:35:17.118014329Z" level=error msg="ContainerStatus for \"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1\": not found"
Sep  4 20:35:17.118184 kubelet[2527]: E0904 20:35:17.118166    2527 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1\": not found" containerID="61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1"
Sep  4 20:35:17.118242 kubelet[2527]: I0904 20:35:17.118205    2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1"} err="failed to get container status \"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"61932c9f89cbd2f88c5b86fe759743214cc5488bdc002d7628add8f45c3a74f1\": not found"
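
The ContainerStatus NotFound errors above are a benign race, not a second failure: each container was removed a few lines earlier, so a status query issued after the removal comes back with gRPC code NotFound, which kubelet logs and then surfaces as "DeleteContainer returned error" without retrying. Callers that query status after a delete usually special-case that code, roughly as sketched here with the standard google.golang.org/grpc/status and codes packages (the simulated error text mirrors the log entries):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether a runtime error just means the container has
// already been removed, as in the ContainerStatus failures above.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Simulated runtime error with the same shape as the log entries.
	err := status.Errorf(codes.NotFound,
		"an error occurred when try to find container %q: not found",
		"236bb302596d47cbfd84d838e98f0f78021f25a2101a542e27eea56f808c641d")

	if alreadyGone(err) {
		fmt.Println("container already removed; nothing to do")
	} else if err != nil {
		fmt.Println("real failure:", err)
	}
}
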
Sep  4 20:35:17.208972 kubelet[2527]: I0904 20:35:17.208925    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cilium-run\") pod \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") "
Sep  4 20:35:17.208972 kubelet[2527]: I0904 20:35:17.209030    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cilium-config-path\") pod \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") "
Sep  4 20:35:17.208972 kubelet[2527]: I0904 20:35:17.209064    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cilium-cgroup\") pod \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") "
Sep  4 20:35:17.208972 kubelet[2527]: I0904 20:35:17.209067    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" (UID: "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 20:35:17.208972 kubelet[2527]: I0904 20:35:17.209093    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-etc-cni-netd\") pod \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") "
Sep  4 20:35:17.208972 kubelet[2527]: I0904 20:35:17.209129    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-hubble-tls\") pod \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") "
Sep  4 20:35:17.211068 kubelet[2527]: I0904 20:35:17.209133    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" (UID: "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 20:35:17.211068 kubelet[2527]: I0904 20:35:17.209200    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-lib-modules\") pod \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") "
Sep  4 20:35:17.211068 kubelet[2527]: I0904 20:35:17.209232    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0-cilium-config-path\") pod \"76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0\" (UID: \"76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0\") "
Sep  4 20:35:17.211068 kubelet[2527]: I0904 20:35:17.209264    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnc5v\" (UniqueName: \"kubernetes.io/projected/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-kube-api-access-dnc5v\") pod \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") "
Sep  4 20:35:17.211068 kubelet[2527]: I0904 20:35:17.209294    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-xtables-lock\") pod \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") "
Sep  4 20:35:17.211068 kubelet[2527]: I0904 20:35:17.209325    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-bpf-maps\") pod \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") "
Sep  4 20:35:17.211389 kubelet[2527]: I0904 20:35:17.209355    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-host-proc-sys-kernel\") pod \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") "
Sep  4 20:35:17.211389 kubelet[2527]: I0904 20:35:17.209385    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-hostproc\") pod \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") "
Sep  4 20:35:17.211389 kubelet[2527]: I0904 20:35:17.209417    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66cs6\" (UniqueName: \"kubernetes.io/projected/76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0-kube-api-access-66cs6\") pod \"76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0\" (UID: \"76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0\") "
Sep  4 20:35:17.211389 kubelet[2527]: I0904 20:35:17.209462    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cni-path\") pod \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") "
Sep  4 20:35:17.211389 kubelet[2527]: I0904 20:35:17.209489    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-host-proc-sys-net\") pod \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") "
Sep  4 20:35:17.211389 kubelet[2527]: I0904 20:35:17.209520    2527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-clustermesh-secrets\") pod \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\" (UID: \"7f337ba1-ab65-48e5-9f50-a2cf1e60a92a\") "
Sep  4 20:35:17.211662 kubelet[2527]: I0904 20:35:17.209574    2527 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cilium-run\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.211662 kubelet[2527]: I0904 20:35:17.209592    2527 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cilium-cgroup\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.211806 kubelet[2527]: I0904 20:35:17.211710    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" (UID: "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep  4 20:35:17.211857 kubelet[2527]: I0904 20:35:17.211811    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" (UID: "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 20:35:17.211903 kubelet[2527]: I0904 20:35:17.211855    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" (UID: "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 20:35:17.213890 kubelet[2527]: I0904 20:35:17.212573    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" (UID: "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 20:35:17.214906 kubelet[2527]: I0904 20:35:17.214491    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" (UID: "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 20:35:17.214906 kubelet[2527]: I0904 20:35:17.214543    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" (UID: "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 20:35:17.214906 kubelet[2527]: I0904 20:35:17.214563    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-hostproc" (OuterVolumeSpecName: "hostproc") pod "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" (UID: "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 20:35:17.215878 kubelet[2527]: I0904 20:35:17.215187    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cni-path" (OuterVolumeSpecName: "cni-path") pod "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" (UID: "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 20:35:17.215878 kubelet[2527]: I0904 20:35:17.215236    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" (UID: "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 20:35:17.219304 kubelet[2527]: I0904 20:35:17.217690    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0" (UID: "76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep  4 20:35:17.222892 kubelet[2527]: I0904 20:35:17.222857    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" (UID: "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep  4 20:35:17.225054 kubelet[2527]: I0904 20:35:17.225023    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" (UID: "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep  4 20:35:17.226188 kubelet[2527]: I0904 20:35:17.226103    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-kube-api-access-dnc5v" (OuterVolumeSpecName: "kube-api-access-dnc5v") pod "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" (UID: "7f337ba1-ab65-48e5-9f50-a2cf1e60a92a"). InnerVolumeSpecName "kube-api-access-dnc5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep  4 20:35:17.226374 kubelet[2527]: I0904 20:35:17.226343    2527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0-kube-api-access-66cs6" (OuterVolumeSpecName: "kube-api-access-66cs6") pod "76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0" (UID: "76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0"). InnerVolumeSpecName "kube-api-access-66cs6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep  4 20:35:17.310884 kubelet[2527]: I0904 20:35:17.310744    2527 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-clustermesh-secrets\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.310884 kubelet[2527]: I0904 20:35:17.310784    2527 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cni-path\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.310884 kubelet[2527]: I0904 20:35:17.310796    2527 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-host-proc-sys-net\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.310884 kubelet[2527]: I0904 20:35:17.310813    2527 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-cilium-config-path\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.310884 kubelet[2527]: I0904 20:35:17.310824    2527 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-hubble-tls\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.310884 kubelet[2527]: I0904 20:35:17.310835    2527 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-etc-cni-netd\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.310884 kubelet[2527]: I0904 20:35:17.310845    2527 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-lib-modules\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.310884 kubelet[2527]: I0904 20:35:17.310855    2527 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-host-proc-sys-kernel\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.311528 kubelet[2527]: I0904 20:35:17.310867    2527 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0-cilium-config-path\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.311775 kubelet[2527]: I0904 20:35:17.311579    2527 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dnc5v\" (UniqueName: \"kubernetes.io/projected/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-kube-api-access-dnc5v\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.311775 kubelet[2527]: I0904 20:35:17.311594    2527 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-xtables-lock\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.311775 kubelet[2527]: I0904 20:35:17.311606    2527 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-bpf-maps\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.311775 kubelet[2527]: I0904 20:35:17.311616    2527 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a-hostproc\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.311775 kubelet[2527]: I0904 20:35:17.311628    2527 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-66cs6\" (UniqueName: \"kubernetes.io/projected/76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0-kube-api-access-66cs6\") on node \"ci-3975.2.1-b-0d33e4c091\" DevicePath \"\""
Sep  4 20:35:17.384707 systemd[1]: Removed slice kubepods-besteffort-pod76ffbe5a_d9ad_4b35_bc49_fad5a9558bb0.slice - libcontainer container kubepods-besteffort-pod76ffbe5a_d9ad_4b35_bc49_fad5a9558bb0.slice.
Sep  4 20:35:17.386167 systemd[1]: Removed slice kubepods-burstable-pod7f337ba1_ab65_48e5_9f50_a2cf1e60a92a.slice - libcontainer container kubepods-burstable-pod7f337ba1_ab65_48e5_9f50_a2cf1e60a92a.slice.
Sep  4 20:35:17.386264 systemd[1]: kubepods-burstable-pod7f337ba1_ab65_48e5_9f50_a2cf1e60a92a.slice: Consumed 8.293s CPU time.
Sep  4 20:35:17.743209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60c84b0d0921e0bb6b68e684278a3e837546a9509049b89ad6b3ce9b54dce34c-rootfs.mount: Deactivated successfully.
Sep  4 20:35:17.743350 systemd[1]: var-lib-kubelet-pods-76ffbe5a\x2dd9ad\x2d4b35\x2dbc49\x2dfad5a9558bb0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d66cs6.mount: Deactivated successfully.
Sep  4 20:35:17.743428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd-rootfs.mount: Deactivated successfully.
Sep  4 20:35:17.743512 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b0d3806e610379d9d91df81a1da776d7057f987e3c9438825a5acd967816dfcd-shm.mount: Deactivated successfully.
Sep  4 20:35:17.743599 systemd[1]: var-lib-kubelet-pods-7f337ba1\x2dab65\x2d48e5\x2d9f50\x2da2cf1e60a92a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep  4 20:35:17.743681 systemd[1]: var-lib-kubelet-pods-7f337ba1\x2dab65\x2d48e5\x2d9f50\x2da2cf1e60a92a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddnc5v.mount: Deactivated successfully.
Sep  4 20:35:17.743756 systemd[1]: var-lib-kubelet-pods-7f337ba1\x2dab65\x2d48e5\x2d9f50\x2da2cf1e60a92a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep  4 20:35:17.852351 kubelet[2527]: E0904 20:35:17.852316    2527 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep  4 20:35:18.658320 kubelet[2527]: I0904 20:35:18.658112    2527 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0" path="/var/lib/kubelet/pods/76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0/volumes"
Sep  4 20:35:18.659620 kubelet[2527]: I0904 20:35:18.659100    2527 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" path="/var/lib/kubelet/pods/7f337ba1-ab65-48e5-9f50-a2cf1e60a92a/volumes"
Sep  4 20:35:18.674927 sshd[4208]: pam_unix(sshd:session): session closed for user core
Sep  4 20:35:18.684371 systemd[1]: sshd@29-209.38.64.58:22-139.178.68.195:57068.service: Deactivated successfully.
Sep  4 20:35:18.687611 systemd[1]: session-30.scope: Deactivated successfully.
Sep  4 20:35:18.691055 systemd-logind[1444]: Session 30 logged out. Waiting for processes to exit.
Sep  4 20:35:18.694682 systemd[1]: Started sshd@30-209.38.64.58:22-139.178.68.195:52954.service - OpenSSH per-connection server daemon (139.178.68.195:52954).
Sep  4 20:35:18.697124 systemd-logind[1444]: Removed session 30.
Sep  4 20:35:18.759425 sshd[4372]: Accepted publickey for core from 139.178.68.195 port 52954 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:35:18.761519 sshd[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:35:18.768307 systemd-logind[1444]: New session 31 of user core.
Sep  4 20:35:18.773524 systemd[1]: Started session-31.scope - Session 31 of User core.
Sep  4 20:35:19.540838 sshd[4372]: pam_unix(sshd:session): session closed for user core
Sep  4 20:35:19.553259 systemd[1]: sshd@30-209.38.64.58:22-139.178.68.195:52954.service: Deactivated successfully.
Sep  4 20:35:19.560245 systemd[1]: session-31.scope: Deactivated successfully.
Sep  4 20:35:19.564528 systemd-logind[1444]: Session 31 logged out. Waiting for processes to exit.
Sep  4 20:35:19.576340 systemd[1]: Started sshd@31-209.38.64.58:22-139.178.68.195:52960.service - OpenSSH per-connection server daemon (139.178.68.195:52960).
Sep  4 20:35:19.580505 systemd-logind[1444]: Removed session 31.
Sep  4 20:35:19.587839 kubelet[2527]: I0904 20:35:19.586281    2527 topology_manager.go:215] "Topology Admit Handler" podUID="3233e9e0-5b38-4497-a070-9db73a48c6fb" podNamespace="kube-system" podName="cilium-pl82b"
Sep  4 20:35:19.587839 kubelet[2527]: E0904 20:35:19.586346    2527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" containerName="apply-sysctl-overwrites"
Sep  4 20:35:19.587839 kubelet[2527]: E0904 20:35:19.586356    2527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0" containerName="cilium-operator"
Sep  4 20:35:19.587839 kubelet[2527]: E0904 20:35:19.586363    2527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" containerName="mount-bpf-fs"
Sep  4 20:35:19.587839 kubelet[2527]: E0904 20:35:19.586370    2527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" containerName="clean-cilium-state"
Sep  4 20:35:19.587839 kubelet[2527]: E0904 20:35:19.586376    2527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" containerName="cilium-agent"
Sep  4 20:35:19.587839 kubelet[2527]: E0904 20:35:19.586383    2527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" containerName="mount-cgroup"
Sep  4 20:35:19.587839 kubelet[2527]: I0904 20:35:19.586408    2527 memory_manager.go:354] "RemoveStaleState removing state" podUID="76ffbe5a-d9ad-4b35-bc49-fad5a9558bb0" containerName="cilium-operator"
Sep  4 20:35:19.587839 kubelet[2527]: I0904 20:35:19.586417    2527 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f337ba1-ab65-48e5-9f50-a2cf1e60a92a" containerName="cilium-agent"
Sep  4 20:35:19.602470 systemd[1]: Created slice kubepods-burstable-pod3233e9e0_5b38_4497_a070_9db73a48c6fb.slice - libcontainer container kubepods-burstable-pod3233e9e0_5b38_4497_a070_9db73a48c6fb.slice.
Sep  4 20:35:19.646186 sshd[4383]: Accepted publickey for core from 139.178.68.195 port 52960 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:35:19.648644 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:35:19.655504 systemd-logind[1444]: New session 32 of user core.
Sep  4 20:35:19.659466 systemd[1]: Started session-32.scope - Session 32 of User core.
Sep  4 20:35:19.721911 sshd[4383]: pam_unix(sshd:session): session closed for user core
Sep  4 20:35:19.725470 kubelet[2527]: I0904 20:35:19.724900    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3233e9e0-5b38-4497-a070-9db73a48c6fb-cilium-ipsec-secrets\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.725470 kubelet[2527]: I0904 20:35:19.724949    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3233e9e0-5b38-4497-a070-9db73a48c6fb-hostproc\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.725470 kubelet[2527]: I0904 20:35:19.724969    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3233e9e0-5b38-4497-a070-9db73a48c6fb-lib-modules\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.725470 kubelet[2527]: I0904 20:35:19.724990    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3233e9e0-5b38-4497-a070-9db73a48c6fb-cilium-config-path\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.725470 kubelet[2527]: I0904 20:35:19.725012    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3233e9e0-5b38-4497-a070-9db73a48c6fb-xtables-lock\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.725470 kubelet[2527]: I0904 20:35:19.725030    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3233e9e0-5b38-4497-a070-9db73a48c6fb-host-proc-sys-net\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.726124 kubelet[2527]: I0904 20:35:19.725133    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3233e9e0-5b38-4497-a070-9db73a48c6fb-bpf-maps\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.726124 kubelet[2527]: I0904 20:35:19.725186    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3233e9e0-5b38-4497-a070-9db73a48c6fb-etc-cni-netd\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.726124 kubelet[2527]: I0904 20:35:19.725207    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3233e9e0-5b38-4497-a070-9db73a48c6fb-clustermesh-secrets\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.726124 kubelet[2527]: I0904 20:35:19.725231    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3233e9e0-5b38-4497-a070-9db73a48c6fb-host-proc-sys-kernel\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.726124 kubelet[2527]: I0904 20:35:19.725256    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3233e9e0-5b38-4497-a070-9db73a48c6fb-cilium-cgroup\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.726124 kubelet[2527]: I0904 20:35:19.725275    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p95c\" (UniqueName: \"kubernetes.io/projected/3233e9e0-5b38-4497-a070-9db73a48c6fb-kube-api-access-9p95c\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.726337 kubelet[2527]: I0904 20:35:19.725300    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3233e9e0-5b38-4497-a070-9db73a48c6fb-cilium-run\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.726337 kubelet[2527]: I0904 20:35:19.725322    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3233e9e0-5b38-4497-a070-9db73a48c6fb-cni-path\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.726337 kubelet[2527]: I0904 20:35:19.725340    2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3233e9e0-5b38-4497-a070-9db73a48c6fb-hubble-tls\") pod \"cilium-pl82b\" (UID: \"3233e9e0-5b38-4497-a070-9db73a48c6fb\") " pod="kube-system/cilium-pl82b"
Sep  4 20:35:19.732776 systemd[1]: sshd@31-209.38.64.58:22-139.178.68.195:52960.service: Deactivated successfully.
Sep  4 20:35:19.735257 systemd[1]: session-32.scope: Deactivated successfully.
Sep  4 20:35:19.736824 systemd-logind[1444]: Session 32 logged out. Waiting for processes to exit.
Sep  4 20:35:19.746672 systemd[1]: Started sshd@32-209.38.64.58:22-139.178.68.195:52976.service - OpenSSH per-connection server daemon (139.178.68.195:52976).
Sep  4 20:35:19.749055 systemd-logind[1444]: Removed session 32.
Sep  4 20:35:19.792180 sshd[4391]: Accepted publickey for core from 139.178.68.195 port 52976 ssh2: RSA SHA256:6m86ErQYPfwi49NZRVftW/USO9k3FxgPtHd71f+HMpY
Sep  4 20:35:19.794167 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 20:35:19.801262 systemd-logind[1444]: New session 33 of user core.
Sep  4 20:35:19.813508 systemd[1]: Started session-33.scope - Session 33 of User core.
Sep  4 20:35:19.913967 kubelet[2527]: E0904 20:35:19.913699    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:35:19.915174 containerd[1467]: time="2024-09-04T20:35:19.914415760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pl82b,Uid:3233e9e0-5b38-4497-a070-9db73a48c6fb,Namespace:kube-system,Attempt:0,}"
Sep  4 20:35:19.952543 containerd[1467]: time="2024-09-04T20:35:19.952416853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 20:35:19.952714 containerd[1467]: time="2024-09-04T20:35:19.952563391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:35:19.952714 containerd[1467]: time="2024-09-04T20:35:19.952597179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 20:35:19.952714 containerd[1467]: time="2024-09-04T20:35:19.952634444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 20:35:19.989364 systemd[1]: Started cri-containerd-2cf70ab7e827f7daa3dfdb55616ebf44d5ecf7991dc9a241792d91c7e26cfa3d.scope - libcontainer container 2cf70ab7e827f7daa3dfdb55616ebf44d5ecf7991dc9a241792d91c7e26cfa3d.
Sep  4 20:35:20.023453 containerd[1467]: time="2024-09-04T20:35:20.023129953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pl82b,Uid:3233e9e0-5b38-4497-a070-9db73a48c6fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cf70ab7e827f7daa3dfdb55616ebf44d5ecf7991dc9a241792d91c7e26cfa3d\""
Sep  4 20:35:20.024440 kubelet[2527]: E0904 20:35:20.024307    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:35:20.030083 containerd[1467]: time="2024-09-04T20:35:20.029084486Z" level=info msg="CreateContainer within sandbox \"2cf70ab7e827f7daa3dfdb55616ebf44d5ecf7991dc9a241792d91c7e26cfa3d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep  4 20:35:20.050673 containerd[1467]: time="2024-09-04T20:35:20.050444961Z" level=info msg="CreateContainer within sandbox \"2cf70ab7e827f7daa3dfdb55616ebf44d5ecf7991dc9a241792d91c7e26cfa3d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3d8cf11032bdb5cb22b752ca108491a18b3cc454c631c2e5d91565e8d2270553\""
Sep  4 20:35:20.052026 containerd[1467]: time="2024-09-04T20:35:20.051955954Z" level=info msg="StartContainer for \"3d8cf11032bdb5cb22b752ca108491a18b3cc454c631c2e5d91565e8d2270553\""
Sep  4 20:35:20.088564 systemd[1]: Started cri-containerd-3d8cf11032bdb5cb22b752ca108491a18b3cc454c631c2e5d91565e8d2270553.scope - libcontainer container 3d8cf11032bdb5cb22b752ca108491a18b3cc454c631c2e5d91565e8d2270553.
Sep  4 20:35:20.124291 containerd[1467]: time="2024-09-04T20:35:20.124244672Z" level=info msg="StartContainer for \"3d8cf11032bdb5cb22b752ca108491a18b3cc454c631c2e5d91565e8d2270553\" returns successfully"
Sep  4 20:35:20.139127 systemd[1]: cri-containerd-3d8cf11032bdb5cb22b752ca108491a18b3cc454c631c2e5d91565e8d2270553.scope: Deactivated successfully.
Sep  4 20:35:20.180305 containerd[1467]: time="2024-09-04T20:35:20.180226377Z" level=info msg="shim disconnected" id=3d8cf11032bdb5cb22b752ca108491a18b3cc454c631c2e5d91565e8d2270553 namespace=k8s.io
Sep  4 20:35:20.180305 containerd[1467]: time="2024-09-04T20:35:20.180296961Z" level=warning msg="cleaning up after shim disconnected" id=3d8cf11032bdb5cb22b752ca108491a18b3cc454c631c2e5d91565e8d2270553 namespace=k8s.io
Sep  4 20:35:20.180639 containerd[1467]: time="2024-09-04T20:35:20.180326618Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 20:35:21.079869 kubelet[2527]: E0904 20:35:21.079628    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:35:21.083995 containerd[1467]: time="2024-09-04T20:35:21.083803267Z" level=info msg="CreateContainer within sandbox \"2cf70ab7e827f7daa3dfdb55616ebf44d5ecf7991dc9a241792d91c7e26cfa3d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep  4 20:35:21.106115 containerd[1467]: time="2024-09-04T20:35:21.106016654Z" level=info msg="CreateContainer within sandbox \"2cf70ab7e827f7daa3dfdb55616ebf44d5ecf7991dc9a241792d91c7e26cfa3d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fcad76bc8d27fccdbcb9699401d4081d5943b1cb2b859396700e6143e982022c\""
Sep  4 20:35:21.108438 containerd[1467]: time="2024-09-04T20:35:21.107424809Z" level=info msg="StartContainer for \"fcad76bc8d27fccdbcb9699401d4081d5943b1cb2b859396700e6143e982022c\""
Sep  4 20:35:21.145441 systemd[1]: Started cri-containerd-fcad76bc8d27fccdbcb9699401d4081d5943b1cb2b859396700e6143e982022c.scope - libcontainer container fcad76bc8d27fccdbcb9699401d4081d5943b1cb2b859396700e6143e982022c.
Sep  4 20:35:21.178267 containerd[1467]: time="2024-09-04T20:35:21.178223787Z" level=info msg="StartContainer for \"fcad76bc8d27fccdbcb9699401d4081d5943b1cb2b859396700e6143e982022c\" returns successfully"
Sep  4 20:35:21.187851 systemd[1]: cri-containerd-fcad76bc8d27fccdbcb9699401d4081d5943b1cb2b859396700e6143e982022c.scope: Deactivated successfully.
Sep  4 20:35:21.215804 containerd[1467]: time="2024-09-04T20:35:21.215732776Z" level=info msg="shim disconnected" id=fcad76bc8d27fccdbcb9699401d4081d5943b1cb2b859396700e6143e982022c namespace=k8s.io
Sep  4 20:35:21.215804 containerd[1467]: time="2024-09-04T20:35:21.215791697Z" level=warning msg="cleaning up after shim disconnected" id=fcad76bc8d27fccdbcb9699401d4081d5943b1cb2b859396700e6143e982022c namespace=k8s.io
Sep  4 20:35:21.215804 containerd[1467]: time="2024-09-04T20:35:21.215800395Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 20:35:21.839453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcad76bc8d27fccdbcb9699401d4081d5943b1cb2b859396700e6143e982022c-rootfs.mount: Deactivated successfully.
Sep  4 20:35:22.087479 kubelet[2527]: E0904 20:35:22.087413    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:35:22.090303 containerd[1467]: time="2024-09-04T20:35:22.090032255Z" level=info msg="CreateContainer within sandbox \"2cf70ab7e827f7daa3dfdb55616ebf44d5ecf7991dc9a241792d91c7e26cfa3d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep  4 20:35:22.107528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3080888616.mount: Deactivated successfully.
Sep  4 20:35:22.116639 containerd[1467]: time="2024-09-04T20:35:22.116573577Z" level=info msg="CreateContainer within sandbox \"2cf70ab7e827f7daa3dfdb55616ebf44d5ecf7991dc9a241792d91c7e26cfa3d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d80de5ca7902f60f17cab320d7a844ad238f99aa000ed3e67a9ed0f440c9ebaa\""
Sep  4 20:35:22.118003 containerd[1467]: time="2024-09-04T20:35:22.117972640Z" level=info msg="StartContainer for \"d80de5ca7902f60f17cab320d7a844ad238f99aa000ed3e67a9ed0f440c9ebaa\""
Sep  4 20:35:22.156434 systemd[1]: Started cri-containerd-d80de5ca7902f60f17cab320d7a844ad238f99aa000ed3e67a9ed0f440c9ebaa.scope - libcontainer container d80de5ca7902f60f17cab320d7a844ad238f99aa000ed3e67a9ed0f440c9ebaa.
Sep  4 20:35:22.189291 containerd[1467]: time="2024-09-04T20:35:22.189245851Z" level=info msg="StartContainer for \"d80de5ca7902f60f17cab320d7a844ad238f99aa000ed3e67a9ed0f440c9ebaa\" returns successfully"
Sep  4 20:35:22.196606 systemd[1]: cri-containerd-d80de5ca7902f60f17cab320d7a844ad238f99aa000ed3e67a9ed0f440c9ebaa.scope: Deactivated successfully.
Sep  4 20:35:22.225366 containerd[1467]: time="2024-09-04T20:35:22.225045850Z" level=info msg="shim disconnected" id=d80de5ca7902f60f17cab320d7a844ad238f99aa000ed3e67a9ed0f440c9ebaa namespace=k8s.io
Sep  4 20:35:22.225366 containerd[1467]: time="2024-09-04T20:35:22.225099941Z" level=warning msg="cleaning up after shim disconnected" id=d80de5ca7902f60f17cab320d7a844ad238f99aa000ed3e67a9ed0f440c9ebaa namespace=k8s.io
Sep  4 20:35:22.225366 containerd[1467]: time="2024-09-04T20:35:22.225108083Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 20:35:22.242051 containerd[1467]: time="2024-09-04T20:35:22.240663854Z" level=warning msg="cleanup warnings time=\"2024-09-04T20:35:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep  4 20:35:22.839384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d80de5ca7902f60f17cab320d7a844ad238f99aa000ed3e67a9ed0f440c9ebaa-rootfs.mount: Deactivated successfully.
Sep  4 20:35:22.855038 kubelet[2527]: E0904 20:35:22.854920    2527 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep  4 20:35:23.092243 kubelet[2527]: E0904 20:35:23.091492    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:35:23.097447 containerd[1467]: time="2024-09-04T20:35:23.097295604Z" level=info msg="CreateContainer within sandbox \"2cf70ab7e827f7daa3dfdb55616ebf44d5ecf7991dc9a241792d91c7e26cfa3d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep  4 20:35:23.118368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4043908367.mount: Deactivated successfully.
Sep  4 20:35:23.120184 containerd[1467]: time="2024-09-04T20:35:23.120002553Z" level=info msg="CreateContainer within sandbox \"2cf70ab7e827f7daa3dfdb55616ebf44d5ecf7991dc9a241792d91c7e26cfa3d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"41b5471d1eb708c2ae7781196ea61d244865678b09d05b3fc2e89b034af2d027\""
Sep  4 20:35:23.122187 containerd[1467]: time="2024-09-04T20:35:23.120868701Z" level=info msg="StartContainer for \"41b5471d1eb708c2ae7781196ea61d244865678b09d05b3fc2e89b034af2d027\""
Sep  4 20:35:23.162416 systemd[1]: Started cri-containerd-41b5471d1eb708c2ae7781196ea61d244865678b09d05b3fc2e89b034af2d027.scope - libcontainer container 41b5471d1eb708c2ae7781196ea61d244865678b09d05b3fc2e89b034af2d027.
Sep  4 20:35:23.201866 systemd[1]: cri-containerd-41b5471d1eb708c2ae7781196ea61d244865678b09d05b3fc2e89b034af2d027.scope: Deactivated successfully.
Sep  4 20:35:23.207117 containerd[1467]: time="2024-09-04T20:35:23.206976689Z" level=info msg="StartContainer for \"41b5471d1eb708c2ae7781196ea61d244865678b09d05b3fc2e89b034af2d027\" returns successfully"
Sep  4 20:35:23.242812 containerd[1467]: time="2024-09-04T20:35:23.242734812Z" level=info msg="shim disconnected" id=41b5471d1eb708c2ae7781196ea61d244865678b09d05b3fc2e89b034af2d027 namespace=k8s.io
Sep  4 20:35:23.242812 containerd[1467]: time="2024-09-04T20:35:23.242805384Z" level=warning msg="cleaning up after shim disconnected" id=41b5471d1eb708c2ae7781196ea61d244865678b09d05b3fc2e89b034af2d027 namespace=k8s.io
Sep  4 20:35:23.242812 containerd[1467]: time="2024-09-04T20:35:23.242814087Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 20:35:23.270988 containerd[1467]: time="2024-09-04T20:35:23.270920372Z" level=warning msg="cleanup warnings time=\"2024-09-04T20:35:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep  4 20:35:23.839548 systemd[1]: run-containerd-runc-k8s.io-41b5471d1eb708c2ae7781196ea61d244865678b09d05b3fc2e89b034af2d027-runc.n1a8Ok.mount: Deactivated successfully.
Sep  4 20:35:23.839702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41b5471d1eb708c2ae7781196ea61d244865678b09d05b3fc2e89b034af2d027-rootfs.mount: Deactivated successfully.
Sep  4 20:35:24.098297 kubelet[2527]: E0904 20:35:24.098090    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:35:24.102102 containerd[1467]: time="2024-09-04T20:35:24.101819147Z" level=info msg="CreateContainer within sandbox \"2cf70ab7e827f7daa3dfdb55616ebf44d5ecf7991dc9a241792d91c7e26cfa3d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep  4 20:35:24.136174 containerd[1467]: time="2024-09-04T20:35:24.134656800Z" level=info msg="CreateContainer within sandbox \"2cf70ab7e827f7daa3dfdb55616ebf44d5ecf7991dc9a241792d91c7e26cfa3d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c390f430c8b4010f55fcc89da75c49ddabc780b983f9ef13f211c61cdb7233ed\""
Sep  4 20:35:24.137079 containerd[1467]: time="2024-09-04T20:35:24.137034081Z" level=info msg="StartContainer for \"c390f430c8b4010f55fcc89da75c49ddabc780b983f9ef13f211c61cdb7233ed\""
Sep  4 20:35:24.189485 systemd[1]: Started cri-containerd-c390f430c8b4010f55fcc89da75c49ddabc780b983f9ef13f211c61cdb7233ed.scope - libcontainer container c390f430c8b4010f55fcc89da75c49ddabc780b983f9ef13f211c61cdb7233ed.
Sep  4 20:35:24.234058 containerd[1467]: time="2024-09-04T20:35:24.233687529Z" level=info msg="StartContainer for \"c390f430c8b4010f55fcc89da75c49ddabc780b983f9ef13f211c61cdb7233ed\" returns successfully"
Sep  4 20:35:24.674395 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep  4 20:35:25.102926 kubelet[2527]: E0904 20:35:25.102890    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:35:25.120023 kubelet[2527]: I0904 20:35:25.119907    2527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pl82b" podStartSLOduration=6.119120566 podStartE2EDuration="6.119120566s" podCreationTimestamp="2024-09-04 20:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 20:35:25.118307553 +0000 UTC m=+142.652430595" watchObservedRunningTime="2024-09-04 20:35:25.119120566 +0000 UTC m=+142.653243604"
Sep  4 20:35:26.105698 kubelet[2527]: E0904 20:35:26.105056    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:35:26.184865 kubelet[2527]: I0904 20:35:26.183722    2527 setters.go:568] "Node became not ready" node="ci-3975.2.1-b-0d33e4c091" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-04T20:35:26Z","lastTransitionTime":"2024-09-04T20:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep  4 20:35:27.107300 kubelet[2527]: E0904 20:35:27.107264    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:35:27.909776 systemd-networkd[1346]: lxc_health: Link UP
Sep  4 20:35:27.916528 systemd-networkd[1346]: lxc_health: Gained carrier
Sep  4 20:35:28.108954 kubelet[2527]: E0904 20:35:28.108852    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:35:28.483485 systemd[1]: run-containerd-runc-k8s.io-c390f430c8b4010f55fcc89da75c49ddabc780b983f9ef13f211c61cdb7233ed-runc.doRFb2.mount: Deactivated successfully.
Sep  4 20:35:29.113236 kubelet[2527]: E0904 20:35:29.113197    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:35:29.780030 systemd-networkd[1346]: lxc_health: Gained IPv6LL
Sep  4 20:35:30.115808 kubelet[2527]: E0904 20:35:30.115745    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:35:31.656270 kubelet[2527]: E0904 20:35:31.656236    2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep  4 20:35:33.260136 systemd[1]: run-containerd-runc-k8s.io-c390f430c8b4010f55fcc89da75c49ddabc780b983f9ef13f211c61cdb7233ed-runc.hudbAm.mount: Deactivated successfully.
Sep  4 20:35:37.783847 sshd[4391]: pam_unix(sshd:session): session closed for user core
Sep  4 20:35:37.788124 systemd[1]: sshd@32-209.38.64.58:22-139.178.68.195:52976.service: Deactivated successfully.
Sep  4 20:35:37.790555 systemd[1]: session-33.scope: Deactivated successfully.
Sep  4 20:35:37.793309 systemd-logind[1444]: Session 33 logged out. Waiting for processes to exit.
Sep  4 20:35:37.794597 systemd-logind[1444]: Removed session 33.