Sep  4 17:19:43.044092 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep  4 15:49:08 -00 2024
Sep  4 17:19:43.044131 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep  4 17:19:43.044147 kernel: BIOS-provided physical RAM map:
Sep  4 17:19:43.044158 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep  4 17:19:43.044168 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep  4 17:19:43.044178 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep  4 17:19:43.044196 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Sep  4 17:19:43.044208 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Sep  4 17:19:43.044219 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Sep  4 17:19:43.044231 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep  4 17:19:43.044242 kernel: NX (Execute Disable) protection: active
Sep  4 17:19:43.044253 kernel: APIC: Static calls initialized
Sep  4 17:19:43.044318 kernel: SMBIOS 2.7 present.
Sep  4 17:19:43.044334 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep  4 17:19:43.044352 kernel: Hypervisor detected: KVM
Sep  4 17:19:43.044365 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep  4 17:19:43.044378 kernel: kvm-clock: using sched offset of 5953925578 cycles
Sep  4 17:19:43.044392 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep  4 17:19:43.044405 kernel: tsc: Detected 2499.996 MHz processor
Sep  4 17:19:43.044418 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep  4 17:19:43.044431 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep  4 17:19:43.044448 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Sep  4 17:19:43.044461 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep  4 17:19:43.044473 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Sep  4 17:19:43.044486 kernel: Using GB pages for direct mapping
Sep  4 17:19:43.044499 kernel: ACPI: Early table checksum verification disabled
Sep  4 17:19:43.044536 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Sep  4 17:19:43.044548 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Sep  4 17:19:43.044560 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep  4 17:19:43.044571 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep  4 17:19:43.044588 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Sep  4 17:19:43.044602 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep  4 17:19:43.044614 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep  4 17:19:43.044626 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep  4 17:19:43.044638 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep  4 17:19:43.044650 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep  4 17:19:43.044663 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep  4 17:19:43.044676 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep  4 17:19:43.044692 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Sep  4 17:19:43.044706 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Sep  4 17:19:43.044726 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Sep  4 17:19:43.044741 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Sep  4 17:19:43.044756 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Sep  4 17:19:43.044770 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Sep  4 17:19:43.044789 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Sep  4 17:19:43.044804 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Sep  4 17:19:43.044820 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Sep  4 17:19:43.044834 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Sep  4 17:19:43.044847 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep  4 17:19:43.044861 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep  4 17:19:43.044877 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep  4 17:19:43.044892 kernel: NUMA: Initialized distance table, cnt=1
Sep  4 17:19:43.044907 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Sep  4 17:19:43.044925 kernel: Zone ranges:
Sep  4 17:19:43.044941 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Sep  4 17:19:43.044956 kernel:   DMA32    [mem 0x0000000001000000-0x000000007d9e9fff]
Sep  4 17:19:43.044971 kernel:   Normal   empty
Sep  4 17:19:43.044986 kernel: Movable zone start for each node
Sep  4 17:19:43.045001 kernel: Early memory node ranges
Sep  4 17:19:43.045016 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Sep  4 17:19:43.045029 kernel:   node   0: [mem 0x0000000000100000-0x000000007d9e9fff]
Sep  4 17:19:43.045041 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Sep  4 17:19:43.045057 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep  4 17:19:43.045071 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep  4 17:19:43.045084 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Sep  4 17:19:43.045097 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep  4 17:19:43.045110 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep  4 17:19:43.045123 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep  4 17:19:43.045137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep  4 17:19:43.045151 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep  4 17:19:43.045165 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep  4 17:19:43.045183 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep  4 17:19:43.045197 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep  4 17:19:43.045210 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep  4 17:19:43.045225 kernel: TSC deadline timer available
Sep  4 17:19:43.045239 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep  4 17:19:43.045253 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep  4 17:19:43.045325 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Sep  4 17:19:43.045341 kernel: Booting paravirtualized kernel on KVM
Sep  4 17:19:43.045356 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep  4 17:19:43.045370 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep  4 17:19:43.045389 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Sep  4 17:19:43.045402 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Sep  4 17:19:43.045416 kernel: pcpu-alloc: [0] 0 1 
Sep  4 17:19:43.045432 kernel: kvm-guest: PV spinlocks enabled
Sep  4 17:19:43.045445 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep  4 17:19:43.045461 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep  4 17:19:43.045476 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep  4 17:19:43.045489 kernel: random: crng init done
Sep  4 17:19:43.045527 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep  4 17:19:43.045538 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep  4 17:19:43.045550 kernel: Fallback order for Node 0: 0 
Sep  4 17:19:43.045562 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 506242
Sep  4 17:19:43.045574 kernel: Policy zone: DMA32
Sep  4 17:19:43.045585 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep  4 17:19:43.045597 kernel: Memory: 1926204K/2057760K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 131296K reserved, 0K cma-reserved)
Sep  4 17:19:43.045610 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep  4 17:19:43.046938 kernel: Kernel/User page tables isolation: enabled
Sep  4 17:19:43.046960 kernel: ftrace: allocating 37670 entries in 148 pages
Sep  4 17:19:43.046976 kernel: ftrace: allocated 148 pages with 3 groups
Sep  4 17:19:43.046992 kernel: Dynamic Preempt: voluntary
Sep  4 17:19:43.047007 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep  4 17:19:43.047024 kernel: rcu:         RCU event tracing is enabled.
Sep  4 17:19:43.047038 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep  4 17:19:43.047053 kernel:         Trampoline variant of Tasks RCU enabled.
Sep  4 17:19:43.047068 kernel:         Rude variant of Tasks RCU enabled.
Sep  4 17:19:43.047083 kernel:         Tracing variant of Tasks RCU enabled.
Sep  4 17:19:43.047103 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep  4 17:19:43.047119 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep  4 17:19:43.047196 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep  4 17:19:43.047214 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep  4 17:19:43.047230 kernel: Console: colour VGA+ 80x25
Sep  4 17:19:43.047244 kernel: printk: console [ttyS0] enabled
Sep  4 17:19:43.047259 kernel: ACPI: Core revision 20230628
Sep  4 17:19:43.047314 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep  4 17:19:43.047331 kernel: APIC: Switch to symmetric I/O mode setup
Sep  4 17:19:43.047350 kernel: x2apic enabled
Sep  4 17:19:43.047366 kernel: APIC: Switched APIC routing to: physical x2apic
Sep  4 17:19:43.047392 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Sep  4 17:19:43.047412 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Sep  4 17:19:43.047427 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep  4 17:19:43.047443 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Sep  4 17:19:43.047458 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep  4 17:19:43.047473 kernel: Spectre V2 : Mitigation: Retpolines
Sep  4 17:19:43.047489 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep  4 17:19:43.047521 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep  4 17:19:43.047535 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep  4 17:19:43.047547 kernel: RETBleed: Vulnerable
Sep  4 17:19:43.047562 kernel: Speculative Store Bypass: Vulnerable
Sep  4 17:19:43.047574 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep  4 17:19:43.047587 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep  4 17:19:43.047600 kernel: GDS: Unknown: Dependent on hypervisor status
Sep  4 17:19:43.047614 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep  4 17:19:43.047626 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep  4 17:19:43.047642 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep  4 17:19:43.047654 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep  4 17:19:43.047666 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep  4 17:19:43.047679 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep  4 17:19:43.047697 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep  4 17:19:43.047710 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep  4 17:19:43.047724 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep  4 17:19:43.047738 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Sep  4 17:19:43.047754 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Sep  4 17:19:43.047769 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Sep  4 17:19:43.047785 kernel: x86/fpu: xstate_offset[5]:  960, xstate_sizes[5]:   64
Sep  4 17:19:43.047804 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]:  512
Sep  4 17:19:43.047820 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep  4 17:19:43.047835 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]:    8
Sep  4 17:19:43.047851 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep  4 17:19:43.047867 kernel: Freeing SMP alternatives memory: 32K
Sep  4 17:19:43.047953 kernel: pid_max: default: 32768 minimum: 301
Sep  4 17:19:43.047972 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep  4 17:19:43.047989 kernel: SELinux:  Initializing.
Sep  4 17:19:43.048005 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep  4 17:19:43.048021 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep  4 17:19:43.048037 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep  4 17:19:43.048053 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep  4 17:19:43.048073 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep  4 17:19:43.048089 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep  4 17:19:43.048104 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep  4 17:19:43.048120 kernel: signal: max sigframe size: 3632
Sep  4 17:19:43.048136 kernel: rcu: Hierarchical SRCU implementation.
Sep  4 17:19:43.048153 kernel: rcu:         Max phase no-delay instances is 400.
Sep  4 17:19:43.048169 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep  4 17:19:43.048185 kernel: smp: Bringing up secondary CPUs ...
Sep  4 17:19:43.048201 kernel: smpboot: x86: Booting SMP configuration:
Sep  4 17:19:43.048220 kernel: .... node  #0, CPUs:      #1
Sep  4 17:19:43.048237 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep  4 17:19:43.048254 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep  4 17:19:43.048313 kernel: smp: Brought up 1 node, 2 CPUs
Sep  4 17:19:43.048331 kernel: smpboot: Max logical packages: 1
Sep  4 17:19:43.048349 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Sep  4 17:19:43.048365 kernel: devtmpfs: initialized
Sep  4 17:19:43.048381 kernel: x86/mm: Memory block size: 128MB
Sep  4 17:19:43.048401 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep  4 17:19:43.048417 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep  4 17:19:43.048433 kernel: pinctrl core: initialized pinctrl subsystem
Sep  4 17:19:43.048449 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep  4 17:19:43.048465 kernel: audit: initializing netlink subsys (disabled)
Sep  4 17:19:43.048480 kernel: audit: type=2000 audit(1725470381.977:1): state=initialized audit_enabled=0 res=1
Sep  4 17:19:43.048496 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep  4 17:19:43.048536 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep  4 17:19:43.048549 kernel: cpuidle: using governor menu
Sep  4 17:19:43.048569 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep  4 17:19:43.048585 kernel: dca service started, version 1.12.1
Sep  4 17:19:43.048601 kernel: PCI: Using configuration type 1 for base access
Sep  4 17:19:43.048617 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep  4 17:19:43.048633 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep  4 17:19:43.048650 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep  4 17:19:43.048666 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep  4 17:19:43.048682 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep  4 17:19:43.048697 kernel: ACPI: Added _OSI(Module Device)
Sep  4 17:19:43.048716 kernel: ACPI: Added _OSI(Processor Device)
Sep  4 17:19:43.048732 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep  4 17:19:43.048748 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep  4 17:19:43.048764 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep  4 17:19:43.048780 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep  4 17:19:43.048796 kernel: ACPI: Interpreter enabled
Sep  4 17:19:43.048812 kernel: ACPI: PM: (supports S0 S5)
Sep  4 17:19:43.048922 kernel: ACPI: Using IOAPIC for interrupt routing
Sep  4 17:19:43.048940 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep  4 17:19:43.048961 kernel: PCI: Using E820 reservations for host bridge windows
Sep  4 17:19:43.049082 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep  4 17:19:43.049101 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep  4 17:19:43.049430 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep  4 17:19:43.049613 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep  4 17:19:43.049752 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep  4 17:19:43.049773 kernel: acpiphp: Slot [3] registered
Sep  4 17:19:43.049794 kernel: acpiphp: Slot [4] registered
Sep  4 17:19:43.049810 kernel: acpiphp: Slot [5] registered
Sep  4 17:19:43.049826 kernel: acpiphp: Slot [6] registered
Sep  4 17:19:43.049842 kernel: acpiphp: Slot [7] registered
Sep  4 17:19:43.049858 kernel: acpiphp: Slot [8] registered
Sep  4 17:19:43.049874 kernel: acpiphp: Slot [9] registered
Sep  4 17:19:43.049890 kernel: acpiphp: Slot [10] registered
Sep  4 17:19:43.049906 kernel: acpiphp: Slot [11] registered
Sep  4 17:19:43.049921 kernel: acpiphp: Slot [12] registered
Sep  4 17:19:43.050059 kernel: acpiphp: Slot [13] registered
Sep  4 17:19:43.050081 kernel: acpiphp: Slot [14] registered
Sep  4 17:19:43.050097 kernel: acpiphp: Slot [15] registered
Sep  4 17:19:43.050113 kernel: acpiphp: Slot [16] registered
Sep  4 17:19:43.050129 kernel: acpiphp: Slot [17] registered
Sep  4 17:19:43.050144 kernel: acpiphp: Slot [18] registered
Sep  4 17:19:43.050160 kernel: acpiphp: Slot [19] registered
Sep  4 17:19:43.050176 kernel: acpiphp: Slot [20] registered
Sep  4 17:19:43.050192 kernel: acpiphp: Slot [21] registered
Sep  4 17:19:43.050207 kernel: acpiphp: Slot [22] registered
Sep  4 17:19:43.050226 kernel: acpiphp: Slot [23] registered
Sep  4 17:19:43.050242 kernel: acpiphp: Slot [24] registered
Sep  4 17:19:43.050257 kernel: acpiphp: Slot [25] registered
Sep  4 17:19:43.050326 kernel: acpiphp: Slot [26] registered
Sep  4 17:19:43.050345 kernel: acpiphp: Slot [27] registered
Sep  4 17:19:43.050362 kernel: acpiphp: Slot [28] registered
Sep  4 17:19:43.050377 kernel: acpiphp: Slot [29] registered
Sep  4 17:19:43.050393 kernel: acpiphp: Slot [30] registered
Sep  4 17:19:43.050409 kernel: acpiphp: Slot [31] registered
Sep  4 17:19:43.050424 kernel: PCI host bridge to bus 0000:00
Sep  4 17:19:43.050606 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Sep  4 17:19:43.050737 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Sep  4 17:19:43.050855 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep  4 17:19:43.050967 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep  4 17:19:43.051082 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep  4 17:19:43.051693 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep  4 17:19:43.051870 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep  4 17:19:43.052025 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep  4 17:19:43.052165 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Sep  4 17:19:43.052362 kernel: pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
Sep  4 17:19:43.052517 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep  4 17:19:43.052662 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep  4 17:19:43.052801 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep  4 17:19:43.052938 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep  4 17:19:43.053071 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep  4 17:19:43.053201 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep  4 17:19:43.053383 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 12695 usecs
Sep  4 17:19:43.053539 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep  4 17:19:43.053665 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Sep  4 17:19:43.054836 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep  4 17:19:43.054991 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep  4 17:19:43.055223 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep  4 17:19:43.055432 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Sep  4 17:19:43.055618 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep  4 17:19:43.055756 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Sep  4 17:19:43.055774 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep  4 17:19:43.055788 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep  4 17:19:43.055809 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep  4 17:19:43.055825 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep  4 17:19:43.055841 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep  4 17:19:43.055856 kernel: iommu: Default domain type: Translated
Sep  4 17:19:43.055873 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep  4 17:19:43.055888 kernel: PCI: Using ACPI for IRQ routing
Sep  4 17:19:43.055902 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep  4 17:19:43.055915 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep  4 17:19:43.055929 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Sep  4 17:19:43.056073 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep  4 17:19:43.056210 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep  4 17:19:43.056417 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep  4 17:19:43.056442 kernel: vgaarb: loaded
Sep  4 17:19:43.056459 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep  4 17:19:43.056476 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep  4 17:19:43.056492 kernel: clocksource: Switched to clocksource kvm-clock
Sep  4 17:19:43.056550 kernel: VFS: Disk quotas dquot_6.6.0
Sep  4 17:19:43.056659 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep  4 17:19:43.056677 kernel: pnp: PnP ACPI init
Sep  4 17:19:43.056694 kernel: pnp: PnP ACPI: found 5 devices
Sep  4 17:19:43.056710 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep  4 17:19:43.056726 kernel: NET: Registered PF_INET protocol family
Sep  4 17:19:43.056742 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep  4 17:19:43.056759 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep  4 17:19:43.056775 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep  4 17:19:43.056791 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep  4 17:19:43.056811 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep  4 17:19:43.056827 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep  4 17:19:43.056843 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep  4 17:19:43.056859 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep  4 17:19:43.056875 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep  4 17:19:43.056891 kernel: NET: Registered PF_XDP protocol family
Sep  4 17:19:43.057041 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Sep  4 17:19:43.057378 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Sep  4 17:19:43.057552 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep  4 17:19:43.057808 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep  4 17:19:43.057985 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep  4 17:19:43.058007 kernel: PCI: CLS 0 bytes, default 64
Sep  4 17:19:43.058025 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep  4 17:19:43.058042 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Sep  4 17:19:43.058059 kernel: clocksource: Switched to clocksource tsc
Sep  4 17:19:43.058075 kernel: Initialise system trusted keyrings
Sep  4 17:19:43.058091 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep  4 17:19:43.058112 kernel: Key type asymmetric registered
Sep  4 17:19:43.058128 kernel: Asymmetric key parser 'x509' registered
Sep  4 17:19:43.058143 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep  4 17:19:43.058157 kernel: io scheduler mq-deadline registered
Sep  4 17:19:43.058173 kernel: io scheduler kyber registered
Sep  4 17:19:43.058189 kernel: io scheduler bfq registered
Sep  4 17:19:43.058369 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep  4 17:19:43.058391 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep  4 17:19:43.058408 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep  4 17:19:43.058429 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep  4 17:19:43.058444 kernel: i8042: Warning: Keylock active
Sep  4 17:19:43.058461 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep  4 17:19:43.058477 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep  4 17:19:43.058674 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep  4 17:19:43.058804 kernel: rtc_cmos 00:00: registered as rtc0
Sep  4 17:19:43.058926 kernel: rtc_cmos 00:00: setting system clock to 2024-09-04T17:19:42 UTC (1725470382)
Sep  4 17:19:43.059049 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep  4 17:19:43.059068 kernel: intel_pstate: CPU model not supported
Sep  4 17:19:43.059083 kernel: NET: Registered PF_INET6 protocol family
Sep  4 17:19:43.059097 kernel: Segment Routing with IPv6
Sep  4 17:19:43.059112 kernel: In-situ OAM (IOAM) with IPv6
Sep  4 17:19:43.059137 kernel: NET: Registered PF_PACKET protocol family
Sep  4 17:19:43.059153 kernel: Key type dns_resolver registered
Sep  4 17:19:43.059166 kernel: IPI shorthand broadcast: enabled
Sep  4 17:19:43.059182 kernel: sched_clock: Marking stable (668107792, 271530291)->(1024005248, -84367165)
Sep  4 17:19:43.059194 kernel: registered taskstats version 1
Sep  4 17:19:43.059216 kernel: Loading compiled-in X.509 certificates
Sep  4 17:19:43.059232 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02'
Sep  4 17:19:43.059247 kernel: Key type .fscrypt registered
Sep  4 17:19:43.059309 kernel: Key type fscrypt-provisioning registered
Sep  4 17:19:43.059328 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep  4 17:19:43.059346 kernel: ima: Allocated hash algorithm: sha1
Sep  4 17:19:43.059361 kernel: ima: No architecture policies found
Sep  4 17:19:43.059376 kernel: clk: Disabling unused clocks
Sep  4 17:19:43.059394 kernel: Freeing unused kernel image (initmem) memory: 49336K
Sep  4 17:19:43.059489 kernel: Write protecting the kernel read-only data: 36864k
Sep  4 17:19:43.059745 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Sep  4 17:19:43.059762 kernel: Run /init as init process
Sep  4 17:19:43.059776 kernel:   with arguments:
Sep  4 17:19:43.059790 kernel:     /init
Sep  4 17:19:43.059803 kernel:   with environment:
Sep  4 17:19:43.059817 kernel:     HOME=/
Sep  4 17:19:43.059832 kernel:     TERM=linux
Sep  4 17:19:43.059846 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep  4 17:19:43.059874 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep  4 17:19:43.059908 systemd[1]: Detected virtualization amazon.
Sep  4 17:19:43.059936 systemd[1]: Detected architecture x86-64.
Sep  4 17:19:43.059954 systemd[1]: Running in initrd.
Sep  4 17:19:43.059974 systemd[1]: No hostname configured, using default hostname.
Sep  4 17:19:43.059993 systemd[1]: Hostname set to <localhost>.
Sep  4 17:19:43.060012 systemd[1]: Initializing machine ID from VM UUID.
Sep  4 17:19:43.060029 systemd[1]: Queued start job for default target initrd.target.
Sep  4 17:19:43.060047 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep  4 17:19:43.060065 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep  4 17:19:43.060085 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep  4 17:19:43.060103 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep  4 17:19:43.060125 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep  4 17:19:43.060143 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep  4 17:19:43.060164 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep  4 17:19:43.060182 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep  4 17:19:43.060201 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep  4 17:19:43.060219 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep  4 17:19:43.060238 systemd[1]: Reached target paths.target - Path Units.
Sep  4 17:19:43.060346 systemd[1]: Reached target slices.target - Slice Units.
Sep  4 17:19:43.060369 systemd[1]: Reached target swap.target - Swaps.
Sep  4 17:19:43.060387 systemd[1]: Reached target timers.target - Timer Units.
Sep  4 17:19:43.060406 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep  4 17:19:43.060423 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep  4 17:19:43.060441 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep  4 17:19:43.060459 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep  4 17:19:43.060477 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep  4 17:19:43.060495 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep  4 17:19:43.060644 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep  4 17:19:43.060663 systemd[1]: Reached target sockets.target - Socket Units.
Sep  4 17:19:43.060681 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep  4 17:19:43.060700 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep  4 17:19:43.060717 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep  4 17:19:43.060735 systemd[1]: Starting systemd-fsck-usr.service...
Sep  4 17:19:43.060753 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep  4 17:19:43.060775 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep  4 17:19:43.060797 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep  4 17:19:43.060818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 17:19:43.060836 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep  4 17:19:43.060885 systemd-journald[178]: Collecting audit messages is disabled.
Sep  4 17:19:43.060929 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep  4 17:19:43.060946 systemd[1]: Finished systemd-fsck-usr.service.
Sep  4 17:19:43.060967 systemd-journald[178]: Journal started
Sep  4 17:19:43.061007 systemd-journald[178]: Runtime Journal (/run/log/journal/ec26c1df0ac013271db9b68f1d673310) is 4.8M, max 38.6M, 33.7M free.
Sep  4 17:19:43.078569 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep  4 17:19:43.078661 systemd[1]: Started systemd-journald.service - Journal Service.
Sep  4 17:19:43.089660 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep  4 17:19:43.095387 systemd-modules-load[179]: Inserted module 'overlay'
Sep  4 17:19:43.108780 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep  4 17:19:43.268300 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep  4 17:19:43.268394 kernel: Bridge firewalling registered
Sep  4 17:19:43.158583 systemd-modules-load[179]: Inserted module 'br_netfilter'
Sep  4 17:19:43.280266 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep  4 17:19:43.284779 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep  4 17:19:43.292205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:19:43.315904 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep  4 17:19:43.327108 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep  4 17:19:43.334553 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep  4 17:19:43.339550 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep  4 17:19:43.356240 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep  4 17:19:43.363777 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep  4 17:19:43.367164 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 17:19:43.379800 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep  4 17:19:43.406337 dracut-cmdline[213]: dracut-dracut-053
Sep  4 17:19:43.411543 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep  4 17:19:43.425992 systemd-resolved[210]: Positive Trust Anchors:
Sep  4 17:19:43.426009 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep  4 17:19:43.426071 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep  4 17:19:43.432767 systemd-resolved[210]: Defaulting to hostname 'linux'.
Sep  4 17:19:43.434257 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep  4 17:19:43.440466 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep  4 17:19:43.532533 kernel: SCSI subsystem initialized
Sep  4 17:19:43.544536 kernel: Loading iSCSI transport class v2.0-870.
Sep  4 17:19:43.557535 kernel: iscsi: registered transport (tcp)
Sep  4 17:19:43.616529 kernel: iscsi: registered transport (qla4xxx)
Sep  4 17:19:43.616605 kernel: QLogic iSCSI HBA Driver
Sep  4 17:19:43.667251 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep  4 17:19:43.674780 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep  4 17:19:43.712869 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep  4 17:19:43.712944 kernel: device-mapper: uevent: version 1.0.3
Sep  4 17:19:43.712959 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep  4 17:19:43.775554 kernel: raid6: avx512x4 gen()  9281 MB/s
Sep  4 17:19:43.792545 kernel: raid6: avx512x2 gen() 16185 MB/s
Sep  4 17:19:43.809556 kernel: raid6: avx512x1 gen() 16130 MB/s
Sep  4 17:19:43.826557 kernel: raid6: avx2x4   gen() 15779 MB/s
Sep  4 17:19:43.843556 kernel: raid6: avx2x2   gen() 15539 MB/s
Sep  4 17:19:43.860732 kernel: raid6: avx2x1   gen() 11989 MB/s
Sep  4 17:19:43.860804 kernel: raid6: using algorithm avx512x2 gen() 16185 MB/s
Sep  4 17:19:43.878925 kernel: raid6: .... xor() 20773 MB/s, rmw enabled
Sep  4 17:19:43.879004 kernel: raid6: using avx512x2 recovery algorithm
Sep  4 17:19:43.907539 kernel: xor: automatically using best checksumming function   avx       
Sep  4 17:19:44.113533 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep  4 17:19:44.124449 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep  4 17:19:44.130769 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep  4 17:19:44.183302 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Sep  4 17:19:44.189368 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep  4 17:19:44.202086 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep  4 17:19:44.227962 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation
Sep  4 17:19:44.266228 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep  4 17:19:44.271756 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep  4 17:19:44.363743 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep  4 17:19:44.375252 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep  4 17:19:44.412834 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep  4 17:19:44.414228 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep  4 17:19:44.417336 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep  4 17:19:44.420353 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep  4 17:19:44.435750 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep  4 17:19:44.477957 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep  4 17:19:44.492526 kernel: cryptd: max_cpu_qlen set to 1000
Sep  4 17:19:44.514551 kernel: AVX2 version of gcm_enc/dec engaged.
Sep  4 17:19:44.514613 kernel: AES CTR mode by8 optimization enabled
Sep  4 17:19:44.536430 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep  4 17:19:44.536754 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 17:19:44.538991 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep  4 17:19:44.540363 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep  4 17:19:44.540631 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:19:44.542757 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 17:19:44.567877 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 17:19:44.591909 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep  4 17:19:44.592233 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep  4 17:19:44.602825 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep  4 17:19:44.609784 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:51:23:fa:ec:99
Sep  4 17:19:44.631109 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line.
Sep  4 17:19:44.829040 kernel: nvme nvme0: pci function 0000:00:04.0
Sep  4 17:19:44.829420 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep  4 17:19:44.829443 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep  4 17:19:44.830065 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep  4 17:19:44.830090 kernel: GPT:9289727 != 16777215
Sep  4 17:19:44.830110 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep  4 17:19:44.830131 kernel: GPT:9289727 != 16777215
Sep  4 17:19:44.830150 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep  4 17:19:44.830170 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep  4 17:19:44.830190 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (456)
Sep  4 17:19:44.830211 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Sep  4 17:19:44.839724 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep  4 17:19:44.840213 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:19:44.853721 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep  4 17:19:44.898291 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep  4 17:19:44.915727 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 17:19:44.932834 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep  4 17:19:44.949677 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep  4 17:19:44.949814 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep  4 17:19:44.975923 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep  4 17:19:44.983885 disk-uuid[627]: Primary Header is updated.
Sep  4 17:19:44.983885 disk-uuid[627]: Secondary Entries is updated.
Sep  4 17:19:44.983885 disk-uuid[627]: Secondary Header is updated.
Sep  4 17:19:44.988628 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep  4 17:19:44.993555 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep  4 17:19:44.998547 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep  4 17:19:46.006183 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep  4 17:19:46.006629 disk-uuid[628]: The operation has completed successfully.
Sep  4 17:19:46.162146 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep  4 17:19:46.162283 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep  4 17:19:46.187793 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep  4 17:19:46.193438 sh[971]: Success
Sep  4 17:19:46.221925 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep  4 17:19:46.312015 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep  4 17:19:46.321709 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep  4 17:19:46.328059 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep  4 17:19:46.356362 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602
Sep  4 17:19:46.356442 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep  4 17:19:46.356462 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep  4 17:19:46.356479 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep  4 17:19:46.356955 kernel: BTRFS info (device dm-0): using free space tree
Sep  4 17:19:46.409534 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep  4 17:19:46.419448 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep  4 17:19:46.422299 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep  4 17:19:46.431725 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep  4 17:19:46.441248 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep  4 17:19:46.453523 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep  4 17:19:46.453586 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep  4 17:19:46.454925 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep  4 17:19:46.458557 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep  4 17:19:46.474450 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep  4 17:19:46.476063 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep  4 17:19:46.495797 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep  4 17:19:46.505984 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep  4 17:19:46.573998 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep  4 17:19:46.580782 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep  4 17:19:46.619593 systemd-networkd[1163]: lo: Link UP
Sep  4 17:19:46.619604 systemd-networkd[1163]: lo: Gained carrier
Sep  4 17:19:46.622909 systemd-networkd[1163]: Enumeration completed
Sep  4 17:19:46.623416 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 17:19:46.623421 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep  4 17:19:46.627192 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep  4 17:19:46.631720 systemd[1]: Reached target network.target - Network.
Sep  4 17:19:46.635691 systemd-networkd[1163]: eth0: Link UP
Sep  4 17:19:46.635697 systemd-networkd[1163]: eth0: Gained carrier
Sep  4 17:19:46.635712 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 17:19:46.660653 systemd-networkd[1163]: eth0: DHCPv4 address 172.31.19.141/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep  4 17:19:46.886626 ignition[1081]: Ignition 2.18.0
Sep  4 17:19:46.886642 ignition[1081]: Stage: fetch-offline
Sep  4 17:19:46.886990 ignition[1081]: no configs at "/usr/lib/ignition/base.d"
Sep  4 17:19:46.887280 ignition[1081]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep  4 17:19:46.887787 ignition[1081]: Ignition finished successfully
Sep  4 17:19:46.893199 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep  4 17:19:46.899741 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep  4 17:19:46.918681 ignition[1173]: Ignition 2.18.0
Sep  4 17:19:46.918695 ignition[1173]: Stage: fetch
Sep  4 17:19:46.919172 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Sep  4 17:19:46.919187 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep  4 17:19:46.919345 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep  4 17:19:46.946679 ignition[1173]: PUT result: OK
Sep  4 17:19:46.948938 ignition[1173]: parsed url from cmdline: ""
Sep  4 17:19:46.948947 ignition[1173]: no config URL provided
Sep  4 17:19:46.948958 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign"
Sep  4 17:19:46.948971 ignition[1173]: no config at "/usr/lib/ignition/user.ign"
Sep  4 17:19:46.948988 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep  4 17:19:46.950012 ignition[1173]: PUT result: OK
Sep  4 17:19:46.950059 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep  4 17:19:46.952614 ignition[1173]: GET result: OK
Sep  4 17:19:46.953806 ignition[1173]: parsing config with SHA512: 1d81cabda6118725f90e15fed136f4d749ab866707b9f56a63f977619b86fcbecce7d57f85354d1c503997a38fae2cdc9148ba99368e1ae6282912226574c772
Sep  4 17:19:46.964164 unknown[1173]: fetched base config from "system"
Sep  4 17:19:46.964187 unknown[1173]: fetched base config from "system"
Sep  4 17:19:46.964201 unknown[1173]: fetched user config from "aws"
Sep  4 17:19:46.968190 ignition[1173]: fetch: fetch complete
Sep  4 17:19:46.968203 ignition[1173]: fetch: fetch passed
Sep  4 17:19:46.968285 ignition[1173]: Ignition finished successfully
Sep  4 17:19:46.971188 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep  4 17:19:46.978732 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep  4 17:19:47.006725 ignition[1180]: Ignition 2.18.0
Sep  4 17:19:47.006739 ignition[1180]: Stage: kargs
Sep  4 17:19:47.007285 ignition[1180]: no configs at "/usr/lib/ignition/base.d"
Sep  4 17:19:47.007299 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep  4 17:19:47.007405 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep  4 17:19:47.008698 ignition[1180]: PUT result: OK
Sep  4 17:19:47.018010 ignition[1180]: kargs: kargs passed
Sep  4 17:19:47.018100 ignition[1180]: Ignition finished successfully
Sep  4 17:19:47.022297 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep  4 17:19:47.028276 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep  4 17:19:47.065428 ignition[1187]: Ignition 2.18.0
Sep  4 17:19:47.065443 ignition[1187]: Stage: disks
Sep  4 17:19:47.066056 ignition[1187]: no configs at "/usr/lib/ignition/base.d"
Sep  4 17:19:47.066070 ignition[1187]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep  4 17:19:47.066175 ignition[1187]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep  4 17:19:47.069259 ignition[1187]: PUT result: OK
Sep  4 17:19:47.075121 ignition[1187]: disks: disks passed
Sep  4 17:19:47.075201 ignition[1187]: Ignition finished successfully
Sep  4 17:19:47.079066 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep  4 17:19:47.079808 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep  4 17:19:47.085109 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep  4 17:19:47.088237 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep  4 17:19:47.093112 systemd[1]: Reached target sysinit.target - System Initialization.
Sep  4 17:19:47.097982 systemd[1]: Reached target basic.target - Basic System.
Sep  4 17:19:47.108739 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep  4 17:19:47.164388 systemd-fsck[1196]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep  4 17:19:47.171055 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep  4 17:19:47.189866 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep  4 17:19:47.357541 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none.
Sep  4 17:19:47.359228 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep  4 17:19:47.360065 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep  4 17:19:47.376664 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep  4 17:19:47.391684 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep  4 17:19:47.397279 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep  4 17:19:47.397361 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep  4 17:19:47.399563 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep  4 17:19:47.405366 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep  4 17:19:47.409525 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1215)
Sep  4 17:19:47.413751 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep  4 17:19:47.413814 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep  4 17:19:47.413844 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep  4 17:19:47.419015 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep  4 17:19:47.418906 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep  4 17:19:47.422791 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep  4 17:19:47.786556 initrd-setup-root[1239]: cut: /sysroot/etc/passwd: No such file or directory
Sep  4 17:19:47.802462 initrd-setup-root[1246]: cut: /sysroot/etc/group: No such file or directory
Sep  4 17:19:47.811742 initrd-setup-root[1253]: cut: /sysroot/etc/shadow: No such file or directory
Sep  4 17:19:47.818455 initrd-setup-root[1260]: cut: /sysroot/etc/gshadow: No such file or directory
Sep  4 17:19:48.045330 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep  4 17:19:48.053686 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep  4 17:19:48.056742 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep  4 17:19:48.074716 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep  4 17:19:48.074776 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep  4 17:19:48.120948 ignition[1328]: INFO     : Ignition 2.18.0
Sep  4 17:19:48.122872 ignition[1328]: INFO     : Stage: mount
Sep  4 17:19:48.122872 ignition[1328]: INFO     : no configs at "/usr/lib/ignition/base.d"
Sep  4 17:19:48.122872 ignition[1328]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep  4 17:19:48.122872 ignition[1328]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep  4 17:19:48.131878 ignition[1328]: INFO     : PUT result: OK
Sep  4 17:19:48.123770 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep  4 17:19:48.135110 ignition[1328]: INFO     : mount: mount passed
Sep  4 17:19:48.136142 ignition[1328]: INFO     : Ignition finished successfully
Sep  4 17:19:48.138390 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep  4 17:19:48.145669 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep  4 17:19:48.314699 systemd-networkd[1163]: eth0: Gained IPv6LL
Sep  4 17:19:48.372804 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep  4 17:19:48.389602 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1340)
Sep  4 17:19:48.392006 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep  4 17:19:48.392158 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep  4 17:19:48.392183 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep  4 17:19:48.396534 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep  4 17:19:48.398949 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep  4 17:19:48.424767 ignition[1357]: INFO     : Ignition 2.18.0
Sep  4 17:19:48.424767 ignition[1357]: INFO     : Stage: files
Sep  4 17:19:48.427029 ignition[1357]: INFO     : no configs at "/usr/lib/ignition/base.d"
Sep  4 17:19:48.427029 ignition[1357]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep  4 17:19:48.427029 ignition[1357]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep  4 17:19:48.431242 ignition[1357]: INFO     : PUT result: OK
Sep  4 17:19:48.433839 ignition[1357]: DEBUG    : files: compiled without relabeling support, skipping
Sep  4 17:19:48.435970 ignition[1357]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Sep  4 17:19:48.435970 ignition[1357]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep  4 17:19:48.452629 ignition[1357]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep  4 17:19:48.454466 ignition[1357]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Sep  4 17:19:48.457042 unknown[1357]: wrote ssh authorized keys file for user: core
Sep  4 17:19:48.459543 ignition[1357]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep  4 17:19:48.463685 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Sep  4 17:19:48.466216 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep  4 17:19:48.466216 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep  4 17:19:48.471235 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep  4 17:19:48.679571 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep  4 17:19:48.770808 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep  4 17:19:48.773784 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep  4 17:19:48.773784 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep  4 17:19:49.261355 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Sep  4 17:19:49.478732 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep  4 17:19:49.478732 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/install.sh"
Sep  4 17:19:49.487308 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Sep  4 17:19:49.487308 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nginx.yaml"
Sep  4 17:19:49.487308 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep  4 17:19:49.487308 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Sep  4 17:19:49.487308 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep  4 17:19:49.487308 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep  4 17:19:49.487308 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep  4 17:19:49.510913 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Sep  4 17:19:49.510913 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep  4 17:19:49.510913 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep  4 17:19:49.510913 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep  4 17:19:49.510913 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep  4 17:19:49.524758 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Sep  4 17:19:49.825248 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Sep  4 17:19:50.626203 ignition[1357]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep  4 17:19:50.626203 ignition[1357]: INFO     : files: op(d): [started]  processing unit "containerd.service"
Sep  4 17:19:50.631713 ignition[1357]: INFO     : files: op(d): op(e): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep  4 17:19:50.634808 ignition[1357]: INFO     : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep  4 17:19:50.634808 ignition[1357]: INFO     : files: op(d): [finished] processing unit "containerd.service"
Sep  4 17:19:50.634808 ignition[1357]: INFO     : files: op(f): [started]  processing unit "prepare-helm.service"
Sep  4 17:19:50.640741 ignition[1357]: INFO     : files: op(f): op(10): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep  4 17:19:50.640741 ignition[1357]: INFO     : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep  4 17:19:50.640741 ignition[1357]: INFO     : files: op(f): [finished] processing unit "prepare-helm.service"
Sep  4 17:19:50.640741 ignition[1357]: INFO     : files: op(11): [started]  setting preset to enabled for "prepare-helm.service"
Sep  4 17:19:50.650537 ignition[1357]: INFO     : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep  4 17:19:50.650537 ignition[1357]: INFO     : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Sep  4 17:19:50.655909 ignition[1357]: INFO     : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep  4 17:19:50.655909 ignition[1357]: INFO     : files: files passed
Sep  4 17:19:50.658895 ignition[1357]: INFO     : Ignition finished successfully
Sep  4 17:19:50.661855 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep  4 17:19:50.667687 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep  4 17:19:50.675806 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep  4 17:19:50.682288 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep  4 17:19:50.682380 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep  4 17:19:50.692835 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep  4 17:19:50.697869 initrd-setup-root-after-ignition[1386]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep  4 17:19:50.700026 initrd-setup-root-after-ignition[1390]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep  4 17:19:50.704008 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep  4 17:19:50.709629 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep  4 17:19:50.717049 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep  4 17:19:50.773641 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep  4 17:19:50.773782 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep  4 17:19:50.776389 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep  4 17:19:50.779589 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep  4 17:19:50.781775 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep  4 17:19:50.791788 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep  4 17:19:50.842336 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep  4 17:19:50.847774 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep  4 17:19:50.874114 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep  4 17:19:50.876851 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep  4 17:19:50.877077 systemd[1]: Stopped target timers.target - Timer Units.
Sep  4 17:19:50.881545 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep  4 17:19:50.881678 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep  4 17:19:50.886231 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep  4 17:19:50.887411 systemd[1]: Stopped target basic.target - Basic System.
Sep  4 17:19:50.890467 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep  4 17:19:50.890623 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep  4 17:19:50.890818 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep  4 17:19:50.891062 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep  4 17:19:50.891408 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep  4 17:19:50.891691 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep  4 17:19:50.891913 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep  4 17:19:50.892135 systemd[1]: Stopped target swap.target - Swaps.
Sep  4 17:19:50.892275 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep  4 17:19:50.892392 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep  4 17:19:50.894336 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep  4 17:19:50.895451 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep  4 17:19:50.895850 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep  4 17:19:50.904995 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep  4 17:19:50.907732 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep  4 17:19:50.907892 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep  4 17:19:50.911253 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep  4 17:19:50.911922 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep  4 17:19:50.914468 systemd[1]: ignition-files.service: Deactivated successfully.
Sep  4 17:19:50.914681 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep  4 17:19:50.934630 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep  4 17:19:50.968948 ignition[1410]: INFO     : Ignition 2.18.0
Sep  4 17:19:50.968948 ignition[1410]: INFO     : Stage: umount
Sep  4 17:19:50.968948 ignition[1410]: INFO     : no configs at "/usr/lib/ignition/base.d"
Sep  4 17:19:50.968948 ignition[1410]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep  4 17:19:50.968948 ignition[1410]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep  4 17:19:50.970925 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep  4 17:19:50.972429 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep  4 17:19:50.974808 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep  4 17:19:50.977830 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep  4 17:19:50.978196 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep  4 17:19:50.982611 ignition[1410]: INFO     : PUT result: OK
Sep  4 17:19:50.987181 ignition[1410]: INFO     : umount: umount passed
Sep  4 17:19:50.987181 ignition[1410]: INFO     : Ignition finished successfully
Sep  4 17:19:50.997471 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep  4 17:19:50.997781 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep  4 17:19:51.002123 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep  4 17:19:51.003623 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep  4 17:19:51.006658 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep  4 17:19:51.007874 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep  4 17:19:51.010428 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep  4 17:19:51.010494 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep  4 17:19:51.014451 systemd[1]: Stopped target network.target - Network.
Sep  4 17:19:51.015643 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep  4 17:19:51.015720 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep  4 17:19:51.021190 systemd[1]: Stopped target paths.target - Path Units.
Sep  4 17:19:51.022207 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep  4 17:19:51.023484 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep  4 17:19:51.031351 systemd[1]: Stopped target slices.target - Slice Units.
Sep  4 17:19:51.031844 systemd[1]: Stopped target sockets.target - Socket Units.
Sep  4 17:19:51.035427 systemd[1]: iscsid.socket: Deactivated successfully.
Sep  4 17:19:51.035490 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep  4 17:19:51.036639 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep  4 17:19:51.036681 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep  4 17:19:51.039929 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep  4 17:19:51.040007 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep  4 17:19:51.047233 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep  4 17:19:51.047315 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep  4 17:19:51.050721 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep  4 17:19:51.052887 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep  4 17:19:51.056485 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep  4 17:19:51.057230 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep  4 17:19:51.057340 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep  4 17:19:51.061201 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep  4 17:19:51.061310 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep  4 17:19:51.064677 systemd-networkd[1163]: eth0: DHCPv6 lease lost
Sep  4 17:19:51.074987 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep  4 17:19:51.075852 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep  4 17:19:51.089750 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep  4 17:19:51.089897 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep  4 17:19:51.100653 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep  4 17:19:51.100779 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep  4 17:19:51.104066 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep  4 17:19:51.104132 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep  4 17:19:51.117694 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep  4 17:19:51.119358 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep  4 17:19:51.119460 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep  4 17:19:51.122091 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep  4 17:19:51.122261 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep  4 17:19:51.125639 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep  4 17:19:51.125709 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep  4 17:19:51.128289 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep  4 17:19:51.128439 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep  4 17:19:51.135239 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep  4 17:19:51.162270 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep  4 17:19:51.162459 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep  4 17:19:51.173685 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep  4 17:19:51.173909 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep  4 17:19:51.178590 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep  4 17:19:51.178667 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep  4 17:19:51.181137 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep  4 17:19:51.181190 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep  4 17:19:51.186733 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep  4 17:19:51.187832 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep  4 17:19:51.190392 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep  4 17:19:51.190467 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep  4 17:19:51.193874 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep  4 17:19:51.194002 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 17:19:51.205694 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep  4 17:19:51.207027 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep  4 17:19:51.207097 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep  4 17:19:51.208593 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep  4 17:19:51.208645 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep  4 17:19:51.210495 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep  4 17:19:51.210953 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep  4 17:19:51.225496 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep  4 17:19:51.225654 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:19:51.239174 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep  4 17:19:51.239296 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep  4 17:19:51.242690 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep  4 17:19:51.252740 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep  4 17:19:51.270834 systemd[1]: Switching root.
Sep  4 17:19:51.303190 systemd-journald[178]: Journal stopped
Sep  4 17:19:55.190091 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Sep  4 17:19:55.190180 kernel: SELinux:  policy capability network_peer_controls=1
Sep  4 17:19:55.190206 kernel: SELinux:  policy capability open_perms=1
Sep  4 17:19:55.190229 kernel: SELinux:  policy capability extended_socket_class=1
Sep  4 17:19:55.190252 kernel: SELinux:  policy capability always_check_network=0
Sep  4 17:19:55.190268 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep  4 17:19:55.190285 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep  4 17:19:55.190307 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Sep  4 17:19:55.190323 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Sep  4 17:19:55.190340 kernel: audit: type=1403 audit(1725470393.786:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep  4 17:19:55.190364 systemd[1]: Successfully loaded SELinux policy in 86.382ms.
Sep  4 17:19:55.190392 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.704ms.
Sep  4 17:19:55.190412 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep  4 17:19:55.190431 systemd[1]: Detected virtualization amazon.
Sep  4 17:19:55.190449 systemd[1]: Detected architecture x86-64.
Sep  4 17:19:55.190466 systemd[1]: Detected first boot.
Sep  4 17:19:55.190488 systemd[1]: Initializing machine ID from VM UUID.
Sep  4 17:19:55.195496 zram_generator::config[1469]: No configuration found.
Sep  4 17:19:55.195563 systemd[1]: Populated /etc with preset unit settings.
Sep  4 17:19:55.195585 systemd[1]: Queued start job for default target multi-user.target.
Sep  4 17:19:55.195604 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep  4 17:19:55.195623 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep  4 17:19:55.195644 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep  4 17:19:55.195664 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep  4 17:19:55.195690 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep  4 17:19:55.195711 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep  4 17:19:55.195732 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep  4 17:19:55.195752 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep  4 17:19:55.195772 systemd[1]: Created slice user.slice - User and Session Slice.
Sep  4 17:19:55.195791 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep  4 17:19:55.195812 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep  4 17:19:55.195832 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep  4 17:19:55.195853 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep  4 17:19:55.195876 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep  4 17:19:55.195896 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep  4 17:19:55.195916 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep  4 17:19:55.195937 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep  4 17:19:55.195957 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep  4 17:19:55.195978 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep  4 17:19:55.196005 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep  4 17:19:55.196027 systemd[1]: Reached target slices.target - Slice Units.
Sep  4 17:19:55.196050 systemd[1]: Reached target swap.target - Swaps.
Sep  4 17:19:55.196071 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep  4 17:19:55.196098 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep  4 17:19:55.196120 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep  4 17:19:55.196141 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep  4 17:19:55.196161 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep  4 17:19:55.196182 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep  4 17:19:55.196204 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep  4 17:19:55.196225 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep  4 17:19:55.196245 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep  4 17:19:55.196269 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep  4 17:19:55.196290 systemd[1]: Mounting media.mount - External Media Directory...
Sep  4 17:19:55.196311 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 17:19:55.196332 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep  4 17:19:55.196352 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep  4 17:19:55.196372 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep  4 17:19:55.196394 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep  4 17:19:55.196413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 17:19:55.196437 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep  4 17:19:55.196458 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep  4 17:19:55.196478 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep  4 17:19:55.196498 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep  4 17:19:55.197374 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep  4 17:19:55.197405 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep  4 17:19:55.197427 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep  4 17:19:55.198311 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep  4 17:19:55.198361 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep  4 17:19:55.198387 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep  4 17:19:55.198413 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep  4 17:19:55.198437 kernel: loop: module loaded
Sep  4 17:19:55.198462 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep  4 17:19:55.198488 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep  4 17:19:55.198841 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep  4 17:19:55.198871 kernel: fuse: init (API version 7.39)
Sep  4 17:19:55.198897 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep  4 17:19:55.198928 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 17:19:55.198953 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep  4 17:19:55.198979 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep  4 17:19:55.199003 systemd[1]: Mounted media.mount - External Media Directory.
Sep  4 17:19:55.199028 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep  4 17:19:55.199051 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep  4 17:19:55.199076 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep  4 17:19:55.199100 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep  4 17:19:55.199124 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep  4 17:19:55.199163 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep  4 17:19:55.199188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep  4 17:19:55.199215 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep  4 17:19:55.199239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep  4 17:19:55.199265 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep  4 17:19:55.199294 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep  4 17:19:55.199318 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep  4 17:19:55.199342 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep  4 17:19:55.199366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep  4 17:19:55.199390 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep  4 17:19:55.199415 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep  4 17:19:55.199439 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep  4 17:19:55.199465 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep  4 17:19:55.199493 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep  4 17:19:55.199552 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep  4 17:19:55.199626 systemd-journald[1565]: Collecting audit messages is disabled.
Sep  4 17:19:55.199672 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep  4 17:19:55.199702 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep  4 17:19:55.199726 kernel: ACPI: bus type drm_connector registered
Sep  4 17:19:55.199750 systemd-journald[1565]: Journal started
Sep  4 17:19:55.200129 systemd-journald[1565]: Runtime Journal (/run/log/journal/ec26c1df0ac013271db9b68f1d673310) is 4.8M, max 38.6M, 33.7M free.
Sep  4 17:19:55.222582 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep  4 17:19:55.227527 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep  4 17:19:55.234611 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep  4 17:19:55.239524 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep  4 17:19:55.250788 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep  4 17:19:55.263643 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep  4 17:19:55.269530 systemd[1]: Started systemd-journald.service - Journal Service.
Sep  4 17:19:55.276649 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep  4 17:19:55.284874 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep  4 17:19:55.290836 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep  4 17:19:55.292984 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep  4 17:19:55.296090 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep  4 17:19:55.344005 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep  4 17:19:55.350594 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep  4 17:19:55.361734 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep  4 17:19:55.376860 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep  4 17:19:55.381768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep  4 17:19:55.396615 systemd-journald[1565]: Time spent on flushing to /var/log/journal/ec26c1df0ac013271db9b68f1d673310 is 39.910ms for 958 entries.
Sep  4 17:19:55.396615 systemd-journald[1565]: System Journal (/var/log/journal/ec26c1df0ac013271db9b68f1d673310) is 8.0M, max 195.6M, 187.6M free.
Sep  4 17:19:55.448948 systemd-journald[1565]: Received client request to flush runtime journal.
Sep  4 17:19:55.415035 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Sep  4 17:19:55.415281 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Sep  4 17:19:55.438453 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep  4 17:19:55.450901 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep  4 17:19:55.453268 udevadm[1630]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep  4 17:19:55.454279 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep  4 17:19:55.521989 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep  4 17:19:55.534906 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep  4 17:19:55.564722 systemd-tmpfiles[1642]: ACLs are not supported, ignoring.
Sep  4 17:19:55.565383 systemd-tmpfiles[1642]: ACLs are not supported, ignoring.
Sep  4 17:19:55.574862 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep  4 17:19:56.439115 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep  4 17:19:56.446711 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep  4 17:19:56.488270 systemd-udevd[1648]: Using default interface naming scheme 'v255'.
Sep  4 17:19:56.557182 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep  4 17:19:56.570624 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep  4 17:19:56.617729 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep  4 17:19:56.641643 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1651)
Sep  4 17:19:56.702176 (udev-worker)[1660]: Network interface NamePolicy= disabled on kernel command line.
Sep  4 17:19:56.704626 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Sep  4 17:19:56.785527 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Sep  4 17:19:56.790693 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep  4 17:19:56.802530 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep  4 17:19:56.829547 kernel: ACPI: button: Power Button [PWRF]
Sep  4 17:19:56.840532 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Sep  4 17:19:56.850579 kernel: ACPI: button: Sleep Button [SLPF]
Sep  4 17:19:56.855529 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Sep  4 17:19:56.940568 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1656)
Sep  4 17:19:56.969870 systemd-networkd[1653]: lo: Link UP
Sep  4 17:19:56.969882 systemd-networkd[1653]: lo: Gained carrier
Sep  4 17:19:56.975742 systemd-networkd[1653]: Enumeration completed
Sep  4 17:19:56.976101 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep  4 17:19:56.977610 systemd-networkd[1653]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 17:19:56.984564 systemd-networkd[1653]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep  4 17:19:56.991218 systemd-networkd[1653]: eth0: Link UP
Sep  4 17:19:56.991999 systemd-networkd[1653]: eth0: Gained carrier
Sep  4 17:19:56.992220 systemd-networkd[1653]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 17:19:56.996752 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep  4 17:19:57.004809 systemd-networkd[1653]: eth0: DHCPv4 address 172.31.19.141/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep  4 17:19:57.009697 kernel: mousedev: PS/2 mouse device common for all mice
Sep  4 17:19:57.014451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 17:19:57.158184 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep  4 17:19:57.173021 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep  4 17:19:57.207806 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep  4 17:19:57.315784 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:19:57.334195 lvm[1770]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep  4 17:19:57.363921 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep  4 17:19:57.366575 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep  4 17:19:57.373708 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep  4 17:19:57.381650 lvm[1775]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep  4 17:19:57.409928 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep  4 17:19:57.412488 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep  4 17:19:57.418917 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep  4 17:19:57.418972 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep  4 17:19:57.424176 systemd[1]: Reached target machines.target - Containers.
Sep  4 17:19:57.429034 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep  4 17:19:57.459337 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep  4 17:19:57.472855 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep  4 17:19:57.474496 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 17:19:57.477151 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep  4 17:19:57.499966 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep  4 17:19:57.513986 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep  4 17:19:57.518346 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep  4 17:19:57.538789 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep  4 17:19:57.547531 kernel: loop0: detected capacity change from 0 to 209816
Sep  4 17:19:57.552657 kernel: block loop0: the capability attribute has been deprecated.
Sep  4 17:19:57.586116 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep  4 17:19:57.596991 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep  4 17:19:57.598084 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep  4 17:19:57.633560 kernel: loop1: detected capacity change from 0 to 60984
Sep  4 17:19:57.729568 kernel: loop2: detected capacity change from 0 to 80568
Sep  4 17:19:57.803564 kernel: loop3: detected capacity change from 0 to 139904
Sep  4 17:19:57.889745 kernel: loop4: detected capacity change from 0 to 209816
Sep  4 17:19:57.911566 kernel: loop5: detected capacity change from 0 to 60984
Sep  4 17:19:57.921558 kernel: loop6: detected capacity change from 0 to 80568
Sep  4 17:19:57.935567 kernel: loop7: detected capacity change from 0 to 139904
Sep  4 17:19:57.955470 (sd-merge)[1797]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep  4 17:19:57.956351 (sd-merge)[1797]: Merged extensions into '/usr'.
Sep  4 17:19:57.966715 systemd[1]: Reloading requested from client PID 1784 ('systemd-sysext') (unit systemd-sysext.service)...
Sep  4 17:19:57.966896 systemd[1]: Reloading...
Sep  4 17:19:58.037568 zram_generator::config[1820]: No configuration found.
Sep  4 17:19:58.284980 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 17:19:58.400917 systemd[1]: Reloading finished in 433 ms.
Sep  4 17:19:58.420806 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep  4 17:19:58.433794 systemd[1]: Starting ensure-sysext.service...
Sep  4 17:19:58.446034 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep  4 17:19:58.464113 systemd[1]: Reloading requested from client PID 1877 ('systemctl') (unit ensure-sysext.service)...
Sep  4 17:19:58.464708 systemd[1]: Reloading...
Sep  4 17:19:58.505437 systemd-tmpfiles[1878]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep  4 17:19:58.506017 systemd-tmpfiles[1878]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep  4 17:19:58.511345 systemd-tmpfiles[1878]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep  4 17:19:58.519872 systemd-tmpfiles[1878]: ACLs are not supported, ignoring.
Sep  4 17:19:58.520171 systemd-tmpfiles[1878]: ACLs are not supported, ignoring.
Sep  4 17:19:58.529746 systemd-tmpfiles[1878]: Detected autofs mount point /boot during canonicalization of boot.
Sep  4 17:19:58.529760 systemd-tmpfiles[1878]: Skipping /boot
Sep  4 17:19:58.549737 systemd-tmpfiles[1878]: Detected autofs mount point /boot during canonicalization of boot.
Sep  4 17:19:58.550334 systemd-tmpfiles[1878]: Skipping /boot
Sep  4 17:19:58.600566 ldconfig[1780]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep  4 17:19:58.611198 zram_generator::config[1907]: No configuration found.
Sep  4 17:19:58.810702 systemd-networkd[1653]: eth0: Gained IPv6LL
Sep  4 17:19:58.811347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 17:19:58.905497 systemd[1]: Reloading finished in 440 ms.
Sep  4 17:19:58.924132 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep  4 17:19:58.926162 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep  4 17:19:58.935018 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep  4 17:19:58.950843 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep  4 17:19:58.963390 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep  4 17:19:58.968953 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep  4 17:19:58.983874 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep  4 17:19:58.999064 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep  4 17:19:59.021551 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 17:19:59.022599 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 17:19:59.035288 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep  4 17:19:59.055088 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep  4 17:19:59.067426 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep  4 17:19:59.072444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 17:19:59.072668 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 17:19:59.086456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep  4 17:19:59.088060 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep  4 17:19:59.103733 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep  4 17:19:59.104006 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep  4 17:19:59.114018 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep  4 17:19:59.143589 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep  4 17:19:59.151856 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 17:19:59.153743 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 17:19:59.171973 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep  4 17:19:59.184543 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep  4 17:19:59.193854 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep  4 17:19:59.195398 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 17:19:59.195772 systemd[1]: Reached target time-set.target - System Time Set.
Sep  4 17:19:59.212424 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep  4 17:19:59.215652 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep  4 17:19:59.222790 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep  4 17:19:59.225385 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep  4 17:19:59.227667 augenrules[2008]: No rules
Sep  4 17:19:59.228215 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep  4 17:19:59.228433 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep  4 17:19:59.231853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep  4 17:19:59.234953 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep  4 17:19:59.237422 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep  4 17:19:59.241668 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep  4 17:19:59.241902 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep  4 17:19:59.248147 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep  4 17:19:59.250752 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep  4 17:19:59.268316 systemd[1]: Finished ensure-sysext.service.
Sep  4 17:19:59.270312 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep  4 17:19:59.282594 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep  4 17:19:59.282661 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep  4 17:19:59.282695 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep  4 17:19:59.296844 systemd-resolved[1976]: Positive Trust Anchors:
Sep  4 17:19:59.296862 systemd-resolved[1976]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep  4 17:19:59.296913 systemd-resolved[1976]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep  4 17:19:59.301299 systemd-resolved[1976]: Defaulting to hostname 'linux'.
Sep  4 17:19:59.303256 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep  4 17:19:59.304534 systemd[1]: Reached target network.target - Network.
Sep  4 17:19:59.305541 systemd[1]: Reached target network-online.target - Network is Online.
Sep  4 17:19:59.306692 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep  4 17:19:59.308053 systemd[1]: Reached target sysinit.target - System Initialization.
Sep  4 17:19:59.309576 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep  4 17:19:59.311277 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep  4 17:19:59.313066 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep  4 17:19:59.314429 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep  4 17:19:59.315896 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep  4 17:19:59.317252 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep  4 17:19:59.317280 systemd[1]: Reached target paths.target - Path Units.
Sep  4 17:19:59.318559 systemd[1]: Reached target timers.target - Timer Units.
Sep  4 17:19:59.322067 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep  4 17:19:59.328042 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep  4 17:19:59.331472 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep  4 17:19:59.334679 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep  4 17:19:59.339323 systemd[1]: Reached target sockets.target - Socket Units.
Sep  4 17:19:59.341948 systemd[1]: Reached target basic.target - Basic System.
Sep  4 17:19:59.344384 systemd[1]: System is tainted: cgroupsv1
Sep  4 17:19:59.344865 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep  4 17:19:59.344981 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep  4 17:19:59.361893 systemd[1]: Starting containerd.service - containerd container runtime...
Sep  4 17:19:59.373719 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep  4 17:19:59.380907 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep  4 17:19:59.385622 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep  4 17:19:59.393118 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep  4 17:19:59.395769 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep  4 17:19:59.412924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:19:59.441188 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep  4 17:19:59.456666 jq[2035]: false
Sep  4 17:19:59.466716 systemd[1]: Started ntpd.service - Network Time Service.
Sep  4 17:19:59.485168 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep  4 17:19:59.505626 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep  4 17:19:59.510184 dbus-daemon[2034]: [system] SELinux support is enabled
Sep  4 17:19:59.513732 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep  4 17:19:59.527701 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep  4 17:19:59.527805 dbus-daemon[2034]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1653 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep  4 17:19:59.548259 extend-filesystems[2036]: Found loop4
Sep  4 17:19:59.567666 extend-filesystems[2036]: Found loop5
Sep  4 17:19:59.567666 extend-filesystems[2036]: Found loop6
Sep  4 17:19:59.567666 extend-filesystems[2036]: Found loop7
Sep  4 17:19:59.567666 extend-filesystems[2036]: Found nvme0n1
Sep  4 17:19:59.567666 extend-filesystems[2036]: Found nvme0n1p1
Sep  4 17:19:59.567666 extend-filesystems[2036]: Found nvme0n1p2
Sep  4 17:19:59.567666 extend-filesystems[2036]: Found nvme0n1p3
Sep  4 17:19:59.567666 extend-filesystems[2036]: Found usr
Sep  4 17:19:59.567666 extend-filesystems[2036]: Found nvme0n1p4
Sep  4 17:19:59.567666 extend-filesystems[2036]: Found nvme0n1p6
Sep  4 17:19:59.567666 extend-filesystems[2036]: Found nvme0n1p7
Sep  4 17:19:59.567666 extend-filesystems[2036]: Found nvme0n1p9
Sep  4 17:19:59.567666 extend-filesystems[2036]: Checking size of /dev/nvme0n1p9
Sep  4 17:19:59.550814 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep  4 17:19:59.604813 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep  4 17:19:59.615853 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep  4 17:19:59.624831 systemd[1]: Starting update-engine.service - Update Engine...
Sep  4 17:19:59.644814 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep  4 17:19:59.653275 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep  4 17:19:59.659671 extend-filesystems[2036]: Resized partition /dev/nvme0n1p9
Sep  4 17:19:59.695250 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep  4 17:19:59.695645 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep  4 17:19:59.706363 extend-filesystems[2076]: resize2fs 1.47.0 (5-Feb-2023)
Sep  4 17:19:59.718301 systemd[1]: motdgen.service: Deactivated successfully.
Sep  4 17:19:59.718690 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep  4 17:19:59.745537 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep  4 17:19:59.737080 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep  4 17:19:59.745785 jq[2066]: true
Sep  4 17:19:59.739144 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep  4 17:19:59.766277 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep  4 17:19:59.768829 update_engine[2064]: I0904 17:19:59.762993  2064 main.cc:92] Flatcar Update Engine starting
Sep  4 17:19:59.775624 coreos-metadata[2033]: Sep 04 17:19:59.775 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep  4 17:19:59.786655 coreos-metadata[2033]: Sep 04 17:19:59.777 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep  4 17:19:59.786655 coreos-metadata[2033]: Sep 04 17:19:59.778 INFO Fetch successful
Sep  4 17:19:59.786655 coreos-metadata[2033]: Sep 04 17:19:59.779 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep  4 17:19:59.786655 coreos-metadata[2033]: Sep 04 17:19:59.780 INFO Fetch successful
Sep  4 17:19:59.786655 coreos-metadata[2033]: Sep 04 17:19:59.780 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep  4 17:19:59.786655 coreos-metadata[2033]: Sep 04 17:19:59.783 INFO Fetch successful
Sep  4 17:19:59.786655 coreos-metadata[2033]: Sep 04 17:19:59.784 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep  4 17:19:59.786655 coreos-metadata[2033]: Sep 04 17:19:59.784 INFO Fetch successful
Sep  4 17:19:59.786655 coreos-metadata[2033]: Sep 04 17:19:59.784 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep  4 17:19:59.786655 coreos-metadata[2033]: Sep 04 17:19:59.785 INFO Fetch failed with 404: resource not found
Sep  4 17:19:59.786655 coreos-metadata[2033]: Sep 04 17:19:59.785 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep  4 17:19:59.786655 coreos-metadata[2033]: Sep 04 17:19:59.786 INFO Fetch successful
Sep  4 17:19:59.786655 coreos-metadata[2033]: Sep 04 17:19:59.786 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep  4 17:19:59.787321 update_engine[2064]: I0904 17:19:59.779001  2064 update_check_scheduler.cc:74] Next update check in 7m8s
Sep  4 17:19:59.787378 coreos-metadata[2033]: Sep 04 17:19:59.786 INFO Fetch successful
Sep  4 17:19:59.787378 coreos-metadata[2033]: Sep 04 17:19:59.787 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep  4 17:19:59.797764 coreos-metadata[2033]: Sep 04 17:19:59.787 INFO Fetch successful
Sep  4 17:19:59.797764 coreos-metadata[2033]: Sep 04 17:19:59.787 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep  4 17:19:59.797764 coreos-metadata[2033]: Sep 04 17:19:59.788 INFO Fetch successful
Sep  4 17:19:59.797764 coreos-metadata[2033]: Sep 04 17:19:59.788 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep  4 17:19:59.797764 coreos-metadata[2033]: Sep 04 17:19:59.789 INFO Fetch successful
Sep  4 17:19:59.816479 ntpd[2042]: ntpd 4.2.8p17@1.4004-o Wed Sep  4 15:12:45 UTC 2024 (1): Starting
Sep  4 17:19:59.817733 ntpd[2042]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep  4 17:19:59.818198 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: ntpd 4.2.8p17@1.4004-o Wed Sep  4 15:12:45 UTC 2024 (1): Starting
Sep  4 17:19:59.818198 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep  4 17:19:59.818198 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: ----------------------------------------------------
Sep  4 17:19:59.818198 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: ntp-4 is maintained by Network Time Foundation,
Sep  4 17:19:59.818198 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep  4 17:19:59.818198 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: corporation.  Support and training for ntp-4 are
Sep  4 17:19:59.818198 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: available at https://www.nwtime.org/support
Sep  4 17:19:59.818198 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: ----------------------------------------------------
Sep  4 17:19:59.817751 ntpd[2042]: ----------------------------------------------------
Sep  4 17:19:59.817762 ntpd[2042]: ntp-4 is maintained by Network Time Foundation,
Sep  4 17:19:59.817772 ntpd[2042]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep  4 17:19:59.817782 ntpd[2042]: corporation.  Support and training for ntp-4 are
Sep  4 17:19:59.817791 ntpd[2042]: available at https://www.nwtime.org/support
Sep  4 17:19:59.817801 ntpd[2042]: ----------------------------------------------------
Sep  4 17:19:59.837586 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep  4 17:19:59.828334 ntpd[2042]: proto: precision = 0.085 usec (-23)
Sep  4 17:19:59.877813 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: proto: precision = 0.085 usec (-23)
Sep  4 17:19:59.877813 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: basedate set to 2024-08-23
Sep  4 17:19:59.877813 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: gps base set to 2024-08-25 (week 2329)
Sep  4 17:19:59.877813 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: Listen and drop on 0 v6wildcard [::]:123
Sep  4 17:19:59.877813 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep  4 17:19:59.877813 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: Listen normally on 2 lo 127.0.0.1:123
Sep  4 17:19:59.877813 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: Listen normally on 3 eth0 172.31.19.141:123
Sep  4 17:19:59.877813 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: Listen normally on 4 lo [::1]:123
Sep  4 17:19:59.877813 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: Listen normally on 5 eth0 [fe80::451:23ff:fefa:ec99%2]:123
Sep  4 17:19:59.877813 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: Listening on routing socket on fd #22 for interface updates
Sep  4 17:19:59.877813 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep  4 17:19:59.877813 ntpd[2042]:  4 Sep 17:19:59 ntpd[2042]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep  4 17:19:59.878800 jq[2087]: true
Sep  4 17:19:59.839448 ntpd[2042]: basedate set to 2024-08-23
Sep  4 17:19:59.855644 (ntainerd)[2090]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep  4 17:19:59.839472 ntpd[2042]: gps base set to 2024-08-25 (week 2329)
Sep  4 17:19:59.844059 ntpd[2042]: Listen and drop on 0 v6wildcard [::]:123
Sep  4 17:19:59.844194 ntpd[2042]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep  4 17:19:59.844400 ntpd[2042]: Listen normally on 2 lo 127.0.0.1:123
Sep  4 17:19:59.844439 ntpd[2042]: Listen normally on 3 eth0 172.31.19.141:123
Sep  4 17:19:59.844577 ntpd[2042]: Listen normally on 4 lo [::1]:123
Sep  4 17:19:59.844626 ntpd[2042]: Listen normally on 5 eth0 [fe80::451:23ff:fefa:ec99%2]:123
Sep  4 17:19:59.844666 ntpd[2042]: Listening on routing socket on fd #22 for interface updates
Sep  4 17:19:59.847878 ntpd[2042]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep  4 17:19:59.847910 ntpd[2042]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep  4 17:19:59.891255 extend-filesystems[2076]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep  4 17:19:59.891255 extend-filesystems[2076]: old_desc_blocks = 1, new_desc_blocks = 1
Sep  4 17:19:59.891255 extend-filesystems[2076]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep  4 17:19:59.895843 extend-filesystems[2036]: Resized filesystem in /dev/nvme0n1p9
Sep  4 17:19:59.898653 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep  4 17:19:59.907891 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep  4 17:19:59.946470 systemd-logind[2061]: Watching system buttons on /dev/input/event1 (Power Button)
Sep  4 17:19:59.946991 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep  4 17:19:59.951633 systemd-logind[2061]: Watching system buttons on /dev/input/event2 (Sleep Button)
Sep  4 17:19:59.951662 systemd-logind[2061]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep  4 17:19:59.956114 systemd-logind[2061]: New seat seat0.
Sep  4 17:19:59.968854 systemd[1]: Started systemd-logind.service - User Login Management.
Sep  4 17:19:59.972437 dbus-daemon[2034]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep  4 17:19:59.985819 systemd[1]: Started update-engine.service - Update Engine.
Sep  4 17:20:00.005854 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep  4 17:20:00.007460 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep  4 17:20:00.007530 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep  4 17:20:00.021230 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep  4 17:20:00.022470 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep  4 17:20:00.022529 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep  4 17:20:00.025381 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep  4 17:20:00.029767 tar[2081]: linux-amd64/helm
Sep  4 17:20:00.042716 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep  4 17:20:00.071189 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep  4 17:20:00.079486 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep  4 17:20:00.191527 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (2135)
Sep  4 17:20:00.254780 bash[2165]: Updated "/home/core/.ssh/authorized_keys"
Sep  4 17:20:00.258104 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep  4 17:20:00.274367 systemd[1]: Starting sshkeys.service...
Sep  4 17:20:00.344410 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep  4 17:20:00.356739 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep  4 17:20:00.492575 amazon-ssm-agent[2134]: Initializing new seelog logger
Sep  4 17:20:00.493617 amazon-ssm-agent[2134]: New Seelog Logger Creation Complete
Sep  4 17:20:00.493786 amazon-ssm-agent[2134]: 2024/09/04 17:20:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep  4 17:20:00.493786 amazon-ssm-agent[2134]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep  4 17:20:00.494232 amazon-ssm-agent[2134]: 2024/09/04 17:20:00 processing appconfig overrides
Sep  4 17:20:00.505335 amazon-ssm-agent[2134]: 2024/09/04 17:20:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep  4 17:20:00.505335 amazon-ssm-agent[2134]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep  4 17:20:00.505335 amazon-ssm-agent[2134]: 2024/09/04 17:20:00 processing appconfig overrides
Sep  4 17:20:00.505335 amazon-ssm-agent[2134]: 2024/09/04 17:20:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep  4 17:20:00.505335 amazon-ssm-agent[2134]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep  4 17:20:00.505335 amazon-ssm-agent[2134]: 2024/09/04 17:20:00 processing appconfig overrides
Sep  4 17:20:00.506660 amazon-ssm-agent[2134]: 2024-09-04 17:20:00 INFO Proxy environment variables:
Sep  4 17:20:00.531231 amazon-ssm-agent[2134]: 2024/09/04 17:20:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep  4 17:20:00.531231 amazon-ssm-agent[2134]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep  4 17:20:00.531371 amazon-ssm-agent[2134]: 2024/09/04 17:20:00 processing appconfig overrides
Sep  4 17:20:00.605717 locksmithd[2139]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep  4 17:20:00.623785 amazon-ssm-agent[2134]: 2024-09-04 17:20:00 INFO https_proxy:
Sep  4 17:20:00.655467 dbus-daemon[2034]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep  4 17:20:00.655667 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep  4 17:20:00.673544 dbus-daemon[2034]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2137 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep  4 17:20:00.690066 systemd[1]: Starting polkit.service - Authorization Manager...
Sep  4 17:20:00.707633 coreos-metadata[2184]: Sep 04 17:20:00.707 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep  4 17:20:00.711597 coreos-metadata[2184]: Sep 04 17:20:00.711 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep  4 17:20:00.721863 coreos-metadata[2184]: Sep 04 17:20:00.721 INFO Fetch successful
Sep  4 17:20:00.721863 coreos-metadata[2184]: Sep 04 17:20:00.721 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep  4 17:20:00.727538 amazon-ssm-agent[2134]: 2024-09-04 17:20:00 INFO http_proxy:
Sep  4 17:20:00.736823 coreos-metadata[2184]: Sep 04 17:20:00.736 INFO Fetch successful
Sep  4 17:20:00.752016 unknown[2184]: wrote ssh authorized keys file for user: core
Sep  4 17:20:00.771413 sshd_keygen[2099]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep  4 17:20:00.811873 polkitd[2261]: Started polkitd version 121
Sep  4 17:20:00.830889 amazon-ssm-agent[2134]: 2024-09-04 17:20:00 INFO no_proxy:
Sep  4 17:20:00.841988 update-ssh-keys[2273]: Updated "/home/core/.ssh/authorized_keys"
Sep  4 17:20:00.851844 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep  4 17:20:00.869799 systemd[1]: Finished sshkeys.service.
Sep  4 17:20:00.891887 polkitd[2261]: Loading rules from directory /etc/polkit-1/rules.d
Sep  4 17:20:00.891983 polkitd[2261]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep  4 17:20:00.896118 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep  4 17:20:00.913186 polkitd[2261]: Finished loading, compiling and executing 2 rules
Sep  4 17:20:00.917926 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep  4 17:20:00.922211 dbus-daemon[2034]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep  4 17:20:00.923745 systemd[1]: Started polkit.service - Authorization Manager.
Sep  4 17:20:00.927223 amazon-ssm-agent[2134]: 2024-09-04 17:20:00 INFO Checking if agent identity type OnPrem can be assumed
Sep  4 17:20:00.927999 polkitd[2261]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep  4 17:20:00.982239 systemd[1]: issuegen.service: Deactivated successfully.
Sep  4 17:20:00.982603 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep  4 17:20:00.998404 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep  4 17:20:01.031622 amazon-ssm-agent[2134]: 2024-09-04 17:20:00 INFO Checking if agent identity type EC2 can be assumed
Sep  4 17:20:01.043950 systemd-resolved[1976]: System hostname changed to 'ip-172-31-19-141'.
Sep  4 17:20:01.048427 systemd-hostnamed[2137]: Hostname set to <ip-172-31-19-141> (transient)
Sep  4 17:20:01.061361 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep  4 17:20:01.077906 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep  4 17:20:01.095616 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep  4 17:20:01.103274 systemd[1]: Reached target getty.target - Login Prompts.
Sep  4 17:20:01.131589 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO Agent will take identity from EC2
Sep  4 17:20:01.159705 containerd[2090]: time="2024-09-04T17:20:01.159283096Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Sep  4 17:20:01.232524 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep  4 17:20:01.237937 containerd[2090]: time="2024-09-04T17:20:01.237591779Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep  4 17:20:01.237937 containerd[2090]: time="2024-09-04T17:20:01.237691512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep  4 17:20:01.248738 containerd[2090]: time="2024-09-04T17:20:01.248645285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep  4 17:20:01.248738 containerd[2090]: time="2024-09-04T17:20:01.248736160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep  4 17:20:01.249556 containerd[2090]: time="2024-09-04T17:20:01.249305486Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep  4 17:20:01.252849 containerd[2090]: time="2024-09-04T17:20:01.251552687Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep  4 17:20:01.252849 containerd[2090]: time="2024-09-04T17:20:01.251724119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep  4 17:20:01.252849 containerd[2090]: time="2024-09-04T17:20:01.251792857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep  4 17:20:01.252849 containerd[2090]: time="2024-09-04T17:20:01.251811084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep  4 17:20:01.252849 containerd[2090]: time="2024-09-04T17:20:01.251891899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep  4 17:20:01.252849 containerd[2090]: time="2024-09-04T17:20:01.252153789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep  4 17:20:01.252849 containerd[2090]: time="2024-09-04T17:20:01.252175946Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep  4 17:20:01.252849 containerd[2090]: time="2024-09-04T17:20:01.252191083Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep  4 17:20:01.253807 containerd[2090]: time="2024-09-04T17:20:01.253771993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep  4 17:20:01.253807 containerd[2090]: time="2024-09-04T17:20:01.253807935Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep  4 17:20:01.253975 containerd[2090]: time="2024-09-04T17:20:01.253911297Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep  4 17:20:01.253975 containerd[2090]: time="2024-09-04T17:20:01.253928121Z" level=info msg="metadata content store policy set" policy=shared
Sep  4 17:20:01.280233 containerd[2090]: time="2024-09-04T17:20:01.280179306Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep  4 17:20:01.280233 containerd[2090]: time="2024-09-04T17:20:01.280243086Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep  4 17:20:01.280411 containerd[2090]: time="2024-09-04T17:20:01.280261737Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep  4 17:20:01.280411 containerd[2090]: time="2024-09-04T17:20:01.280301852Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep  4 17:20:01.280411 containerd[2090]: time="2024-09-04T17:20:01.280321787Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep  4 17:20:01.280411 containerd[2090]: time="2024-09-04T17:20:01.280335663Z" level=info msg="NRI interface is disabled by configuration."
Sep  4 17:20:01.280411 containerd[2090]: time="2024-09-04T17:20:01.280353027Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep  4 17:20:01.281882 containerd[2090]: time="2024-09-04T17:20:01.281802687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep  4 17:20:01.282532 containerd[2090]: time="2024-09-04T17:20:01.281991836Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep  4 17:20:01.282532 containerd[2090]: time="2024-09-04T17:20:01.282018541Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep  4 17:20:01.282532 containerd[2090]: time="2024-09-04T17:20:01.282043729Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep  4 17:20:01.282532 containerd[2090]: time="2024-09-04T17:20:01.282066243Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep  4 17:20:01.282532 containerd[2090]: time="2024-09-04T17:20:01.282093848Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep  4 17:20:01.282532 containerd[2090]: time="2024-09-04T17:20:01.282114226Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep  4 17:20:01.282532 containerd[2090]: time="2024-09-04T17:20:01.282134130Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep  4 17:20:01.282532 containerd[2090]: time="2024-09-04T17:20:01.282157222Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep  4 17:20:01.282532 containerd[2090]: time="2024-09-04T17:20:01.282178042Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep  4 17:20:01.282532 containerd[2090]: time="2024-09-04T17:20:01.282198821Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep  4 17:20:01.282532 containerd[2090]: time="2024-09-04T17:20:01.282220824Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep  4 17:20:01.282532 containerd[2090]: time="2024-09-04T17:20:01.282381087Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep  4 17:20:01.287992 containerd[2090]: time="2024-09-04T17:20:01.284335183Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep  4 17:20:01.287992 containerd[2090]: time="2024-09-04T17:20:01.284390255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.287992 containerd[2090]: time="2024-09-04T17:20:01.284413399Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep  4 17:20:01.287992 containerd[2090]: time="2024-09-04T17:20:01.284449468Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep  4 17:20:01.287992 containerd[2090]: time="2024-09-04T17:20:01.284511683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.287992 containerd[2090]: time="2024-09-04T17:20:01.284530305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.287992 containerd[2090]: time="2024-09-04T17:20:01.284549449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.287992 containerd[2090]: time="2024-09-04T17:20:01.284567722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.287992 containerd[2090]: time="2024-09-04T17:20:01.284587399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.287992 containerd[2090]: time="2024-09-04T17:20:01.284607263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.287992 containerd[2090]: time="2024-09-04T17:20:01.284678255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.287992 containerd[2090]: time="2024-09-04T17:20:01.284699669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.287992 containerd[2090]: time="2024-09-04T17:20:01.284721272Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep  4 17:20:01.291535 containerd[2090]: time="2024-09-04T17:20:01.290344548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.291535 containerd[2090]: time="2024-09-04T17:20:01.290397101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.291535 containerd[2090]: time="2024-09-04T17:20:01.290419951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.291535 containerd[2090]: time="2024-09-04T17:20:01.290441236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.291535 containerd[2090]: time="2024-09-04T17:20:01.290460381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.291535 containerd[2090]: time="2024-09-04T17:20:01.290482797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.291535 containerd[2090]: time="2024-09-04T17:20:01.290514205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.291535 containerd[2090]: time="2024-09-04T17:20:01.290535380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep  4 17:20:01.292233 containerd[2090]: time="2024-09-04T17:20:01.290928035Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep  4 17:20:01.292233 containerd[2090]: time="2024-09-04T17:20:01.291022630Z" level=info msg="Connect containerd service"
Sep  4 17:20:01.292233 containerd[2090]: time="2024-09-04T17:20:01.291074630Z" level=info msg="using legacy CRI server"
Sep  4 17:20:01.292233 containerd[2090]: time="2024-09-04T17:20:01.291084732Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep  4 17:20:01.292233 containerd[2090]: time="2024-09-04T17:20:01.291263316Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep  4 17:20:01.297617 containerd[2090]: time="2024-09-04T17:20:01.296536009Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep  4 17:20:01.297617 containerd[2090]: time="2024-09-04T17:20:01.296742065Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep  4 17:20:01.297617 containerd[2090]: time="2024-09-04T17:20:01.296780051Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep  4 17:20:01.297617 containerd[2090]: time="2024-09-04T17:20:01.296821380Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep  4 17:20:01.297617 containerd[2090]: time="2024-09-04T17:20:01.296842325Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep  4 17:20:01.297617 containerd[2090]: time="2024-09-04T17:20:01.297366423Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep  4 17:20:01.297617 containerd[2090]: time="2024-09-04T17:20:01.297425569Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep  4 17:20:01.297617 containerd[2090]: time="2024-09-04T17:20:01.297495336Z" level=info msg="Start subscribing containerd event"
Sep  4 17:20:01.297617 containerd[2090]: time="2024-09-04T17:20:01.297559045Z" level=info msg="Start recovering state"
Sep  4 17:20:01.298168 containerd[2090]: time="2024-09-04T17:20:01.297639352Z" level=info msg="Start event monitor"
Sep  4 17:20:01.298168 containerd[2090]: time="2024-09-04T17:20:01.297657623Z" level=info msg="Start snapshots syncer"
Sep  4 17:20:01.298168 containerd[2090]: time="2024-09-04T17:20:01.297668803Z" level=info msg="Start cni network conf syncer for default"
Sep  4 17:20:01.298168 containerd[2090]: time="2024-09-04T17:20:01.297680043Z" level=info msg="Start streaming server"
Sep  4 17:20:01.298168 containerd[2090]: time="2024-09-04T17:20:01.297867129Z" level=info msg="containerd successfully booted in 0.139775s"
Sep  4 17:20:01.300048 systemd[1]: Started containerd.service - containerd container runtime.
Sep  4 17:20:01.332723 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep  4 17:20:01.429862 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep  4 17:20:01.530692 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Sep  4 17:20:01.631581 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Sep  4 17:20:01.730214 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO [amazon-ssm-agent] Starting Core Agent
Sep  4 17:20:01.795477 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Sep  4 17:20:01.795477 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO [Registrar] Starting registrar module
Sep  4 17:20:01.795477 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Sep  4 17:20:01.795477 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO [EC2Identity] EC2 registration was successful.
Sep  4 17:20:01.795477 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO [CredentialRefresher] credentialRefresher has started
Sep  4 17:20:01.795477 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO [CredentialRefresher] Starting credentials refresher loop
Sep  4 17:20:01.795477 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep  4 17:20:01.800037 tar[2081]: linux-amd64/LICENSE
Sep  4 17:20:01.800785 tar[2081]: linux-amd64/README.md
Sep  4 17:20:01.826008 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep  4 17:20:01.830721 amazon-ssm-agent[2134]: 2024-09-04 17:20:01 INFO [CredentialRefresher] Next credential rotation will be in 32.383326552166665 minutes
Sep  4 17:20:02.137720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:20:02.141002 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep  4 17:20:02.260773 systemd[1]: Startup finished in 11.851s (kernel) + 8.554s (userspace) = 20.406s.
Sep  4 17:20:02.267265 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep  4 17:20:02.815812 amazon-ssm-agent[2134]: 2024-09-04 17:20:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep  4 17:20:02.917192 amazon-ssm-agent[2134]: 2024-09-04 17:20:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2344) started
Sep  4 17:20:03.017925 amazon-ssm-agent[2134]: 2024-09-04 17:20:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep  4 17:20:03.098232 kubelet[2333]: E0904 17:20:03.098073    2333 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep  4 17:20:03.101885 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep  4 17:20:03.102185 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep  4 17:20:07.246108 systemd-resolved[1976]: Clock change detected. Flushing caches.
Sep  4 17:20:08.435109 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep  4 17:20:08.443485 systemd[1]: Started sshd@0-172.31.19.141:22-139.178.68.195:36322.service - OpenSSH per-connection server daemon (139.178.68.195:36322).
Sep  4 17:20:08.630485 sshd[2358]: Accepted publickey for core from 139.178.68.195 port 36322 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:20:08.632731 sshd[2358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:20:08.643596 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep  4 17:20:08.654279 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep  4 17:20:08.659172 systemd-logind[2061]: New session 1 of user core.
Sep  4 17:20:08.675596 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep  4 17:20:08.683474 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep  4 17:20:08.689560 (systemd)[2364]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:20:08.875657 systemd[2364]: Queued start job for default target default.target.
Sep  4 17:20:08.876603 systemd[2364]: Created slice app.slice - User Application Slice.
Sep  4 17:20:08.876639 systemd[2364]: Reached target paths.target - Paths.
Sep  4 17:20:08.876660 systemd[2364]: Reached target timers.target - Timers.
Sep  4 17:20:08.882057 systemd[2364]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep  4 17:20:08.892371 systemd[2364]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep  4 17:20:08.892460 systemd[2364]: Reached target sockets.target - Sockets.
Sep  4 17:20:08.892480 systemd[2364]: Reached target basic.target - Basic System.
Sep  4 17:20:08.892532 systemd[2364]: Reached target default.target - Main User Target.
Sep  4 17:20:08.892670 systemd[2364]: Startup finished in 185ms.
Sep  4 17:20:08.893319 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep  4 17:20:08.903520 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep  4 17:20:09.050712 systemd[1]: Started sshd@1-172.31.19.141:22-139.178.68.195:36336.service - OpenSSH per-connection server daemon (139.178.68.195:36336).
Sep  4 17:20:09.211992 sshd[2376]: Accepted publickey for core from 139.178.68.195 port 36336 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:20:09.214235 sshd[2376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:20:09.220617 systemd-logind[2061]: New session 2 of user core.
Sep  4 17:20:09.230450 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep  4 17:20:09.352583 sshd[2376]: pam_unix(sshd:session): session closed for user core
Sep  4 17:20:09.357208 systemd[1]: sshd@1-172.31.19.141:22-139.178.68.195:36336.service: Deactivated successfully.
Sep  4 17:20:09.361647 systemd[1]: session-2.scope: Deactivated successfully.
Sep  4 17:20:09.362706 systemd-logind[2061]: Session 2 logged out. Waiting for processes to exit.
Sep  4 17:20:09.364124 systemd-logind[2061]: Removed session 2.
Sep  4 17:20:09.381398 systemd[1]: Started sshd@2-172.31.19.141:22-139.178.68.195:36344.service - OpenSSH per-connection server daemon (139.178.68.195:36344).
Sep  4 17:20:09.540375 sshd[2384]: Accepted publickey for core from 139.178.68.195 port 36344 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:20:09.541881 sshd[2384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:20:09.547466 systemd-logind[2061]: New session 3 of user core.
Sep  4 17:20:09.557268 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep  4 17:20:09.674034 sshd[2384]: pam_unix(sshd:session): session closed for user core
Sep  4 17:20:09.677687 systemd[1]: sshd@2-172.31.19.141:22-139.178.68.195:36344.service: Deactivated successfully.
Sep  4 17:20:09.682680 systemd[1]: session-3.scope: Deactivated successfully.
Sep  4 17:20:09.683465 systemd-logind[2061]: Session 3 logged out. Waiting for processes to exit.
Sep  4 17:20:09.685142 systemd-logind[2061]: Removed session 3.
Sep  4 17:20:09.702502 systemd[1]: Started sshd@3-172.31.19.141:22-139.178.68.195:36346.service - OpenSSH per-connection server daemon (139.178.68.195:36346).
Sep  4 17:20:09.865695 sshd[2392]: Accepted publickey for core from 139.178.68.195 port 36346 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:20:09.867380 sshd[2392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:20:09.876813 systemd-logind[2061]: New session 4 of user core.
Sep  4 17:20:09.885636 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep  4 17:20:10.019456 sshd[2392]: pam_unix(sshd:session): session closed for user core
Sep  4 17:20:10.023682 systemd[1]: sshd@3-172.31.19.141:22-139.178.68.195:36346.service: Deactivated successfully.
Sep  4 17:20:10.034546 systemd-logind[2061]: Session 4 logged out. Waiting for processes to exit.
Sep  4 17:20:10.036118 systemd[1]: session-4.scope: Deactivated successfully.
Sep  4 17:20:10.038184 systemd-logind[2061]: Removed session 4.
Sep  4 17:20:10.047654 systemd[1]: Started sshd@4-172.31.19.141:22-139.178.68.195:36350.service - OpenSSH per-connection server daemon (139.178.68.195:36350).
Sep  4 17:20:10.215307 sshd[2400]: Accepted publickey for core from 139.178.68.195 port 36350 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:20:10.217381 sshd[2400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:20:10.236310 systemd-logind[2061]: New session 5 of user core.
Sep  4 17:20:10.243375 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep  4 17:20:10.391883 sudo[2404]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep  4 17:20:10.392441 sudo[2404]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep  4 17:20:10.405792 sudo[2404]: pam_unix(sudo:session): session closed for user root
Sep  4 17:20:10.429435 sshd[2400]: pam_unix(sshd:session): session closed for user core
Sep  4 17:20:10.434732 systemd[1]: sshd@4-172.31.19.141:22-139.178.68.195:36350.service: Deactivated successfully.
Sep  4 17:20:10.441269 systemd-logind[2061]: Session 5 logged out. Waiting for processes to exit.
Sep  4 17:20:10.442003 systemd[1]: session-5.scope: Deactivated successfully.
Sep  4 17:20:10.443470 systemd-logind[2061]: Removed session 5.
Sep  4 17:20:10.458950 systemd[1]: Started sshd@5-172.31.19.141:22-139.178.68.195:36356.service - OpenSSH per-connection server daemon (139.178.68.195:36356).
Sep  4 17:20:10.630736 sshd[2409]: Accepted publickey for core from 139.178.68.195 port 36356 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:20:10.632468 sshd[2409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:20:10.638066 systemd-logind[2061]: New session 6 of user core.
Sep  4 17:20:10.646355 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep  4 17:20:10.759595 sudo[2414]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep  4 17:20:10.760073 sudo[2414]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep  4 17:20:10.771518 sudo[2414]: pam_unix(sudo:session): session closed for user root
Sep  4 17:20:10.786220 sudo[2413]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep  4 17:20:10.786773 sudo[2413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep  4 17:20:10.814458 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep  4 17:20:10.816747 auditctl[2417]: No rules
Sep  4 17:20:10.817329 systemd[1]: audit-rules.service: Deactivated successfully.
Sep  4 17:20:10.817663 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep  4 17:20:10.824283 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep  4 17:20:10.866602 augenrules[2436]: No rules
Sep  4 17:20:10.868604 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep  4 17:20:10.870886 sudo[2413]: pam_unix(sudo:session): session closed for user root
Sep  4 17:20:10.895395 sshd[2409]: pam_unix(sshd:session): session closed for user core
Sep  4 17:20:10.900294 systemd[1]: sshd@5-172.31.19.141:22-139.178.68.195:36356.service: Deactivated successfully.
Sep  4 17:20:10.907844 systemd[1]: session-6.scope: Deactivated successfully.
Sep  4 17:20:10.908979 systemd-logind[2061]: Session 6 logged out. Waiting for processes to exit.
Sep  4 17:20:10.910249 systemd-logind[2061]: Removed session 6.
Sep  4 17:20:10.931044 systemd[1]: Started sshd@6-172.31.19.141:22-139.178.68.195:36362.service - OpenSSH per-connection server daemon (139.178.68.195:36362).
Sep  4 17:20:11.088198 sshd[2445]: Accepted publickey for core from 139.178.68.195 port 36362 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:20:11.089744 sshd[2445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:20:11.097601 systemd-logind[2061]: New session 7 of user core.
Sep  4 17:20:11.104812 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep  4 17:20:11.207532 sudo[2449]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep  4 17:20:11.207986 sudo[2449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep  4 17:20:11.388316 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep  4 17:20:11.400685 (dockerd)[2458]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep  4 17:20:11.877085 dockerd[2458]: time="2024-09-04T17:20:11.877026457Z" level=info msg="Starting up"
Sep  4 17:20:11.907786 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1257619042-merged.mount: Deactivated successfully.
Sep  4 17:20:12.849532 dockerd[2458]: time="2024-09-04T17:20:12.849484949Z" level=info msg="Loading containers: start."
Sep  4 17:20:13.062006 kernel: Initializing XFRM netlink socket
Sep  4 17:20:13.123184 (udev-worker)[2469]: Network interface NamePolicy= disabled on kernel command line.
Sep  4 17:20:13.219504 systemd-networkd[1653]: docker0: Link UP
Sep  4 17:20:13.237720 dockerd[2458]: time="2024-09-04T17:20:13.237677442Z" level=info msg="Loading containers: done."
Sep  4 17:20:13.323369 dockerd[2458]: time="2024-09-04T17:20:13.323319899Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep  4 17:20:13.323591 dockerd[2458]: time="2024-09-04T17:20:13.323565245Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Sep  4 17:20:13.323713 dockerd[2458]: time="2024-09-04T17:20:13.323691175Z" level=info msg="Daemon has completed initialization"
Sep  4 17:20:13.355045 dockerd[2458]: time="2024-09-04T17:20:13.354987709Z" level=info msg="API listen on /run/docker.sock"
Sep  4 17:20:13.355363 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep  4 17:20:13.611421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep  4 17:20:13.622593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:20:14.167534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:20:14.169292 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep  4 17:20:14.338654 kubelet[2597]: E0904 17:20:14.338576    2597 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep  4 17:20:14.343951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep  4 17:20:14.344161 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep  4 17:20:14.355676 containerd[2090]: time="2024-09-04T17:20:14.355637925Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\""
Sep  4 17:20:15.011842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1702674342.mount: Deactivated successfully.
Sep  4 17:20:19.304593 containerd[2090]: time="2024-09-04T17:20:19.304535705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:19.306065 containerd[2090]: time="2024-09-04T17:20:19.305967180Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=34530735"
Sep  4 17:20:19.309096 containerd[2090]: time="2024-09-04T17:20:19.307399840Z" level=info msg="ImageCreate event name:\"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:19.311411 containerd[2090]: time="2024-09-04T17:20:19.311374282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:19.312743 containerd[2090]: time="2024-09-04T17:20:19.312702903Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"34527535\" in 4.957022163s"
Sep  4 17:20:19.312832 containerd[2090]: time="2024-09-04T17:20:19.312752378Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\""
Sep  4 17:20:19.343152 containerd[2090]: time="2024-09-04T17:20:19.343113684Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\""
Sep  4 17:20:23.595143 containerd[2090]: time="2024-09-04T17:20:23.595085889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:23.596662 containerd[2090]: time="2024-09-04T17:20:23.596460266Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=31849709"
Sep  4 17:20:23.598448 containerd[2090]: time="2024-09-04T17:20:23.598390820Z" level=info msg="ImageCreate event name:\"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:23.602199 containerd[2090]: time="2024-09-04T17:20:23.602135369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:23.603570 containerd[2090]: time="2024-09-04T17:20:23.603402271Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"33399655\" in 4.260093832s"
Sep  4 17:20:23.603570 containerd[2090]: time="2024-09-04T17:20:23.603450707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\""
Sep  4 17:20:23.630091 containerd[2090]: time="2024-09-04T17:20:23.629973095Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\""
Sep  4 17:20:24.362222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep  4 17:20:24.369227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:20:24.782539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:20:24.784436 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep  4 17:20:24.871966 kubelet[2689]: E0904 17:20:24.871852    2689 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep  4 17:20:24.874173 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep  4 17:20:24.874325 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep  4 17:20:25.567435 containerd[2090]: time="2024-09-04T17:20:25.567383016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:25.569954 containerd[2090]: time="2024-09-04T17:20:25.569873827Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=17097777"
Sep  4 17:20:25.572677 containerd[2090]: time="2024-09-04T17:20:25.572601819Z" level=info msg="ImageCreate event name:\"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:25.576109 containerd[2090]: time="2024-09-04T17:20:25.576032229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:25.580942 containerd[2090]: time="2024-09-04T17:20:25.580494742Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"18647741\" in 1.950475647s"
Sep  4 17:20:25.580942 containerd[2090]: time="2024-09-04T17:20:25.580548330Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\""
Sep  4 17:20:25.611702 containerd[2090]: time="2024-09-04T17:20:25.611666105Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\""
Sep  4 17:20:26.945223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount378581231.mount: Deactivated successfully.
Sep  4 17:20:28.357163 containerd[2090]: time="2024-09-04T17:20:28.357104274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:28.359013 containerd[2090]: time="2024-09-04T17:20:28.358564919Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=28303449"
Sep  4 17:20:28.361068 containerd[2090]: time="2024-09-04T17:20:28.361019645Z" level=info msg="ImageCreate event name:\"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:28.364790 containerd[2090]: time="2024-09-04T17:20:28.364009381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:28.364790 containerd[2090]: time="2024-09-04T17:20:28.364643019Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"28302468\" in 2.752937845s"
Sep  4 17:20:28.364790 containerd[2090]: time="2024-09-04T17:20:28.364684008Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\""
Sep  4 17:20:28.398997 containerd[2090]: time="2024-09-04T17:20:28.398956894Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep  4 17:20:28.891247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3137003554.mount: Deactivated successfully.
Sep  4 17:20:28.901076 containerd[2090]: time="2024-09-04T17:20:28.901024346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:28.903011 containerd[2090]: time="2024-09-04T17:20:28.902644446Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Sep  4 17:20:28.909416 containerd[2090]: time="2024-09-04T17:20:28.909369140Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:28.925145 containerd[2090]: time="2024-09-04T17:20:28.925095151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:28.926564 containerd[2090]: time="2024-09-04T17:20:28.926511874Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 527.511796ms"
Sep  4 17:20:28.926564 containerd[2090]: time="2024-09-04T17:20:28.926556724Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Sep  4 17:20:28.951764 containerd[2090]: time="2024-09-04T17:20:28.951729046Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Sep  4 17:20:29.487932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2399939196.mount: Deactivated successfully.
Sep  4 17:20:31.511728 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep  4 17:20:34.469017 containerd[2090]: time="2024-09-04T17:20:34.468650722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:34.471243 containerd[2090]: time="2024-09-04T17:20:34.471167142Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Sep  4 17:20:34.473405 containerd[2090]: time="2024-09-04T17:20:34.473345085Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:34.477940 containerd[2090]: time="2024-09-04T17:20:34.477461847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:34.478804 containerd[2090]: time="2024-09-04T17:20:34.478762241Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.52696896s"
Sep  4 17:20:34.478903 containerd[2090]: time="2024-09-04T17:20:34.478814313Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Sep  4 17:20:34.503873 containerd[2090]: time="2024-09-04T17:20:34.503835286Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Sep  4 17:20:35.015422 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep  4 17:20:35.017836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3888767456.mount: Deactivated successfully.
Sep  4 17:20:35.040169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:20:35.603134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:20:35.615895 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep  4 17:20:35.771637 kubelet[2806]: E0904 17:20:35.770911    2806 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep  4 17:20:35.778091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep  4 17:20:35.779665 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep  4 17:20:36.129628 containerd[2090]: time="2024-09-04T17:20:36.129576754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:36.130946 containerd[2090]: time="2024-09-04T17:20:36.130836256Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Sep  4 17:20:36.132642 containerd[2090]: time="2024-09-04T17:20:36.132586105Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:36.137374 containerd[2090]: time="2024-09-04T17:20:36.135899584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:20:36.137374 containerd[2090]: time="2024-09-04T17:20:36.136959372Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.63308361s"
Sep  4 17:20:36.137374 containerd[2090]: time="2024-09-04T17:20:36.137033239Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Sep  4 17:20:39.780295 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:20:39.798352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:20:39.855873 systemd[1]: Reloading requested from client PID 2879 ('systemctl') (unit session-7.scope)...
Sep  4 17:20:39.855892 systemd[1]: Reloading...
Sep  4 17:20:40.013252 zram_generator::config[2917]: No configuration found.
Sep  4 17:20:40.278885 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 17:20:40.418096 systemd[1]: Reloading finished in 561 ms.
Sep  4 17:20:40.501878 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep  4 17:20:40.502105 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep  4 17:20:40.502517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:20:40.508576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:20:40.943234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:20:40.955595 (kubelet)[2989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep  4 17:20:41.017878 kubelet[2989]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 17:20:41.018396 kubelet[2989]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep  4 17:20:41.018396 kubelet[2989]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 17:20:41.019987 kubelet[2989]: I0904 17:20:41.019309    2989 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep  4 17:20:41.741636 kubelet[2989]: I0904 17:20:41.741594    2989 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Sep  4 17:20:41.741636 kubelet[2989]: I0904 17:20:41.741625    2989 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep  4 17:20:41.741897 kubelet[2989]: I0904 17:20:41.741876    2989 server.go:895] "Client rotation is on, will bootstrap in background"
Sep  4 17:20:41.787834 kubelet[2989]: E0904 17:20:41.787595    2989 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:41.787834 kubelet[2989]: I0904 17:20:41.787683    2989 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep  4 17:20:41.808948 kubelet[2989]: I0904 17:20:41.807239    2989 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Sep  4 17:20:41.811607 kubelet[2989]: I0904 17:20:41.811573    2989 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep  4 17:20:41.812229 kubelet[2989]: I0904 17:20:41.812206    2989 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep  4 17:20:41.813067 kubelet[2989]: I0904 17:20:41.813040    2989 topology_manager.go:138] "Creating topology manager with none policy"
Sep  4 17:20:41.813067 kubelet[2989]: I0904 17:20:41.813069    2989 container_manager_linux.go:301] "Creating device plugin manager"
Sep  4 17:20:41.814612 kubelet[2989]: I0904 17:20:41.814585    2989 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 17:20:41.816413 kubelet[2989]: I0904 17:20:41.816388    2989 kubelet.go:393] "Attempting to sync node with API server"
Sep  4 17:20:41.816496 kubelet[2989]: I0904 17:20:41.816420    2989 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep  4 17:20:41.816496 kubelet[2989]: I0904 17:20:41.816457    2989 kubelet.go:309] "Adding apiserver pod source"
Sep  4 17:20:41.816496 kubelet[2989]: I0904 17:20:41.816477    2989 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep  4 17:20:41.820302 kubelet[2989]: I0904 17:20:41.820278    2989 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep  4 17:20:41.823127 kubelet[2989]: W0904 17:20:41.823073    2989 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.19.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-141&limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:41.823273 kubelet[2989]: E0904 17:20:41.823142    2989 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-141&limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:41.823336 kubelet[2989]: W0904 17:20:41.823275    2989 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.19.141:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:41.823336 kubelet[2989]: E0904 17:20:41.823323    2989 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.141:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:41.826142 kubelet[2989]: W0904 17:20:41.826115    2989 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep  4 17:20:41.827057 kubelet[2989]: I0904 17:20:41.826822    2989 server.go:1232] "Started kubelet"
Sep  4 17:20:41.827250 kubelet[2989]: I0904 17:20:41.827224    2989 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep  4 17:20:41.833937 kubelet[2989]: I0904 17:20:41.832159    2989 server.go:462] "Adding debug handlers to kubelet server"
Sep  4 17:20:41.833937 kubelet[2989]: I0904 17:20:41.832409    2989 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep  4 17:20:41.833937 kubelet[2989]: I0904 17:20:41.832781    2989 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep  4 17:20:41.833937 kubelet[2989]: E0904 17:20:41.833034    2989 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-19-141.17f21a399385b917", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-19-141", UID:"ip-172-31-19-141", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-19-141"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 20, 41, 826793751, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 20, 41, 826793751, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-19-141"}': 'Post "https://172.31.19.141:6443/api/v1/namespaces/default/events": dial tcp 172.31.19.141:6443: connect: connection refused'(may retry after sleeping)
Sep  4 17:20:41.835110 kubelet[2989]: I0904 17:20:41.835091    2989 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep  4 17:20:41.843395 kubelet[2989]: E0904 17:20:41.843058    2989 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Sep  4 17:20:41.843395 kubelet[2989]: E0904 17:20:41.843097    2989 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep  4 17:20:41.845660 kubelet[2989]: E0904 17:20:41.845636    2989 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-19-141\" not found"
Sep  4 17:20:41.850664 kubelet[2989]: I0904 17:20:41.850610    2989 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep  4 17:20:41.850805 kubelet[2989]: I0904 17:20:41.850739    2989 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep  4 17:20:41.850855 kubelet[2989]: I0904 17:20:41.850816    2989 reconciler_new.go:29] "Reconciler: start to sync state"
Sep  4 17:20:41.852907 kubelet[2989]: W0904 17:20:41.852855    2989 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.19.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:41.853080 kubelet[2989]: E0904 17:20:41.852938    2989 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:41.853080 kubelet[2989]: E0904 17:20:41.853041    2989 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-141?timeout=10s\": dial tcp 172.31.19.141:6443: connect: connection refused" interval="200ms"
Sep  4 17:20:41.881229 kubelet[2989]: I0904 17:20:41.881198    2989 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep  4 17:20:41.887040 kubelet[2989]: I0904 17:20:41.886834    2989 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep  4 17:20:41.887040 kubelet[2989]: I0904 17:20:41.886886    2989 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep  4 17:20:41.890796 kubelet[2989]: I0904 17:20:41.890560    2989 kubelet.go:2303] "Starting kubelet main sync loop"
Sep  4 17:20:41.890796 kubelet[2989]: E0904 17:20:41.890694    2989 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep  4 17:20:41.892691 kubelet[2989]: W0904 17:20:41.892581    2989 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.19.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:41.892691 kubelet[2989]: E0904 17:20:41.892621    2989 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:41.934233 kubelet[2989]: I0904 17:20:41.934205    2989 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep  4 17:20:41.934233 kubelet[2989]: I0904 17:20:41.934228    2989 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep  4 17:20:41.934418 kubelet[2989]: I0904 17:20:41.934246    2989 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 17:20:41.936961 kubelet[2989]: I0904 17:20:41.936930    2989 policy_none.go:49] "None policy: Start"
Sep  4 17:20:41.938724 kubelet[2989]: I0904 17:20:41.938383    2989 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep  4 17:20:41.938724 kubelet[2989]: I0904 17:20:41.938421    2989 state_mem.go:35] "Initializing new in-memory state store"
Sep  4 17:20:41.945123 kubelet[2989]: I0904 17:20:41.945077    2989 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep  4 17:20:41.947985 kubelet[2989]: I0904 17:20:41.947622    2989 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep  4 17:20:41.949297 kubelet[2989]: E0904 17:20:41.949274    2989 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-141\" not found"
Sep  4 17:20:41.953621 kubelet[2989]: I0904 17:20:41.953588    2989 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-141"
Sep  4 17:20:41.954314 kubelet[2989]: E0904 17:20:41.954289    2989 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.19.141:6443/api/v1/nodes\": dial tcp 172.31.19.141:6443: connect: connection refused" node="ip-172-31-19-141"
Sep  4 17:20:41.991896 kubelet[2989]: I0904 17:20:41.991646    2989 topology_manager.go:215] "Topology Admit Handler" podUID="8153a58453f522101414d5f0123ed375" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-141"
Sep  4 17:20:41.994572 kubelet[2989]: I0904 17:20:41.993948    2989 topology_manager.go:215] "Topology Admit Handler" podUID="f354c8fdf06c56a2f619de14d3020513" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-141"
Sep  4 17:20:41.996734 kubelet[2989]: I0904 17:20:41.996493    2989 topology_manager.go:215] "Topology Admit Handler" podUID="b1c3664190a93a499601bd51c1e40e04" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-141"
Sep  4 17:20:42.053571 kubelet[2989]: E0904 17:20:42.053480    2989 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-141?timeout=10s\": dial tcp 172.31.19.141:6443: connect: connection refused" interval="400ms"
Sep  4 17:20:42.152037 kubelet[2989]: I0904 17:20:42.151976    2989 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8153a58453f522101414d5f0123ed375-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-141\" (UID: \"8153a58453f522101414d5f0123ed375\") " pod="kube-system/kube-apiserver-ip-172-31-19-141"
Sep  4 17:20:42.152281 kubelet[2989]: I0904 17:20:42.152152    2989 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f354c8fdf06c56a2f619de14d3020513-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-141\" (UID: \"f354c8fdf06c56a2f619de14d3020513\") " pod="kube-system/kube-controller-manager-ip-172-31-19-141"
Sep  4 17:20:42.152281 kubelet[2989]: I0904 17:20:42.152187    2989 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f354c8fdf06c56a2f619de14d3020513-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-141\" (UID: \"f354c8fdf06c56a2f619de14d3020513\") " pod="kube-system/kube-controller-manager-ip-172-31-19-141"
Sep  4 17:20:42.152281 kubelet[2989]: I0904 17:20:42.152216    2989 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8153a58453f522101414d5f0123ed375-ca-certs\") pod \"kube-apiserver-ip-172-31-19-141\" (UID: \"8153a58453f522101414d5f0123ed375\") " pod="kube-system/kube-apiserver-ip-172-31-19-141"
Sep  4 17:20:42.152281 kubelet[2989]: I0904 17:20:42.152243    2989 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8153a58453f522101414d5f0123ed375-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-141\" (UID: \"8153a58453f522101414d5f0123ed375\") " pod="kube-system/kube-apiserver-ip-172-31-19-141"
Sep  4 17:20:42.152281 kubelet[2989]: I0904 17:20:42.152269    2989 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f354c8fdf06c56a2f619de14d3020513-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-141\" (UID: \"f354c8fdf06c56a2f619de14d3020513\") " pod="kube-system/kube-controller-manager-ip-172-31-19-141"
Sep  4 17:20:42.152636 kubelet[2989]: I0904 17:20:42.152314    2989 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f354c8fdf06c56a2f619de14d3020513-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-141\" (UID: \"f354c8fdf06c56a2f619de14d3020513\") " pod="kube-system/kube-controller-manager-ip-172-31-19-141"
Sep  4 17:20:42.152636 kubelet[2989]: I0904 17:20:42.152456    2989 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f354c8fdf06c56a2f619de14d3020513-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-141\" (UID: \"f354c8fdf06c56a2f619de14d3020513\") " pod="kube-system/kube-controller-manager-ip-172-31-19-141"
Sep  4 17:20:42.152636 kubelet[2989]: I0904 17:20:42.152499    2989 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b1c3664190a93a499601bd51c1e40e04-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-141\" (UID: \"b1c3664190a93a499601bd51c1e40e04\") " pod="kube-system/kube-scheduler-ip-172-31-19-141"
Sep  4 17:20:42.156354 kubelet[2989]: I0904 17:20:42.156327    2989 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-141"
Sep  4 17:20:42.156725 kubelet[2989]: E0904 17:20:42.156702    2989 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.19.141:6443/api/v1/nodes\": dial tcp 172.31.19.141:6443: connect: connection refused" node="ip-172-31-19-141"
Sep  4 17:20:42.305606 containerd[2090]: time="2024-09-04T17:20:42.305212378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-141,Uid:b1c3664190a93a499601bd51c1e40e04,Namespace:kube-system,Attempt:0,}"
Sep  4 17:20:42.305606 containerd[2090]: time="2024-09-04T17:20:42.305298925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-141,Uid:8153a58453f522101414d5f0123ed375,Namespace:kube-system,Attempt:0,}"
Sep  4 17:20:42.307115 containerd[2090]: time="2024-09-04T17:20:42.307080670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-141,Uid:f354c8fdf06c56a2f619de14d3020513,Namespace:kube-system,Attempt:0,}"
Sep  4 17:20:42.455069 kubelet[2989]: E0904 17:20:42.455032    2989 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-141?timeout=10s\": dial tcp 172.31.19.141:6443: connect: connection refused" interval="800ms"
Sep  4 17:20:42.559848 kubelet[2989]: I0904 17:20:42.559738    2989 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-141"
Sep  4 17:20:42.560252 kubelet[2989]: E0904 17:20:42.560165    2989 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.19.141:6443/api/v1/nodes\": dial tcp 172.31.19.141:6443: connect: connection refused" node="ip-172-31-19-141"
Sep  4 17:20:42.737517 kubelet[2989]: W0904 17:20:42.737420    2989 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.19.141:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:42.737517 kubelet[2989]: E0904 17:20:42.737523    2989 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.141:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:42.924053 kubelet[2989]: W0904 17:20:42.923986    2989 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.19.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-141&limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:42.924053 kubelet[2989]: E0904 17:20:42.924057    2989 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-141&limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:43.032628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015728083.mount: Deactivated successfully.
Sep  4 17:20:43.041254 containerd[2090]: time="2024-09-04T17:20:43.041195129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 17:20:43.042724 containerd[2090]: time="2024-09-04T17:20:43.042683281Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 17:20:43.044093 containerd[2090]: time="2024-09-04T17:20:43.044048751Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep  4 17:20:43.045426 containerd[2090]: time="2024-09-04T17:20:43.045388433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep  4 17:20:43.047146 containerd[2090]: time="2024-09-04T17:20:43.047105750Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 17:20:43.048634 containerd[2090]: time="2024-09-04T17:20:43.048592679Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 17:20:43.050003 containerd[2090]: time="2024-09-04T17:20:43.049771488Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep  4 17:20:43.052966 containerd[2090]: time="2024-09-04T17:20:43.052182378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 17:20:43.054901 containerd[2090]: time="2024-09-04T17:20:43.054703458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 749.340258ms"
Sep  4 17:20:43.057413 containerd[2090]: time="2024-09-04T17:20:43.056599702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 749.182137ms"
Sep  4 17:20:43.057638 kubelet[2989]: W0904 17:20:43.057485    2989 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.19.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:43.057638 kubelet[2989]: E0904 17:20:43.057615    2989 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:43.065518 containerd[2090]: time="2024-09-04T17:20:43.065474639Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 760.147829ms"
Sep  4 17:20:43.204664 kubelet[2989]: W0904 17:20:43.204462    2989 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.19.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:43.204664 kubelet[2989]: E0904 17:20:43.204530    2989 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:43.266812 kubelet[2989]: E0904 17:20:43.266621    2989 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-141?timeout=10s\": dial tcp 172.31.19.141:6443: connect: connection refused" interval="1.6s"
Sep  4 17:20:43.315627 containerd[2090]: time="2024-09-04T17:20:43.315205103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:20:43.315627 containerd[2090]: time="2024-09-04T17:20:43.315293918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:20:43.315627 containerd[2090]: time="2024-09-04T17:20:43.315329026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:20:43.315627 containerd[2090]: time="2024-09-04T17:20:43.315355938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:20:43.318298 containerd[2090]: time="2024-09-04T17:20:43.318193458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:20:43.318403 containerd[2090]: time="2024-09-04T17:20:43.318284405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:20:43.318403 containerd[2090]: time="2024-09-04T17:20:43.318313586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:20:43.318403 containerd[2090]: time="2024-09-04T17:20:43.318349788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:20:43.323050 containerd[2090]: time="2024-09-04T17:20:43.322654086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:20:43.337253 containerd[2090]: time="2024-09-04T17:20:43.336621954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:20:43.337253 containerd[2090]: time="2024-09-04T17:20:43.336673985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:20:43.337253 containerd[2090]: time="2024-09-04T17:20:43.336692383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:20:43.370190 kubelet[2989]: I0904 17:20:43.366226    2989 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-141"
Sep  4 17:20:43.371745 kubelet[2989]: E0904 17:20:43.371194    2989 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.19.141:6443/api/v1/nodes\": dial tcp 172.31.19.141:6443: connect: connection refused" node="ip-172-31-19-141"
Sep  4 17:20:43.485950 containerd[2090]: time="2024-09-04T17:20:43.485801269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-141,Uid:b1c3664190a93a499601bd51c1e40e04,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7fd0b1ce60aaf8c07e934028b747bfee14f94f3d6774c4e1a8951b79bc20b0f\""
Sep  4 17:20:43.493331 containerd[2090]: time="2024-09-04T17:20:43.493292256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-141,Uid:f354c8fdf06c56a2f619de14d3020513,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac84d114dbd91c6d113632d29ec669d3219bf7b4dd0c4f2a1f21abc2ca7c0786\""
Sep  4 17:20:43.502349 containerd[2090]: time="2024-09-04T17:20:43.502302099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-141,Uid:8153a58453f522101414d5f0123ed375,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c13f4d965b0c00429ce94bd7781755b762142203462bf422d127723aba51272\""
Sep  4 17:20:43.506012 containerd[2090]: time="2024-09-04T17:20:43.504985746Z" level=info msg="CreateContainer within sandbox \"a7fd0b1ce60aaf8c07e934028b747bfee14f94f3d6774c4e1a8951b79bc20b0f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep  4 17:20:43.508485 containerd[2090]: time="2024-09-04T17:20:43.508449084Z" level=info msg="CreateContainer within sandbox \"ac84d114dbd91c6d113632d29ec669d3219bf7b4dd0c4f2a1f21abc2ca7c0786\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep  4 17:20:43.517237 containerd[2090]: time="2024-09-04T17:20:43.517193352Z" level=info msg="CreateContainer within sandbox \"5c13f4d965b0c00429ce94bd7781755b762142203462bf422d127723aba51272\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep  4 17:20:43.538185 containerd[2090]: time="2024-09-04T17:20:43.538132439Z" level=info msg="CreateContainer within sandbox \"a7fd0b1ce60aaf8c07e934028b747bfee14f94f3d6774c4e1a8951b79bc20b0f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"33c3c5b34cc04e178f07e342b2ce42d7e79d5feba122ad4868e1d8128dc7075e\""
Sep  4 17:20:43.539136 containerd[2090]: time="2024-09-04T17:20:43.539102480Z" level=info msg="StartContainer for \"33c3c5b34cc04e178f07e342b2ce42d7e79d5feba122ad4868e1d8128dc7075e\""
Sep  4 17:20:43.551894 containerd[2090]: time="2024-09-04T17:20:43.551768332Z" level=info msg="CreateContainer within sandbox \"ac84d114dbd91c6d113632d29ec669d3219bf7b4dd0c4f2a1f21abc2ca7c0786\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"357292d98d0b316a03921ac220f2d79c8c86f43dbe8785a0b8e477c2e514e59c\""
Sep  4 17:20:43.553449 containerd[2090]: time="2024-09-04T17:20:43.553122096Z" level=info msg="StartContainer for \"357292d98d0b316a03921ac220f2d79c8c86f43dbe8785a0b8e477c2e514e59c\""
Sep  4 17:20:43.555162 containerd[2090]: time="2024-09-04T17:20:43.555127047Z" level=info msg="CreateContainer within sandbox \"5c13f4d965b0c00429ce94bd7781755b762142203462bf422d127723aba51272\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"719abd359c9a432455f9c457326e29f0f6030ca44788c080beb09232d78d9b95\""
Sep  4 17:20:43.562130 containerd[2090]: time="2024-09-04T17:20:43.562089230Z" level=info msg="StartContainer for \"719abd359c9a432455f9c457326e29f0f6030ca44788c080beb09232d78d9b95\""
Sep  4 17:20:43.739870 containerd[2090]: time="2024-09-04T17:20:43.736538158Z" level=info msg="StartContainer for \"33c3c5b34cc04e178f07e342b2ce42d7e79d5feba122ad4868e1d8128dc7075e\" returns successfully"
Sep  4 17:20:43.895167 containerd[2090]: time="2024-09-04T17:20:43.894061208Z" level=info msg="StartContainer for \"357292d98d0b316a03921ac220f2d79c8c86f43dbe8785a0b8e477c2e514e59c\" returns successfully"
Sep  4 17:20:43.916713 containerd[2090]: time="2024-09-04T17:20:43.916499834Z" level=info msg="StartContainer for \"719abd359c9a432455f9c457326e29f0f6030ca44788c080beb09232d78d9b95\" returns successfully"
Sep  4 17:20:43.970379 kubelet[2989]: E0904 17:20:43.970346    2989 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.141:6443: connect: connection refused
Sep  4 17:20:44.976062 kubelet[2989]: I0904 17:20:44.976028    2989 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-141"
Sep  4 17:20:45.296140 update_engine[2064]: I0904 17:20:45.295966  2064 update_attempter.cc:509] Updating boot flags...
Sep  4 17:20:45.532205 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3276)
Sep  4 17:20:46.106964 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3281)
Sep  4 17:20:46.753392 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3281)
Sep  4 17:20:48.047733 kubelet[2989]: E0904 17:20:48.047691    2989 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-141\" not found" node="ip-172-31-19-141"
Sep  4 17:20:48.115649 kubelet[2989]: I0904 17:20:48.115488    2989 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-19-141"
Sep  4 17:20:48.829166 kubelet[2989]: I0904 17:20:48.829112    2989 apiserver.go:52] "Watching apiserver"
Sep  4 17:20:48.851471 kubelet[2989]: I0904 17:20:48.851413    2989 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep  4 17:20:50.605218 systemd[1]: Reloading requested from client PID 3530 ('systemctl') (unit session-7.scope)...
Sep  4 17:20:50.605303 systemd[1]: Reloading...
Sep  4 17:20:50.721964 zram_generator::config[3565]: No configuration found.
Sep  4 17:20:50.882416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 17:20:50.992329 systemd[1]: Reloading finished in 386 ms.
Sep  4 17:20:51.030421 kubelet[2989]: I0904 17:20:51.030342    2989 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep  4 17:20:51.030476 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:20:51.052389 systemd[1]: kubelet.service: Deactivated successfully.
Sep  4 17:20:51.052871 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:20:51.061463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:20:51.503142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:20:51.518526 (kubelet)[3635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep  4 17:20:51.701497 kubelet[3635]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 17:20:51.701497 kubelet[3635]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep  4 17:20:51.701497 kubelet[3635]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 17:20:51.702888 kubelet[3635]: I0904 17:20:51.701564    3635 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep  4 17:20:51.703440 sudo[3647]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep  4 17:20:51.703825 sudo[3647]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep  4 17:20:51.711660 kubelet[3635]: I0904 17:20:51.710562    3635 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Sep  4 17:20:51.711660 kubelet[3635]: I0904 17:20:51.710596    3635 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep  4 17:20:51.711660 kubelet[3635]: I0904 17:20:51.711016    3635 server.go:895] "Client rotation is on, will bootstrap in background"
Sep  4 17:20:51.713865 kubelet[3635]: I0904 17:20:51.713725    3635 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep  4 17:20:51.716509 kubelet[3635]: I0904 17:20:51.716077    3635 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep  4 17:20:51.734233 kubelet[3635]: I0904 17:20:51.733905    3635 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Sep  4 17:20:51.734845 kubelet[3635]: I0904 17:20:51.734492    3635 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep  4 17:20:51.734845 kubelet[3635]: I0904 17:20:51.734814    3635 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep  4 17:20:51.734845 kubelet[3635]: I0904 17:20:51.734847    3635 topology_manager.go:138] "Creating topology manager with none policy"
Sep  4 17:20:51.735114 kubelet[3635]: I0904 17:20:51.734862    3635 container_manager_linux.go:301] "Creating device plugin manager"
Sep  4 17:20:51.735114 kubelet[3635]: I0904 17:20:51.734907    3635 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 17:20:51.735114 kubelet[3635]: I0904 17:20:51.735050    3635 kubelet.go:393] "Attempting to sync node with API server"
Sep  4 17:20:51.735114 kubelet[3635]: I0904 17:20:51.735071    3635 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep  4 17:20:51.741951 kubelet[3635]: I0904 17:20:51.741000    3635 kubelet.go:309] "Adding apiserver pod source"
Sep  4 17:20:51.741951 kubelet[3635]: I0904 17:20:51.741043    3635 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep  4 17:20:51.756670 kubelet[3635]: I0904 17:20:51.755993    3635 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep  4 17:20:51.758888 kubelet[3635]: I0904 17:20:51.758388    3635 server.go:1232] "Started kubelet"
Sep  4 17:20:51.770839 kubelet[3635]: I0904 17:20:51.767436    3635 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep  4 17:20:51.775831 kubelet[3635]: I0904 17:20:51.773541    3635 server.go:462] "Adding debug handlers to kubelet server"
Sep  4 17:20:51.780255 kubelet[3635]: I0904 17:20:51.768105    3635 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep  4 17:20:51.782314 kubelet[3635]: I0904 17:20:51.778667    3635 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep  4 17:20:51.782777 kubelet[3635]: I0904 17:20:51.782750    3635 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep  4 17:20:51.783071 kubelet[3635]: I0904 17:20:51.783046    3635 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep  4 17:20:51.784723 kubelet[3635]: I0904 17:20:51.784655    3635 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep  4 17:20:51.787388 kubelet[3635]: E0904 17:20:51.787366    3635 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Sep  4 17:20:51.788201 kubelet[3635]: E0904 17:20:51.787412    3635 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep  4 17:20:51.802536 kubelet[3635]: I0904 17:20:51.802498    3635 reconciler_new.go:29] "Reconciler: start to sync state"
Sep  4 17:20:51.858570 kubelet[3635]: I0904 17:20:51.858261    3635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep  4 17:20:51.865800 kubelet[3635]: I0904 17:20:51.864948    3635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep  4 17:20:51.865800 kubelet[3635]: I0904 17:20:51.864979    3635 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep  4 17:20:51.865800 kubelet[3635]: I0904 17:20:51.865004    3635 kubelet.go:2303] "Starting kubelet main sync loop"
Sep  4 17:20:51.865800 kubelet[3635]: E0904 17:20:51.865068    3635 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep  4 17:20:51.898189 kubelet[3635]: I0904 17:20:51.898166    3635 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-141"
Sep  4 17:20:51.927276 kubelet[3635]: I0904 17:20:51.926913    3635 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-19-141"
Sep  4 17:20:51.927957 kubelet[3635]: I0904 17:20:51.927569    3635 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-19-141"
Sep  4 17:20:51.966273 kubelet[3635]: E0904 17:20:51.966244    3635 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep  4 17:20:52.073145 kubelet[3635]: I0904 17:20:52.072194    3635 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep  4 17:20:52.073145 kubelet[3635]: I0904 17:20:52.072219    3635 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep  4 17:20:52.073145 kubelet[3635]: I0904 17:20:52.072238    3635 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 17:20:52.073665 kubelet[3635]: I0904 17:20:52.073554    3635 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep  4 17:20:52.073665 kubelet[3635]: I0904 17:20:52.073605    3635 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep  4 17:20:52.073665 kubelet[3635]: I0904 17:20:52.073616    3635 policy_none.go:49] "None policy: Start"
Sep  4 17:20:52.076854 kubelet[3635]: I0904 17:20:52.076120    3635 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep  4 17:20:52.076854 kubelet[3635]: I0904 17:20:52.076153    3635 state_mem.go:35] "Initializing new in-memory state store"
Sep  4 17:20:52.076854 kubelet[3635]: I0904 17:20:52.076372    3635 state_mem.go:75] "Updated machine memory state"
Sep  4 17:20:52.080416 kubelet[3635]: I0904 17:20:52.080392    3635 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep  4 17:20:52.084720 kubelet[3635]: I0904 17:20:52.083654    3635 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep  4 17:20:52.166463 kubelet[3635]: I0904 17:20:52.166428    3635 topology_manager.go:215] "Topology Admit Handler" podUID="8153a58453f522101414d5f0123ed375" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-141"
Sep  4 17:20:52.167855 kubelet[3635]: I0904 17:20:52.167757    3635 topology_manager.go:215] "Topology Admit Handler" podUID="f354c8fdf06c56a2f619de14d3020513" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-141"
Sep  4 17:20:52.169808 kubelet[3635]: I0904 17:20:52.169585    3635 topology_manager.go:215] "Topology Admit Handler" podUID="b1c3664190a93a499601bd51c1e40e04" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-141"
Sep  4 17:20:52.198489 kubelet[3635]: E0904 17:20:52.197484    3635 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-19-141\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-141"
Sep  4 17:20:52.213740 kubelet[3635]: I0904 17:20:52.213703    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b1c3664190a93a499601bd51c1e40e04-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-141\" (UID: \"b1c3664190a93a499601bd51c1e40e04\") " pod="kube-system/kube-scheduler-ip-172-31-19-141"
Sep  4 17:20:52.213877 kubelet[3635]: I0904 17:20:52.213781    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8153a58453f522101414d5f0123ed375-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-141\" (UID: \"8153a58453f522101414d5f0123ed375\") " pod="kube-system/kube-apiserver-ip-172-31-19-141"
Sep  4 17:20:52.213877 kubelet[3635]: I0904 17:20:52.213850    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f354c8fdf06c56a2f619de14d3020513-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-141\" (UID: \"f354c8fdf06c56a2f619de14d3020513\") " pod="kube-system/kube-controller-manager-ip-172-31-19-141"
Sep  4 17:20:52.214003 kubelet[3635]: I0904 17:20:52.213881    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f354c8fdf06c56a2f619de14d3020513-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-141\" (UID: \"f354c8fdf06c56a2f619de14d3020513\") " pod="kube-system/kube-controller-manager-ip-172-31-19-141"
Sep  4 17:20:52.214003 kubelet[3635]: I0904 17:20:52.213995    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f354c8fdf06c56a2f619de14d3020513-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-141\" (UID: \"f354c8fdf06c56a2f619de14d3020513\") " pod="kube-system/kube-controller-manager-ip-172-31-19-141"
Sep  4 17:20:52.214163 kubelet[3635]: I0904 17:20:52.214114    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8153a58453f522101414d5f0123ed375-ca-certs\") pod \"kube-apiserver-ip-172-31-19-141\" (UID: \"8153a58453f522101414d5f0123ed375\") " pod="kube-system/kube-apiserver-ip-172-31-19-141"
Sep  4 17:20:52.214212 kubelet[3635]: I0904 17:20:52.214184    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8153a58453f522101414d5f0123ed375-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-141\" (UID: \"8153a58453f522101414d5f0123ed375\") " pod="kube-system/kube-apiserver-ip-172-31-19-141"
Sep  4 17:20:52.214674 kubelet[3635]: I0904 17:20:52.214217    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f354c8fdf06c56a2f619de14d3020513-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-141\" (UID: \"f354c8fdf06c56a2f619de14d3020513\") " pod="kube-system/kube-controller-manager-ip-172-31-19-141"
Sep  4 17:20:52.214674 kubelet[3635]: I0904 17:20:52.214297    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f354c8fdf06c56a2f619de14d3020513-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-141\" (UID: \"f354c8fdf06c56a2f619de14d3020513\") " pod="kube-system/kube-controller-manager-ip-172-31-19-141"
Sep  4 17:20:52.763385 kubelet[3635]: I0904 17:20:52.763341    3635 apiserver.go:52] "Watching apiserver"
Sep  4 17:20:52.786946 kubelet[3635]: I0904 17:20:52.785152    3635 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep  4 17:20:52.818610 sudo[3647]: pam_unix(sudo:session): session closed for user root
Sep  4 17:20:52.923155 kubelet[3635]: E0904 17:20:52.923119    3635 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-141\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-141"
Sep  4 17:20:52.956725 kubelet[3635]: I0904 17:20:52.956659    3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-141" podStartSLOduration=3.954826928 podCreationTimestamp="2024-09-04 17:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:20:52.950644049 +0000 UTC m=+1.406090362" watchObservedRunningTime="2024-09-04 17:20:52.954826928 +0000 UTC m=+1.410273232"
Sep  4 17:20:52.968840 kubelet[3635]: I0904 17:20:52.968808    3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-141" podStartSLOduration=0.968750722 podCreationTimestamp="2024-09-04 17:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:20:52.96752972 +0000 UTC m=+1.422976025" watchObservedRunningTime="2024-09-04 17:20:52.968750722 +0000 UTC m=+1.424197015"
Sep  4 17:20:53.759656 kubelet[3635]: I0904 17:20:53.759611    3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-141" podStartSLOduration=1.759572924 podCreationTimestamp="2024-09-04 17:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:20:52.979521918 +0000 UTC m=+1.434968217" watchObservedRunningTime="2024-09-04 17:20:53.759572924 +0000 UTC m=+2.215019229"
Sep  4 17:20:54.862102 sudo[2449]: pam_unix(sudo:session): session closed for user root
Sep  4 17:20:54.885227 sshd[2445]: pam_unix(sshd:session): session closed for user core
Sep  4 17:20:54.890233 systemd[1]: sshd@6-172.31.19.141:22-139.178.68.195:36362.service: Deactivated successfully.
Sep  4 17:20:54.896433 systemd-logind[2061]: Session 7 logged out. Waiting for processes to exit.
Sep  4 17:20:54.898207 systemd[1]: session-7.scope: Deactivated successfully.
Sep  4 17:20:54.901271 systemd-logind[2061]: Removed session 7.
Sep  4 17:21:05.527940 kubelet[3635]: I0904 17:21:05.525190    3635 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep  4 17:21:05.528611 containerd[2090]: time="2024-09-04T17:21:05.525700641Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep  4 17:21:05.533144 kubelet[3635]: I0904 17:21:05.529334    3635 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep  4 17:21:05.627705 kubelet[3635]: I0904 17:21:05.626559    3635 topology_manager.go:215] "Topology Admit Handler" podUID="5c0afef5-f3da-4058-9ac0-667c878fc3c7" podNamespace="kube-system" podName="kube-proxy-zp9ng"
Sep  4 17:21:05.656896 kubelet[3635]: W0904 17:21:05.656800    3635 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-19-141" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-141' and this object
Sep  4 17:21:05.656896 kubelet[3635]: E0904 17:21:05.656842    3635 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-19-141" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-141' and this object
Sep  4 17:21:05.670646 kubelet[3635]: I0904 17:21:05.668842    3635 topology_manager.go:215] "Topology Admit Handler" podUID="c29159c9-f066-43bc-8013-1523b3f97584" podNamespace="kube-system" podName="cilium-b8zfg"
Sep  4 17:21:05.675446 kubelet[3635]: I0904 17:21:05.675417    3635 topology_manager.go:215] "Topology Admit Handler" podUID="f7f9a8ad-be41-4c98-92d6-4c89d001f0a1" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-48g2t"
Sep  4 17:21:05.704238 kubelet[3635]: W0904 17:21:05.703550    3635 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-19-141" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-141' and this object
Sep  4 17:21:05.704238 kubelet[3635]: E0904 17:21:05.703589    3635 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-19-141" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-141' and this object
Sep  4 17:21:05.717490 kubelet[3635]: I0904 17:21:05.711620    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-hostproc\") pod \"cilium-b8zfg\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") " pod="kube-system/cilium-b8zfg"
Sep  4 17:21:05.717490 kubelet[3635]: I0904 17:21:05.711679    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c0afef5-f3da-4058-9ac0-667c878fc3c7-lib-modules\") pod \"kube-proxy-zp9ng\" (UID: \"5c0afef5-f3da-4058-9ac0-667c878fc3c7\") " pod="kube-system/kube-proxy-zp9ng"
Sep  4 17:21:05.717490 kubelet[3635]: I0904 17:21:05.711708    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c29159c9-f066-43bc-8013-1523b3f97584-clustermesh-secrets\") pod \"cilium-b8zfg\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") " pod="kube-system/cilium-b8zfg"
Sep  4 17:21:05.717490 kubelet[3635]: I0904 17:21:05.711737    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-cilium-cgroup\") pod \"cilium-b8zfg\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") " pod="kube-system/cilium-b8zfg"
Sep  4 17:21:05.717490 kubelet[3635]: I0904 17:21:05.711768    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c29159c9-f066-43bc-8013-1523b3f97584-hubble-tls\") pod \"cilium-b8zfg\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") " pod="kube-system/cilium-b8zfg"
Sep  4 17:21:05.717490 kubelet[3635]: I0904 17:21:05.713096    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c0afef5-f3da-4058-9ac0-667c878fc3c7-xtables-lock\") pod \"kube-proxy-zp9ng\" (UID: \"5c0afef5-f3da-4058-9ac0-667c878fc3c7\") " pod="kube-system/kube-proxy-zp9ng"
Sep  4 17:21:05.719168 kubelet[3635]: I0904 17:21:05.713182    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-host-proc-sys-kernel\") pod \"cilium-b8zfg\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") " pod="kube-system/cilium-b8zfg"
Sep  4 17:21:05.719168 kubelet[3635]: I0904 17:21:05.713217    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c0afef5-f3da-4058-9ac0-667c878fc3c7-kube-proxy\") pod \"kube-proxy-zp9ng\" (UID: \"5c0afef5-f3da-4058-9ac0-667c878fc3c7\") " pod="kube-system/kube-proxy-zp9ng"
Sep  4 17:21:05.719168 kubelet[3635]: I0904 17:21:05.713248    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqhdb\" (UniqueName: \"kubernetes.io/projected/5c0afef5-f3da-4058-9ac0-667c878fc3c7-kube-api-access-xqhdb\") pod \"kube-proxy-zp9ng\" (UID: \"5c0afef5-f3da-4058-9ac0-667c878fc3c7\") " pod="kube-system/kube-proxy-zp9ng"
Sep  4 17:21:05.719168 kubelet[3635]: I0904 17:21:05.713276    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-cilium-run\") pod \"cilium-b8zfg\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") " pod="kube-system/cilium-b8zfg"
Sep  4 17:21:05.719168 kubelet[3635]: I0904 17:21:05.713304    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-lib-modules\") pod \"cilium-b8zfg\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") " pod="kube-system/cilium-b8zfg"
Sep  4 17:21:05.719385 kubelet[3635]: I0904 17:21:05.713337    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99mts\" (UniqueName: \"kubernetes.io/projected/f7f9a8ad-be41-4c98-92d6-4c89d001f0a1-kube-api-access-99mts\") pod \"cilium-operator-6bc8ccdb58-48g2t\" (UID: \"f7f9a8ad-be41-4c98-92d6-4c89d001f0a1\") " pod="kube-system/cilium-operator-6bc8ccdb58-48g2t"
Sep  4 17:21:05.719385 kubelet[3635]: I0904 17:21:05.713367    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-xtables-lock\") pod \"cilium-b8zfg\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") " pod="kube-system/cilium-b8zfg"
Sep  4 17:21:05.719385 kubelet[3635]: I0904 17:21:05.713397    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c29159c9-f066-43bc-8013-1523b3f97584-cilium-config-path\") pod \"cilium-b8zfg\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") " pod="kube-system/cilium-b8zfg"
Sep  4 17:21:05.719385 kubelet[3635]: I0904 17:21:05.713425    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-bpf-maps\") pod \"cilium-b8zfg\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") " pod="kube-system/cilium-b8zfg"
Sep  4 17:21:05.719385 kubelet[3635]: I0904 17:21:05.713454    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-cni-path\") pod \"cilium-b8zfg\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") " pod="kube-system/cilium-b8zfg"
Sep  4 17:21:05.719597 kubelet[3635]: I0904 17:21:05.713484    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-etc-cni-netd\") pod \"cilium-b8zfg\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") " pod="kube-system/cilium-b8zfg"
Sep  4 17:21:05.719597 kubelet[3635]: I0904 17:21:05.714562    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7f9a8ad-be41-4c98-92d6-4c89d001f0a1-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-48g2t\" (UID: \"f7f9a8ad-be41-4c98-92d6-4c89d001f0a1\") " pod="kube-system/cilium-operator-6bc8ccdb58-48g2t"
Sep  4 17:21:05.719597 kubelet[3635]: I0904 17:21:05.714627    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-host-proc-sys-net\") pod \"cilium-b8zfg\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") " pod="kube-system/cilium-b8zfg"
Sep  4 17:21:05.719597 kubelet[3635]: I0904 17:21:05.714660    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh6nx\" (UniqueName: \"kubernetes.io/projected/c29159c9-f066-43bc-8013-1523b3f97584-kube-api-access-nh6nx\") pod \"cilium-b8zfg\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") " pod="kube-system/cilium-b8zfg"
Sep  4 17:21:05.721590 kubelet[3635]: W0904 17:21:05.721553    3635 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-19-141" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-141' and this object
Sep  4 17:21:05.721783 kubelet[3635]: E0904 17:21:05.721768    3635 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-19-141" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-141' and this object
Sep  4 17:21:05.722417 kubelet[3635]: W0904 17:21:05.722398    3635 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-19-141" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-141' and this object
Sep  4 17:21:05.722590 kubelet[3635]: E0904 17:21:05.722579    3635 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-19-141" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-141' and this object
Sep  4 17:21:06.855202 containerd[2090]: time="2024-09-04T17:21:06.855122536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zp9ng,Uid:5c0afef5-f3da-4058-9ac0-667c878fc3c7,Namespace:kube-system,Attempt:0,}"
Sep  4 17:21:06.894005 containerd[2090]: time="2024-09-04T17:21:06.890491180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b8zfg,Uid:c29159c9-f066-43bc-8013-1523b3f97584,Namespace:kube-system,Attempt:0,}"
Sep  4 17:21:06.894703 containerd[2090]: time="2024-09-04T17:21:06.894642600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-48g2t,Uid:f7f9a8ad-be41-4c98-92d6-4c89d001f0a1,Namespace:kube-system,Attempt:0,}"
Sep  4 17:21:06.964968 containerd[2090]: time="2024-09-04T17:21:06.956380294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:21:06.964968 containerd[2090]: time="2024-09-04T17:21:06.956993898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:21:06.964968 containerd[2090]: time="2024-09-04T17:21:06.957032714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:21:06.964968 containerd[2090]: time="2024-09-04T17:21:06.957049549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:21:07.029541 containerd[2090]: time="2024-09-04T17:21:07.026482520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:21:07.029541 containerd[2090]: time="2024-09-04T17:21:07.027115933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:21:07.029541 containerd[2090]: time="2024-09-04T17:21:07.027359140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:21:07.029541 containerd[2090]: time="2024-09-04T17:21:07.027379094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:21:07.068224 containerd[2090]: time="2024-09-04T17:21:07.068094882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:21:07.068770 containerd[2090]: time="2024-09-04T17:21:07.068595903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:21:07.068770 containerd[2090]: time="2024-09-04T17:21:07.068638515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:21:07.068770 containerd[2090]: time="2024-09-04T17:21:07.068661819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:21:07.185485 containerd[2090]: time="2024-09-04T17:21:07.185431765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b8zfg,Uid:c29159c9-f066-43bc-8013-1523b3f97584,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\""
Sep  4 17:21:07.193986 containerd[2090]: time="2024-09-04T17:21:07.193885224Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep  4 17:21:07.197827 containerd[2090]: time="2024-09-04T17:21:07.197689503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zp9ng,Uid:5c0afef5-f3da-4058-9ac0-667c878fc3c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2dd2aa26ca45b53ffde3be47958cd2085a8c80ff0f3c084b8a816741550f548\""
Sep  4 17:21:07.204549 containerd[2090]: time="2024-09-04T17:21:07.204119069Z" level=info msg="CreateContainer within sandbox \"f2dd2aa26ca45b53ffde3be47958cd2085a8c80ff0f3c084b8a816741550f548\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep  4 17:21:07.211455 containerd[2090]: time="2024-09-04T17:21:07.211361322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-48g2t,Uid:f7f9a8ad-be41-4c98-92d6-4c89d001f0a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937\""
Sep  4 17:21:07.237986 containerd[2090]: time="2024-09-04T17:21:07.237942187Z" level=info msg="CreateContainer within sandbox \"f2dd2aa26ca45b53ffde3be47958cd2085a8c80ff0f3c084b8a816741550f548\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f47ec11d5bc8c0b9a55113db280012fbfb50c084193d632b8a374cdc97be5254\""
Sep  4 17:21:07.239204 containerd[2090]: time="2024-09-04T17:21:07.239169055Z" level=info msg="StartContainer for \"f47ec11d5bc8c0b9a55113db280012fbfb50c084193d632b8a374cdc97be5254\""
Sep  4 17:21:07.321526 containerd[2090]: time="2024-09-04T17:21:07.321483869Z" level=info msg="StartContainer for \"f47ec11d5bc8c0b9a55113db280012fbfb50c084193d632b8a374cdc97be5254\" returns successfully"
Sep  4 17:21:07.907526 systemd[1]: run-containerd-runc-k8s.io-f2dd2aa26ca45b53ffde3be47958cd2085a8c80ff0f3c084b8a816741550f548-runc.LEN1OK.mount: Deactivated successfully.
Sep  4 17:21:07.985153 kubelet[3635]: I0904 17:21:07.985118    3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zp9ng" podStartSLOduration=2.98507292 podCreationTimestamp="2024-09-04 17:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:21:07.984820507 +0000 UTC m=+16.440266811" watchObservedRunningTime="2024-09-04 17:21:07.98507292 +0000 UTC m=+16.440519228"
Sep  4 17:21:13.407389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3995729992.mount: Deactivated successfully.
Sep  4 17:21:16.572126 containerd[2090]: time="2024-09-04T17:21:16.572053654Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:21:16.574441 containerd[2090]: time="2024-09-04T17:21:16.574355170Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735375"
Sep  4 17:21:16.576089 containerd[2090]: time="2024-09-04T17:21:16.575770826Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:21:16.577676 containerd[2090]: time="2024-09-04T17:21:16.577612237Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.383684821s"
Sep  4 17:21:16.577820 containerd[2090]: time="2024-09-04T17:21:16.577794910Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep  4 17:21:16.579601 containerd[2090]: time="2024-09-04T17:21:16.579572879Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep  4 17:21:16.589296 containerd[2090]: time="2024-09-04T17:21:16.589157836Z" level=info msg="CreateContainer within sandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep  4 17:21:16.668566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291688121.mount: Deactivated successfully.
Sep  4 17:21:16.674660 containerd[2090]: time="2024-09-04T17:21:16.674615463Z" level=info msg="CreateContainer within sandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29\""
Sep  4 17:21:16.675591 containerd[2090]: time="2024-09-04T17:21:16.675542694Z" level=info msg="StartContainer for \"84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29\""
Sep  4 17:21:16.819221 containerd[2090]: time="2024-09-04T17:21:16.819185230Z" level=info msg="StartContainer for \"84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29\" returns successfully"
Sep  4 17:21:17.172382 containerd[2090]: time="2024-09-04T17:21:17.166230187Z" level=info msg="shim disconnected" id=84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29 namespace=k8s.io
Sep  4 17:21:17.172638 containerd[2090]: time="2024-09-04T17:21:17.172386034Z" level=warning msg="cleaning up after shim disconnected" id=84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29 namespace=k8s.io
Sep  4 17:21:17.172638 containerd[2090]: time="2024-09-04T17:21:17.172407996Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:21:17.657540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29-rootfs.mount: Deactivated successfully.
Sep  4 17:21:18.039398 containerd[2090]: time="2024-09-04T17:21:18.035344897Z" level=info msg="CreateContainer within sandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep  4 17:21:18.082743 containerd[2090]: time="2024-09-04T17:21:18.082655679Z" level=info msg="CreateContainer within sandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96\""
Sep  4 17:21:18.086734 containerd[2090]: time="2024-09-04T17:21:18.086694971Z" level=info msg="StartContainer for \"cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96\""
Sep  4 17:21:18.140223 systemd[1]: run-containerd-runc-k8s.io-cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96-runc.PGfe2x.mount: Deactivated successfully.
Sep  4 17:21:18.200946 containerd[2090]: time="2024-09-04T17:21:18.200757189Z" level=info msg="StartContainer for \"cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96\" returns successfully"
Sep  4 17:21:18.219325 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep  4 17:21:18.219726 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep  4 17:21:18.219821 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep  4 17:21:18.230508 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep  4 17:21:18.302716 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep  4 17:21:18.317270 containerd[2090]: time="2024-09-04T17:21:18.317202739Z" level=info msg="shim disconnected" id=cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96 namespace=k8s.io
Sep  4 17:21:18.317270 containerd[2090]: time="2024-09-04T17:21:18.317264764Z" level=warning msg="cleaning up after shim disconnected" id=cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96 namespace=k8s.io
Sep  4 17:21:18.317270 containerd[2090]: time="2024-09-04T17:21:18.317275872Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:21:18.352621 containerd[2090]: time="2024-09-04T17:21:18.352567651Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:21:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep  4 17:21:18.656666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96-rootfs.mount: Deactivated successfully.
Sep  4 17:21:19.049856 containerd[2090]: time="2024-09-04T17:21:19.049732233Z" level=info msg="CreateContainer within sandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep  4 17:21:19.110363 containerd[2090]: time="2024-09-04T17:21:19.110318715Z" level=info msg="CreateContainer within sandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c\""
Sep  4 17:21:19.111550 containerd[2090]: time="2024-09-04T17:21:19.111514197Z" level=info msg="StartContainer for \"1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c\""
Sep  4 17:21:19.224280 containerd[2090]: time="2024-09-04T17:21:19.224228043Z" level=info msg="StartContainer for \"1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c\" returns successfully"
Sep  4 17:21:19.305433 containerd[2090]: time="2024-09-04T17:21:19.305074017Z" level=info msg="shim disconnected" id=1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c namespace=k8s.io
Sep  4 17:21:19.305433 containerd[2090]: time="2024-09-04T17:21:19.305171393Z" level=warning msg="cleaning up after shim disconnected" id=1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c namespace=k8s.io
Sep  4 17:21:19.305433 containerd[2090]: time="2024-09-04T17:21:19.305184541Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:21:19.656874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c-rootfs.mount: Deactivated successfully.
Sep  4 17:21:20.070959 containerd[2090]: time="2024-09-04T17:21:20.068747266Z" level=info msg="CreateContainer within sandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep  4 17:21:20.110458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277427576.mount: Deactivated successfully.
Sep  4 17:21:20.122289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3063751118.mount: Deactivated successfully.
Sep  4 17:21:20.135524 containerd[2090]: time="2024-09-04T17:21:20.135293015Z" level=info msg="CreateContainer within sandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f\""
Sep  4 17:21:20.138239 containerd[2090]: time="2024-09-04T17:21:20.138203380Z" level=info msg="StartContainer for \"eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f\""
Sep  4 17:21:20.263137 containerd[2090]: time="2024-09-04T17:21:20.262466274Z" level=info msg="StartContainer for \"eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f\" returns successfully"
Sep  4 17:21:20.472463 containerd[2090]: time="2024-09-04T17:21:20.472372430Z" level=info msg="shim disconnected" id=eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f namespace=k8s.io
Sep  4 17:21:20.472463 containerd[2090]: time="2024-09-04T17:21:20.472430380Z" level=warning msg="cleaning up after shim disconnected" id=eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f namespace=k8s.io
Sep  4 17:21:20.472463 containerd[2090]: time="2024-09-04T17:21:20.472443234Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:21:20.478288 containerd[2090]: time="2024-09-04T17:21:20.476974938Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907193"
Sep  4 17:21:20.478288 containerd[2090]: time="2024-09-04T17:21:20.477155394Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:21:20.479359 containerd[2090]: time="2024-09-04T17:21:20.479327191Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:21:20.491879 containerd[2090]: time="2024-09-04T17:21:20.489660038Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.910041449s"
Sep  4 17:21:20.491879 containerd[2090]: time="2024-09-04T17:21:20.489710875Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep  4 17:21:20.509545 containerd[2090]: time="2024-09-04T17:21:20.509486049Z" level=info msg="CreateContainer within sandbox \"5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep  4 17:21:20.517748 containerd[2090]: time="2024-09-04T17:21:20.517683916Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:21:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep  4 17:21:20.530494 containerd[2090]: time="2024-09-04T17:21:20.530444046Z" level=info msg="CreateContainer within sandbox \"5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937\""
Sep  4 17:21:20.532319 containerd[2090]: time="2024-09-04T17:21:20.531252291Z" level=info msg="StartContainer for \"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937\""
Sep  4 17:21:20.592813 containerd[2090]: time="2024-09-04T17:21:20.592762635Z" level=info msg="StartContainer for \"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937\" returns successfully"
Sep  4 17:21:20.666551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f-rootfs.mount: Deactivated successfully.
Sep  4 17:21:21.082398 containerd[2090]: time="2024-09-04T17:21:21.082351579Z" level=info msg="CreateContainer within sandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep  4 17:21:21.124800 containerd[2090]: time="2024-09-04T17:21:21.124656119Z" level=info msg="CreateContainer within sandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae\""
Sep  4 17:21:21.125649 containerd[2090]: time="2024-09-04T17:21:21.125610007Z" level=info msg="StartContainer for \"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae\""
Sep  4 17:21:21.213792 kubelet[3635]: I0904 17:21:21.213674    3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-48g2t" podStartSLOduration=2.935345693 podCreationTimestamp="2024-09-04 17:21:05 +0000 UTC" firstStartedPulling="2024-09-04 17:21:07.214218138 +0000 UTC m=+15.669664425" lastFinishedPulling="2024-09-04 17:21:20.489946634 +0000 UTC m=+28.945392928" observedRunningTime="2024-09-04 17:21:21.210176223 +0000 UTC m=+29.665622529" watchObservedRunningTime="2024-09-04 17:21:21.211074196 +0000 UTC m=+29.666520511"
Sep  4 17:21:21.459623 containerd[2090]: time="2024-09-04T17:21:21.459550288Z" level=info msg="StartContainer for \"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae\" returns successfully"
Sep  4 17:21:22.148954 kubelet[3635]: I0904 17:21:22.145422    3635 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Sep  4 17:21:22.380164 kubelet[3635]: I0904 17:21:22.380127    3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-b8zfg" podStartSLOduration=7.9933539289999995 podCreationTimestamp="2024-09-04 17:21:05 +0000 UTC" firstStartedPulling="2024-09-04 17:21:07.191611478 +0000 UTC m=+15.647057768" lastFinishedPulling="2024-09-04 17:21:16.578313734 +0000 UTC m=+25.033760029" observedRunningTime="2024-09-04 17:21:22.214058877 +0000 UTC m=+30.669505178" watchObservedRunningTime="2024-09-04 17:21:22.38005619 +0000 UTC m=+30.835502499"
Sep  4 17:21:22.381934 kubelet[3635]: I0904 17:21:22.381057    3635 topology_manager.go:215] "Topology Admit Handler" podUID="292c483c-1a75-4a49-accd-28202b13dcdd" podNamespace="kube-system" podName="coredns-5dd5756b68-dqh8w"
Sep  4 17:21:22.395824 kubelet[3635]: I0904 17:21:22.393819    3635 topology_manager.go:215] "Topology Admit Handler" podUID="cf0e91d0-a95d-4be7-a6ad-4d74a0fdeb90" podNamespace="kube-system" podName="coredns-5dd5756b68-9zr7m"
Sep  4 17:21:22.481201 kubelet[3635]: I0904 17:21:22.480173    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/292c483c-1a75-4a49-accd-28202b13dcdd-config-volume\") pod \"coredns-5dd5756b68-dqh8w\" (UID: \"292c483c-1a75-4a49-accd-28202b13dcdd\") " pod="kube-system/coredns-5dd5756b68-dqh8w"
Sep  4 17:21:22.481201 kubelet[3635]: I0904 17:21:22.480233    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf0e91d0-a95d-4be7-a6ad-4d74a0fdeb90-config-volume\") pod \"coredns-5dd5756b68-9zr7m\" (UID: \"cf0e91d0-a95d-4be7-a6ad-4d74a0fdeb90\") " pod="kube-system/coredns-5dd5756b68-9zr7m"
Sep  4 17:21:22.481201 kubelet[3635]: I0904 17:21:22.480270    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgxlg\" (UniqueName: \"kubernetes.io/projected/cf0e91d0-a95d-4be7-a6ad-4d74a0fdeb90-kube-api-access-cgxlg\") pod \"coredns-5dd5756b68-9zr7m\" (UID: \"cf0e91d0-a95d-4be7-a6ad-4d74a0fdeb90\") " pod="kube-system/coredns-5dd5756b68-9zr7m"
Sep  4 17:21:22.481201 kubelet[3635]: I0904 17:21:22.480311    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt7s8\" (UniqueName: \"kubernetes.io/projected/292c483c-1a75-4a49-accd-28202b13dcdd-kube-api-access-bt7s8\") pod \"coredns-5dd5756b68-dqh8w\" (UID: \"292c483c-1a75-4a49-accd-28202b13dcdd\") " pod="kube-system/coredns-5dd5756b68-dqh8w"
Sep  4 17:21:22.700634 containerd[2090]: time="2024-09-04T17:21:22.700508256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-dqh8w,Uid:292c483c-1a75-4a49-accd-28202b13dcdd,Namespace:kube-system,Attempt:0,}"
Sep  4 17:21:22.710157 containerd[2090]: time="2024-09-04T17:21:22.710109039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9zr7m,Uid:cf0e91d0-a95d-4be7-a6ad-4d74a0fdeb90,Namespace:kube-system,Attempt:0,}"
Sep  4 17:21:25.493877 (udev-worker)[4413]: Network interface NamePolicy= disabled on kernel command line.
Sep  4 17:21:25.498899 systemd-networkd[1653]: cilium_host: Link UP
Sep  4 17:21:25.499293 systemd-networkd[1653]: cilium_net: Link UP
Sep  4 17:21:25.499603 systemd-networkd[1653]: cilium_net: Gained carrier
Sep  4 17:21:25.499781 systemd-networkd[1653]: cilium_host: Gained carrier
Sep  4 17:21:25.499906 systemd-networkd[1653]: cilium_net: Gained IPv6LL
Sep  4 17:21:25.502750 systemd-networkd[1653]: cilium_host: Gained IPv6LL
Sep  4 17:21:25.510345 (udev-worker)[4449]: Network interface NamePolicy= disabled on kernel command line.
Sep  4 17:21:25.659875 (udev-worker)[4468]: Network interface NamePolicy= disabled on kernel command line.
Sep  4 17:21:25.667012 systemd-networkd[1653]: cilium_vxlan: Link UP
Sep  4 17:21:25.667020 systemd-networkd[1653]: cilium_vxlan: Gained carrier
Sep  4 17:21:26.194268 kernel: NET: Registered PF_ALG protocol family
Sep  4 17:21:27.266175 systemd-networkd[1653]: lxc_health: Link UP
Sep  4 17:21:27.277183 systemd-networkd[1653]: lxc_health: Gained carrier
Sep  4 17:21:27.495124 systemd-networkd[1653]: cilium_vxlan: Gained IPv6LL
Sep  4 17:21:27.843056 systemd-networkd[1653]: lxca56bb42c0c42: Link UP
Sep  4 17:21:27.848014 kernel: eth0: renamed from tmp6f3d5
Sep  4 17:21:27.852078 systemd-networkd[1653]: lxca56bb42c0c42: Gained carrier
Sep  4 17:21:27.915111 systemd-networkd[1653]: lxcdbb8aa711de0: Link UP
Sep  4 17:21:27.922044 kernel: eth0: renamed from tmpac746
Sep  4 17:21:27.930076 systemd-networkd[1653]: lxcdbb8aa711de0: Gained carrier
Sep  4 17:21:28.583249 systemd-networkd[1653]: lxc_health: Gained IPv6LL
Sep  4 17:21:28.967011 systemd-networkd[1653]: lxca56bb42c0c42: Gained IPv6LL
Sep  4 17:21:29.928714 systemd-networkd[1653]: lxcdbb8aa711de0: Gained IPv6LL
Sep  4 17:21:32.245735 ntpd[2042]: Listen normally on 6 cilium_host 192.168.0.231:123
Sep  4 17:21:32.245829 ntpd[2042]: Listen normally on 7 cilium_net [fe80::805:efff:fec2:1061%4]:123
Sep  4 17:21:32.245884 ntpd[2042]: Listen normally on 8 cilium_host [fe80::286d:4dff:fe6c:1044%5]:123
Sep  4 17:21:32.246023 ntpd[2042]: Listen normally on 9 cilium_vxlan [fe80::94ea:44ff:fe98:3004%6]:123
Sep  4 17:21:32.246086 ntpd[2042]: Listen normally on 10 lxc_health [fe80::c8e5:b0ff:fe70:cd69%8]:123
Sep  4 17:21:32.246128 ntpd[2042]: Listen normally on 11 lxca56bb42c0c42 [fe80::1cb1:c5ff:fe41:dd02%10]:123
Sep  4 17:21:32.246169 ntpd[2042]: Listen normally on 12 lxcdbb8aa711de0 [fe80::9c65:57ff:fe6f:7bba%12]:123
Sep  4 17:21:34.124469 containerd[2090]: time="2024-09-04T17:21:34.123416419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:21:34.124469 containerd[2090]: time="2024-09-04T17:21:34.123514805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:21:34.124469 containerd[2090]: time="2024-09-04T17:21:34.123544504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:21:34.124469 containerd[2090]: time="2024-09-04T17:21:34.123567012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:21:34.156402 containerd[2090]: time="2024-09-04T17:21:34.150371101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:21:34.156402 containerd[2090]: time="2024-09-04T17:21:34.150459622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:21:34.156402 containerd[2090]: time="2024-09-04T17:21:34.150492259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:21:34.156402 containerd[2090]: time="2024-09-04T17:21:34.150508618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:21:34.226952 kubelet[3635]: I0904 17:21:34.226596    3635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep  4 17:21:34.435818 containerd[2090]: time="2024-09-04T17:21:34.435610731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9zr7m,Uid:cf0e91d0-a95d-4be7-a6ad-4d74a0fdeb90,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac746797354aa5826022eb413d94c1908db9a7d18b40a3b96a50eaa78ee53226\""
Sep  4 17:21:34.443630 containerd[2090]: time="2024-09-04T17:21:34.443556934Z" level=info msg="CreateContainer within sandbox \"ac746797354aa5826022eb413d94c1908db9a7d18b40a3b96a50eaa78ee53226\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep  4 17:21:34.455313 containerd[2090]: time="2024-09-04T17:21:34.455273556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-dqh8w,Uid:292c483c-1a75-4a49-accd-28202b13dcdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f3d5fda7651e4d128363296411ee171e70caa8fc0f9fd1377423cf9811483c2\""
Sep  4 17:21:34.460907 containerd[2090]: time="2024-09-04T17:21:34.460865052Z" level=info msg="CreateContainer within sandbox \"6f3d5fda7651e4d128363296411ee171e70caa8fc0f9fd1377423cf9811483c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep  4 17:21:34.481552 containerd[2090]: time="2024-09-04T17:21:34.481505082Z" level=info msg="CreateContainer within sandbox \"ac746797354aa5826022eb413d94c1908db9a7d18b40a3b96a50eaa78ee53226\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b8769b1ad9d23b6548c83cc7bfe454a98992199d5e622521c9219008952b8e0\""
Sep  4 17:21:34.485084 containerd[2090]: time="2024-09-04T17:21:34.482228763Z" level=info msg="StartContainer for \"9b8769b1ad9d23b6548c83cc7bfe454a98992199d5e622521c9219008952b8e0\""
Sep  4 17:21:34.490506 containerd[2090]: time="2024-09-04T17:21:34.490465059Z" level=info msg="CreateContainer within sandbox \"6f3d5fda7651e4d128363296411ee171e70caa8fc0f9fd1377423cf9811483c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"200d45812938d8572ae7c9d0002c0c6b2de5c410d97b9bdd2adfe082ee97e95b\""
Sep  4 17:21:34.494767 containerd[2090]: time="2024-09-04T17:21:34.494684629Z" level=info msg="StartContainer for \"200d45812938d8572ae7c9d0002c0c6b2de5c410d97b9bdd2adfe082ee97e95b\""
Sep  4 17:21:34.671519 containerd[2090]: time="2024-09-04T17:21:34.671471543Z" level=info msg="StartContainer for \"200d45812938d8572ae7c9d0002c0c6b2de5c410d97b9bdd2adfe082ee97e95b\" returns successfully"
Sep  4 17:21:34.671934 containerd[2090]: time="2024-09-04T17:21:34.671652667Z" level=info msg="StartContainer for \"9b8769b1ad9d23b6548c83cc7bfe454a98992199d5e622521c9219008952b8e0\" returns successfully"
Sep  4 17:21:35.155859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2789030898.mount: Deactivated successfully.
Sep  4 17:21:35.223970 kubelet[3635]: I0904 17:21:35.201737    3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-9zr7m" podStartSLOduration=30.201686841 podCreationTimestamp="2024-09-04 17:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:21:35.19636674 +0000 UTC m=+43.651813069" watchObservedRunningTime="2024-09-04 17:21:35.201686841 +0000 UTC m=+43.657133145"
Sep  4 17:21:35.309257 kubelet[3635]: I0904 17:21:35.308976    3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-dqh8w" podStartSLOduration=30.308914455 podCreationTimestamp="2024-09-04 17:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:21:35.308295983 +0000 UTC m=+43.763742289" watchObservedRunningTime="2024-09-04 17:21:35.308914455 +0000 UTC m=+43.764360758"
Sep  4 17:21:38.642283 systemd[1]: Started sshd@7-172.31.19.141:22-139.178.68.195:34800.service - OpenSSH per-connection server daemon (139.178.68.195:34800).
Sep  4 17:21:38.868232 sshd[4980]: Accepted publickey for core from 139.178.68.195 port 34800 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:21:38.884637 sshd[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:21:38.910480 systemd-logind[2061]: New session 8 of user core.
Sep  4 17:21:38.926546 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep  4 17:21:39.789053 sshd[4980]: pam_unix(sshd:session): session closed for user core
Sep  4 17:21:39.807566 systemd[1]: sshd@7-172.31.19.141:22-139.178.68.195:34800.service: Deactivated successfully.
Sep  4 17:21:39.817774 systemd-logind[2061]: Session 8 logged out. Waiting for processes to exit.
Sep  4 17:21:39.818208 systemd[1]: session-8.scope: Deactivated successfully.
Sep  4 17:21:39.820873 systemd-logind[2061]: Removed session 8.
Sep  4 17:21:44.819550 systemd[1]: Started sshd@8-172.31.19.141:22-139.178.68.195:34802.service - OpenSSH per-connection server daemon (139.178.68.195:34802).
Sep  4 17:21:45.018141 sshd[4994]: Accepted publickey for core from 139.178.68.195 port 34802 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:21:45.022191 sshd[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:21:45.038876 systemd-logind[2061]: New session 9 of user core.
Sep  4 17:21:45.044278 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep  4 17:21:45.282873 sshd[4994]: pam_unix(sshd:session): session closed for user core
Sep  4 17:21:45.308850 systemd[1]: sshd@8-172.31.19.141:22-139.178.68.195:34802.service: Deactivated successfully.
Sep  4 17:21:45.333253 systemd-logind[2061]: Session 9 logged out. Waiting for processes to exit.
Sep  4 17:21:45.333829 systemd[1]: session-9.scope: Deactivated successfully.
Sep  4 17:21:45.345680 systemd-logind[2061]: Removed session 9.
Sep  4 17:21:50.312348 systemd[1]: Started sshd@9-172.31.19.141:22-139.178.68.195:50792.service - OpenSSH per-connection server daemon (139.178.68.195:50792).
Sep  4 17:21:50.474781 sshd[5009]: Accepted publickey for core from 139.178.68.195 port 50792 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:21:50.476748 sshd[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:21:50.483481 systemd-logind[2061]: New session 10 of user core.
Sep  4 17:21:50.490358 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep  4 17:21:50.693756 sshd[5009]: pam_unix(sshd:session): session closed for user core
Sep  4 17:21:50.698137 systemd-logind[2061]: Session 10 logged out. Waiting for processes to exit.
Sep  4 17:21:50.699897 systemd[1]: sshd@9-172.31.19.141:22-139.178.68.195:50792.service: Deactivated successfully.
Sep  4 17:21:50.706288 systemd[1]: session-10.scope: Deactivated successfully.
Sep  4 17:21:50.708487 systemd-logind[2061]: Removed session 10.
Sep  4 17:21:55.721782 systemd[1]: Started sshd@10-172.31.19.141:22-139.178.68.195:50794.service - OpenSSH per-connection server daemon (139.178.68.195:50794).
Sep  4 17:21:55.906996 sshd[5026]: Accepted publickey for core from 139.178.68.195 port 50794 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:21:55.908544 sshd[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:21:55.914988 systemd-logind[2061]: New session 11 of user core.
Sep  4 17:21:55.922633 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep  4 17:21:56.155294 sshd[5026]: pam_unix(sshd:session): session closed for user core
Sep  4 17:21:56.161809 systemd-logind[2061]: Session 11 logged out. Waiting for processes to exit.
Sep  4 17:21:56.164503 systemd[1]: sshd@10-172.31.19.141:22-139.178.68.195:50794.service: Deactivated successfully.
Sep  4 17:21:56.168889 systemd[1]: session-11.scope: Deactivated successfully.
Sep  4 17:21:56.171404 systemd-logind[2061]: Removed session 11.
Sep  4 17:22:01.192873 systemd[1]: Started sshd@11-172.31.19.141:22-139.178.68.195:56658.service - OpenSSH per-connection server daemon (139.178.68.195:56658).
Sep  4 17:22:01.381153 sshd[5041]: Accepted publickey for core from 139.178.68.195 port 56658 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:01.384071 sshd[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:01.396095 systemd-logind[2061]: New session 12 of user core.
Sep  4 17:22:01.408408 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep  4 17:22:01.813424 sshd[5041]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:01.821384 systemd[1]: sshd@11-172.31.19.141:22-139.178.68.195:56658.service: Deactivated successfully.
Sep  4 17:22:01.827665 systemd-logind[2061]: Session 12 logged out. Waiting for processes to exit.
Sep  4 17:22:01.827707 systemd[1]: session-12.scope: Deactivated successfully.
Sep  4 17:22:01.830341 systemd-logind[2061]: Removed session 12.
Sep  4 17:22:01.841303 systemd[1]: Started sshd@12-172.31.19.141:22-139.178.68.195:56670.service - OpenSSH per-connection server daemon (139.178.68.195:56670).
Sep  4 17:22:02.013174 sshd[5057]: Accepted publickey for core from 139.178.68.195 port 56670 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:02.015195 sshd[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:02.024411 systemd-logind[2061]: New session 13 of user core.
Sep  4 17:22:02.031789 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep  4 17:22:03.620128 sshd[5057]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:03.636281 systemd[1]: sshd@12-172.31.19.141:22-139.178.68.195:56670.service: Deactivated successfully.
Sep  4 17:22:03.659613 systemd[1]: session-13.scope: Deactivated successfully.
Sep  4 17:22:03.662596 systemd-logind[2061]: Session 13 logged out. Waiting for processes to exit.
Sep  4 17:22:03.675660 systemd[1]: Started sshd@13-172.31.19.141:22-139.178.68.195:56678.service - OpenSSH per-connection server daemon (139.178.68.195:56678).
Sep  4 17:22:03.678392 systemd-logind[2061]: Removed session 13.
Sep  4 17:22:03.855343 sshd[5070]: Accepted publickey for core from 139.178.68.195 port 56678 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:03.857641 sshd[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:03.864171 systemd-logind[2061]: New session 14 of user core.
Sep  4 17:22:03.871531 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep  4 17:22:04.106611 sshd[5070]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:04.117570 systemd[1]: sshd@13-172.31.19.141:22-139.178.68.195:56678.service: Deactivated successfully.
Sep  4 17:22:04.118001 systemd-logind[2061]: Session 14 logged out. Waiting for processes to exit.
Sep  4 17:22:04.123879 systemd[1]: session-14.scope: Deactivated successfully.
Sep  4 17:22:04.126672 systemd-logind[2061]: Removed session 14.
Sep  4 17:22:09.133999 systemd[1]: Started sshd@14-172.31.19.141:22-139.178.68.195:60352.service - OpenSSH per-connection server daemon (139.178.68.195:60352).
Sep  4 17:22:09.303961 sshd[5088]: Accepted publickey for core from 139.178.68.195 port 60352 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:09.306161 sshd[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:09.312051 systemd-logind[2061]: New session 15 of user core.
Sep  4 17:22:09.317430 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep  4 17:22:09.538086 sshd[5088]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:09.542410 systemd[1]: sshd@14-172.31.19.141:22-139.178.68.195:60352.service: Deactivated successfully.
Sep  4 17:22:09.550405 systemd-logind[2061]: Session 15 logged out. Waiting for processes to exit.
Sep  4 17:22:09.552034 systemd[1]: session-15.scope: Deactivated successfully.
Sep  4 17:22:09.555325 systemd-logind[2061]: Removed session 15.
Sep  4 17:22:14.569339 systemd[1]: Started sshd@15-172.31.19.141:22-139.178.68.195:60368.service - OpenSSH per-connection server daemon (139.178.68.195:60368).
Sep  4 17:22:14.749937 sshd[5102]: Accepted publickey for core from 139.178.68.195 port 60368 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:14.752086 sshd[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:14.757588 systemd-logind[2061]: New session 16 of user core.
Sep  4 17:22:14.764234 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep  4 17:22:14.991065 sshd[5102]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:14.997347 systemd[1]: sshd@15-172.31.19.141:22-139.178.68.195:60368.service: Deactivated successfully.
Sep  4 17:22:15.003573 systemd[1]: session-16.scope: Deactivated successfully.
Sep  4 17:22:15.004877 systemd-logind[2061]: Session 16 logged out. Waiting for processes to exit.
Sep  4 17:22:15.006265 systemd-logind[2061]: Removed session 16.
Sep  4 17:22:15.019304 systemd[1]: Started sshd@16-172.31.19.141:22-139.178.68.195:60370.service - OpenSSH per-connection server daemon (139.178.68.195:60370).
Sep  4 17:22:15.187243 sshd[5117]: Accepted publickey for core from 139.178.68.195 port 60370 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:15.188997 sshd[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:15.196714 systemd-logind[2061]: New session 17 of user core.
Sep  4 17:22:15.203798 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep  4 17:22:15.860532 sshd[5117]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:15.871171 systemd[1]: sshd@16-172.31.19.141:22-139.178.68.195:60370.service: Deactivated successfully.
Sep  4 17:22:15.877150 systemd[1]: session-17.scope: Deactivated successfully.
Sep  4 17:22:15.886598 systemd-logind[2061]: Session 17 logged out. Waiting for processes to exit.
Sep  4 17:22:15.896867 systemd-logind[2061]: Removed session 17.
Sep  4 17:22:15.904332 systemd[1]: Started sshd@17-172.31.19.141:22-139.178.68.195:60378.service - OpenSSH per-connection server daemon (139.178.68.195:60378).
Sep  4 17:22:16.087887 sshd[5129]: Accepted publickey for core from 139.178.68.195 port 60378 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:16.089783 sshd[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:16.096542 systemd-logind[2061]: New session 18 of user core.
Sep  4 17:22:16.105869 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep  4 17:22:17.316098 sshd[5129]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:17.328749 systemd-logind[2061]: Session 18 logged out. Waiting for processes to exit.
Sep  4 17:22:17.330514 systemd[1]: sshd@17-172.31.19.141:22-139.178.68.195:60378.service: Deactivated successfully.
Sep  4 17:22:17.349225 systemd[1]: Started sshd@18-172.31.19.141:22-139.178.68.195:60354.service - OpenSSH per-connection server daemon (139.178.68.195:60354).
Sep  4 17:22:17.349837 systemd[1]: session-18.scope: Deactivated successfully.
Sep  4 17:22:17.351636 systemd-logind[2061]: Removed session 18.
Sep  4 17:22:17.507365 sshd[5148]: Accepted publickey for core from 139.178.68.195 port 60354 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:17.509026 sshd[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:17.514214 systemd-logind[2061]: New session 19 of user core.
Sep  4 17:22:17.519629 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep  4 17:22:18.193371 sshd[5148]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:18.200355 systemd-logind[2061]: Session 19 logged out. Waiting for processes to exit.
Sep  4 17:22:18.201437 systemd[1]: sshd@18-172.31.19.141:22-139.178.68.195:60354.service: Deactivated successfully.
Sep  4 17:22:18.208549 systemd[1]: session-19.scope: Deactivated successfully.
Sep  4 17:22:18.209886 systemd-logind[2061]: Removed session 19.
Sep  4 17:22:18.226300 systemd[1]: Started sshd@19-172.31.19.141:22-139.178.68.195:60362.service - OpenSSH per-connection server daemon (139.178.68.195:60362).
Sep  4 17:22:18.391196 sshd[5161]: Accepted publickey for core from 139.178.68.195 port 60362 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:18.392992 sshd[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:18.398511 systemd-logind[2061]: New session 20 of user core.
Sep  4 17:22:18.404214 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep  4 17:22:18.608808 sshd[5161]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:18.615517 systemd-logind[2061]: Session 20 logged out. Waiting for processes to exit.
Sep  4 17:22:18.616907 systemd[1]: sshd@19-172.31.19.141:22-139.178.68.195:60362.service: Deactivated successfully.
Sep  4 17:22:18.623411 systemd[1]: session-20.scope: Deactivated successfully.
Sep  4 17:22:18.628374 systemd-logind[2061]: Removed session 20.
Sep  4 17:22:23.640575 systemd[1]: Started sshd@20-172.31.19.141:22-139.178.68.195:60374.service - OpenSSH per-connection server daemon (139.178.68.195:60374).
Sep  4 17:22:23.824160 sshd[5175]: Accepted publickey for core from 139.178.68.195 port 60374 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:23.827132 sshd[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:23.834157 systemd-logind[2061]: New session 21 of user core.
Sep  4 17:22:23.843252 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep  4 17:22:24.104676 sshd[5175]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:24.110403 systemd-logind[2061]: Session 21 logged out. Waiting for processes to exit.
Sep  4 17:22:24.110684 systemd[1]: sshd@20-172.31.19.141:22-139.178.68.195:60374.service: Deactivated successfully.
Sep  4 17:22:24.116978 systemd[1]: session-21.scope: Deactivated successfully.
Sep  4 17:22:24.119127 systemd-logind[2061]: Removed session 21.
Sep  4 17:22:29.146116 systemd[1]: Started sshd@21-172.31.19.141:22-139.178.68.195:54416.service - OpenSSH per-connection server daemon (139.178.68.195:54416).
Sep  4 17:22:29.333863 sshd[5192]: Accepted publickey for core from 139.178.68.195 port 54416 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:29.334654 sshd[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:29.343316 systemd-logind[2061]: New session 22 of user core.
Sep  4 17:22:29.348750 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep  4 17:22:29.581708 sshd[5192]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:29.592904 systemd[1]: sshd@21-172.31.19.141:22-139.178.68.195:54416.service: Deactivated successfully.
Sep  4 17:22:29.619081 systemd-logind[2061]: Session 22 logged out. Waiting for processes to exit.
Sep  4 17:22:29.620300 systemd[1]: session-22.scope: Deactivated successfully.
Sep  4 17:22:29.629373 systemd-logind[2061]: Removed session 22.
Sep  4 17:22:34.616464 systemd[1]: Started sshd@22-172.31.19.141:22-139.178.68.195:54430.service - OpenSSH per-connection server daemon (139.178.68.195:54430).
Sep  4 17:22:34.810785 sshd[5206]: Accepted publickey for core from 139.178.68.195 port 54430 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:34.813061 sshd[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:34.820558 systemd-logind[2061]: New session 23 of user core.
Sep  4 17:22:34.828683 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep  4 17:22:35.038054 sshd[5206]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:35.051866 systemd-logind[2061]: Session 23 logged out. Waiting for processes to exit.
Sep  4 17:22:35.055895 systemd[1]: sshd@22-172.31.19.141:22-139.178.68.195:54430.service: Deactivated successfully.
Sep  4 17:22:35.069207 systemd[1]: session-23.scope: Deactivated successfully.
Sep  4 17:22:35.071600 systemd-logind[2061]: Removed session 23.
Sep  4 17:22:40.067496 systemd[1]: Started sshd@23-172.31.19.141:22-139.178.68.195:59784.service - OpenSSH per-connection server daemon (139.178.68.195:59784).
Sep  4 17:22:40.241165 sshd[5221]: Accepted publickey for core from 139.178.68.195 port 59784 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:40.243439 sshd[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:40.262407 systemd-logind[2061]: New session 24 of user core.
Sep  4 17:22:40.271378 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep  4 17:22:40.472770 sshd[5221]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:40.476746 systemd[1]: sshd@23-172.31.19.141:22-139.178.68.195:59784.service: Deactivated successfully.
Sep  4 17:22:40.483096 systemd[1]: session-24.scope: Deactivated successfully.
Sep  4 17:22:40.484884 systemd-logind[2061]: Session 24 logged out. Waiting for processes to exit.
Sep  4 17:22:40.486353 systemd-logind[2061]: Removed session 24.
Sep  4 17:22:40.501826 systemd[1]: Started sshd@24-172.31.19.141:22-139.178.68.195:59790.service - OpenSSH per-connection server daemon (139.178.68.195:59790).
Sep  4 17:22:40.663512 sshd[5235]: Accepted publickey for core from 139.178.68.195 port 59790 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:40.665066 sshd[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:40.670094 systemd-logind[2061]: New session 25 of user core.
Sep  4 17:22:40.675492 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep  4 17:22:42.518805 containerd[2090]: time="2024-09-04T17:22:42.518758142Z" level=info msg="StopContainer for \"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937\" with timeout 30 (s)"
Sep  4 17:22:42.520624 containerd[2090]: time="2024-09-04T17:22:42.520590501Z" level=info msg="Stop container \"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937\" with signal terminated"
Sep  4 17:22:42.610235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937-rootfs.mount: Deactivated successfully.
Sep  4 17:22:42.612797 containerd[2090]: time="2024-09-04T17:22:42.612755922Z" level=info msg="StopContainer for \"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae\" with timeout 2 (s)"
Sep  4 17:22:42.614160 containerd[2090]: time="2024-09-04T17:22:42.614040950Z" level=info msg="Stop container \"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae\" with signal terminated"
Sep  4 17:22:42.620636 containerd[2090]: time="2024-09-04T17:22:42.620574474Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep  4 17:22:42.629235 systemd-networkd[1653]: lxc_health: Link DOWN
Sep  4 17:22:42.629244 systemd-networkd[1653]: lxc_health: Lost carrier
Sep  4 17:22:42.650094 containerd[2090]: time="2024-09-04T17:22:42.650028038Z" level=info msg="shim disconnected" id=83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937 namespace=k8s.io
Sep  4 17:22:42.650094 containerd[2090]: time="2024-09-04T17:22:42.650093092Z" level=warning msg="cleaning up after shim disconnected" id=83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937 namespace=k8s.io
Sep  4 17:22:42.650094 containerd[2090]: time="2024-09-04T17:22:42.650104774Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:22:42.702942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae-rootfs.mount: Deactivated successfully.
Sep  4 17:22:42.720888 containerd[2090]: time="2024-09-04T17:22:42.720349346Z" level=info msg="shim disconnected" id=28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae namespace=k8s.io
Sep  4 17:22:42.720888 containerd[2090]: time="2024-09-04T17:22:42.720472954Z" level=warning msg="cleaning up after shim disconnected" id=28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae namespace=k8s.io
Sep  4 17:22:42.720888 containerd[2090]: time="2024-09-04T17:22:42.720486501Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:22:42.729968 containerd[2090]: time="2024-09-04T17:22:42.729865392Z" level=info msg="StopContainer for \"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937\" returns successfully"
Sep  4 17:22:42.732018 containerd[2090]: time="2024-09-04T17:22:42.731972886Z" level=info msg="StopPodSandbox for \"5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937\""
Sep  4 17:22:42.741810 containerd[2090]: time="2024-09-04T17:22:42.732028502Z" level=info msg="Container to stop \"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep  4 17:22:42.750052 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937-shm.mount: Deactivated successfully.
Sep  4 17:22:42.778466 containerd[2090]: time="2024-09-04T17:22:42.778272114Z" level=info msg="StopContainer for \"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae\" returns successfully"
Sep  4 17:22:42.779417 containerd[2090]: time="2024-09-04T17:22:42.779369877Z" level=info msg="StopPodSandbox for \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\""
Sep  4 17:22:42.779519 containerd[2090]: time="2024-09-04T17:22:42.779422698Z" level=info msg="Container to stop \"84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep  4 17:22:42.779519 containerd[2090]: time="2024-09-04T17:22:42.779465993Z" level=info msg="Container to stop \"1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep  4 17:22:42.779519 containerd[2090]: time="2024-09-04T17:22:42.779481025Z" level=info msg="Container to stop \"eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep  4 17:22:42.779647 containerd[2090]: time="2024-09-04T17:22:42.779494690Z" level=info msg="Container to stop \"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep  4 17:22:42.779647 containerd[2090]: time="2024-09-04T17:22:42.779590244Z" level=info msg="Container to stop \"cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep  4 17:22:42.784065 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5-shm.mount: Deactivated successfully.
Sep  4 17:22:42.822402 containerd[2090]: time="2024-09-04T17:22:42.822336415Z" level=info msg="shim disconnected" id=5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937 namespace=k8s.io
Sep  4 17:22:42.822402 containerd[2090]: time="2024-09-04T17:22:42.822401784Z" level=warning msg="cleaning up after shim disconnected" id=5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937 namespace=k8s.io
Sep  4 17:22:42.822676 containerd[2090]: time="2024-09-04T17:22:42.822414400Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:22:42.856090 containerd[2090]: time="2024-09-04T17:22:42.855814164Z" level=info msg="TearDown network for sandbox \"5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937\" successfully"
Sep  4 17:22:42.856090 containerd[2090]: time="2024-09-04T17:22:42.855854537Z" level=info msg="StopPodSandbox for \"5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937\" returns successfully"
Sep  4 17:22:42.858165 containerd[2090]: time="2024-09-04T17:22:42.858092735Z" level=info msg="shim disconnected" id=dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5 namespace=k8s.io
Sep  4 17:22:42.858568 containerd[2090]: time="2024-09-04T17:22:42.858540349Z" level=warning msg="cleaning up after shim disconnected" id=dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5 namespace=k8s.io
Sep  4 17:22:42.858649 containerd[2090]: time="2024-09-04T17:22:42.858567193Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:22:42.893799 containerd[2090]: time="2024-09-04T17:22:42.893746843Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:22:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep  4 17:22:42.895037 containerd[2090]: time="2024-09-04T17:22:42.895000634Z" level=info msg="TearDown network for sandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" successfully"
Sep  4 17:22:42.895037 containerd[2090]: time="2024-09-04T17:22:42.895066650Z" level=info msg="StopPodSandbox for \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" returns successfully"
Sep  4 17:22:42.951749 kubelet[3635]: I0904 17:22:42.951702    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7f9a8ad-be41-4c98-92d6-4c89d001f0a1-cilium-config-path\") pod \"f7f9a8ad-be41-4c98-92d6-4c89d001f0a1\" (UID: \"f7f9a8ad-be41-4c98-92d6-4c89d001f0a1\") "
Sep  4 17:22:42.952292 kubelet[3635]: I0904 17:22:42.951769    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99mts\" (UniqueName: \"kubernetes.io/projected/f7f9a8ad-be41-4c98-92d6-4c89d001f0a1-kube-api-access-99mts\") pod \"f7f9a8ad-be41-4c98-92d6-4c89d001f0a1\" (UID: \"f7f9a8ad-be41-4c98-92d6-4c89d001f0a1\") "
Sep  4 17:22:42.972256 kubelet[3635]: I0904 17:22:42.971051    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7f9a8ad-be41-4c98-92d6-4c89d001f0a1-kube-api-access-99mts" (OuterVolumeSpecName: "kube-api-access-99mts") pod "f7f9a8ad-be41-4c98-92d6-4c89d001f0a1" (UID: "f7f9a8ad-be41-4c98-92d6-4c89d001f0a1"). InnerVolumeSpecName "kube-api-access-99mts". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep  4 17:22:42.976965 kubelet[3635]: I0904 17:22:42.970167    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7f9a8ad-be41-4c98-92d6-4c89d001f0a1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f7f9a8ad-be41-4c98-92d6-4c89d001f0a1" (UID: "f7f9a8ad-be41-4c98-92d6-4c89d001f0a1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep  4 17:22:43.052582 kubelet[3635]: I0904 17:22:43.052460    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c29159c9-f066-43bc-8013-1523b3f97584-hubble-tls\") pod \"c29159c9-f066-43bc-8013-1523b3f97584\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") "
Sep  4 17:22:43.052582 kubelet[3635]: I0904 17:22:43.052506    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-bpf-maps\") pod \"c29159c9-f066-43bc-8013-1523b3f97584\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") "
Sep  4 17:22:43.052582 kubelet[3635]: I0904 17:22:43.052530    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-lib-modules\") pod \"c29159c9-f066-43bc-8013-1523b3f97584\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") "
Sep  4 17:22:43.052582 kubelet[3635]: I0904 17:22:43.052561    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c29159c9-f066-43bc-8013-1523b3f97584-cilium-config-path\") pod \"c29159c9-f066-43bc-8013-1523b3f97584\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") "
Sep  4 17:22:43.052861 kubelet[3635]: I0904 17:22:43.052591    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c29159c9-f066-43bc-8013-1523b3f97584-clustermesh-secrets\") pod \"c29159c9-f066-43bc-8013-1523b3f97584\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") "
Sep  4 17:22:43.052861 kubelet[3635]: I0904 17:22:43.052614    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-cilium-cgroup\") pod \"c29159c9-f066-43bc-8013-1523b3f97584\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") "
Sep  4 17:22:43.052861 kubelet[3635]: I0904 17:22:43.052635    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-etc-cni-netd\") pod \"c29159c9-f066-43bc-8013-1523b3f97584\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") "
Sep  4 17:22:43.052861 kubelet[3635]: I0904 17:22:43.052659    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-host-proc-sys-net\") pod \"c29159c9-f066-43bc-8013-1523b3f97584\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") "
Sep  4 17:22:43.052861 kubelet[3635]: I0904 17:22:43.052687    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh6nx\" (UniqueName: \"kubernetes.io/projected/c29159c9-f066-43bc-8013-1523b3f97584-kube-api-access-nh6nx\") pod \"c29159c9-f066-43bc-8013-1523b3f97584\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") "
Sep  4 17:22:43.052861 kubelet[3635]: I0904 17:22:43.052714    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-hostproc\") pod \"c29159c9-f066-43bc-8013-1523b3f97584\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") "
Sep  4 17:22:43.053134 kubelet[3635]: I0904 17:22:43.052740    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-xtables-lock\") pod \"c29159c9-f066-43bc-8013-1523b3f97584\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") "
Sep  4 17:22:43.053134 kubelet[3635]: I0904 17:22:43.052769    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-cni-path\") pod \"c29159c9-f066-43bc-8013-1523b3f97584\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") "
Sep  4 17:22:43.053134 kubelet[3635]: I0904 17:22:43.052796    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-cilium-run\") pod \"c29159c9-f066-43bc-8013-1523b3f97584\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") "
Sep  4 17:22:43.053134 kubelet[3635]: I0904 17:22:43.052825    3635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-host-proc-sys-kernel\") pod \"c29159c9-f066-43bc-8013-1523b3f97584\" (UID: \"c29159c9-f066-43bc-8013-1523b3f97584\") "
Sep  4 17:22:43.053134 kubelet[3635]: I0904 17:22:43.052877    3635 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7f9a8ad-be41-4c98-92d6-4c89d001f0a1-cilium-config-path\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.053134 kubelet[3635]: I0904 17:22:43.052897    3635 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-99mts\" (UniqueName: \"kubernetes.io/projected/f7f9a8ad-be41-4c98-92d6-4c89d001f0a1-kube-api-access-99mts\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.055760 kubelet[3635]: I0904 17:22:43.052958    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c29159c9-f066-43bc-8013-1523b3f97584" (UID: "c29159c9-f066-43bc-8013-1523b3f97584"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 17:22:43.055760 kubelet[3635]: I0904 17:22:43.053402    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c29159c9-f066-43bc-8013-1523b3f97584" (UID: "c29159c9-f066-43bc-8013-1523b3f97584"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 17:22:43.055760 kubelet[3635]: I0904 17:22:43.053439    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c29159c9-f066-43bc-8013-1523b3f97584" (UID: "c29159c9-f066-43bc-8013-1523b3f97584"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 17:22:43.055760 kubelet[3635]: I0904 17:22:43.053461    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c29159c9-f066-43bc-8013-1523b3f97584" (UID: "c29159c9-f066-43bc-8013-1523b3f97584"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 17:22:43.057089 kubelet[3635]: I0904 17:22:43.056968    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c29159c9-f066-43bc-8013-1523b3f97584" (UID: "c29159c9-f066-43bc-8013-1523b3f97584"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 17:22:43.058638 kubelet[3635]: I0904 17:22:43.058295    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-hostproc" (OuterVolumeSpecName: "hostproc") pod "c29159c9-f066-43bc-8013-1523b3f97584" (UID: "c29159c9-f066-43bc-8013-1523b3f97584"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 17:22:43.058638 kubelet[3635]: I0904 17:22:43.058347    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c29159c9-f066-43bc-8013-1523b3f97584" (UID: "c29159c9-f066-43bc-8013-1523b3f97584"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 17:22:43.059039 kubelet[3635]: I0904 17:22:43.059011    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c29159c9-f066-43bc-8013-1523b3f97584" (UID: "c29159c9-f066-43bc-8013-1523b3f97584"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 17:22:43.060195 kubelet[3635]: I0904 17:22:43.059054    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-cni-path" (OuterVolumeSpecName: "cni-path") pod "c29159c9-f066-43bc-8013-1523b3f97584" (UID: "c29159c9-f066-43bc-8013-1523b3f97584"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 17:22:43.060195 kubelet[3635]: I0904 17:22:43.059076    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c29159c9-f066-43bc-8013-1523b3f97584" (UID: "c29159c9-f066-43bc-8013-1523b3f97584"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep  4 17:22:43.060479 kubelet[3635]: I0904 17:22:43.060444    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29159c9-f066-43bc-8013-1523b3f97584-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c29159c9-f066-43bc-8013-1523b3f97584" (UID: "c29159c9-f066-43bc-8013-1523b3f97584"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep  4 17:22:43.061381 kubelet[3635]: I0904 17:22:43.061356    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29159c9-f066-43bc-8013-1523b3f97584-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c29159c9-f066-43bc-8013-1523b3f97584" (UID: "c29159c9-f066-43bc-8013-1523b3f97584"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep  4 17:22:43.062104 kubelet[3635]: I0904 17:22:43.062080    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29159c9-f066-43bc-8013-1523b3f97584-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c29159c9-f066-43bc-8013-1523b3f97584" (UID: "c29159c9-f066-43bc-8013-1523b3f97584"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep  4 17:22:43.063881 kubelet[3635]: I0904 17:22:43.063845    3635 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29159c9-f066-43bc-8013-1523b3f97584-kube-api-access-nh6nx" (OuterVolumeSpecName: "kube-api-access-nh6nx") pod "c29159c9-f066-43bc-8013-1523b3f97584" (UID: "c29159c9-f066-43bc-8013-1523b3f97584"). InnerVolumeSpecName "kube-api-access-nh6nx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep  4 17:22:43.153367 kubelet[3635]: I0904 17:22:43.153319    3635 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-bpf-maps\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.153367 kubelet[3635]: I0904 17:22:43.153362    3635 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-lib-modules\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.153367 kubelet[3635]: I0904 17:22:43.153377    3635 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c29159c9-f066-43bc-8013-1523b3f97584-cilium-config-path\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.153637 kubelet[3635]: I0904 17:22:43.153393    3635 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c29159c9-f066-43bc-8013-1523b3f97584-clustermesh-secrets\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.153637 kubelet[3635]: I0904 17:22:43.153406    3635 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-cilium-cgroup\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.153637 kubelet[3635]: I0904 17:22:43.153419    3635 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-etc-cni-netd\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.153637 kubelet[3635]: I0904 17:22:43.153432    3635 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-host-proc-sys-net\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.153637 kubelet[3635]: I0904 17:22:43.153446    3635 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nh6nx\" (UniqueName: \"kubernetes.io/projected/c29159c9-f066-43bc-8013-1523b3f97584-kube-api-access-nh6nx\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.153637 kubelet[3635]: I0904 17:22:43.153459    3635 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-hostproc\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.153637 kubelet[3635]: I0904 17:22:43.153474    3635 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-xtables-lock\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.153637 kubelet[3635]: I0904 17:22:43.153488    3635 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-cni-path\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.153859 kubelet[3635]: I0904 17:22:43.153501    3635 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-host-proc-sys-kernel\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.153859 kubelet[3635]: I0904 17:22:43.153514    3635 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c29159c9-f066-43bc-8013-1523b3f97584-cilium-run\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.153859 kubelet[3635]: I0904 17:22:43.153528    3635 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c29159c9-f066-43bc-8013-1523b3f97584-hubble-tls\") on node \"ip-172-31-19-141\" DevicePath \"\""
Sep  4 17:22:43.455421 kubelet[3635]: I0904 17:22:43.455380    3635 scope.go:117] "RemoveContainer" containerID="28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae"
Sep  4 17:22:43.472865 containerd[2090]: time="2024-09-04T17:22:43.471324727Z" level=info msg="RemoveContainer for \"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae\""
Sep  4 17:22:43.479339 containerd[2090]: time="2024-09-04T17:22:43.479277244Z" level=info msg="RemoveContainer for \"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae\" returns successfully"
Sep  4 17:22:43.479822 kubelet[3635]: I0904 17:22:43.479770    3635 scope.go:117] "RemoveContainer" containerID="eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f"
Sep  4 17:22:43.482236 containerd[2090]: time="2024-09-04T17:22:43.481877808Z" level=info msg="RemoveContainer for \"eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f\""
Sep  4 17:22:43.487044 containerd[2090]: time="2024-09-04T17:22:43.486995854Z" level=info msg="RemoveContainer for \"eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f\" returns successfully"
Sep  4 17:22:43.487806 kubelet[3635]: I0904 17:22:43.487777    3635 scope.go:117] "RemoveContainer" containerID="1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c"
Sep  4 17:22:43.489360 containerd[2090]: time="2024-09-04T17:22:43.489024573Z" level=info msg="RemoveContainer for \"1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c\""
Sep  4 17:22:43.493688 containerd[2090]: time="2024-09-04T17:22:43.493645735Z" level=info msg="RemoveContainer for \"1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c\" returns successfully"
Sep  4 17:22:43.493973 kubelet[3635]: I0904 17:22:43.493949    3635 scope.go:117] "RemoveContainer" containerID="cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96"
Sep  4 17:22:43.503948 containerd[2090]: time="2024-09-04T17:22:43.503103810Z" level=info msg="RemoveContainer for \"cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96\""
Sep  4 17:22:43.507657 containerd[2090]: time="2024-09-04T17:22:43.507613679Z" level=info msg="RemoveContainer for \"cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96\" returns successfully"
Sep  4 17:22:43.507907 kubelet[3635]: I0904 17:22:43.507880    3635 scope.go:117] "RemoveContainer" containerID="84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29"
Sep  4 17:22:43.509235 containerd[2090]: time="2024-09-04T17:22:43.509201572Z" level=info msg="RemoveContainer for \"84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29\""
Sep  4 17:22:43.513885 containerd[2090]: time="2024-09-04T17:22:43.513844640Z" level=info msg="RemoveContainer for \"84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29\" returns successfully"
Sep  4 17:22:43.514190 kubelet[3635]: I0904 17:22:43.514166    3635 scope.go:117] "RemoveContainer" containerID="28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae"
Sep  4 17:22:43.514446 containerd[2090]: time="2024-09-04T17:22:43.514411916Z" level=error msg="ContainerStatus for \"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae\": not found"
Sep  4 17:22:43.519711 kubelet[3635]: E0904 17:22:43.519656    3635 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae\": not found" containerID="28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae"
Sep  4 17:22:43.526745 kubelet[3635]: I0904 17:22:43.526701    3635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae"} err="failed to get container status \"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"28570b27ed7a5633f512e76d65dd004a50ddc9f6b402f3577987b6bdb8d679ae\": not found"
Sep  4 17:22:43.526745 kubelet[3635]: I0904 17:22:43.526750    3635 scope.go:117] "RemoveContainer" containerID="eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f"
Sep  4 17:22:43.527238 containerd[2090]: time="2024-09-04T17:22:43.527195879Z" level=error msg="ContainerStatus for \"eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f\": not found"
Sep  4 17:22:43.527735 kubelet[3635]: E0904 17:22:43.527708    3635 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f\": not found" containerID="eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f"
Sep  4 17:22:43.527808 kubelet[3635]: I0904 17:22:43.527759    3635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f"} err="failed to get container status \"eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb0a89842a0115832ce606e307a66e7f6f8a10caa772f50431f16de0a799c39f\": not found"
Sep  4 17:22:43.527808 kubelet[3635]: I0904 17:22:43.527776    3635 scope.go:117] "RemoveContainer" containerID="1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c"
Sep  4 17:22:43.528158 containerd[2090]: time="2024-09-04T17:22:43.528123526Z" level=error msg="ContainerStatus for \"1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c\": not found"
Sep  4 17:22:43.528326 kubelet[3635]: E0904 17:22:43.528301    3635 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c\": not found" containerID="1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c"
Sep  4 17:22:43.528418 kubelet[3635]: I0904 17:22:43.528338    3635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c"} err="failed to get container status \"1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ab0d5f648ab4949dad96e009cdf99990ac098753ef87836156afd55d2f6bc0c\": not found"
Sep  4 17:22:43.528418 kubelet[3635]: I0904 17:22:43.528353    3635 scope.go:117] "RemoveContainer" containerID="cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96"
Sep  4 17:22:43.528567 containerd[2090]: time="2024-09-04T17:22:43.528538012Z" level=error msg="ContainerStatus for \"cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96\": not found"
Sep  4 17:22:43.528720 kubelet[3635]: E0904 17:22:43.528699    3635 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96\": not found" containerID="cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96"
Sep  4 17:22:43.528791 kubelet[3635]: I0904 17:22:43.528734    3635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96"} err="failed to get container status \"cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96\": rpc error: code = NotFound desc = an error occurred when try to find container \"cac5f49ad1cd90fc18ca54aac3ab9605bbff73dd7977faec6a0ddc63743b5a96\": not found"
Sep  4 17:22:43.528791 kubelet[3635]: I0904 17:22:43.528747    3635 scope.go:117] "RemoveContainer" containerID="84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29"
Sep  4 17:22:43.528954 containerd[2090]: time="2024-09-04T17:22:43.528910215Z" level=error msg="ContainerStatus for \"84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29\": not found"
Sep  4 17:22:43.529137 kubelet[3635]: E0904 17:22:43.529110    3635 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29\": not found" containerID="84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29"
Sep  4 17:22:43.529196 kubelet[3635]: I0904 17:22:43.529143    3635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29"} err="failed to get container status \"84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29\": rpc error: code = NotFound desc = an error occurred when try to find container \"84b6c2972a9f5a5940ea5447b10f3308730acfc94fdef9dca5971649772d9c29\": not found"
Sep  4 17:22:43.529196 kubelet[3635]: I0904 17:22:43.529155    3635 scope.go:117] "RemoveContainer" containerID="83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937"
Sep  4 17:22:43.530274 containerd[2090]: time="2024-09-04T17:22:43.530245046Z" level=info msg="RemoveContainer for \"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937\""
Sep  4 17:22:43.534515 containerd[2090]: time="2024-09-04T17:22:43.534482923Z" level=info msg="RemoveContainer for \"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937\" returns successfully"
Sep  4 17:22:43.534761 kubelet[3635]: I0904 17:22:43.534737    3635 scope.go:117] "RemoveContainer" containerID="83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937"
Sep  4 17:22:43.535023 containerd[2090]: time="2024-09-04T17:22:43.534982376Z" level=error msg="ContainerStatus for \"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937\": not found"
Sep  4 17:22:43.535216 kubelet[3635]: E0904 17:22:43.535193    3635 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937\": not found" containerID="83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937"
Sep  4 17:22:43.535287 kubelet[3635]: I0904 17:22:43.535236    3635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937"} err="failed to get container status \"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937\": rpc error: code = NotFound desc = an error occurred when try to find container \"83356890c60915d301ebac577c7cfa4fe4f36c31c0e5dabea4bb0b48d8dbc937\": not found"
Sep  4 17:22:43.554458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937-rootfs.mount: Deactivated successfully.
Sep  4 17:22:43.554677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5-rootfs.mount: Deactivated successfully.
Sep  4 17:22:43.554811 systemd[1]: var-lib-kubelet-pods-c29159c9\x2df066\x2d43bc\x2d8013\x2d1523b3f97584-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep  4 17:22:43.554987 systemd[1]: var-lib-kubelet-pods-c29159c9\x2df066\x2d43bc\x2d8013\x2d1523b3f97584-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep  4 17:22:43.555230 systemd[1]: var-lib-kubelet-pods-c29159c9\x2df066\x2d43bc\x2d8013\x2d1523b3f97584-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnh6nx.mount: Deactivated successfully.
Sep  4 17:22:43.555372 systemd[1]: var-lib-kubelet-pods-f7f9a8ad\x2dbe41\x2d4c98\x2d92d6\x2d4c89d001f0a1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d99mts.mount: Deactivated successfully.
Sep  4 17:22:43.869480 kubelet[3635]: I0904 17:22:43.869444    3635 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c29159c9-f066-43bc-8013-1523b3f97584" path="/var/lib/kubelet/pods/c29159c9-f066-43bc-8013-1523b3f97584/volumes"
Sep  4 17:22:43.870232 kubelet[3635]: I0904 17:22:43.870202    3635 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f7f9a8ad-be41-4c98-92d6-4c89d001f0a1" path="/var/lib/kubelet/pods/f7f9a8ad-be41-4c98-92d6-4c89d001f0a1/volumes"
Sep  4 17:22:44.421996 sshd[5235]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:44.428394 systemd[1]: sshd@24-172.31.19.141:22-139.178.68.195:59790.service: Deactivated successfully.
Sep  4 17:22:44.435319 systemd[1]: session-25.scope: Deactivated successfully.
Sep  4 17:22:44.435769 systemd-logind[2061]: Session 25 logged out. Waiting for processes to exit.
Sep  4 17:22:44.439281 systemd-logind[2061]: Removed session 25.
Sep  4 17:22:44.451348 systemd[1]: Started sshd@25-172.31.19.141:22-139.178.68.195:59804.service - OpenSSH per-connection server daemon (139.178.68.195:59804).
Sep  4 17:22:44.618489 sshd[5402]: Accepted publickey for core from 139.178.68.195 port 59804 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:44.620450 sshd[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:44.630692 systemd-logind[2061]: New session 26 of user core.
Sep  4 17:22:44.641615 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep  4 17:22:45.245727 ntpd[2042]: Deleting interface #10 lxc_health, fe80::c8e5:b0ff:fe70:cd69%8#123, interface stats: received=0, sent=0, dropped=0, active_time=73 secs
Sep  4 17:22:45.246288 ntpd[2042]:  4 Sep 17:22:45 ntpd[2042]: Deleting interface #10 lxc_health, fe80::c8e5:b0ff:fe70:cd69%8#123, interface stats: received=0, sent=0, dropped=0, active_time=73 secs
Sep  4 17:22:45.528688 sshd[5402]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:45.536824 systemd[1]: sshd@25-172.31.19.141:22-139.178.68.195:59804.service: Deactivated successfully.
Sep  4 17:22:45.551465 systemd[1]: session-26.scope: Deactivated successfully.
Sep  4 17:22:45.551752 systemd-logind[2061]: Session 26 logged out. Waiting for processes to exit.
Sep  4 17:22:45.569596 systemd[1]: Started sshd@26-172.31.19.141:22-139.178.68.195:59806.service - OpenSSH per-connection server daemon (139.178.68.195:59806).
Sep  4 17:22:45.573179 systemd-logind[2061]: Removed session 26.
Sep  4 17:22:45.580276 kubelet[3635]: I0904 17:22:45.580154    3635 topology_manager.go:215] "Topology Admit Handler" podUID="9fd564ae-ff7b-4f0d-98ee-27fe61fbf588" podNamespace="kube-system" podName="cilium-2nxk9"
Sep  4 17:22:45.585483 kubelet[3635]: E0904 17:22:45.581023    3635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c29159c9-f066-43bc-8013-1523b3f97584" containerName="mount-cgroup"
Sep  4 17:22:45.585483 kubelet[3635]: E0904 17:22:45.581064    3635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c29159c9-f066-43bc-8013-1523b3f97584" containerName="mount-bpf-fs"
Sep  4 17:22:45.585483 kubelet[3635]: E0904 17:22:45.581075    3635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c29159c9-f066-43bc-8013-1523b3f97584" containerName="clean-cilium-state"
Sep  4 17:22:45.585483 kubelet[3635]: E0904 17:22:45.581085    3635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c29159c9-f066-43bc-8013-1523b3f97584" containerName="apply-sysctl-overwrites"
Sep  4 17:22:45.585483 kubelet[3635]: E0904 17:22:45.581096    3635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7f9a8ad-be41-4c98-92d6-4c89d001f0a1" containerName="cilium-operator"
Sep  4 17:22:45.585483 kubelet[3635]: E0904 17:22:45.581127    3635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c29159c9-f066-43bc-8013-1523b3f97584" containerName="cilium-agent"
Sep  4 17:22:45.596482 kubelet[3635]: I0904 17:22:45.596448    3635 memory_manager.go:346] "RemoveStaleState removing state" podUID="f7f9a8ad-be41-4c98-92d6-4c89d001f0a1" containerName="cilium-operator"
Sep  4 17:22:45.597016 kubelet[3635]: I0904 17:22:45.596713    3635 memory_manager.go:346] "RemoveStaleState removing state" podUID="c29159c9-f066-43bc-8013-1523b3f97584" containerName="cilium-agent"
Sep  4 17:22:45.763653 sshd[5415]: Accepted publickey for core from 139.178.68.195 port 59806 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:45.764485 sshd[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:45.770086 systemd-logind[2061]: New session 27 of user core.
Sep  4 17:22:45.774433 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep  4 17:22:45.780749 kubelet[3635]: I0904 17:22:45.779233    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-cni-path\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.780749 kubelet[3635]: I0904 17:22:45.779289    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-host-proc-sys-kernel\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.780749 kubelet[3635]: I0904 17:22:45.779321    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-xtables-lock\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.780749 kubelet[3635]: I0904 17:22:45.779354    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-cilium-cgroup\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.780749 kubelet[3635]: I0904 17:22:45.779382    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-cilium-ipsec-secrets\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.780749 kubelet[3635]: I0904 17:22:45.779411    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-etc-cni-netd\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.785043 kubelet[3635]: I0904 17:22:45.779440    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-cilium-config-path\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.785043 kubelet[3635]: I0904 17:22:45.779573    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-bpf-maps\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.785043 kubelet[3635]: I0904 17:22:45.779600    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-hostproc\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.785043 kubelet[3635]: I0904 17:22:45.780478    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-clustermesh-secrets\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.785043 kubelet[3635]: I0904 17:22:45.780551    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-hubble-tls\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.785043 kubelet[3635]: I0904 17:22:45.780760    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-lib-modules\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.785322 kubelet[3635]: I0904 17:22:45.780798    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58hc9\" (UniqueName: \"kubernetes.io/projected/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-kube-api-access-58hc9\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.785322 kubelet[3635]: I0904 17:22:45.780827    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-cilium-run\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.785322 kubelet[3635]: I0904 17:22:45.780854    3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fd564ae-ff7b-4f0d-98ee-27fe61fbf588-host-proc-sys-net\") pod \"cilium-2nxk9\" (UID: \"9fd564ae-ff7b-4f0d-98ee-27fe61fbf588\") " pod="kube-system/cilium-2nxk9"
Sep  4 17:22:45.898557 sshd[5415]: pam_unix(sshd:session): session closed for user core
Sep  4 17:22:45.918913 systemd[1]: sshd@26-172.31.19.141:22-139.178.68.195:59806.service: Deactivated successfully.
Sep  4 17:22:45.939951 systemd[1]: session-27.scope: Deactivated successfully.
Sep  4 17:22:45.941411 systemd-logind[2061]: Session 27 logged out. Waiting for processes to exit.
Sep  4 17:22:45.952684 systemd[1]: Started sshd@27-172.31.19.141:22-139.178.68.195:59810.service - OpenSSH per-connection server daemon (139.178.68.195:59810).
Sep  4 17:22:45.954161 systemd-logind[2061]: Removed session 27.
Sep  4 17:22:46.114199 sshd[5428]: Accepted publickey for core from 139.178.68.195 port 59810 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep  4 17:22:46.115747 sshd[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:22:46.121303 systemd-logind[2061]: New session 28 of user core.
Sep  4 17:22:46.126569 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep  4 17:22:46.219591 containerd[2090]: time="2024-09-04T17:22:46.219545953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2nxk9,Uid:9fd564ae-ff7b-4f0d-98ee-27fe61fbf588,Namespace:kube-system,Attempt:0,}"
Sep  4 17:22:46.259681 containerd[2090]: time="2024-09-04T17:22:46.259255523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:22:46.259681 containerd[2090]: time="2024-09-04T17:22:46.259331957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:22:46.259681 containerd[2090]: time="2024-09-04T17:22:46.259376101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:22:46.259681 containerd[2090]: time="2024-09-04T17:22:46.259399307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:22:46.357147 containerd[2090]: time="2024-09-04T17:22:46.357032998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2nxk9,Uid:9fd564ae-ff7b-4f0d-98ee-27fe61fbf588,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2f045719560ba8b12c62236de36b7a49fca622a67b1e6c76745de5c6775d197\""
Sep  4 17:22:46.371224 containerd[2090]: time="2024-09-04T17:22:46.370751783Z" level=info msg="CreateContainer within sandbox \"a2f045719560ba8b12c62236de36b7a49fca622a67b1e6c76745de5c6775d197\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep  4 17:22:46.396251 containerd[2090]: time="2024-09-04T17:22:46.396201529Z" level=info msg="CreateContainer within sandbox \"a2f045719560ba8b12c62236de36b7a49fca622a67b1e6c76745de5c6775d197\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"347a404a7a1afc8ff32ece4b25e558b75cc7a6975322e321972bdc75986067ee\""
Sep  4 17:22:46.397770 containerd[2090]: time="2024-09-04T17:22:46.397736916Z" level=info msg="StartContainer for \"347a404a7a1afc8ff32ece4b25e558b75cc7a6975322e321972bdc75986067ee\""
Sep  4 17:22:46.464282 containerd[2090]: time="2024-09-04T17:22:46.464239637Z" level=info msg="StartContainer for \"347a404a7a1afc8ff32ece4b25e558b75cc7a6975322e321972bdc75986067ee\" returns successfully"
Sep  4 17:22:46.568764 containerd[2090]: time="2024-09-04T17:22:46.568698945Z" level=info msg="shim disconnected" id=347a404a7a1afc8ff32ece4b25e558b75cc7a6975322e321972bdc75986067ee namespace=k8s.io
Sep  4 17:22:46.568764 containerd[2090]: time="2024-09-04T17:22:46.568757975Z" level=warning msg="cleaning up after shim disconnected" id=347a404a7a1afc8ff32ece4b25e558b75cc7a6975322e321972bdc75986067ee namespace=k8s.io
Sep  4 17:22:46.568764 containerd[2090]: time="2024-09-04T17:22:46.568769036Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:22:47.124862 kubelet[3635]: E0904 17:22:47.124829    3635 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep  4 17:22:47.495994 containerd[2090]: time="2024-09-04T17:22:47.495523279Z" level=info msg="CreateContainer within sandbox \"a2f045719560ba8b12c62236de36b7a49fca622a67b1e6c76745de5c6775d197\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep  4 17:22:47.523168 containerd[2090]: time="2024-09-04T17:22:47.523121739Z" level=info msg="CreateContainer within sandbox \"a2f045719560ba8b12c62236de36b7a49fca622a67b1e6c76745de5c6775d197\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a6d16706fa02f3521176df70c2a9ba47645c13fa0df488179ed8997ba6069e2\""
Sep  4 17:22:47.524164 containerd[2090]: time="2024-09-04T17:22:47.524102015Z" level=info msg="StartContainer for \"2a6d16706fa02f3521176df70c2a9ba47645c13fa0df488179ed8997ba6069e2\""
Sep  4 17:22:47.593230 containerd[2090]: time="2024-09-04T17:22:47.593131268Z" level=info msg="StartContainer for \"2a6d16706fa02f3521176df70c2a9ba47645c13fa0df488179ed8997ba6069e2\" returns successfully"
Sep  4 17:22:47.644320 containerd[2090]: time="2024-09-04T17:22:47.644251196Z" level=info msg="shim disconnected" id=2a6d16706fa02f3521176df70c2a9ba47645c13fa0df488179ed8997ba6069e2 namespace=k8s.io
Sep  4 17:22:47.644557 containerd[2090]: time="2024-09-04T17:22:47.644326011Z" level=warning msg="cleaning up after shim disconnected" id=2a6d16706fa02f3521176df70c2a9ba47645c13fa0df488179ed8997ba6069e2 namespace=k8s.io
Sep  4 17:22:47.644557 containerd[2090]: time="2024-09-04T17:22:47.644339358Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:22:47.895665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a6d16706fa02f3521176df70c2a9ba47645c13fa0df488179ed8997ba6069e2-rootfs.mount: Deactivated successfully.
Sep  4 17:22:48.500022 containerd[2090]: time="2024-09-04T17:22:48.499143904Z" level=info msg="CreateContainer within sandbox \"a2f045719560ba8b12c62236de36b7a49fca622a67b1e6c76745de5c6775d197\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep  4 17:22:48.578471 containerd[2090]: time="2024-09-04T17:22:48.578424432Z" level=info msg="CreateContainer within sandbox \"a2f045719560ba8b12c62236de36b7a49fca622a67b1e6c76745de5c6775d197\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e8f7d15ce295202188feb1fef68924e5647fb111fb3e273ccb15042fcdac00d0\""
Sep  4 17:22:48.579231 containerd[2090]: time="2024-09-04T17:22:48.579164823Z" level=info msg="StartContainer for \"e8f7d15ce295202188feb1fef68924e5647fb111fb3e273ccb15042fcdac00d0\""
Sep  4 17:22:48.666764 containerd[2090]: time="2024-09-04T17:22:48.666728788Z" level=info msg="StartContainer for \"e8f7d15ce295202188feb1fef68924e5647fb111fb3e273ccb15042fcdac00d0\" returns successfully"
Sep  4 17:22:48.707936 containerd[2090]: time="2024-09-04T17:22:48.707849801Z" level=info msg="shim disconnected" id=e8f7d15ce295202188feb1fef68924e5647fb111fb3e273ccb15042fcdac00d0 namespace=k8s.io
Sep  4 17:22:48.707936 containerd[2090]: time="2024-09-04T17:22:48.707910520Z" level=warning msg="cleaning up after shim disconnected" id=e8f7d15ce295202188feb1fef68924e5647fb111fb3e273ccb15042fcdac00d0 namespace=k8s.io
Sep  4 17:22:48.707936 containerd[2090]: time="2024-09-04T17:22:48.707942878Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:22:48.894733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8f7d15ce295202188feb1fef68924e5647fb111fb3e273ccb15042fcdac00d0-rootfs.mount: Deactivated successfully.
Sep  4 17:22:49.504577 containerd[2090]: time="2024-09-04T17:22:49.504318461Z" level=info msg="CreateContainer within sandbox \"a2f045719560ba8b12c62236de36b7a49fca622a67b1e6c76745de5c6775d197\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep  4 17:22:49.543653 containerd[2090]: time="2024-09-04T17:22:49.543599582Z" level=info msg="CreateContainer within sandbox \"a2f045719560ba8b12c62236de36b7a49fca622a67b1e6c76745de5c6775d197\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1c77653688cd3f780f28d0fd6a9e76257d2686c623b78362912fb43164c73f64\""
Sep  4 17:22:49.545042 containerd[2090]: time="2024-09-04T17:22:49.544995843Z" level=info msg="StartContainer for \"1c77653688cd3f780f28d0fd6a9e76257d2686c623b78362912fb43164c73f64\""
Sep  4 17:22:49.638720 containerd[2090]: time="2024-09-04T17:22:49.638556387Z" level=info msg="StartContainer for \"1c77653688cd3f780f28d0fd6a9e76257d2686c623b78362912fb43164c73f64\" returns successfully"
Sep  4 17:22:49.687770 containerd[2090]: time="2024-09-04T17:22:49.687707466Z" level=info msg="shim disconnected" id=1c77653688cd3f780f28d0fd6a9e76257d2686c623b78362912fb43164c73f64 namespace=k8s.io
Sep  4 17:22:49.687770 containerd[2090]: time="2024-09-04T17:22:49.687765181Z" level=warning msg="cleaning up after shim disconnected" id=1c77653688cd3f780f28d0fd6a9e76257d2686c623b78362912fb43164c73f64 namespace=k8s.io
Sep  4 17:22:49.687770 containerd[2090]: time="2024-09-04T17:22:49.687776433Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:22:49.895466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c77653688cd3f780f28d0fd6a9e76257d2686c623b78362912fb43164c73f64-rootfs.mount: Deactivated successfully.
Sep  4 17:22:50.510551 containerd[2090]: time="2024-09-04T17:22:50.510511568Z" level=info msg="CreateContainer within sandbox \"a2f045719560ba8b12c62236de36b7a49fca622a67b1e6c76745de5c6775d197\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep  4 17:22:50.549596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1744128317.mount: Deactivated successfully.
Sep  4 17:22:50.551233 containerd[2090]: time="2024-09-04T17:22:50.551191030Z" level=info msg="CreateContainer within sandbox \"a2f045719560ba8b12c62236de36b7a49fca622a67b1e6c76745de5c6775d197\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"86764a77be8537590ad297416faca1350d4d0425e3baea27d5042e39f993a7d6\""
Sep  4 17:22:50.552931 containerd[2090]: time="2024-09-04T17:22:50.551731867Z" level=info msg="StartContainer for \"86764a77be8537590ad297416faca1350d4d0425e3baea27d5042e39f993a7d6\""
Sep  4 17:22:50.686025 containerd[2090]: time="2024-09-04T17:22:50.685983295Z" level=info msg="StartContainer for \"86764a77be8537590ad297416faca1350d4d0425e3baea27d5042e39f993a7d6\" returns successfully"
Sep  4 17:22:51.414690 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep  4 17:22:51.547664 kubelet[3635]: I0904 17:22:51.547274    3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2nxk9" podStartSLOduration=6.547200045 podCreationTimestamp="2024-09-04 17:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:22:51.54374133 +0000 UTC m=+119.999187638" watchObservedRunningTime="2024-09-04 17:22:51.547200045 +0000 UTC m=+120.002646350"
Sep  4 17:22:51.834511 containerd[2090]: time="2024-09-04T17:22:51.834384336Z" level=info msg="StopPodSandbox for \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\""
Sep  4 17:22:51.834511 containerd[2090]: time="2024-09-04T17:22:51.834503850Z" level=info msg="TearDown network for sandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" successfully"
Sep  4 17:22:51.835287 containerd[2090]: time="2024-09-04T17:22:51.834520369Z" level=info msg="StopPodSandbox for \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" returns successfully"
Sep  4 17:22:51.835629 containerd[2090]: time="2024-09-04T17:22:51.835594002Z" level=info msg="RemovePodSandbox for \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\""
Sep  4 17:22:51.835726 containerd[2090]: time="2024-09-04T17:22:51.835635951Z" level=info msg="Forcibly stopping sandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\""
Sep  4 17:22:51.835795 containerd[2090]: time="2024-09-04T17:22:51.835726935Z" level=info msg="TearDown network for sandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" successfully"
Sep  4 17:22:51.844505 containerd[2090]: time="2024-09-04T17:22:51.844451702Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep  4 17:22:51.844638 containerd[2090]: time="2024-09-04T17:22:51.844538305Z" level=info msg="RemovePodSandbox \"dbcb0f5ae70bba5369c6fa154adb08a7f466bfd0c8e86546001d51e01e063ca5\" returns successfully"
Sep  4 17:22:51.845757 containerd[2090]: time="2024-09-04T17:22:51.845574277Z" level=info msg="StopPodSandbox for \"5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937\""
Sep  4 17:22:51.845757 containerd[2090]: time="2024-09-04T17:22:51.845682990Z" level=info msg="TearDown network for sandbox \"5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937\" successfully"
Sep  4 17:22:51.845757 containerd[2090]: time="2024-09-04T17:22:51.845695097Z" level=info msg="StopPodSandbox for \"5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937\" returns successfully"
Sep  4 17:22:51.846271 containerd[2090]: time="2024-09-04T17:22:51.846179848Z" level=info msg="RemovePodSandbox for \"5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937\""
Sep  4 17:22:51.847791 containerd[2090]: time="2024-09-04T17:22:51.846210906Z" level=info msg="Forcibly stopping sandbox \"5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937\""
Sep  4 17:22:51.847875 containerd[2090]: time="2024-09-04T17:22:51.847845608Z" level=info msg="TearDown network for sandbox \"5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937\" successfully"
Sep  4 17:22:51.853942 containerd[2090]: time="2024-09-04T17:22:51.853457780Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep  4 17:22:51.853942 containerd[2090]: time="2024-09-04T17:22:51.853524779Z" level=info msg="RemovePodSandbox \"5419427166b31bea5ea51e1441ae844fcffdd78c00051d6d26c013f671c8f937\" returns successfully"
Sep  4 17:22:55.283721 systemd-networkd[1653]: lxc_health: Link UP
Sep  4 17:22:55.293402 systemd-networkd[1653]: lxc_health: Gained carrier
Sep  4 17:22:55.303375 (udev-worker)[6291]: Network interface NamePolicy= disabled on kernel command line.
Sep  4 17:22:57.030128 systemd-networkd[1653]: lxc_health: Gained IPv6LL
Sep  4 17:22:57.812451 systemd[1]: run-containerd-runc-k8s.io-86764a77be8537590ad297416faca1350d4d0425e3baea27d5042e39f993a7d6-runc.NxaSkm.mount: Deactivated successfully.
Sep  4 17:22:59.246585 ntpd[2042]: Listen normally on 13 lxc_health [fe80::60f6:baff:fe8f:d541%14]:123
Sep  4 17:22:59.248615 ntpd[2042]:  4 Sep 17:22:59 ntpd[2042]: Listen normally on 13 lxc_health [fe80::60f6:baff:fe8f:d541%14]:123
Sep  4 17:23:00.314610 kubelet[3635]: E0904 17:23:00.312909    3635 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35988->127.0.0.1:42271: write tcp 127.0.0.1:35988->127.0.0.1:42271: write: broken pipe
Sep  4 17:23:00.534682 sshd[5428]: pam_unix(sshd:session): session closed for user core
Sep  4 17:23:00.546463 systemd-logind[2061]: Session 28 logged out. Waiting for processes to exit.
Sep  4 17:23:00.547541 systemd[1]: sshd@27-172.31.19.141:22-139.178.68.195:59810.service: Deactivated successfully.
Sep  4 17:23:00.561400 systemd[1]: session-28.scope: Deactivated successfully.
Sep  4 17:23:00.574811 systemd-logind[2061]: Removed session 28.
Sep  4 17:23:16.052848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-357292d98d0b316a03921ac220f2d79c8c86f43dbe8785a0b8e477c2e514e59c-rootfs.mount: Deactivated successfully.
Sep  4 17:23:16.073911 containerd[2090]: time="2024-09-04T17:23:16.073843918Z" level=info msg="shim disconnected" id=357292d98d0b316a03921ac220f2d79c8c86f43dbe8785a0b8e477c2e514e59c namespace=k8s.io
Sep  4 17:23:16.073911 containerd[2090]: time="2024-09-04T17:23:16.073897878Z" level=warning msg="cleaning up after shim disconnected" id=357292d98d0b316a03921ac220f2d79c8c86f43dbe8785a0b8e477c2e514e59c namespace=k8s.io
Sep  4 17:23:16.074725 containerd[2090]: time="2024-09-04T17:23:16.073910337Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:23:16.601779 kubelet[3635]: I0904 17:23:16.601591    3635 scope.go:117] "RemoveContainer" containerID="357292d98d0b316a03921ac220f2d79c8c86f43dbe8785a0b8e477c2e514e59c"
Sep  4 17:23:16.615201 containerd[2090]: time="2024-09-04T17:23:16.613460655Z" level=info msg="CreateContainer within sandbox \"ac84d114dbd91c6d113632d29ec669d3219bf7b4dd0c4f2a1f21abc2ca7c0786\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep  4 17:23:16.655426 containerd[2090]: time="2024-09-04T17:23:16.651344369Z" level=info msg="CreateContainer within sandbox \"ac84d114dbd91c6d113632d29ec669d3219bf7b4dd0c4f2a1f21abc2ca7c0786\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9386d7259b033812e0bee30ee07fdbdccd2fc72764d7c124120059e9198715af\""
Sep  4 17:23:16.655470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322963369.mount: Deactivated successfully.
Sep  4 17:23:16.661666 containerd[2090]: time="2024-09-04T17:23:16.661623460Z" level=info msg="StartContainer for \"9386d7259b033812e0bee30ee07fdbdccd2fc72764d7c124120059e9198715af\""
Sep  4 17:23:16.820224 containerd[2090]: time="2024-09-04T17:23:16.819893207Z" level=info msg="StartContainer for \"9386d7259b033812e0bee30ee07fdbdccd2fc72764d7c124120059e9198715af\" returns successfully"
Sep  4 17:23:17.053373 systemd[1]: run-containerd-runc-k8s.io-9386d7259b033812e0bee30ee07fdbdccd2fc72764d7c124120059e9198715af-runc.c2fKEs.mount: Deactivated successfully.
Sep  4 17:23:20.014165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33c3c5b34cc04e178f07e342b2ce42d7e79d5feba122ad4868e1d8128dc7075e-rootfs.mount: Deactivated successfully.
Sep  4 17:23:20.028710 containerd[2090]: time="2024-09-04T17:23:20.028443596Z" level=info msg="shim disconnected" id=33c3c5b34cc04e178f07e342b2ce42d7e79d5feba122ad4868e1d8128dc7075e namespace=k8s.io
Sep  4 17:23:20.028710 containerd[2090]: time="2024-09-04T17:23:20.028702778Z" level=warning msg="cleaning up after shim disconnected" id=33c3c5b34cc04e178f07e342b2ce42d7e79d5feba122ad4868e1d8128dc7075e namespace=k8s.io
Sep  4 17:23:20.029404 containerd[2090]: time="2024-09-04T17:23:20.028724429Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:23:20.614382 kubelet[3635]: I0904 17:23:20.614305    3635 scope.go:117] "RemoveContainer" containerID="33c3c5b34cc04e178f07e342b2ce42d7e79d5feba122ad4868e1d8128dc7075e"
Sep  4 17:23:20.617307 containerd[2090]: time="2024-09-04T17:23:20.617270510Z" level=info msg="CreateContainer within sandbox \"a7fd0b1ce60aaf8c07e934028b747bfee14f94f3d6774c4e1a8951b79bc20b0f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep  4 17:23:20.644832 containerd[2090]: time="2024-09-04T17:23:20.644781276Z" level=info msg="CreateContainer within sandbox \"a7fd0b1ce60aaf8c07e934028b747bfee14f94f3d6774c4e1a8951b79bc20b0f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2aa36a6250a13a7e4de2d4dfefabbad5dbd2759d50f53eb20ad1d80ea089f4a1\""
Sep  4 17:23:20.645427 containerd[2090]: time="2024-09-04T17:23:20.645399605Z" level=info msg="StartContainer for \"2aa36a6250a13a7e4de2d4dfefabbad5dbd2759d50f53eb20ad1d80ea089f4a1\""
Sep  4 17:23:20.746083 containerd[2090]: time="2024-09-04T17:23:20.746039783Z" level=info msg="StartContainer for \"2aa36a6250a13a7e4de2d4dfefabbad5dbd2759d50f53eb20ad1d80ea089f4a1\" returns successfully"
Sep  4 17:23:24.326271 kubelet[3635]: E0904 17:23:24.326210    3635 controller.go:193] "Failed to update lease" err="Put \"https://172.31.19.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-141?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Sep  4 17:23:34.332827 kubelet[3635]: E0904 17:23:34.332770    3635 request.go:1116] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
Sep  4 17:23:34.333462 kubelet[3635]: E0904 17:23:34.332880    3635 controller.go:193] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)"