Feb 13 15:32:24.226826 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 15:32:24.226867 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:32:24.226882 kernel: BIOS-provided physical RAM map:
Feb 13 15:32:24.226892 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:32:24.226904 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:32:24.226915 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:32:24.226934 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 15:32:24.226944 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 15:32:24.226955 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 15:32:24.226968 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:32:24.226980 kernel: NX (Execute Disable) protection: active
Feb 13 15:32:24.226992 kernel: APIC: Static calls initialized
Feb 13 15:32:24.227004 kernel: SMBIOS 2.7 present.
Feb 13 15:32:24.227017 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 15:32:24.227035 kernel: Hypervisor detected: KVM
Feb 13 15:32:24.227049 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:32:24.227062 kernel: kvm-clock: using sched offset of 9737608125 cycles
Feb 13 15:32:24.227076 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:32:24.227091 kernel: tsc: Detected 2499.996 MHz processor
Feb 13 15:32:24.227104 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:32:24.227258 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:32:24.227371 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 15:32:24.227441 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 15:32:24.227456 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 13 15:32:24.227470 kernel: Using GB pages for direct mapping
Feb 13 15:32:24.227486 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:32:24.227501 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 15:32:24.227517 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 15:32:24.227533 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:32:24.227548 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 15:32:24.227568 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 15:32:24.227583 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:32:24.227599 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:32:24.227614 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 15:32:24.227629 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:32:24.227644 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 15:32:24.227659 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 15:32:24.227675 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:32:24.227722 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 15:32:24.227742 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 15:32:24.227764 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 15:32:24.227780 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 15:32:24.227797 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 15:32:24.227814 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 15:32:24.227834 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 15:32:24.227850 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 15:32:24.227866 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 15:32:24.227883 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 15:32:24.227899 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:32:24.227915 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:32:24.227932 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 15:32:24.227948 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 15:32:24.227964 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 15:32:24.227984 kernel: Zone ranges:
Feb 13 15:32:24.228000 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:32:24.228016 kernel:   DMA32    [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 15:32:24.228033 kernel:   Normal   empty
Feb 13 15:32:24.228049 kernel: Movable zone start for each node
Feb 13 15:32:24.228065 kernel: Early memory node ranges
Feb 13 15:32:24.228083 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 15:32:24.228216 kernel:   node   0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 15:32:24.228238 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 15:32:24.228258 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:32:24.228274 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 15:32:24.228291 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 15:32:24.228306 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 15:32:24.228323 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:32:24.228341 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 15:32:24.228357 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:32:24.228374 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:32:24.228391 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:32:24.228470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:32:24.228494 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:32:24.228511 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:32:24.228528 kernel: TSC deadline timer available
Feb 13 15:32:24.228544 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:32:24.228560 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:32:24.228576 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 15:32:24.228592 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:32:24.228609 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:32:24.228626 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:32:24.228647 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:32:24.228674 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:32:24.228710 kernel: pcpu-alloc: [0] 0 1 
Feb 13 15:32:24.228722 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:32:24.228735 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:32:24.228749 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:32:24.228763 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:32:24.228774 kernel: random: crng init done
Feb 13 15:32:24.228792 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:32:24.230409 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:32:24.230440 kernel: Fallback order for Node 0: 0 
Feb 13 15:32:24.230455 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 506242
Feb 13 15:32:24.230468 kernel: Policy zone: DMA32
Feb 13 15:32:24.230482 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:32:24.230496 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved)
Feb 13 15:32:24.230509 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:32:24.230529 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:32:24.230541 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:32:24.230555 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:32:24.230570 kernel: Dynamic Preempt: voluntary
Feb 13 15:32:24.230585 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:32:24.230599 kernel: rcu:         RCU event tracing is enabled.
Feb 13 15:32:24.230700 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:32:24.230717 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 15:32:24.230732 kernel:         Rude variant of Tasks RCU enabled.
Feb 13 15:32:24.230747 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 15:32:24.230767 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:32:24.230781 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:32:24.230795 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:32:24.230807 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:32:24.230821 kernel: Console: colour VGA+ 80x25
Feb 13 15:32:24.230835 kernel: printk: console [ttyS0] enabled
Feb 13 15:32:24.230848 kernel: ACPI: Core revision 20230628
Feb 13 15:32:24.230862 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 15:32:24.230876 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:32:24.230894 kernel: x2apic enabled
Feb 13 15:32:24.230908 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:32:24.230932 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 15:32:24.230950 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Feb 13 15:32:24.230965 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 15:32:24.230979 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 15:32:24.230993 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:32:24.231007 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:32:24.231020 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:32:24.231034 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:32:24.231048 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 15:32:24.231062 kernel: RETBleed: Vulnerable
Feb 13 15:32:24.231079 kernel: Speculative Store Bypass: Vulnerable
Feb 13 15:32:24.231093 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:32:24.231107 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:32:24.231164 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 15:32:24.231180 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:32:24.231194 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:32:24.231208 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:32:24.231226 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 15:32:24.231240 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 15:32:24.231254 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 15:32:24.231268 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 15:32:24.231283 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 15:32:24.231297 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 15:32:24.231311 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 13 15:32:24.231325 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Feb 13 15:32:24.231448 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Feb 13 15:32:24.231465 kernel: x86/fpu: xstate_offset[5]:  960, xstate_sizes[5]:   64
Feb 13 15:32:24.231483 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]:  512
Feb 13 15:32:24.231498 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 15:32:24.231512 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]:    8
Feb 13 15:32:24.231526 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 15:32:24.231540 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:32:24.231555 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:32:24.231569 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:32:24.231584 kernel: landlock: Up and running.
Feb 13 15:32:24.231597 kernel: SELinux:  Initializing.
Feb 13 15:32:24.231611 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:32:24.231626 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:32:24.231639 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 15:32:24.231657 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:32:24.231671 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:32:24.231699 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:32:24.231713 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 15:32:24.231728 kernel: signal: max sigframe size: 3632
Feb 13 15:32:24.231741 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:32:24.231756 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 15:32:24.231770 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:32:24.231785 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:32:24.231802 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:32:24.231817 kernel: .... node  #0, CPUs:      #1
Feb 13 15:32:24.231831 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 15:32:24.231846 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:32:24.231859 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:32:24.231873 kernel: smpboot: Max logical packages: 1
Feb 13 15:32:24.231886 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Feb 13 15:32:24.231900 kernel: devtmpfs: initialized
Feb 13 15:32:24.231918 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:32:24.231931 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:32:24.231947 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:32:24.231965 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:32:24.231984 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:32:24.232003 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:32:24.232021 kernel: audit: type=2000 audit(1739460742.817:1): state=initialized audit_enabled=0 res=1
Feb 13 15:32:24.232036 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:32:24.232051 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:32:24.232070 kernel: cpuidle: using governor menu
Feb 13 15:32:24.232084 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:32:24.232098 kernel: dca service started, version 1.12.1
Feb 13 15:32:24.232219 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:32:24.232240 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:32:24.232257 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:32:24.232272 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:32:24.232289 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:32:24.232304 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:32:24.232324 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:32:24.232340 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:32:24.232456 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:32:24.232475 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:32:24.232492 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 15:32:24.232509 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:32:24.232527 kernel: ACPI: Interpreter enabled
Feb 13 15:32:24.232544 kernel: ACPI: PM: (supports S0 S5)
Feb 13 15:32:24.232561 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:32:24.232582 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:32:24.232599 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:32:24.232617 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 15:32:24.232634 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:32:24.232939 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:32:24.233092 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:32:24.233394 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:32:24.233419 kernel: acpiphp: Slot [3] registered
Feb 13 15:32:24.233442 kernel: acpiphp: Slot [4] registered
Feb 13 15:32:24.233459 kernel: acpiphp: Slot [5] registered
Feb 13 15:32:24.233476 kernel: acpiphp: Slot [6] registered
Feb 13 15:32:24.233559 kernel: acpiphp: Slot [7] registered
Feb 13 15:32:24.233577 kernel: acpiphp: Slot [8] registered
Feb 13 15:32:24.233712 kernel: acpiphp: Slot [9] registered
Feb 13 15:32:24.233813 kernel: acpiphp: Slot [10] registered
Feb 13 15:32:24.233833 kernel: acpiphp: Slot [11] registered
Feb 13 15:32:24.233849 kernel: acpiphp: Slot [12] registered
Feb 13 15:32:24.233868 kernel: acpiphp: Slot [13] registered
Feb 13 15:32:24.233884 kernel: acpiphp: Slot [14] registered
Feb 13 15:32:24.233901 kernel: acpiphp: Slot [15] registered
Feb 13 15:32:24.233917 kernel: acpiphp: Slot [16] registered
Feb 13 15:32:24.233934 kernel: acpiphp: Slot [17] registered
Feb 13 15:32:24.233951 kernel: acpiphp: Slot [18] registered
Feb 13 15:32:24.233967 kernel: acpiphp: Slot [19] registered
Feb 13 15:32:24.233984 kernel: acpiphp: Slot [20] registered
Feb 13 15:32:24.234001 kernel: acpiphp: Slot [21] registered
Feb 13 15:32:24.234020 kernel: acpiphp: Slot [22] registered
Feb 13 15:32:24.234037 kernel: acpiphp: Slot [23] registered
Feb 13 15:32:24.234053 kernel: acpiphp: Slot [24] registered
Feb 13 15:32:24.234070 kernel: acpiphp: Slot [25] registered
Feb 13 15:32:24.234086 kernel: acpiphp: Slot [26] registered
Feb 13 15:32:24.234103 kernel: acpiphp: Slot [27] registered
Feb 13 15:32:24.234163 kernel: acpiphp: Slot [28] registered
Feb 13 15:32:24.234182 kernel: acpiphp: Slot [29] registered
Feb 13 15:32:24.234198 kernel: acpiphp: Slot [30] registered
Feb 13 15:32:24.234213 kernel: acpiphp: Slot [31] registered
Feb 13 15:32:24.234232 kernel: PCI host bridge to bus 0000:00
Feb 13 15:32:24.234530 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 13 15:32:24.234698 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 13 15:32:24.234846 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:32:24.234979 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 15:32:24.235106 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:32:24.235327 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:32:24.235639 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 15:32:24.235868 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 15:32:24.237439 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 15:32:24.237679 kernel: pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 15:32:24.237951 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 15:32:24.238087 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 15:32:24.238267 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 15:32:24.238406 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 15:32:24.238532 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 15:32:24.239633 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 15:32:24.239895 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 15:32:24.240044 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 15:32:24.240242 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 15:32:24.240385 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:32:24.240622 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:32:24.240992 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 15:32:24.241194 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:32:24.241340 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 15:32:24.241362 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:32:24.241380 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:32:24.241403 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:32:24.241420 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:32:24.241437 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:32:24.241455 kernel: iommu: Default domain type: Translated
Feb 13 15:32:24.241720 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:32:24.241739 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:32:24.241753 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:32:24.241767 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 15:32:24.241782 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 15:32:24.241939 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 15:32:24.242074 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 15:32:24.242258 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:32:24.242282 kernel: vgaarb: loaded
Feb 13 15:32:24.242298 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 15:32:24.242315 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 15:32:24.242330 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:32:24.242346 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:32:24.242361 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:32:24.242382 kernel: pnp: PnP ACPI init
Feb 13 15:32:24.242397 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 15:32:24.242412 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:32:24.242427 kernel: NET: Registered PF_INET protocol family
Feb 13 15:32:24.242443 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:32:24.242459 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 15:32:24.242475 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:32:24.242492 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:32:24.242512 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 15:32:24.242528 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 15:32:24.242545 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:32:24.242562 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:32:24.242579 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:32:24.242594 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:32:24.242780 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 13 15:32:24.242908 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 13 15:32:24.243035 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:32:24.243208 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 15:32:24.243358 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:32:24.243381 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:32:24.243398 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:32:24.243416 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 15:32:24.243434 kernel: clocksource: Switched to clocksource tsc
Feb 13 15:32:24.243451 kernel: Initialise system trusted keyrings
Feb 13 15:32:24.243468 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 15:32:24.243489 kernel: Key type asymmetric registered
Feb 13 15:32:24.243505 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:32:24.243522 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:32:24.243540 kernel: io scheduler mq-deadline registered
Feb 13 15:32:24.243556 kernel: io scheduler kyber registered
Feb 13 15:32:24.243574 kernel: io scheduler bfq registered
Feb 13 15:32:24.243591 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:32:24.243606 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:32:24.243623 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:32:24.243643 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:32:24.243660 kernel: i8042: Warning: Keylock active
Feb 13 15:32:24.243677 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:32:24.243922 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:32:24.244094 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 15:32:24.244313 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 15:32:24.244634 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:32:23 UTC (1739460743)
Feb 13 15:32:24.244834 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 15:32:24.244862 kernel: intel_pstate: CPU model not supported
Feb 13 15:32:24.244879 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:32:24.244896 kernel: Segment Routing with IPv6
Feb 13 15:32:24.244914 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:32:24.244931 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:32:24.244948 kernel: Key type dns_resolver registered
Feb 13 15:32:24.244965 kernel: IPI shorthand broadcast: enabled
Feb 13 15:32:24.244983 kernel: sched_clock: Marking stable (881004041, 301550161)->(1320816161, -138261959)
Feb 13 15:32:24.245000 kernel: registered taskstats version 1
Feb 13 15:32:24.245021 kernel: Loading compiled-in X.509 certificates
Feb 13 15:32:24.245038 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:32:24.245055 kernel: Key type .fscrypt registered
Feb 13 15:32:24.245072 kernel: Key type fscrypt-provisioning registered
Feb 13 15:32:24.245089 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:32:24.245107 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:32:24.245166 kernel: ima: No architecture policies found
Feb 13 15:32:24.245186 kernel: clk: Disabling unused clocks
Feb 13 15:32:24.245208 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 15:32:24.245225 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 15:32:24.245243 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 15:32:24.245260 kernel: Run /init as init process
Feb 13 15:32:24.245278 kernel:   with arguments:
Feb 13 15:32:24.245295 kernel:     /init
Feb 13 15:32:24.245312 kernel:   with environment:
Feb 13 15:32:24.245329 kernel:     HOME=/
Feb 13 15:32:24.245346 kernel:     TERM=linux
Feb 13 15:32:24.245362 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:32:24.245390 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:32:24.245425 systemd[1]: Detected virtualization amazon.
Feb 13 15:32:24.245447 systemd[1]: Detected architecture x86-64.
Feb 13 15:32:24.245465 systemd[1]: Running in initrd.
Feb 13 15:32:24.245562 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:32:24.245588 systemd[1]: Hostname set to <localhost>.
Feb 13 15:32:24.245610 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:32:24.245628 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:32:24.245650 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:32:24.245673 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:32:24.245732 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:32:24.245750 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:32:24.245767 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:32:24.245784 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:32:24.245806 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:32:24.245888 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:32:24.245907 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:32:24.245926 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:32:24.245943 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:32:24.245962 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:32:24.245981 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:32:24.246005 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:32:24.246454 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:32:24.246478 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:32:24.246496 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:32:24.246515 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:32:24.246534 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:32:24.246553 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:32:24.246572 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:32:24.246591 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:32:24.246614 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:32:24.246634 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:32:24.246651 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:32:24.246671 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:32:24.246789 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:32:24.246815 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:32:24.246831 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:32:24.246847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:32:24.247083 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:32:24.247180 systemd-journald[178]: Collecting audit messages is disabled.
Feb 13 15:32:24.247226 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:32:24.247242 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:32:24.247260 systemd-journald[178]: Journal started
Feb 13 15:32:24.247295 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2a51bc425daac964dc8aafa63ca30c) is 4.8M, max 38.6M, 33.7M free.
Feb 13 15:32:24.252764 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:32:24.286222 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:32:24.289912 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 15:32:24.466177 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:32:24.466219 kernel: Bridge firewalling registered
Feb 13 15:32:24.312959 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:32:24.342905 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 15:32:24.475450 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:32:24.479944 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:32:24.484873 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:32:24.502074 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:32:24.503125 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:32:24.504205 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:32:24.509904 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:32:24.534805 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:32:24.544142 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:32:24.547478 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:32:24.557018 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:32:24.567148 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:32:24.598127 dracut-cmdline[215]: dracut-dracut-053
Feb 13 15:32:24.602509 systemd-resolved[206]: Positive Trust Anchors:
Feb 13 15:32:24.602616 systemd-resolved[206]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:32:24.602666 systemd-resolved[206]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:32:24.616181 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:32:24.628542 systemd-resolved[206]: Defaulting to hostname 'linux'.
Feb 13 15:32:24.631634 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:32:24.636133 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:32:24.853718 kernel: SCSI subsystem initialized
Feb 13 15:32:24.868719 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:32:24.893707 kernel: iscsi: registered transport (tcp)
Feb 13 15:32:24.925834 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:32:24.925916 kernel: QLogic iSCSI HBA Driver
Feb 13 15:32:25.032387 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:32:25.042122 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:32:25.113714 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:32:25.113794 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:32:25.113815 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:32:25.183755 kernel: raid6: avx512x4 gen() 11000 MB/s
Feb 13 15:32:25.202737 kernel: raid6: avx512x2 gen() 10053 MB/s
Feb 13 15:32:25.219747 kernel: raid6: avx512x1 gen() 10123 MB/s
Feb 13 15:32:25.237754 kernel: raid6: avx2x4   gen()  7091 MB/s
Feb 13 15:32:25.254756 kernel: raid6: avx2x2   gen() 11195 MB/s
Feb 13 15:32:25.273347 kernel: raid6: avx2x1   gen()  9870 MB/s
Feb 13 15:32:25.273424 kernel: raid6: using algorithm avx2x2 gen() 11195 MB/s
Feb 13 15:32:25.290752 kernel: raid6: .... xor() 10751 MB/s, rmw enabled
Feb 13 15:32:25.290834 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 15:32:25.316709 kernel: xor: automatically using best checksumming function   avx       
Feb 13 15:32:25.510725 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:32:25.525590 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:32:25.533126 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:32:25.611624 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Feb 13 15:32:25.643797 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:32:25.657443 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:32:25.695732 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Feb 13 15:32:25.745787 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:32:25.750973 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:32:25.857300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:32:25.869016 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:32:25.920527 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:32:25.927109 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:32:25.931098 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:32:25.932675 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:32:25.945235 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:32:25.968119 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:32:25.985038 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:32:25.985218 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 15:32:25.985376 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:df:ad:b7:b5:0d
Feb 13 15:32:26.005312 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:32:26.005562 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 15:32:26.005218 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:32:26.011181 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:32:26.021711 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:32:26.032269 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:32:26.032335 kernel: GPT:9289727 != 16777215
Feb 13 15:32:26.032355 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:32:26.032375 kernel: GPT:9289727 != 16777215
Feb 13 15:32:26.032393 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:32:26.032421 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:32:26.030484 (udev-worker)[450]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:32:26.048711 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:32:26.048775 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:32:26.049259 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:32:26.049558 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:32:26.055621 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:32:26.057869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:32:26.058082 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:32:26.066847 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:32:26.078225 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:32:26.170744 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (448)
Feb 13 15:32:26.195719 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Feb 13 15:32:26.341484 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:32:26.357808 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:32:26.358378 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:32:26.386503 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:32:26.392622 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:32:26.392767 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:32:26.408928 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:32:26.412612 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:32:26.417849 disk-uuid[623]: Primary Header is updated.
Feb 13 15:32:26.417849 disk-uuid[623]: Secondary Entries is updated.
Feb 13 15:32:26.417849 disk-uuid[623]: Secondary Header is updated.
Feb 13 15:32:26.425886 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:32:26.441928 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:32:27.453778 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:32:27.454481 disk-uuid[624]: The operation has completed successfully.
Feb 13 15:32:27.611114 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:32:27.611241 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:32:27.661925 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:32:27.681461 sh[807]: Success
Feb 13 15:32:27.710543 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:32:27.876044 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:32:27.894987 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:32:27.900356 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:32:27.969675 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2
Feb 13 15:32:27.969760 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:32:27.969779 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:32:27.969808 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:32:27.970299 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:32:28.085838 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:32:28.107634 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:32:28.109170 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:32:28.120743 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:32:28.125047 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:32:28.164719 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:32:28.164964 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:32:28.164991 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:32:28.177751 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:32:28.197131 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:32:28.200312 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:32:28.209853 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:32:28.222126 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:32:28.271943 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:32:28.280098 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:32:28.311610 systemd-networkd[999]: lo: Link UP
Feb 13 15:32:28.311621 systemd-networkd[999]: lo: Gained carrier
Feb 13 15:32:28.314649 systemd-networkd[999]: Enumeration completed
Feb 13 15:32:28.314871 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:32:28.315657 systemd-networkd[999]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:32:28.315663 systemd-networkd[999]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:32:28.325606 systemd-networkd[999]: eth0: Link UP
Feb 13 15:32:28.325615 systemd-networkd[999]: eth0: Gained carrier
Feb 13 15:32:28.325632 systemd-networkd[999]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:32:28.330640 systemd[1]: Reached target network.target - Network.
Feb 13 15:32:28.343799 systemd-networkd[999]: eth0: DHCPv4 address 172.31.29.108/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:32:28.610219 ignition[956]: Ignition 2.20.0
Feb 13 15:32:28.610233 ignition[956]: Stage: fetch-offline
Feb 13 15:32:28.610484 ignition[956]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:28.610497 ignition[956]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:32:28.610944 ignition[956]: Ignition finished successfully
Feb 13 15:32:28.622641 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:32:28.629872 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:32:28.647080 ignition[1008]: Ignition 2.20.0
Feb 13 15:32:28.647092 ignition[1008]: Stage: fetch
Feb 13 15:32:28.647559 ignition[1008]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:28.647569 ignition[1008]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:32:28.647660 ignition[1008]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:32:28.688212 ignition[1008]: PUT result: OK
Feb 13 15:32:28.691083 ignition[1008]: parsed url from cmdline: ""
Feb 13 15:32:28.691092 ignition[1008]: no config URL provided
Feb 13 15:32:28.691099 ignition[1008]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:32:28.691111 ignition[1008]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:32:28.691128 ignition[1008]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:32:28.694713 ignition[1008]: PUT result: OK
Feb 13 15:32:28.694769 ignition[1008]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:32:28.698773 ignition[1008]: GET result: OK
Feb 13 15:32:28.698837 ignition[1008]: parsing config with SHA512: d05c1a183ce710b4c6d5675ee91511320ef92c21689db3f244fec2ac69c1ab50b9c08ec8f8da72c33154307ecf12550a89239d5e6f17b3d49d7128add3ce5755
Feb 13 15:32:28.702407 unknown[1008]: fetched base config from "system"
Feb 13 15:32:28.702417 unknown[1008]: fetched base config from "system"
Feb 13 15:32:28.702650 ignition[1008]: fetch: fetch complete
Feb 13 15:32:28.702423 unknown[1008]: fetched user config from "aws"
Feb 13 15:32:28.702655 ignition[1008]: fetch: fetch passed
Feb 13 15:32:28.706782 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:32:28.703724 ignition[1008]: Ignition finished successfully
Feb 13 15:32:28.718052 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:32:28.745788 ignition[1015]: Ignition 2.20.0
Feb 13 15:32:28.745811 ignition[1015]: Stage: kargs
Feb 13 15:32:28.746372 ignition[1015]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:28.746385 ignition[1015]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:32:28.746510 ignition[1015]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:32:28.748123 ignition[1015]: PUT result: OK
Feb 13 15:32:28.759788 ignition[1015]: kargs: kargs passed
Feb 13 15:32:28.759972 ignition[1015]: Ignition finished successfully
Feb 13 15:32:28.763640 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:32:28.774982 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:32:28.802420 ignition[1021]: Ignition 2.20.0
Feb 13 15:32:28.802434 ignition[1021]: Stage: disks
Feb 13 15:32:28.803661 ignition[1021]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:28.803674 ignition[1021]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:32:28.803817 ignition[1021]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:32:28.805247 ignition[1021]: PUT result: OK
Feb 13 15:32:28.819611 ignition[1021]: disks: disks passed
Feb 13 15:32:28.819855 ignition[1021]: Ignition finished successfully
Feb 13 15:32:28.824130 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:32:28.826374 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:32:28.828888 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:32:28.831842 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:32:28.834564 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:32:28.838206 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:32:28.847005 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:32:28.901965 systemd-fsck[1029]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:32:28.905947 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:32:28.919897 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:32:29.114710 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none.
Feb 13 15:32:29.115506 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:32:29.116536 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:32:29.128712 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:32:29.140168 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:32:29.144223 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:32:29.144297 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:32:29.144333 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:32:29.150874 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:32:29.161992 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:32:29.172757 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1048)
Feb 13 15:32:29.174817 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:32:29.174881 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:32:29.177071 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:32:29.186164 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:32:29.187998 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:32:29.593563 initrd-setup-root[1072]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:32:29.612442 initrd-setup-root[1079]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:32:29.618948 initrd-setup-root[1086]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:32:29.638536 initrd-setup-root[1093]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:32:29.716806 systemd-networkd[999]: eth0: Gained IPv6LL
Feb 13 15:32:29.986090 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:32:29.994830 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:32:30.005312 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:32:30.020340 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:32:30.021463 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:32:30.076661 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:32:30.092981 ignition[1160]: INFO     : Ignition 2.20.0
Feb 13 15:32:30.092981 ignition[1160]: INFO     : Stage: mount
Feb 13 15:32:30.096205 ignition[1160]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:30.096205 ignition[1160]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:32:30.096205 ignition[1160]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:32:30.096205 ignition[1160]: INFO     : PUT result: OK
Feb 13 15:32:30.112045 ignition[1160]: INFO     : mount: mount passed
Feb 13 15:32:30.113189 ignition[1160]: INFO     : Ignition finished successfully
Feb 13 15:32:30.116881 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:32:30.123896 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:32:30.161032 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:32:30.184705 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1172)
Feb 13 15:32:30.187571 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:32:30.187712 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:32:30.187734 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:32:30.218705 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:32:30.222824 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:32:30.271297 ignition[1189]: INFO     : Ignition 2.20.0
Feb 13 15:32:30.271297 ignition[1189]: INFO     : Stage: files
Feb 13 15:32:30.273664 ignition[1189]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:30.273664 ignition[1189]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:32:30.273664 ignition[1189]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:32:30.278747 ignition[1189]: INFO     : PUT result: OK
Feb 13 15:32:30.281383 ignition[1189]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 15:32:30.282866 ignition[1189]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 15:32:30.282866 ignition[1189]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:32:30.309353 ignition[1189]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:32:30.312016 ignition[1189]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 15:32:30.313615 ignition[1189]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:32:30.312094 unknown[1189]: wrote ssh authorized keys file for user: core
Feb 13 15:32:30.316733 ignition[1189]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 15:32:30.319600 ignition[1189]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:32:30.323040 ignition[1189]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:32:30.326952 ignition[1189]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:32:30.326952 ignition[1189]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 15:32:30.326952 ignition[1189]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 15:32:30.326952 ignition[1189]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 15:32:30.326952 ignition[1189]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 15:32:30.624291 ignition[1189]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 15:32:31.133992 ignition[1189]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 15:32:31.137591 ignition[1189]: INFO     : files: createResultFile: createFiles: op(7): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:32:31.140536 ignition[1189]: INFO     : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:32:31.140536 ignition[1189]: INFO     : files: files passed
Feb 13 15:32:31.145044 ignition[1189]: INFO     : Ignition finished successfully
Feb 13 15:32:31.150483 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:32:31.165067 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:32:31.175227 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:32:31.192040 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:32:31.194218 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:32:31.201891 initrd-setup-root-after-ignition[1218]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:32:31.201891 initrd-setup-root-after-ignition[1218]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:32:31.206039 initrd-setup-root-after-ignition[1222]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:32:31.210018 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:32:31.213605 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:32:31.228887 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:32:31.266206 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:32:31.266327 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:32:31.271508 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:32:31.274147 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:32:31.276812 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:32:31.282872 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:32:31.304214 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:32:31.313191 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:32:31.363122 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:32:31.374746 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:32:31.383947 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:32:31.394755 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:32:31.395018 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:32:31.402952 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:32:31.410614 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:32:31.416251 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:32:31.425576 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:32:31.425838 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:32:31.431443 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:32:31.433999 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:32:31.437175 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:32:31.439853 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:32:31.443316 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:32:31.445085 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:32:31.446344 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:32:31.449412 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:32:31.449596 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:32:31.456383 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:32:31.457966 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:32:31.462426 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:32:31.464235 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:32:31.470391 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:32:31.470535 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:32:31.470741 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:32:31.470865 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:32:31.491104 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:32:31.494149 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:32:31.494470 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:32:31.511621 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:32:31.512814 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:32:31.513035 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:32:31.516268 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:32:31.517497 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:32:31.528599 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:32:31.528763 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:32:31.538378 ignition[1242]: INFO     : Ignition 2.20.0
Feb 13 15:32:31.540073 ignition[1242]: INFO     : Stage: umount
Feb 13 15:32:31.540073 ignition[1242]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:31.540073 ignition[1242]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:32:31.540073 ignition[1242]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:32:31.547611 ignition[1242]: INFO     : PUT result: OK
Feb 13 15:32:31.550680 ignition[1242]: INFO     : umount: umount passed
Feb 13 15:32:31.550680 ignition[1242]: INFO     : Ignition finished successfully
Feb 13 15:32:31.552239 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:32:31.552382 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:32:31.557288 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:32:31.557415 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:32:31.559545 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:32:31.559613 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:32:31.561908 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:32:31.561974 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:32:31.563503 systemd[1]: Stopped target network.target - Network.
Feb 13 15:32:31.567386 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:32:31.567479 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:32:31.573301 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:32:31.574869 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:32:31.578171 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:32:31.583809 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:32:31.593586 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:32:31.598021 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:32:31.598095 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:32:31.606200 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:32:31.607758 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:32:31.610614 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:32:31.612677 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:32:31.615346 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:32:31.615447 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:32:31.623720 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:32:31.640967 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:32:31.656897 systemd-networkd[999]: eth0: DHCPv6 lease lost
Feb 13 15:32:31.658813 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:32:31.663279 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:32:31.665439 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:32:31.669046 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:32:31.669327 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:32:31.677831 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:32:31.677919 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:32:31.687862 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:32:31.689298 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:32:31.689380 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:32:31.691285 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:32:31.691356 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:32:31.694981 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:32:31.695053 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:32:31.698394 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:32:31.698462 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:32:31.701604 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:32:31.766354 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:32:31.766573 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:32:31.768586 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:32:31.768648 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:32:31.774035 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:32:31.774103 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:32:31.775320 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:32:31.775390 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:32:31.776661 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:32:31.776730 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:32:31.779628 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:32:31.779704 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:32:31.789425 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:32:31.797383 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:32:31.797467 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:32:31.800674 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:32:31.800776 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:32:31.802958 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:32:31.803070 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:32:31.812512 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:32:31.812663 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:32:31.815214 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:32:31.815751 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:32:31.820633 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:32:31.821173 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:32:31.821261 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:32:31.832017 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:32:31.882504 systemd[1]: Switching root.
Feb 13 15:32:31.934516 systemd-journald[178]: Journal stopped
Feb 13 15:32:34.563789 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:32:34.565252 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 15:32:34.565285 kernel: SELinux:  policy capability open_perms=1
Feb 13 15:32:34.565302 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 15:32:34.565319 kernel: SELinux:  policy capability always_check_network=0
Feb 13 15:32:34.565337 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 15:32:34.565356 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 15:32:34.565376 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 15:32:34.565394 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 15:32:34.565413 kernel: audit: type=1403 audit(1739460752.518:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:32:34.565440 systemd[1]: Successfully loaded SELinux policy in 60.621ms.
Feb 13 15:32:34.565474 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.929ms.
Feb 13 15:32:34.565498 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:32:34.565520 systemd[1]: Detected virtualization amazon.
Feb 13 15:32:34.565541 systemd[1]: Detected architecture x86-64.
Feb 13 15:32:34.565560 systemd[1]: Detected first boot.
Feb 13 15:32:34.565579 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:32:34.565607 zram_generator::config[1285]: No configuration found.
Feb 13 15:32:34.565639 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:32:34.565664 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:32:34.565704 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:32:34.565725 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:32:34.565746 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:32:34.565765 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:32:34.565788 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:32:34.565807 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:32:34.565827 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:32:34.565847 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:32:34.565866 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:32:34.565885 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:32:34.565905 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:32:34.565925 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:32:34.565945 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:32:34.565967 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:32:34.565986 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:32:34.566004 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:32:34.566024 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:32:34.566043 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:32:34.566062 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:32:34.566081 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:32:34.566100 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:32:34.566122 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:32:34.566141 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:32:34.566160 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:32:34.566179 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:32:34.566198 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:32:34.566216 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:32:34.566236 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:32:34.566254 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:32:34.566273 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:32:34.566294 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:32:34.566313 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:32:34.566332 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:32:34.566351 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:32:34.566370 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:32:34.566390 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:32:34.566408 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:32:34.566427 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:32:34.566447 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:32:34.566470 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:32:34.566490 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:32:34.566508 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:32:34.566527 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:32:34.566546 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:32:34.566564 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:32:34.566583 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:32:34.566602 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:32:34.566623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:32:34.566642 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:32:34.567734 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:32:34.567763 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:32:34.567782 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:32:34.567800 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:32:34.567818 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:32:34.567836 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:32:34.567855 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:32:34.567879 kernel: loop: module loaded
Feb 13 15:32:34.567898 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:32:34.567917 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:32:34.567936 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:32:34.567954 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:32:34.567973 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:32:34.567991 systemd[1]: Stopped verity-setup.service.
Feb 13 15:32:34.568010 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:32:34.568029 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:32:34.568050 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:32:34.568068 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:32:34.568087 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:32:34.568107 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:32:34.568129 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:32:34.568150 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:32:34.568169 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:32:34.568188 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:32:34.568207 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:32:34.568224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:32:34.568241 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:32:34.568258 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:32:34.568276 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:32:34.568298 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:32:34.568317 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:32:34.568335 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:32:34.568357 kernel: fuse: init (API version 7.39)
Feb 13 15:32:34.568377 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:32:34.568412 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:32:34.568435 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:32:34.568607 systemd-journald[1371]: Collecting audit messages is disabled.
Feb 13 15:32:34.568667 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:32:34.570720 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:32:34.570751 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:32:34.570773 systemd-journald[1371]: Journal started
Feb 13 15:32:34.570818 systemd-journald[1371]: Runtime Journal (/run/log/journal/ec2a51bc425daac964dc8aafa63ca30c) is 4.8M, max 38.6M, 33.7M free.
Feb 13 15:32:33.954225 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:32:34.032463 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 15:32:34.033332 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:32:34.573274 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:32:34.574296 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:32:34.611165 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:32:34.644956 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:32:34.646855 kernel: ACPI: bus type drm_connector registered
Feb 13 15:32:34.646720 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:32:34.646776 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:32:34.649600 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:32:34.656976 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:32:34.663870 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:32:34.665273 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:32:34.668994 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:32:34.681724 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:32:34.683153 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:32:34.685249 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:32:34.694200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:32:34.707217 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:32:34.774511 systemd-journald[1371]: Time spent on flushing to /var/log/journal/ec2a51bc425daac964dc8aafa63ca30c is 182.571ms for 936 entries.
Feb 13 15:32:34.774511 systemd-journald[1371]: System Journal (/var/log/journal/ec2a51bc425daac964dc8aafa63ca30c) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:32:34.994912 systemd-journald[1371]: Received client request to flush runtime journal.
Feb 13 15:32:34.994981 kernel: loop0: detected capacity change from 0 to 138184
Feb 13 15:32:34.995008 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:32:34.715323 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:32:34.724080 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:32:34.724279 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:32:34.753360 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:32:34.757348 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:32:34.764267 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:32:34.810001 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:32:34.845264 udevadm[1417]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:32:34.867524 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:32:34.869335 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:32:34.881976 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:32:34.959038 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:32:34.970844 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:32:34.984146 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:32:35.002799 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:32:35.025720 kernel: loop1: detected capacity change from 0 to 205544
Feb 13 15:32:35.049487 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:32:35.077851 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:32:35.210805 systemd-tmpfiles[1431]: ACLs are not supported, ignoring.
Feb 13 15:32:35.210836 systemd-tmpfiles[1431]: ACLs are not supported, ignoring.
Feb 13 15:32:35.241044 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:32:35.281934 kernel: loop2: detected capacity change from 0 to 140992
Feb 13 15:32:35.439728 kernel: loop3: detected capacity change from 0 to 62848
Feb 13 15:32:35.630756 kernel: loop4: detected capacity change from 0 to 138184
Feb 13 15:32:35.726094 kernel: loop5: detected capacity change from 0 to 205544
Feb 13 15:32:35.805720 kernel: loop6: detected capacity change from 0 to 140992
Feb 13 15:32:35.923732 kernel: loop7: detected capacity change from 0 to 62848
Feb 13 15:32:35.996599 (sd-merge)[1437]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 15:32:35.999802 (sd-merge)[1437]: Merged extensions into '/usr'.
Feb 13 15:32:36.011006 systemd[1]: Reloading requested from client PID 1411 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:32:36.011028 systemd[1]: Reloading...
Feb 13 15:32:36.207715 zram_generator::config[1460]: No configuration found.
Feb 13 15:32:36.637116 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:32:36.876257 systemd[1]: Reloading finished in 863 ms.
Feb 13 15:32:36.925346 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:32:36.941495 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:32:36.954933 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:32:36.992885 systemd[1]: Reloading requested from client PID 1511 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:32:36.992908 systemd[1]: Reloading...
Feb 13 15:32:37.048880 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:32:37.049410 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:32:37.051711 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:32:37.052602 systemd-tmpfiles[1512]: ACLs are not supported, ignoring.
Feb 13 15:32:37.052721 systemd-tmpfiles[1512]: ACLs are not supported, ignoring.
Feb 13 15:32:37.075341 systemd-tmpfiles[1512]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:32:37.075360 systemd-tmpfiles[1512]: Skipping /boot
Feb 13 15:32:37.200040 systemd-tmpfiles[1512]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:32:37.200057 systemd-tmpfiles[1512]: Skipping /boot
Feb 13 15:32:37.256711 zram_generator::config[1540]: No configuration found.
Feb 13 15:32:37.426982 ldconfig[1406]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:32:37.463128 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:32:37.543410 systemd[1]: Reloading finished in 549 ms.
Feb 13 15:32:37.562872 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:32:37.565007 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:32:37.574453 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:32:37.591217 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:32:37.604939 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:32:37.621871 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:32:37.628017 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:32:37.638860 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:32:37.649078 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:32:37.654616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:32:37.654914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:32:37.665902 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:32:37.679066 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:32:37.684142 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:32:37.685813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:32:37.698762 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:32:37.699977 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:32:37.701624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:32:37.702204 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:32:37.722769 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:32:37.723100 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:32:37.733245 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:32:37.735880 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:32:37.736250 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:32:37.750641 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:32:37.752620 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:32:37.766621 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:32:37.768640 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:32:37.771436 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:32:37.771491 systemd-udevd[1599]: Using default interface naming scheme 'v255'.
Feb 13 15:32:37.774742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:32:37.775798 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:32:37.797972 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:32:37.798519 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:32:37.807469 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:32:37.810761 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:32:37.818580 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:32:37.822486 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:32:37.844321 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:32:37.862138 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:32:37.871014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:32:37.872447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:32:37.872553 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:32:37.874864 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:32:37.877126 augenrules[1632]: No rules
Feb 13 15:32:37.885131 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:32:37.885523 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:32:37.894627 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:32:37.895355 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:32:37.917352 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:32:37.917795 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:32:37.920069 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:32:37.920295 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:32:37.924378 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:32:37.924507 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:32:37.930914 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:32:37.932760 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:32:37.940930 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:32:37.976173 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:32:37.988819 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:32:38.182824 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:32:38.283305 systemd-networkd[1649]: lo: Link UP
Feb 13 15:32:38.283316 systemd-networkd[1649]: lo: Gained carrier
Feb 13 15:32:38.284871 systemd-networkd[1649]: Enumeration completed
Feb 13 15:32:38.285009 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:32:38.298773 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:32:38.308970 systemd-resolved[1598]: Positive Trust Anchors:
Feb 13 15:32:38.308993 systemd-resolved[1598]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:32:38.309051 systemd-resolved[1598]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:32:38.317318 (udev-worker)[1652]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:32:38.336616 systemd-resolved[1598]: Defaulting to hostname 'linux'.
Feb 13 15:32:38.341040 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:32:38.342619 systemd[1]: Reached target network.target - Network.
Feb 13 15:32:38.344067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:32:38.351704 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 15:32:38.360730 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:32:38.369713 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 13 15:32:38.373421 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Feb 13 15:32:38.384774 kernel: ACPI: button: Sleep Button [SLPF]
Feb 13 15:32:38.410724 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Feb 13 15:32:38.475887 systemd-networkd[1649]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:32:38.475905 systemd-networkd[1649]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:32:38.484008 systemd-networkd[1649]: eth0: Link UP
Feb 13 15:32:38.484222 systemd-networkd[1649]: eth0: Gained carrier
Feb 13 15:32:38.484258 systemd-networkd[1649]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:32:38.488715 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1656)
Feb 13 15:32:38.488810 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:32:38.495044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:32:38.497768 systemd-networkd[1649]: eth0: DHCPv4 address 172.31.29.108/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:32:38.726496 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:32:38.976581 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:32:38.984066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:32:38.992892 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:32:39.004897 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:32:39.042781 lvm[1767]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:32:39.053961 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:32:39.085713 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:32:39.088813 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:32:39.091652 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:32:39.095888 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:32:39.099147 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:32:39.103144 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:32:39.104613 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:32:39.106583 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:32:39.108487 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:32:39.108554 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:32:39.110120 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:32:39.114048 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:32:39.116989 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:32:39.124311 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:32:39.131740 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:32:39.139945 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:32:39.146707 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:32:39.153655 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:32:39.157367 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:32:39.157752 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:32:39.171881 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:32:39.180350 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 15:32:39.205072 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:32:39.207965 lvm[1774]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:32:39.217023 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:32:39.223416 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:32:39.227029 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:32:39.232265 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:32:39.240149 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 15:32:39.247608 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 15:32:39.253237 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:32:39.263957 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:32:39.301871 jq[1778]: false
Feb 13 15:32:39.331244 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:32:39.333626 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:32:39.337052 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:32:39.347420 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:32:39.355830 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:32:39.362205 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:32:39.363911 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:32:39.385993 jq[1789]: true
Feb 13 15:32:39.403727 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:32:39.421214 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:32:39.432374 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:32:39.433551 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:32:39.437871 jq[1796]: true
Feb 13 15:32:39.459305 (ntainerd)[1803]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:32:39.460260 update_engine[1788]: I20250213 15:32:39.458284  1788 main.cc:92] Flatcar Update Engine starting
Feb 13 15:32:39.479613 dbus-daemon[1777]: [system] SELinux support is enabled
Feb 13 15:32:39.481955 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:32:39.488871 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:32:39.488912 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:32:39.494910 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:32:39.494939 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:32:39.504311 dbus-daemon[1777]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1649 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 15:32:39.508135 dbus-daemon[1777]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 15:32:39.526338 update_engine[1788]: I20250213 15:32:39.526108  1788 update_check_scheduler.cc:74] Next update check in 2m12s
Feb 13 15:32:39.528897 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 15:32:39.530460 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:32:39.539957 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:32:39.541890 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:32:39.542127 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:32:39.558793 extend-filesystems[1779]: Found loop4
Feb 13 15:32:39.558793 extend-filesystems[1779]: Found loop5
Feb 13 15:32:39.558793 extend-filesystems[1779]: Found loop6
Feb 13 15:32:39.558793 extend-filesystems[1779]: Found loop7
Feb 13 15:32:39.558793 extend-filesystems[1779]: Found nvme0n1
Feb 13 15:32:39.558793 extend-filesystems[1779]: Found nvme0n1p1
Feb 13 15:32:39.558793 extend-filesystems[1779]: Found nvme0n1p2
Feb 13 15:32:39.558793 extend-filesystems[1779]: Found nvme0n1p3
Feb 13 15:32:39.558793 extend-filesystems[1779]: Found usr
Feb 13 15:32:39.558793 extend-filesystems[1779]: Found nvme0n1p4
Feb 13 15:32:39.558793 extend-filesystems[1779]: Found nvme0n1p6
Feb 13 15:32:39.558793 extend-filesystems[1779]: Found nvme0n1p7
Feb 13 15:32:39.558793 extend-filesystems[1779]: Found nvme0n1p9
Feb 13 15:32:39.558793 extend-filesystems[1779]: Checking size of /dev/nvme0n1p9
Feb 13 15:32:39.563897 systemd-logind[1787]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.564 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.564 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.564 INFO Fetch successful
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.564 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.564 INFO Fetch successful
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.564 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetch successful
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetch successful
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetch failed with 404: resource not found
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetch successful
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetch successful
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetch successful
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetch successful
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 15:32:39.614409 coreos-metadata[1776]: Feb 13 15:32:39.565 INFO Fetch successful
Feb 13 15:32:39.615654 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:05 UTC 2025 (1): Starting
Feb 13 15:32:39.615654 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:32:39.615654 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: ----------------------------------------------------
Feb 13 15:32:39.615654 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:32:39.615654 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:32:39.615654 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: corporation.  Support and training for ntp-4 are
Feb 13 15:32:39.615654 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: available at https://www.nwtime.org/support
Feb 13 15:32:39.615654 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: ----------------------------------------------------
Feb 13 15:32:39.598124 ntpd[1781]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:05 UTC 2025 (1): Starting
Feb 13 15:32:39.563927 systemd-logind[1787]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 13 15:32:39.598152 ntpd[1781]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:32:39.563955 systemd-logind[1787]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 15:32:39.598164 ntpd[1781]: ----------------------------------------------------
Feb 13 15:32:39.622243 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: proto: precision = 0.083 usec (-23)
Feb 13 15:32:39.564210 systemd-logind[1787]: New seat seat0.
Feb 13 15:32:39.598174 ntpd[1781]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:32:39.574064 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:32:39.598186 ntpd[1781]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:32:39.618201 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 15:32:39.598196 ntpd[1781]: corporation.  Support and training for ntp-4 are
Feb 13 15:32:39.598205 ntpd[1781]: available at https://www.nwtime.org/support
Feb 13 15:32:39.598214 ntpd[1781]: ----------------------------------------------------
Feb 13 15:32:39.619315 ntpd[1781]: proto: precision = 0.083 usec (-23)
Feb 13 15:32:39.626957 ntpd[1781]: basedate set to 2025-02-01
Feb 13 15:32:39.627671 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: basedate set to 2025-02-01
Feb 13 15:32:39.627671 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:32:39.626986 ntpd[1781]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:32:39.635430 ntpd[1781]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:32:39.637737 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:32:39.637737 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:32:39.637737 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:32:39.637737 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: Listen normally on 3 eth0 172.31.29.108:123
Feb 13 15:32:39.637737 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: Listen normally on 4 lo [::1]:123
Feb 13 15:32:39.637737 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: bind(21) AF_INET6 fe80::4df:adff:feb7:b50d%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:32:39.637737 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: unable to create socket on eth0 (5) for fe80::4df:adff:feb7:b50d%2#123
Feb 13 15:32:39.637737 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: failed to init interface for address fe80::4df:adff:feb7:b50d%2
Feb 13 15:32:39.637737 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:32:39.636856 ntpd[1781]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:32:39.637050 ntpd[1781]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:32:39.637087 ntpd[1781]: Listen normally on 3 eth0 172.31.29.108:123
Feb 13 15:32:39.637126 ntpd[1781]: Listen normally on 4 lo [::1]:123
Feb 13 15:32:39.637173 ntpd[1781]: bind(21) AF_INET6 fe80::4df:adff:feb7:b50d%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:32:39.637194 ntpd[1781]: unable to create socket on eth0 (5) for fe80::4df:adff:feb7:b50d%2#123
Feb 13 15:32:39.637210 ntpd[1781]: failed to init interface for address fe80::4df:adff:feb7:b50d%2
Feb 13 15:32:39.637242 ntpd[1781]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:32:39.645679 ntpd[1781]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:32:39.659843 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:32:39.659843 ntpd[1781]: 13 Feb 15:32:39 ntpd[1781]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:32:39.652365 ntpd[1781]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:32:39.677265 extend-filesystems[1779]: Resized partition /dev/nvme0n1p9
Feb 13 15:32:39.710844 extend-filesystems[1846]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:32:39.716853 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 15:32:39.722872 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 15:32:39.731886 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:32:39.870763 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1652)
Feb 13 15:32:39.906490 dbus-daemon[1777]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 15:32:39.907055 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 15:32:39.913974 dbus-daemon[1777]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1817 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 15:32:39.934404 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 15:32:39.945709 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 15:32:39.972714 extend-filesystems[1846]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 15:32:39.972714 extend-filesystems[1846]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:32:39.972714 extend-filesystems[1846]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 15:32:39.972621 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:32:39.981559 bash[1848]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:32:39.981677 extend-filesystems[1779]: Resized filesystem in /dev/nvme0n1p9
Feb 13 15:32:39.974858 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:32:39.991367 polkitd[1869]: Started polkitd version 121
Feb 13 15:32:39.982605 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:32:40.005233 systemd[1]: Starting sshkeys.service...
Feb 13 15:32:40.028218 polkitd[1869]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 15:32:40.028323 polkitd[1869]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 15:32:40.029187 locksmithd[1818]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:32:40.032745 polkitd[1869]: Finished loading, compiling and executing 2 rules
Feb 13 15:32:40.034100 dbus-daemon[1777]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 15:32:40.034846 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 15:32:40.036586 polkitd[1869]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 15:32:40.076324 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 15:32:40.085984 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 15:32:40.165906 systemd-hostnamed[1817]: Hostname set to <ip-172-31-29-108> (transient)
Feb 13 15:32:40.177768 systemd-resolved[1598]: System hostname changed to 'ip-172-31-29-108'.
Feb 13 15:32:40.260803 coreos-metadata[1926]: Feb 13 15:32:40.260 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 15:32:40.263709 coreos-metadata[1926]: Feb 13 15:32:40.261 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Feb 13 15:32:40.263709 coreos-metadata[1926]: Feb 13 15:32:40.263 INFO Fetch successful
Feb 13 15:32:40.263709 coreos-metadata[1926]: Feb 13 15:32:40.263 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 15:32:40.268928 coreos-metadata[1926]: Feb 13 15:32:40.268 INFO Fetch successful
Feb 13 15:32:40.277849 unknown[1926]: wrote ssh authorized keys file for user: core
Feb 13 15:32:40.319340 containerd[1803]: time="2025-02-13T15:32:40.319136965Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:32:40.341055 update-ssh-keys[1961]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:32:40.342531 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 15:32:40.356486 systemd[1]: Finished sshkeys.service.
Feb 13 15:32:40.400872 systemd-networkd[1649]: eth0: Gained IPv6LL
Feb 13 15:32:40.405851 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:32:40.408049 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:32:40.420202 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 15:32:40.426972 containerd[1803]: time="2025-02-13T15:32:40.426884003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:32:40.435212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:32:40.435473 containerd[1803]: time="2025-02-13T15:32:40.435028056Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:32:40.435473 containerd[1803]: time="2025-02-13T15:32:40.435074159Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:32:40.435473 containerd[1803]: time="2025-02-13T15:32:40.435098974Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:32:40.439648 containerd[1803]: time="2025-02-13T15:32:40.437353248Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:32:40.439648 containerd[1803]: time="2025-02-13T15:32:40.437461788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:32:40.439648 containerd[1803]: time="2025-02-13T15:32:40.437542381Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:32:40.439648 containerd[1803]: time="2025-02-13T15:32:40.437560533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:32:40.439648 containerd[1803]: time="2025-02-13T15:32:40.437883129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:32:40.439648 containerd[1803]: time="2025-02-13T15:32:40.437908473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:32:40.439648 containerd[1803]: time="2025-02-13T15:32:40.437929041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:32:40.439648 containerd[1803]: time="2025-02-13T15:32:40.437943364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:32:40.439648 containerd[1803]: time="2025-02-13T15:32:40.438045425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:32:40.439648 containerd[1803]: time="2025-02-13T15:32:40.438887936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:32:40.439648 containerd[1803]: time="2025-02-13T15:32:40.439055703Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:32:40.440288 containerd[1803]: time="2025-02-13T15:32:40.439077870Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:32:40.440288 containerd[1803]: time="2025-02-13T15:32:40.439187982Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:32:40.440288 containerd[1803]: time="2025-02-13T15:32:40.439257439Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:32:40.440737 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:32:40.452842 containerd[1803]: time="2025-02-13T15:32:40.450838587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:32:40.452842 containerd[1803]: time="2025-02-13T15:32:40.450907801Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:32:40.452842 containerd[1803]: time="2025-02-13T15:32:40.450936102Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:32:40.452842 containerd[1803]: time="2025-02-13T15:32:40.450960667Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:32:40.452842 containerd[1803]: time="2025-02-13T15:32:40.450980040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:32:40.452842 containerd[1803]: time="2025-02-13T15:32:40.451146494Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:32:40.452842 containerd[1803]: time="2025-02-13T15:32:40.451517040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:32:40.452842 containerd[1803]: time="2025-02-13T15:32:40.451638301Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:32:40.452842 containerd[1803]: time="2025-02-13T15:32:40.451657863Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:32:40.452842 containerd[1803]: time="2025-02-13T15:32:40.451677449Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:32:40.452842 containerd[1803]: time="2025-02-13T15:32:40.451717369Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:32:40.452842 containerd[1803]: time="2025-02-13T15:32:40.451735799Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:32:40.452842 containerd[1803]: time="2025-02-13T15:32:40.451753776Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:32:40.452842 containerd[1803]: time="2025-02-13T15:32:40.451772938Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:32:40.453383 containerd[1803]: time="2025-02-13T15:32:40.451801045Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:32:40.453383 containerd[1803]: time="2025-02-13T15:32:40.451819446Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:32:40.453383 containerd[1803]: time="2025-02-13T15:32:40.451837253Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:32:40.453383 containerd[1803]: time="2025-02-13T15:32:40.451857399Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:32:40.453383 containerd[1803]: time="2025-02-13T15:32:40.451884503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453383 containerd[1803]: time="2025-02-13T15:32:40.451904752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453383 containerd[1803]: time="2025-02-13T15:32:40.451924610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453383 containerd[1803]: time="2025-02-13T15:32:40.451943008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453383 containerd[1803]: time="2025-02-13T15:32:40.451961579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453383 containerd[1803]: time="2025-02-13T15:32:40.451980196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453383 containerd[1803]: time="2025-02-13T15:32:40.451996906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453383 containerd[1803]: time="2025-02-13T15:32:40.452016491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453383 containerd[1803]: time="2025-02-13T15:32:40.452037004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453383 containerd[1803]: time="2025-02-13T15:32:40.452057496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453987 containerd[1803]: time="2025-02-13T15:32:40.452074510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453987 containerd[1803]: time="2025-02-13T15:32:40.452096200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453987 containerd[1803]: time="2025-02-13T15:32:40.452114913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453987 containerd[1803]: time="2025-02-13T15:32:40.452135282Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:32:40.453987 containerd[1803]: time="2025-02-13T15:32:40.452164294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453987 containerd[1803]: time="2025-02-13T15:32:40.452182776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453987 containerd[1803]: time="2025-02-13T15:32:40.452198906Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:32:40.453987 containerd[1803]: time="2025-02-13T15:32:40.452363119Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:32:40.453987 containerd[1803]: time="2025-02-13T15:32:40.452397621Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:32:40.453987 containerd[1803]: time="2025-02-13T15:32:40.452413448Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:32:40.453987 containerd[1803]: time="2025-02-13T15:32:40.452430595Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:32:40.453987 containerd[1803]: time="2025-02-13T15:32:40.452444436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.453987 containerd[1803]: time="2025-02-13T15:32:40.452462059Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:32:40.453987 containerd[1803]: time="2025-02-13T15:32:40.452475799Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:32:40.454492 containerd[1803]: time="2025-02-13T15:32:40.452490620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:32:40.460356 containerd[1803]: time="2025-02-13T15:32:40.456242067Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:32:40.460356 containerd[1803]: time="2025-02-13T15:32:40.456323826Z" level=info msg="Connect containerd service"
Feb 13 15:32:40.460356 containerd[1803]: time="2025-02-13T15:32:40.457889705Z" level=info msg="using legacy CRI server"
Feb 13 15:32:40.460356 containerd[1803]: time="2025-02-13T15:32:40.457915896Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:32:40.460356 containerd[1803]: time="2025-02-13T15:32:40.458095723Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:32:40.460356 containerd[1803]: time="2025-02-13T15:32:40.459005177Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:32:40.460356 containerd[1803]: time="2025-02-13T15:32:40.459399305Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:32:40.460356 containerd[1803]: time="2025-02-13T15:32:40.459623824Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:32:40.467657 containerd[1803]: time="2025-02-13T15:32:40.464860229Z" level=info msg="Start subscribing containerd event"
Feb 13 15:32:40.467657 containerd[1803]: time="2025-02-13T15:32:40.465048310Z" level=info msg="Start recovering state"
Feb 13 15:32:40.467657 containerd[1803]: time="2025-02-13T15:32:40.465346502Z" level=info msg="Start event monitor"
Feb 13 15:32:40.467657 containerd[1803]: time="2025-02-13T15:32:40.465470181Z" level=info msg="Start snapshots syncer"
Feb 13 15:32:40.467657 containerd[1803]: time="2025-02-13T15:32:40.465485726Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:32:40.467657 containerd[1803]: time="2025-02-13T15:32:40.465497929Z" level=info msg="Start streaming server"
Feb 13 15:32:40.467657 containerd[1803]: time="2025-02-13T15:32:40.466046782Z" level=info msg="containerd successfully booted in 0.149375s"
Feb 13 15:32:40.467136 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:32:40.546728 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:32:40.564303 amazon-ssm-agent[1976]: Initializing new seelog logger
Feb 13 15:32:40.564867 amazon-ssm-agent[1976]: New Seelog Logger Creation Complete
Feb 13 15:32:40.565006 amazon-ssm-agent[1976]: 2025/02/13 15:32:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:32:40.565057 amazon-ssm-agent[1976]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:32:40.565552 amazon-ssm-agent[1976]: 2025/02/13 15:32:40 processing appconfig overrides
Feb 13 15:32:40.566603 amazon-ssm-agent[1976]: 2025/02/13 15:32:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:32:40.566790 amazon-ssm-agent[1976]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:32:40.566936 amazon-ssm-agent[1976]: 2025/02/13 15:32:40 processing appconfig overrides
Feb 13 15:32:40.567288 amazon-ssm-agent[1976]: 2025/02/13 15:32:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:32:40.567345 amazon-ssm-agent[1976]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:32:40.567468 amazon-ssm-agent[1976]: 2025/02/13 15:32:40 processing appconfig overrides
Feb 13 15:32:40.568264 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO Proxy environment variables:
Feb 13 15:32:40.570239 amazon-ssm-agent[1976]: 2025/02/13 15:32:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:32:40.570316 amazon-ssm-agent[1976]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:32:40.570474 amazon-ssm-agent[1976]: 2025/02/13 15:32:40 processing appconfig overrides
Feb 13 15:32:40.668961 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO http_proxy:
Feb 13 15:32:40.767396 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO no_proxy:
Feb 13 15:32:40.865263 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO https_proxy:
Feb 13 15:32:40.963484 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 15:32:41.045130 sshd_keygen[1824]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:32:41.063752 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO Checking if agent identity type EC2 can be assumed
Feb 13 15:32:41.067158 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO Agent will take identity from EC2
Feb 13 15:32:41.067158 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:32:41.067158 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:32:41.067445 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:32:41.067445 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 15:32:41.067445 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Feb 13 15:32:41.067445 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 15:32:41.067445 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 15:32:41.067445 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO [Registrar] Starting registrar module
Feb 13 15:32:41.067445 amazon-ssm-agent[1976]: 2025-02-13 15:32:40 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 15:32:41.067445 amazon-ssm-agent[1976]: 2025-02-13 15:32:41 INFO [EC2Identity] EC2 registration was successful.
Feb 13 15:32:41.067445 amazon-ssm-agent[1976]: 2025-02-13 15:32:41 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 15:32:41.067445 amazon-ssm-agent[1976]: 2025-02-13 15:32:41 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 15:32:41.067445 amazon-ssm-agent[1976]: 2025-02-13 15:32:41 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 15:32:41.074626 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:32:41.083274 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:32:41.112884 systemd[1]: Started sshd@0-172.31.29.108:22-139.178.89.65:58636.service - OpenSSH per-connection server daemon (139.178.89.65:58636).
Feb 13 15:32:41.120459 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:32:41.120736 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:32:41.145414 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:32:41.159919 amazon-ssm-agent[1976]: 2025-02-13 15:32:41 INFO [CredentialRefresher] Next credential rotation will be in 31.624988864316666 minutes
Feb 13 15:32:41.174250 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:32:41.185844 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:32:41.234473 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:32:41.237436 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:32:41.425457 sshd[2006]: Accepted publickey for core from 139.178.89.65 port 58636 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:32:41.429146 sshd-session[2006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:32:41.439286 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:32:41.446541 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:32:41.453215 systemd-logind[1787]: New session 1 of user core.
Feb 13 15:32:41.470875 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:32:41.484118 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:32:41.509609 (systemd)[2018]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:32:41.973115 systemd[2018]: Queued start job for default target default.target.
Feb 13 15:32:41.980151 systemd[2018]: Created slice app.slice - User Application Slice.
Feb 13 15:32:41.980265 systemd[2018]: Reached target paths.target - Paths.
Feb 13 15:32:41.980349 systemd[2018]: Reached target timers.target - Timers.
Feb 13 15:32:41.983876 systemd[2018]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:32:42.000088 systemd[2018]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:32:42.000366 systemd[2018]: Reached target sockets.target - Sockets.
Feb 13 15:32:42.000409 systemd[2018]: Reached target basic.target - Basic System.
Feb 13 15:32:42.000465 systemd[2018]: Reached target default.target - Main User Target.
Feb 13 15:32:42.000505 systemd[2018]: Startup finished in 458ms.
Feb 13 15:32:42.001804 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:32:42.014952 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:32:42.084365 amazon-ssm-agent[1976]: 2025-02-13 15:32:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 15:32:42.179978 systemd[1]: Started sshd@1-172.31.29.108:22-139.178.89.65:58652.service - OpenSSH per-connection server daemon (139.178.89.65:58652).
Feb 13 15:32:42.186968 amazon-ssm-agent[1976]: 2025-02-13 15:32:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2028) started
Feb 13 15:32:42.286672 amazon-ssm-agent[1976]: 2025-02-13 15:32:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 15:32:42.387705 sshd[2035]: Accepted publickey for core from 139.178.89.65 port 58652 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:32:42.390360 sshd-session[2035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:32:42.397743 systemd-logind[1787]: New session 2 of user core.
Feb 13 15:32:42.399886 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:32:42.534728 sshd[2042]: Connection closed by 139.178.89.65 port 58652
Feb 13 15:32:42.535367 sshd-session[2035]: pam_unix(sshd:session): session closed for user core
Feb 13 15:32:42.541827 systemd-logind[1787]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:32:42.542521 systemd[1]: sshd@1-172.31.29.108:22-139.178.89.65:58652.service: Deactivated successfully.
Feb 13 15:32:42.545471 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:32:42.547162 systemd-logind[1787]: Removed session 2.
Feb 13 15:32:42.572126 systemd[1]: Started sshd@2-172.31.29.108:22-139.178.89.65:58654.service - OpenSSH per-connection server daemon (139.178.89.65:58654).
Feb 13 15:32:42.598738 ntpd[1781]: Listen normally on 6 eth0 [fe80::4df:adff:feb7:b50d%2]:123
Feb 13 15:32:42.599270 ntpd[1781]: 13 Feb 15:32:42 ntpd[1781]: Listen normally on 6 eth0 [fe80::4df:adff:feb7:b50d%2]:123
Feb 13 15:32:42.780823 sshd[2047]: Accepted publickey for core from 139.178.89.65 port 58654 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:32:42.788067 sshd-session[2047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:32:42.803298 systemd-logind[1787]: New session 3 of user core.
Feb 13 15:32:42.810986 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:32:42.954180 sshd[2049]: Connection closed by 139.178.89.65 port 58654
Feb 13 15:32:42.958935 sshd-session[2047]: pam_unix(sshd:session): session closed for user core
Feb 13 15:32:42.977508 systemd[1]: sshd@2-172.31.29.108:22-139.178.89.65:58654.service: Deactivated successfully.
Feb 13 15:32:42.989893 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:32:42.998092 systemd-logind[1787]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:32:43.009922 systemd-logind[1787]: Removed session 3.
Feb 13 15:32:43.338406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:32:43.355640 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:32:43.358797 systemd[1]: Startup finished in 1.102s (kernel) + 8.701s (initrd) + 10.899s (userspace) = 20.704s.
Feb 13 15:32:43.598106 (kubelet)[2058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:32:45.123274 kubelet[2058]: E0213 15:32:45.123212    2058 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:32:45.128573 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:32:45.129447 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:32:45.135923 systemd[1]: kubelet.service: Consumed 1.003s CPU time.
Feb 13 15:32:47.128113 systemd-resolved[1598]: Clock change detected. Flushing caches.
Feb 13 15:32:53.531386 systemd[1]: Started sshd@3-172.31.29.108:22-139.178.89.65:42758.service - OpenSSH per-connection server daemon (139.178.89.65:42758).
Feb 13 15:32:53.703949 sshd[2070]: Accepted publickey for core from 139.178.89.65 port 42758 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:32:53.708182 sshd-session[2070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:32:53.723340 systemd-logind[1787]: New session 4 of user core.
Feb 13 15:32:53.728064 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:32:53.882382 sshd[2072]: Connection closed by 139.178.89.65 port 42758
Feb 13 15:32:53.884065 sshd-session[2070]: pam_unix(sshd:session): session closed for user core
Feb 13 15:32:53.898470 systemd[1]: sshd@3-172.31.29.108:22-139.178.89.65:42758.service: Deactivated successfully.
Feb 13 15:32:53.906689 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:32:53.925238 systemd-logind[1787]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:32:53.933295 systemd[1]: Started sshd@4-172.31.29.108:22-139.178.89.65:42770.service - OpenSSH per-connection server daemon (139.178.89.65:42770).
Feb 13 15:32:53.936582 systemd-logind[1787]: Removed session 4.
Feb 13 15:32:54.115515 sshd[2077]: Accepted publickey for core from 139.178.89.65 port 42770 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:32:54.117507 sshd-session[2077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:32:54.129671 systemd-logind[1787]: New session 5 of user core.
Feb 13 15:32:54.134498 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:32:54.262366 sshd[2079]: Connection closed by 139.178.89.65 port 42770
Feb 13 15:32:54.263089 sshd-session[2077]: pam_unix(sshd:session): session closed for user core
Feb 13 15:32:54.277194 systemd-logind[1787]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:32:54.277580 systemd[1]: sshd@4-172.31.29.108:22-139.178.89.65:42770.service: Deactivated successfully.
Feb 13 15:32:54.282204 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:32:54.307296 systemd-logind[1787]: Removed session 5.
Feb 13 15:32:54.318272 systemd[1]: Started sshd@5-172.31.29.108:22-139.178.89.65:42782.service - OpenSSH per-connection server daemon (139.178.89.65:42782).
Feb 13 15:32:54.496918 sshd[2084]: Accepted publickey for core from 139.178.89.65 port 42782 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:32:54.498331 sshd-session[2084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:32:54.507865 systemd-logind[1787]: New session 6 of user core.
Feb 13 15:32:54.511005 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:32:54.651164 sshd[2086]: Connection closed by 139.178.89.65 port 42782
Feb 13 15:32:54.651878 sshd-session[2084]: pam_unix(sshd:session): session closed for user core
Feb 13 15:32:54.665678 systemd[1]: sshd@5-172.31.29.108:22-139.178.89.65:42782.service: Deactivated successfully.
Feb 13 15:32:54.677322 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:32:54.704796 systemd-logind[1787]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:32:54.715435 systemd[1]: Started sshd@6-172.31.29.108:22-139.178.89.65:57108.service - OpenSSH per-connection server daemon (139.178.89.65:57108).
Feb 13 15:32:54.717878 systemd-logind[1787]: Removed session 6.
Feb 13 15:32:54.893970 sshd[2091]: Accepted publickey for core from 139.178.89.65 port 57108 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:32:54.900132 sshd-session[2091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:32:54.907587 systemd-logind[1787]: New session 7 of user core.
Feb 13 15:32:54.916373 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:32:55.054424 sudo[2094]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:32:55.054859 sudo[2094]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:32:55.072739 sudo[2094]: pam_unix(sudo:session): session closed for user root
Feb 13 15:32:55.095739 sshd[2093]: Connection closed by 139.178.89.65 port 57108
Feb 13 15:32:55.098941 sshd-session[2091]: pam_unix(sshd:session): session closed for user core
Feb 13 15:32:55.107311 systemd[1]: sshd@6-172.31.29.108:22-139.178.89.65:57108.service: Deactivated successfully.
Feb 13 15:32:55.111062 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:32:55.114622 systemd-logind[1787]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:32:55.120010 systemd-logind[1787]: Removed session 7.
Feb 13 15:32:55.146525 systemd[1]: Started sshd@7-172.31.29.108:22-139.178.89.65:57118.service - OpenSSH per-connection server daemon (139.178.89.65:57118).
Feb 13 15:32:55.365932 sshd[2099]: Accepted publickey for core from 139.178.89.65 port 57118 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:32:55.367718 sshd-session[2099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:32:55.387919 systemd-logind[1787]: New session 8 of user core.
Feb 13 15:32:55.398105 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:32:55.510951 sudo[2103]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:32:55.511628 sudo[2103]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:32:55.516099 sudo[2103]: pam_unix(sudo:session): session closed for user root
Feb 13 15:32:55.522607 sudo[2102]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:32:55.523043 sudo[2102]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:32:55.549435 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:32:55.594200 augenrules[2125]: No rules
Feb 13 15:32:55.595966 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:32:55.596270 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:32:55.597748 sudo[2102]: pam_unix(sudo:session): session closed for user root
Feb 13 15:32:55.621899 sshd[2101]: Connection closed by 139.178.89.65 port 57118
Feb 13 15:32:55.622567 sshd-session[2099]: pam_unix(sshd:session): session closed for user core
Feb 13 15:32:55.627949 systemd[1]: sshd@7-172.31.29.108:22-139.178.89.65:57118.service: Deactivated successfully.
Feb 13 15:32:55.629970 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:32:55.630649 systemd-logind[1787]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:32:55.632151 systemd-logind[1787]: Removed session 8.
Feb 13 15:32:55.658241 systemd[1]: Started sshd@8-172.31.29.108:22-139.178.89.65:57126.service - OpenSSH per-connection server daemon (139.178.89.65:57126).
Feb 13 15:32:55.660520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:32:55.672978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:32:55.845587 sshd[2133]: Accepted publickey for core from 139.178.89.65 port 57126 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:32:55.847336 sshd-session[2133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:32:55.858067 systemd-logind[1787]: New session 9 of user core.
Feb 13 15:32:55.865096 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:32:55.924434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:32:55.936662 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:32:55.970643 sudo[2149]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:32:55.972119 sudo[2149]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:32:56.023425 kubelet[2144]: E0213 15:32:56.023363    2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:32:56.029766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:32:56.029994 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:32:57.300604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:32:57.309227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:32:57.441470 systemd[1]: Reloading requested from client PID 2182 ('systemctl') (unit session-9.scope)...
Feb 13 15:32:57.441492 systemd[1]: Reloading...
Feb 13 15:32:57.672840 zram_generator::config[2222]: No configuration found.
Feb 13 15:32:57.859295 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:32:58.014319 systemd[1]: Reloading finished in 572 ms.
Feb 13 15:32:58.110309 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 15:32:58.110423 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 15:32:58.111046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:32:58.116371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:32:58.491291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:32:58.493790 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:32:58.587675 kubelet[2282]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:32:58.587675 kubelet[2282]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:32:58.587675 kubelet[2282]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:32:58.589849 kubelet[2282]: I0213 15:32:58.589775    2282 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:32:59.268536 kubelet[2282]: I0213 15:32:59.268487    2282 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 15:32:59.268536 kubelet[2282]: I0213 15:32:59.268522    2282 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:32:59.269086 kubelet[2282]: I0213 15:32:59.269061    2282 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 15:32:59.328558 kubelet[2282]: I0213 15:32:59.328431    2282 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:32:59.355065 kubelet[2282]: E0213 15:32:59.355007    2282 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 15:32:59.355065 kubelet[2282]: I0213 15:32:59.355060    2282 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 15:32:59.361116 kubelet[2282]: I0213 15:32:59.361076    2282 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 15:32:59.363333 kubelet[2282]: I0213 15:32:59.363293    2282 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 15:32:59.363839 kubelet[2282]: I0213 15:32:59.363654    2282 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:32:59.364113 kubelet[2282]: I0213 15:32:59.363707    2282 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.29.108","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:32:59.364113 kubelet[2282]: I0213 15:32:59.364001    2282 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:32:59.364113 kubelet[2282]: I0213 15:32:59.364016    2282 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 15:32:59.364508 kubelet[2282]: I0213 15:32:59.364139    2282 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:32:59.367710 kubelet[2282]: I0213 15:32:59.367424    2282 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 15:32:59.367710 kubelet[2282]: I0213 15:32:59.367462    2282 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:32:59.367710 kubelet[2282]: I0213 15:32:59.367503    2282 kubelet.go:314] "Adding apiserver pod source"
Feb 13 15:32:59.367710 kubelet[2282]: I0213 15:32:59.367534    2282 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:32:59.369308 kubelet[2282]: E0213 15:32:59.369277    2282 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:32:59.369880 kubelet[2282]: E0213 15:32:59.369567    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:32:59.377046 kubelet[2282]: I0213 15:32:59.377015    2282 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:32:59.380132 kubelet[2282]: I0213 15:32:59.380092    2282 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:32:59.381316 kubelet[2282]: W0213 15:32:59.381283    2282 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:32:59.382421 kubelet[2282]: I0213 15:32:59.382392    2282 server.go:1269] "Started kubelet"
Feb 13 15:32:59.384857 kubelet[2282]: I0213 15:32:59.384010    2282 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:32:59.386553 kubelet[2282]: I0213 15:32:59.385555    2282 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 15:32:59.389078 kubelet[2282]: I0213 15:32:59.389040    2282 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:32:59.391835 kubelet[2282]: I0213 15:32:59.390768    2282 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:32:59.391835 kubelet[2282]: I0213 15:32:59.391199    2282 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:32:59.392924 kubelet[2282]: I0213 15:32:59.392897    2282 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:32:59.400966 kubelet[2282]: I0213 15:32:59.399989    2282 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 15:32:59.400966 kubelet[2282]: E0213 15:32:59.400336    2282 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.29.108\" not found"
Feb 13 15:32:59.400966 kubelet[2282]: W0213 15:32:59.400386    2282 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:32:59.400966 kubelet[2282]: E0213 15:32:59.400413    2282 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 15:32:59.400966 kubelet[2282]: W0213 15:32:59.400516    2282 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.29.108" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 15:32:59.400966 kubelet[2282]: E0213 15:32:59.400540    2282 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.29.108\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 15:32:59.400966 kubelet[2282]: I0213 15:32:59.400705    2282 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 15:32:59.400966 kubelet[2282]: I0213 15:32:59.400763    2282 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:32:59.403568 kubelet[2282]: I0213 15:32:59.403529    2282 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:32:59.403759 kubelet[2282]: I0213 15:32:59.403723    2282 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:32:59.414836 kubelet[2282]: I0213 15:32:59.413862    2282 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:32:59.431846 kubelet[2282]: E0213 15:32:59.429948    2282 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.29.108\" not found" node="172.31.29.108"
Feb 13 15:32:59.439703 kubelet[2282]: E0213 15:32:59.439670    2282 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:32:59.466712 kubelet[2282]: I0213 15:32:59.466683    2282 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:32:59.466877 kubelet[2282]: I0213 15:32:59.466859    2282 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:32:59.466940 kubelet[2282]: I0213 15:32:59.466892    2282 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:32:59.470842 kubelet[2282]: I0213 15:32:59.469542    2282 policy_none.go:49] "None policy: Start"
Feb 13 15:32:59.474683 kubelet[2282]: I0213 15:32:59.474648    2282 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:32:59.474926 kubelet[2282]: I0213 15:32:59.474762    2282 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:32:59.494182 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:32:59.500437 kubelet[2282]: E0213 15:32:59.500410    2282 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.29.108\" not found"
Feb 13 15:32:59.510626 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:32:59.519157 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:32:59.539304 kubelet[2282]: I0213 15:32:59.539045    2282 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:32:59.542085 kubelet[2282]: I0213 15:32:59.541966    2282 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 15:32:59.544945 kubelet[2282]: I0213 15:32:59.544754    2282 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:32:59.547630 kubelet[2282]: I0213 15:32:59.547163    2282 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:32:59.556293 kubelet[2282]: E0213 15:32:59.556266    2282 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.29.108\" not found"
Feb 13 15:32:59.584389 kubelet[2282]: I0213 15:32:59.584340    2282 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:32:59.586527 kubelet[2282]: I0213 15:32:59.586410    2282 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:32:59.586527 kubelet[2282]: I0213 15:32:59.586443    2282 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:32:59.586527 kubelet[2282]: I0213 15:32:59.586466    2282 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 15:32:59.586908 kubelet[2282]: E0213 15:32:59.586675    2282 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 15:32:59.648133 kubelet[2282]: I0213 15:32:59.648091    2282 kubelet_node_status.go:72] "Attempting to register node" node="172.31.29.108"
Feb 13 15:32:59.656647 kubelet[2282]: I0213 15:32:59.656615    2282 kubelet_node_status.go:75] "Successfully registered node" node="172.31.29.108"
Feb 13 15:32:59.656647 kubelet[2282]: E0213 15:32:59.656646    2282 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.29.108\": node \"172.31.29.108\" not found"
Feb 13 15:32:59.676490 kubelet[2282]: I0213 15:32:59.676456    2282 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 15:32:59.677331 containerd[1803]: time="2025-02-13T15:32:59.677281355Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:32:59.678112 kubelet[2282]: I0213 15:32:59.677567    2282 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 15:32:59.685115 kubelet[2282]: E0213 15:32:59.685075    2282 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.29.108\" not found"
Feb 13 15:32:59.785993 kubelet[2282]: E0213 15:32:59.785855    2282 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.29.108\" not found"
Feb 13 15:32:59.886269 kubelet[2282]: E0213 15:32:59.886216    2282 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.29.108\" not found"
Feb 13 15:32:59.987147 kubelet[2282]: E0213 15:32:59.987089    2282 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.29.108\" not found"
Feb 13 15:33:00.059373 sudo[2149]: pam_unix(sudo:session): session closed for user root
Feb 13 15:33:00.087940 sshd[2138]: Connection closed by 139.178.89.65 port 57126
Feb 13 15:33:00.088509 kubelet[2282]: E0213 15:33:00.087875    2282 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.29.108\" not found"
Feb 13 15:33:00.089793 sshd-session[2133]: pam_unix(sshd:session): session closed for user core
Feb 13 15:33:00.094948 systemd[1]: sshd@8-172.31.29.108:22-139.178.89.65:57126.service: Deactivated successfully.
Feb 13 15:33:00.101115 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:33:00.105289 systemd-logind[1787]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:33:00.108940 systemd-logind[1787]: Removed session 9.
Feb 13 15:33:00.189048 kubelet[2282]: E0213 15:33:00.188997    2282 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.29.108\" not found"
Feb 13 15:33:00.272923 kubelet[2282]: I0213 15:33:00.272878    2282 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 15:33:00.273113 kubelet[2282]: W0213 15:33:00.273081    2282 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:33:00.273170 kubelet[2282]: W0213 15:33:00.273123    2282 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:33:00.290141 kubelet[2282]: E0213 15:33:00.290096    2282 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.29.108\" not found"
Feb 13 15:33:00.370315 kubelet[2282]: E0213 15:33:00.370176    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:00.391267 kubelet[2282]: E0213 15:33:00.391213    2282 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.29.108\" not found"
Feb 13 15:33:00.491418 kubelet[2282]: E0213 15:33:00.491366    2282 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.29.108\" not found"
Feb 13 15:33:01.371636 kubelet[2282]: I0213 15:33:01.371264    2282 apiserver.go:52] "Watching apiserver"
Feb 13 15:33:01.371636 kubelet[2282]: E0213 15:33:01.371291    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:01.403216 kubelet[2282]: I0213 15:33:01.403185    2282 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 15:33:01.410383 systemd[1]: Created slice kubepods-besteffort-podeeca25b0_6b04_4290_a747_3f8cfa47c41c.slice - libcontainer container kubepods-besteffort-podeeca25b0_6b04_4290_a747_3f8cfa47c41c.slice.
Feb 13 15:33:01.419942 kubelet[2282]: I0213 15:33:01.419101    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cni-path\") pod \"cilium-vrrb5\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") " pod="kube-system/cilium-vrrb5"
Feb 13 15:33:01.419942 kubelet[2282]: I0213 15:33:01.419150    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42f6q\" (UniqueName: \"kubernetes.io/projected/583be3f0-8721-4ffd-9143-ffe6b61ebc63-kube-api-access-42f6q\") pod \"cilium-vrrb5\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") " pod="kube-system/cilium-vrrb5"
Feb 13 15:33:01.419942 kubelet[2282]: I0213 15:33:01.419180    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cilium-cgroup\") pod \"cilium-vrrb5\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") " pod="kube-system/cilium-vrrb5"
Feb 13 15:33:01.419942 kubelet[2282]: I0213 15:33:01.419216    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/583be3f0-8721-4ffd-9143-ffe6b61ebc63-hubble-tls\") pod \"cilium-vrrb5\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") " pod="kube-system/cilium-vrrb5"
Feb 13 15:33:01.419942 kubelet[2282]: I0213 15:33:01.419241    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eeca25b0-6b04-4290-a747-3f8cfa47c41c-kube-proxy\") pod \"kube-proxy-c2cxs\" (UID: \"eeca25b0-6b04-4290-a747-3f8cfa47c41c\") " pod="kube-system/kube-proxy-c2cxs"
Feb 13 15:33:01.419942 kubelet[2282]: I0213 15:33:01.419264    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eeca25b0-6b04-4290-a747-3f8cfa47c41c-xtables-lock\") pod \"kube-proxy-c2cxs\" (UID: \"eeca25b0-6b04-4290-a747-3f8cfa47c41c\") " pod="kube-system/kube-proxy-c2cxs"
Feb 13 15:33:01.420312 kubelet[2282]: I0213 15:33:01.419309    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eeca25b0-6b04-4290-a747-3f8cfa47c41c-lib-modules\") pod \"kube-proxy-c2cxs\" (UID: \"eeca25b0-6b04-4290-a747-3f8cfa47c41c\") " pod="kube-system/kube-proxy-c2cxs"
Feb 13 15:33:01.420312 kubelet[2282]: I0213 15:33:01.419328    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs68d\" (UniqueName: \"kubernetes.io/projected/eeca25b0-6b04-4290-a747-3f8cfa47c41c-kube-api-access-gs68d\") pod \"kube-proxy-c2cxs\" (UID: \"eeca25b0-6b04-4290-a747-3f8cfa47c41c\") " pod="kube-system/kube-proxy-c2cxs"
Feb 13 15:33:01.420312 kubelet[2282]: I0213 15:33:01.419356    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cilium-run\") pod \"cilium-vrrb5\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") " pod="kube-system/cilium-vrrb5"
Feb 13 15:33:01.420312 kubelet[2282]: I0213 15:33:01.419376    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-bpf-maps\") pod \"cilium-vrrb5\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") " pod="kube-system/cilium-vrrb5"
Feb 13 15:33:01.420312 kubelet[2282]: I0213 15:33:01.419395    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-xtables-lock\") pod \"cilium-vrrb5\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") " pod="kube-system/cilium-vrrb5"
Feb 13 15:33:01.420312 kubelet[2282]: I0213 15:33:01.419416    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cilium-config-path\") pod \"cilium-vrrb5\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") " pod="kube-system/cilium-vrrb5"
Feb 13 15:33:01.421075 kubelet[2282]: I0213 15:33:01.419437    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-host-proc-sys-kernel\") pod \"cilium-vrrb5\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") " pod="kube-system/cilium-vrrb5"
Feb 13 15:33:01.421075 kubelet[2282]: I0213 15:33:01.419462    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-hostproc\") pod \"cilium-vrrb5\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") " pod="kube-system/cilium-vrrb5"
Feb 13 15:33:01.421075 kubelet[2282]: I0213 15:33:01.419480    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-etc-cni-netd\") pod \"cilium-vrrb5\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") " pod="kube-system/cilium-vrrb5"
Feb 13 15:33:01.421075 kubelet[2282]: I0213 15:33:01.419516    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-lib-modules\") pod \"cilium-vrrb5\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") " pod="kube-system/cilium-vrrb5"
Feb 13 15:33:01.421075 kubelet[2282]: I0213 15:33:01.419656    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/583be3f0-8721-4ffd-9143-ffe6b61ebc63-clustermesh-secrets\") pod \"cilium-vrrb5\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") " pod="kube-system/cilium-vrrb5"
Feb 13 15:33:01.421075 kubelet[2282]: I0213 15:33:01.419711    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-host-proc-sys-net\") pod \"cilium-vrrb5\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") " pod="kube-system/cilium-vrrb5"
Feb 13 15:33:01.451452 systemd[1]: Created slice kubepods-burstable-pod583be3f0_8721_4ffd_9143_ffe6b61ebc63.slice - libcontainer container kubepods-burstable-pod583be3f0_8721_4ffd_9143_ffe6b61ebc63.slice.
Feb 13 15:33:01.750754 containerd[1803]: time="2025-02-13T15:33:01.750616123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c2cxs,Uid:eeca25b0-6b04-4290-a747-3f8cfa47c41c,Namespace:kube-system,Attempt:0,}"
Feb 13 15:33:01.762480 containerd[1803]: time="2025-02-13T15:33:01.762355499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vrrb5,Uid:583be3f0-8721-4ffd-9143-ffe6b61ebc63,Namespace:kube-system,Attempt:0,}"
Feb 13 15:33:02.371969 kubelet[2282]: E0213 15:33:02.371917    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:02.633299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223123826.mount: Deactivated successfully.
Feb 13 15:33:02.656183 containerd[1803]: time="2025-02-13T15:33:02.656118718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:33:02.659480 containerd[1803]: time="2025-02-13T15:33:02.659427927Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:33:02.661128 containerd[1803]: time="2025-02-13T15:33:02.661065848Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 15:33:02.663282 containerd[1803]: time="2025-02-13T15:33:02.663222391Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:33:02.667287 containerd[1803]: time="2025-02-13T15:33:02.665749900Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:33:02.674003 containerd[1803]: time="2025-02-13T15:33:02.673265106Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 910.714489ms"
Feb 13 15:33:02.679058 containerd[1803]: time="2025-02-13T15:33:02.676557930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:33:02.684339 containerd[1803]: time="2025-02-13T15:33:02.684271943Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 933.546107ms"
Feb 13 15:33:03.008434 containerd[1803]: time="2025-02-13T15:33:03.008163065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:33:03.009157 containerd[1803]: time="2025-02-13T15:33:03.000343461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:33:03.010135 containerd[1803]: time="2025-02-13T15:33:03.009860067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:33:03.010135 containerd[1803]: time="2025-02-13T15:33:03.009889448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:33:03.010135 containerd[1803]: time="2025-02-13T15:33:03.010040567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:33:03.010135 containerd[1803]: time="2025-02-13T15:33:03.008702120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:33:03.010135 containerd[1803]: time="2025-02-13T15:33:03.009250549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:33:03.010135 containerd[1803]: time="2025-02-13T15:33:03.009359551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:33:03.269480 systemd[1]: Started cri-containerd-c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b.scope - libcontainer container c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b.
Feb 13 15:33:03.320496 systemd[1]: Started cri-containerd-13edc80659447da113af88abbe9b1a1550150739494d9357d656bc8eebc94ba5.scope - libcontainer container 13edc80659447da113af88abbe9b1a1550150739494d9357d656bc8eebc94ba5.
Feb 13 15:33:03.373436 kubelet[2282]: E0213 15:33:03.373320    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:03.492797 containerd[1803]: time="2025-02-13T15:33:03.492535220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vrrb5,Uid:583be3f0-8721-4ffd-9143-ffe6b61ebc63,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\""
Feb 13 15:33:03.501674 containerd[1803]: time="2025-02-13T15:33:03.501624959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c2cxs,Uid:eeca25b0-6b04-4290-a747-3f8cfa47c41c,Namespace:kube-system,Attempt:0,} returns sandbox id \"13edc80659447da113af88abbe9b1a1550150739494d9357d656bc8eebc94ba5\""
Feb 13 15:33:03.505465 containerd[1803]: time="2025-02-13T15:33:03.505415430Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 15:33:04.373542 kubelet[2282]: E0213 15:33:04.373489    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:05.374843 kubelet[2282]: E0213 15:33:05.374507    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:06.376569 kubelet[2282]: E0213 15:33:06.376522    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:07.377140 kubelet[2282]: E0213 15:33:07.377069    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:08.377380 kubelet[2282]: E0213 15:33:08.377332    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:09.378479 kubelet[2282]: E0213 15:33:09.378196    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:09.502327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3344199733.mount: Deactivated successfully.
Feb 13 15:33:10.379124 kubelet[2282]: E0213 15:33:10.379071    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:10.728805 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 15:33:11.382682 kubelet[2282]: E0213 15:33:11.382634    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:12.383192 kubelet[2282]: E0213 15:33:12.383153    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:13.062659 containerd[1803]: time="2025-02-13T15:33:13.062597684Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:33:13.064437 containerd[1803]: time="2025-02-13T15:33:13.063836635Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Feb 13 15:33:13.070184 containerd[1803]: time="2025-02-13T15:33:13.068743389Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:33:13.076953 containerd[1803]: time="2025-02-13T15:33:13.076847681Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.57135125s"
Feb 13 15:33:13.076953 containerd[1803]: time="2025-02-13T15:33:13.076949898Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 13 15:33:13.081397 containerd[1803]: time="2025-02-13T15:33:13.081355881Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 15:33:13.083430 containerd[1803]: time="2025-02-13T15:33:13.083386605Z" level=info msg="CreateContainer within sandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:33:13.113259 containerd[1803]: time="2025-02-13T15:33:13.113165816Z" level=info msg="CreateContainer within sandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090\""
Feb 13 15:33:13.114854 containerd[1803]: time="2025-02-13T15:33:13.114797518Z" level=info msg="StartContainer for \"6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090\""
Feb 13 15:33:13.190333 systemd[1]: Started cri-containerd-6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090.scope - libcontainer container 6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090.
Feb 13 15:33:13.255775 containerd[1803]: time="2025-02-13T15:33:13.255409723Z" level=info msg="StartContainer for \"6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090\" returns successfully"
Feb 13 15:33:13.275232 systemd[1]: cri-containerd-6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090.scope: Deactivated successfully.
Feb 13 15:33:13.314471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090-rootfs.mount: Deactivated successfully.
Feb 13 15:33:13.386628 kubelet[2282]: E0213 15:33:13.384766    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:13.590387 containerd[1803]: time="2025-02-13T15:33:13.590125562Z" level=info msg="shim disconnected" id=6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090 namespace=k8s.io
Feb 13 15:33:13.590387 containerd[1803]: time="2025-02-13T15:33:13.590196008Z" level=warning msg="cleaning up after shim disconnected" id=6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090 namespace=k8s.io
Feb 13 15:33:13.590387 containerd[1803]: time="2025-02-13T15:33:13.590208362Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:33:13.698880 containerd[1803]: time="2025-02-13T15:33:13.698071036Z" level=info msg="CreateContainer within sandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:33:13.718578 containerd[1803]: time="2025-02-13T15:33:13.718481558Z" level=info msg="CreateContainer within sandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1\""
Feb 13 15:33:13.720097 containerd[1803]: time="2025-02-13T15:33:13.719984577Z" level=info msg="StartContainer for \"8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1\""
Feb 13 15:33:13.775131 systemd[1]: Started cri-containerd-8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1.scope - libcontainer container 8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1.
Feb 13 15:33:13.927924 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:33:13.928617 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:33:13.928709 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:33:13.939466 containerd[1803]: time="2025-02-13T15:33:13.937716082Z" level=info msg="StartContainer for \"8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1\" returns successfully"
Feb 13 15:33:13.947954 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:33:13.948272 systemd[1]: cri-containerd-8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1.scope: Deactivated successfully.
Feb 13 15:33:14.003145 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:33:14.095266 containerd[1803]: time="2025-02-13T15:33:14.093982556Z" level=info msg="shim disconnected" id=8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1 namespace=k8s.io
Feb 13 15:33:14.095266 containerd[1803]: time="2025-02-13T15:33:14.094424826Z" level=warning msg="cleaning up after shim disconnected" id=8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1 namespace=k8s.io
Feb 13 15:33:14.095266 containerd[1803]: time="2025-02-13T15:33:14.094495449Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:33:14.393783 kubelet[2282]: E0213 15:33:14.385114    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:14.707346 containerd[1803]: time="2025-02-13T15:33:14.706974623Z" level=info msg="CreateContainer within sandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:33:14.737314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740460114.mount: Deactivated successfully.
Feb 13 15:33:14.747735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1385229848.mount: Deactivated successfully.
Feb 13 15:33:14.750097 containerd[1803]: time="2025-02-13T15:33:14.750038438Z" level=info msg="CreateContainer within sandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1\""
Feb 13 15:33:14.751473 containerd[1803]: time="2025-02-13T15:33:14.751431831Z" level=info msg="StartContainer for \"b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1\""
Feb 13 15:33:14.801030 systemd[1]: Started cri-containerd-b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1.scope - libcontainer container b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1.
Feb 13 15:33:14.860608 containerd[1803]: time="2025-02-13T15:33:14.860085864Z" level=info msg="StartContainer for \"b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1\" returns successfully"
Feb 13 15:33:14.863683 systemd[1]: cri-containerd-b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1.scope: Deactivated successfully.
Feb 13 15:33:14.958652 containerd[1803]: time="2025-02-13T15:33:14.958233137Z" level=info msg="shim disconnected" id=b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1 namespace=k8s.io
Feb 13 15:33:14.958652 containerd[1803]: time="2025-02-13T15:33:14.958403296Z" level=warning msg="cleaning up after shim disconnected" id=b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1 namespace=k8s.io
Feb 13 15:33:14.958652 containerd[1803]: time="2025-02-13T15:33:14.958418024Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:33:15.098109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount433511709.mount: Deactivated successfully.
Feb 13 15:33:15.385766 kubelet[2282]: E0213 15:33:15.385612    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:15.714894 containerd[1803]: time="2025-02-13T15:33:15.714258227Z" level=info msg="CreateContainer within sandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:33:15.753887 containerd[1803]: time="2025-02-13T15:33:15.753641341Z" level=info msg="CreateContainer within sandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d\""
Feb 13 15:33:15.755878 containerd[1803]: time="2025-02-13T15:33:15.754793290Z" level=info msg="StartContainer for \"6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d\""
Feb 13 15:33:15.829041 systemd[1]: Started cri-containerd-6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d.scope - libcontainer container 6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d.
Feb 13 15:33:15.897299 systemd[1]: cri-containerd-6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d.scope: Deactivated successfully.
Feb 13 15:33:15.902259 containerd[1803]: time="2025-02-13T15:33:15.901261659Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod583be3f0_8721_4ffd_9143_ffe6b61ebc63.slice/cri-containerd-6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d.scope/memory.events\": no such file or directory"
Feb 13 15:33:15.906329 containerd[1803]: time="2025-02-13T15:33:15.906224961Z" level=info msg="StartContainer for \"6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d\" returns successfully"
Feb 13 15:33:15.931846 containerd[1803]: time="2025-02-13T15:33:15.931734342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:33:15.932927 containerd[1803]: time="2025-02-13T15:33:15.932875719Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108"
Feb 13 15:33:15.936336 containerd[1803]: time="2025-02-13T15:33:15.936057586Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:33:15.941859 containerd[1803]: time="2025-02-13T15:33:15.940204734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:33:15.941859 containerd[1803]: time="2025-02-13T15:33:15.941155981Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 2.859755817s"
Feb 13 15:33:15.941859 containerd[1803]: time="2025-02-13T15:33:15.941187092Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\""
Feb 13 15:33:15.944341 containerd[1803]: time="2025-02-13T15:33:15.944214814Z" level=info msg="CreateContainer within sandbox \"13edc80659447da113af88abbe9b1a1550150739494d9357d656bc8eebc94ba5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:33:16.016214 containerd[1803]: time="2025-02-13T15:33:16.016050975Z" level=info msg="CreateContainer within sandbox \"13edc80659447da113af88abbe9b1a1550150739494d9357d656bc8eebc94ba5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c97183632b543dcb5562f4f55b952c69e6bc3d5a0770aa7357716d0392eeb504\""
Feb 13 15:33:16.017922 containerd[1803]: time="2025-02-13T15:33:16.017858527Z" level=info msg="StartContainer for \"c97183632b543dcb5562f4f55b952c69e6bc3d5a0770aa7357716d0392eeb504\""
Feb 13 15:33:16.020914 containerd[1803]: time="2025-02-13T15:33:16.020588906Z" level=info msg="shim disconnected" id=6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d namespace=k8s.io
Feb 13 15:33:16.020914 containerd[1803]: time="2025-02-13T15:33:16.020673278Z" level=warning msg="cleaning up after shim disconnected" id=6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d namespace=k8s.io
Feb 13 15:33:16.020914 containerd[1803]: time="2025-02-13T15:33:16.020706479Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:33:16.073035 systemd[1]: Started cri-containerd-c97183632b543dcb5562f4f55b952c69e6bc3d5a0770aa7357716d0392eeb504.scope - libcontainer container c97183632b543dcb5562f4f55b952c69e6bc3d5a0770aa7357716d0392eeb504.
Feb 13 15:33:16.100553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d-rootfs.mount: Deactivated successfully.
Feb 13 15:33:16.130936 containerd[1803]: time="2025-02-13T15:33:16.130793940Z" level=info msg="StartContainer for \"c97183632b543dcb5562f4f55b952c69e6bc3d5a0770aa7357716d0392eeb504\" returns successfully"
Feb 13 15:33:16.386616 kubelet[2282]: E0213 15:33:16.386485    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:16.772356 containerd[1803]: time="2025-02-13T15:33:16.772294450Z" level=info msg="CreateContainer within sandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:33:16.802520 containerd[1803]: time="2025-02-13T15:33:16.802465967Z" level=info msg="CreateContainer within sandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53\""
Feb 13 15:33:16.805577 containerd[1803]: time="2025-02-13T15:33:16.804063850Z" level=info msg="StartContainer for \"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53\""
Feb 13 15:33:16.823866 kubelet[2282]: I0213 15:33:16.823762    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c2cxs" podStartSLOduration=5.390034724 podStartE2EDuration="17.823740463s" podCreationTimestamp="2025-02-13 15:32:59 +0000 UTC" firstStartedPulling="2025-02-13 15:33:03.509003029 +0000 UTC m=+5.003992838" lastFinishedPulling="2025-02-13 15:33:15.942708766 +0000 UTC m=+17.437698577" observedRunningTime="2025-02-13 15:33:16.823415527 +0000 UTC m=+18.318405356" watchObservedRunningTime="2025-02-13 15:33:16.823740463 +0000 UTC m=+18.318730513"
Feb 13 15:33:16.878043 systemd[1]: Started cri-containerd-92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53.scope - libcontainer container 92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53.
Feb 13 15:33:16.918840 containerd[1803]: time="2025-02-13T15:33:16.917956741Z" level=info msg="StartContainer for \"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53\" returns successfully"
Feb 13 15:33:17.050642 kubelet[2282]: I0213 15:33:17.050530    2282 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 15:33:17.098497 systemd[1]: run-containerd-runc-k8s.io-92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53-runc.yfRJe5.mount: Deactivated successfully.
Feb 13 15:33:17.388088 kubelet[2282]: E0213 15:33:17.387228    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:17.459886 kernel: Initializing XFRM netlink socket
Feb 13 15:33:18.388331 kubelet[2282]: E0213 15:33:18.388237    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:18.609572 kubelet[2282]: I0213 15:33:18.609495    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vrrb5" podStartSLOduration=10.033589343 podStartE2EDuration="19.609469977s" podCreationTimestamp="2025-02-13 15:32:59 +0000 UTC" firstStartedPulling="2025-02-13 15:33:03.502860559 +0000 UTC m=+4.997850376" lastFinishedPulling="2025-02-13 15:33:13.078741191 +0000 UTC m=+14.573731010" observedRunningTime="2025-02-13 15:33:17.810023302 +0000 UTC m=+19.305013129" watchObservedRunningTime="2025-02-13 15:33:18.609469977 +0000 UTC m=+20.104459802"
Feb 13 15:33:18.625138 systemd[1]: Created slice kubepods-besteffort-pod4a01675b_685a_4f87_ad31_e96e33f31745.slice - libcontainer container kubepods-besteffort-pod4a01675b_685a_4f87_ad31_e96e33f31745.slice.
Feb 13 15:33:18.715886 kubelet[2282]: I0213 15:33:18.715810    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4bdb\" (UniqueName: \"kubernetes.io/projected/4a01675b-685a-4f87-ad31-e96e33f31745-kube-api-access-b4bdb\") pod \"nginx-deployment-8587fbcb89-ffcss\" (UID: \"4a01675b-685a-4f87-ad31-e96e33f31745\") " pod="default/nginx-deployment-8587fbcb89-ffcss"
Feb 13 15:33:18.931328 containerd[1803]: time="2025-02-13T15:33:18.931275785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffcss,Uid:4a01675b-685a-4f87-ad31-e96e33f31745,Namespace:default,Attempt:0,}"
Feb 13 15:33:19.173377 systemd-networkd[1649]: cilium_host: Link UP
Feb 13 15:33:19.174442 (udev-worker)[2722]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:33:19.177098 systemd-networkd[1649]: cilium_net: Link UP
Feb 13 15:33:19.180316 systemd-networkd[1649]: cilium_net: Gained carrier
Feb 13 15:33:19.180619 systemd-networkd[1649]: cilium_host: Gained carrier
Feb 13 15:33:19.181642 (udev-worker)[2983]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:33:19.225610 systemd-networkd[1649]: cilium_net: Gained IPv6LL
Feb 13 15:33:19.323216 systemd-networkd[1649]: cilium_host: Gained IPv6LL
Feb 13 15:33:19.367936 kubelet[2282]: E0213 15:33:19.367886    2282 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:19.391344 kubelet[2282]: E0213 15:33:19.391236    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:19.393585 (udev-worker)[2994]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:33:19.411718 systemd-networkd[1649]: cilium_vxlan: Link UP
Feb 13 15:33:19.411729 systemd-networkd[1649]: cilium_vxlan: Gained carrier
Feb 13 15:33:19.758491 kernel: NET: Registered PF_ALG protocol family
Feb 13 15:33:20.392044 kubelet[2282]: E0213 15:33:20.391993    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:20.887392 systemd-networkd[1649]: lxc_health: Link UP
Feb 13 15:33:20.900120 systemd-networkd[1649]: lxc_health: Gained carrier
Feb 13 15:33:21.122085 systemd-networkd[1649]: cilium_vxlan: Gained IPv6LL
Feb 13 15:33:21.393569 kubelet[2282]: E0213 15:33:21.393295    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:21.495801 systemd-networkd[1649]: lxcf9be8ac8b248: Link UP
Feb 13 15:33:21.515848 kernel: eth0: renamed from tmp6b298
Feb 13 15:33:21.567340 systemd-networkd[1649]: lxcf9be8ac8b248: Gained carrier
Feb 13 15:33:22.394578 kubelet[2282]: E0213 15:33:22.394526    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:22.404487 systemd-networkd[1649]: lxc_health: Gained IPv6LL
Feb 13 15:33:23.235917 systemd-networkd[1649]: lxcf9be8ac8b248: Gained IPv6LL
Feb 13 15:33:23.394910 kubelet[2282]: E0213 15:33:23.394731    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:24.396009 kubelet[2282]: E0213 15:33:24.395957    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:25.192089 update_engine[1788]: I20250213 15:33:25.191893  1788 update_attempter.cc:509] Updating boot flags...
Feb 13 15:33:25.316044 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3350)
Feb 13 15:33:25.399219 kubelet[2282]: E0213 15:33:25.399128    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:26.128175 ntpd[1781]: Listen normally on 7 cilium_host 192.168.1.51:123
Feb 13 15:33:26.128272 ntpd[1781]: Listen normally on 8 cilium_net [fe80::f42c:2eff:fe8c:a94b%3]:123
Feb 13 15:33:26.128327 ntpd[1781]: Listen normally on 9 cilium_host [fe80::d8e7:cbff:fee2:9fd9%4]:123
Feb 13 15:33:26.128367 ntpd[1781]: Listen normally on 10 cilium_vxlan [fe80::f4b9:fdff:fe88:f132%5]:123
Feb 13 15:33:26.128405 ntpd[1781]: Listen normally on 11 lxc_health [fe80::480f:9bff:fe86:59f7%7]:123
Feb 13 15:33:26.128442 ntpd[1781]: Listen normally on 12 lxcf9be8ac8b248 [fe80::c084:c9ff:fe51:c0ef%9]:123
Feb 13 15:33:26.399724 kubelet[2282]: E0213 15:33:26.399673    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:27.400041 kubelet[2282]: E0213 15:33:27.399987    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:28.312627 containerd[1803]: time="2025-02-13T15:33:28.312513089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:33:28.312627 containerd[1803]: time="2025-02-13T15:33:28.312572214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:33:28.312627 containerd[1803]: time="2025-02-13T15:33:28.312592583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:33:28.313294 containerd[1803]: time="2025-02-13T15:33:28.313199583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:33:28.364065 systemd[1]: Started cri-containerd-6b29805abe61a7a245997edf4099a928bd05ff341a7da91b3ae63d8d9a27d578.scope - libcontainer container 6b29805abe61a7a245997edf4099a928bd05ff341a7da91b3ae63d8d9a27d578.
Feb 13 15:33:28.400687 kubelet[2282]: E0213 15:33:28.400654    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:28.410049 containerd[1803]: time="2025-02-13T15:33:28.409914875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffcss,Uid:4a01675b-685a-4f87-ad31-e96e33f31745,Namespace:default,Attempt:0,} returns sandbox id \"6b29805abe61a7a245997edf4099a928bd05ff341a7da91b3ae63d8d9a27d578\""
Feb 13 15:33:28.412931 containerd[1803]: time="2025-02-13T15:33:28.412775325Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 15:33:29.400990 kubelet[2282]: E0213 15:33:29.400902    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:30.401441 kubelet[2282]: E0213 15:33:30.401359    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:31.402764 kubelet[2282]: E0213 15:33:31.402672    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:32.088419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3023583328.mount: Deactivated successfully.
Feb 13 15:33:32.403683 kubelet[2282]: E0213 15:33:32.403647    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:33.404690 kubelet[2282]: E0213 15:33:33.404650    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:33.936490 containerd[1803]: time="2025-02-13T15:33:33.936438038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:33:33.938213 containerd[1803]: time="2025-02-13T15:33:33.938023959Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493"
Feb 13 15:33:33.940227 containerd[1803]: time="2025-02-13T15:33:33.939652442Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:33:33.943274 containerd[1803]: time="2025-02-13T15:33:33.943230016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:33:33.946021 containerd[1803]: time="2025-02-13T15:33:33.945972469Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 5.533106837s"
Feb 13 15:33:33.946435 containerd[1803]: time="2025-02-13T15:33:33.946090798Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 15:33:33.949462 containerd[1803]: time="2025-02-13T15:33:33.949416466Z" level=info msg="CreateContainer within sandbox \"6b29805abe61a7a245997edf4099a928bd05ff341a7da91b3ae63d8d9a27d578\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 13 15:33:33.967340 containerd[1803]: time="2025-02-13T15:33:33.967295270Z" level=info msg="CreateContainer within sandbox \"6b29805abe61a7a245997edf4099a928bd05ff341a7da91b3ae63d8d9a27d578\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f2e63f0a591b9e45321b1fc6bb347f4a323c52f358569ef6a52f6160df1c401b\""
Feb 13 15:33:33.967944 containerd[1803]: time="2025-02-13T15:33:33.967896223Z" level=info msg="StartContainer for \"f2e63f0a591b9e45321b1fc6bb347f4a323c52f358569ef6a52f6160df1c401b\""
Feb 13 15:33:34.019149 systemd[1]: Started cri-containerd-f2e63f0a591b9e45321b1fc6bb347f4a323c52f358569ef6a52f6160df1c401b.scope - libcontainer container f2e63f0a591b9e45321b1fc6bb347f4a323c52f358569ef6a52f6160df1c401b.
Feb 13 15:33:34.063534 containerd[1803]: time="2025-02-13T15:33:34.061151455Z" level=info msg="StartContainer for \"f2e63f0a591b9e45321b1fc6bb347f4a323c52f358569ef6a52f6160df1c401b\" returns successfully"
Feb 13 15:33:34.406111 kubelet[2282]: E0213 15:33:34.405570    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:34.852831 kubelet[2282]: I0213 15:33:34.852601    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-ffcss" podStartSLOduration=11.316921097 podStartE2EDuration="16.852583061s" podCreationTimestamp="2025-02-13 15:33:18 +0000 UTC" firstStartedPulling="2025-02-13 15:33:28.412304028 +0000 UTC m=+29.907293834" lastFinishedPulling="2025-02-13 15:33:33.947965992 +0000 UTC m=+35.442955798" observedRunningTime="2025-02-13 15:33:34.851677735 +0000 UTC m=+36.346667540" watchObservedRunningTime="2025-02-13 15:33:34.852583061 +0000 UTC m=+36.347572886"
Feb 13 15:33:35.406307 kubelet[2282]: E0213 15:33:35.406244    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:36.406898 kubelet[2282]: E0213 15:33:36.406838    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:37.407955 kubelet[2282]: E0213 15:33:37.407893    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:38.408503 kubelet[2282]: E0213 15:33:38.408443    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:38.824971 systemd[1]: Created slice kubepods-besteffort-podcf2a5615_0fe1_4a28_987a_3dfe8f320f82.slice - libcontainer container kubepods-besteffort-podcf2a5615_0fe1_4a28_987a_3dfe8f320f82.slice.
Feb 13 15:33:38.905449 kubelet[2282]: I0213 15:33:38.905411    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf6bt\" (UniqueName: \"kubernetes.io/projected/cf2a5615-0fe1-4a28-987a-3dfe8f320f82-kube-api-access-bf6bt\") pod \"nfs-server-provisioner-0\" (UID: \"cf2a5615-0fe1-4a28-987a-3dfe8f320f82\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:33:38.905617 kubelet[2282]: I0213 15:33:38.905462    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/cf2a5615-0fe1-4a28-987a-3dfe8f320f82-data\") pod \"nfs-server-provisioner-0\" (UID: \"cf2a5615-0fe1-4a28-987a-3dfe8f320f82\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:33:39.129766 containerd[1803]: time="2025-02-13T15:33:39.128970334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:cf2a5615-0fe1-4a28-987a-3dfe8f320f82,Namespace:default,Attempt:0,}"
Feb 13 15:33:39.223421 systemd-networkd[1649]: lxcf52b23eda24d: Link UP
Feb 13 15:33:39.233667 (udev-worker)[3567]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:33:39.240041 kernel: eth0: renamed from tmp527cf
Feb 13 15:33:39.250377 systemd-networkd[1649]: lxcf52b23eda24d: Gained carrier
Feb 13 15:33:39.368296 kubelet[2282]: E0213 15:33:39.368238    2282 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:39.408860 kubelet[2282]: E0213 15:33:39.408790    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:39.578861 containerd[1803]: time="2025-02-13T15:33:39.577723531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:33:39.578861 containerd[1803]: time="2025-02-13T15:33:39.577901963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:33:39.578861 containerd[1803]: time="2025-02-13T15:33:39.577927848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:33:39.578861 containerd[1803]: time="2025-02-13T15:33:39.578197592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:33:39.629106 systemd[1]: Started cri-containerd-527cfcd4185dd5270286e1093695be7701ac2368c72a5d1df183435e2d308daf.scope - libcontainer container 527cfcd4185dd5270286e1093695be7701ac2368c72a5d1df183435e2d308daf.
Feb 13 15:33:39.680700 containerd[1803]: time="2025-02-13T15:33:39.680539176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:cf2a5615-0fe1-4a28-987a-3dfe8f320f82,Namespace:default,Attempt:0,} returns sandbox id \"527cfcd4185dd5270286e1093695be7701ac2368c72a5d1df183435e2d308daf\""
Feb 13 15:33:39.688622 containerd[1803]: time="2025-02-13T15:33:39.688577778Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 15:33:40.322008 systemd-networkd[1649]: lxcf52b23eda24d: Gained IPv6LL
Feb 13 15:33:40.410845 kubelet[2282]: E0213 15:33:40.410419    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:41.411636 kubelet[2282]: E0213 15:33:41.411569    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:42.412222 kubelet[2282]: E0213 15:33:42.412155    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:42.425892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3328008562.mount: Deactivated successfully.
Feb 13 15:33:43.129104 ntpd[1781]: Listen normally on 13 lxcf52b23eda24d [fe80::d07a:23ff:fea4:314f%11]:123
Feb 13 15:33:43.131948 ntpd[1781]: 13 Feb 15:33:43 ntpd[1781]: Listen normally on 13 lxcf52b23eda24d [fe80::d07a:23ff:fea4:314f%11]:123
Feb 13 15:33:43.414267 kubelet[2282]: E0213 15:33:43.412890    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:44.415489 kubelet[2282]: E0213 15:33:44.415258    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:45.364638 containerd[1803]: time="2025-02-13T15:33:45.364582220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:33:45.366230 containerd[1803]: time="2025-02-13T15:33:45.366038339Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Feb 13 15:33:45.368327 containerd[1803]: time="2025-02-13T15:33:45.367667749Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:33:45.371849 containerd[1803]: time="2025-02-13T15:33:45.371041315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:33:45.382164 containerd[1803]: time="2025-02-13T15:33:45.379972574Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.691346409s"
Feb 13 15:33:45.382164 containerd[1803]: time="2025-02-13T15:33:45.380027105Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 13 15:33:45.400623 containerd[1803]: time="2025-02-13T15:33:45.400559526Z" level=info msg="CreateContainer within sandbox \"527cfcd4185dd5270286e1093695be7701ac2368c72a5d1df183435e2d308daf\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 15:33:45.416116 kubelet[2282]: E0213 15:33:45.416049    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:45.448267 containerd[1803]: time="2025-02-13T15:33:45.447790970Z" level=info msg="CreateContainer within sandbox \"527cfcd4185dd5270286e1093695be7701ac2368c72a5d1df183435e2d308daf\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9bb4a6a1f7a0b25fbf7171eca256d85ce1438b58d4299e1db5df4dff9bdc26de\""
Feb 13 15:33:45.455253 containerd[1803]: time="2025-02-13T15:33:45.451289570Z" level=info msg="StartContainer for \"9bb4a6a1f7a0b25fbf7171eca256d85ce1438b58d4299e1db5df4dff9bdc26de\""
Feb 13 15:33:45.525920 systemd[1]: Started cri-containerd-9bb4a6a1f7a0b25fbf7171eca256d85ce1438b58d4299e1db5df4dff9bdc26de.scope - libcontainer container 9bb4a6a1f7a0b25fbf7171eca256d85ce1438b58d4299e1db5df4dff9bdc26de.
Feb 13 15:33:45.582408 containerd[1803]: time="2025-02-13T15:33:45.582154188Z" level=info msg="StartContainer for \"9bb4a6a1f7a0b25fbf7171eca256d85ce1438b58d4299e1db5df4dff9bdc26de\" returns successfully"
Feb 13 15:33:46.417111 kubelet[2282]: E0213 15:33:46.417051    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:47.418123 kubelet[2282]: E0213 15:33:47.418065    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:48.418957 kubelet[2282]: E0213 15:33:48.418901    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:49.419071 kubelet[2282]: E0213 15:33:49.419014    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:50.420034 kubelet[2282]: E0213 15:33:50.419980    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:51.421157 kubelet[2282]: E0213 15:33:51.421065    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:52.421320 kubelet[2282]: E0213 15:33:52.421261    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:53.421457 kubelet[2282]: E0213 15:33:53.421395    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:54.421662 kubelet[2282]: E0213 15:33:54.421605    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:54.961394 kubelet[2282]: I0213 15:33:54.961331    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.260626327 podStartE2EDuration="16.96131412s" podCreationTimestamp="2025-02-13 15:33:38 +0000 UTC" firstStartedPulling="2025-02-13 15:33:39.688191974 +0000 UTC m=+41.183181790" lastFinishedPulling="2025-02-13 15:33:45.388879769 +0000 UTC m=+46.883869583" observedRunningTime="2025-02-13 15:33:45.903479021 +0000 UTC m=+47.398468847" watchObservedRunningTime="2025-02-13 15:33:54.96131412 +0000 UTC m=+56.456303941"
Feb 13 15:33:54.968471 systemd[1]: Created slice kubepods-besteffort-pod35006475_a71b_411f_8283_38989c06f3f4.slice - libcontainer container kubepods-besteffort-pod35006475_a71b_411f_8283_38989c06f3f4.slice.
Feb 13 15:33:55.030182 kubelet[2282]: I0213 15:33:55.030123    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6030ffa5-190b-4437-92e0-9feb2b7690fb\" (UniqueName: \"kubernetes.io/nfs/35006475-a71b-411f-8283-38989c06f3f4-pvc-6030ffa5-190b-4437-92e0-9feb2b7690fb\") pod \"test-pod-1\" (UID: \"35006475-a71b-411f-8283-38989c06f3f4\") " pod="default/test-pod-1"
Feb 13 15:33:55.030182 kubelet[2282]: I0213 15:33:55.030178    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxbp\" (UniqueName: \"kubernetes.io/projected/35006475-a71b-411f-8283-38989c06f3f4-kube-api-access-mzxbp\") pod \"test-pod-1\" (UID: \"35006475-a71b-411f-8283-38989c06f3f4\") " pod="default/test-pod-1"
Feb 13 15:33:55.212882 kernel: FS-Cache: Loaded
Feb 13 15:33:55.294098 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 15:33:55.294250 kernel: RPC: Registered udp transport module.
Feb 13 15:33:55.294278 kernel: RPC: Registered tcp transport module.
Feb 13 15:33:55.294309 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 15:33:55.295578 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 15:33:55.438107 kubelet[2282]: E0213 15:33:55.422431    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:55.618254 kernel: NFS: Registering the id_resolver key type
Feb 13 15:33:55.618358 kernel: Key type id_resolver registered
Feb 13 15:33:55.618389 kernel: Key type id_legacy registered
Feb 13 15:33:55.657207 nfsidmap[3754]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 13 15:33:55.661641 nfsidmap[3755]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 13 15:33:55.872725 containerd[1803]: time="2025-02-13T15:33:55.872659943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:35006475-a71b-411f-8283-38989c06f3f4,Namespace:default,Attempt:0,}"
Feb 13 15:33:55.916866 systemd-networkd[1649]: lxc4b2534939852: Link UP
Feb 13 15:33:55.927845 kernel: eth0: renamed from tmp2ad54
Feb 13 15:33:55.934309 (udev-worker)[3749]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:33:55.936475 systemd-networkd[1649]: lxc4b2534939852: Gained carrier
Feb 13 15:33:56.176074 containerd[1803]: time="2025-02-13T15:33:56.175785285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:33:56.176074 containerd[1803]: time="2025-02-13T15:33:56.175954937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:33:56.176698 containerd[1803]: time="2025-02-13T15:33:56.175979199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:33:56.177579 containerd[1803]: time="2025-02-13T15:33:56.177521561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:33:56.230445 systemd[1]: Started cri-containerd-2ad54bb26841a766b572c21666f6d15abb41db9015e2471143a9ed81b57303f7.scope - libcontainer container 2ad54bb26841a766b572c21666f6d15abb41db9015e2471143a9ed81b57303f7.
Feb 13 15:33:56.279856 containerd[1803]: time="2025-02-13T15:33:56.279695232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:35006475-a71b-411f-8283-38989c06f3f4,Namespace:default,Attempt:0,} returns sandbox id \"2ad54bb26841a766b572c21666f6d15abb41db9015e2471143a9ed81b57303f7\""
Feb 13 15:33:56.288762 containerd[1803]: time="2025-02-13T15:33:56.288724109Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 15:33:56.422834 kubelet[2282]: E0213 15:33:56.422755    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:56.688894 containerd[1803]: time="2025-02-13T15:33:56.688844840Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:33:56.691118 containerd[1803]: time="2025-02-13T15:33:56.691039693Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 15:33:56.695832 containerd[1803]: time="2025-02-13T15:33:56.695689118Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 406.92349ms"
Feb 13 15:33:56.695832 containerd[1803]: time="2025-02-13T15:33:56.695808876Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 15:33:56.698511 containerd[1803]: time="2025-02-13T15:33:56.698476643Z" level=info msg="CreateContainer within sandbox \"2ad54bb26841a766b572c21666f6d15abb41db9015e2471143a9ed81b57303f7\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 15:33:56.717998 containerd[1803]: time="2025-02-13T15:33:56.717950444Z" level=info msg="CreateContainer within sandbox \"2ad54bb26841a766b572c21666f6d15abb41db9015e2471143a9ed81b57303f7\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"8a504a2c70be758e80a9d3b2f4e0e56f1a087ddfc77e7005532ae16cd308b19b\""
Feb 13 15:33:56.718690 containerd[1803]: time="2025-02-13T15:33:56.718534551Z" level=info msg="StartContainer for \"8a504a2c70be758e80a9d3b2f4e0e56f1a087ddfc77e7005532ae16cd308b19b\""
Feb 13 15:33:56.761050 systemd[1]: Started cri-containerd-8a504a2c70be758e80a9d3b2f4e0e56f1a087ddfc77e7005532ae16cd308b19b.scope - libcontainer container 8a504a2c70be758e80a9d3b2f4e0e56f1a087ddfc77e7005532ae16cd308b19b.
Feb 13 15:33:56.818673 containerd[1803]: time="2025-02-13T15:33:56.818627164Z" level=info msg="StartContainer for \"8a504a2c70be758e80a9d3b2f4e0e56f1a087ddfc77e7005532ae16cd308b19b\" returns successfully"
Feb 13 15:33:56.933029 kubelet[2282]: I0213 15:33:56.932961    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.520823721 podStartE2EDuration="17.932944541s" podCreationTimestamp="2025-02-13 15:33:39 +0000 UTC" firstStartedPulling="2025-02-13 15:33:56.284416028 +0000 UTC m=+57.779405833" lastFinishedPulling="2025-02-13 15:33:56.696536826 +0000 UTC m=+58.191526653" observedRunningTime="2025-02-13 15:33:56.932001855 +0000 UTC m=+58.426991681" watchObservedRunningTime="2025-02-13 15:33:56.932944541 +0000 UTC m=+58.427934366"
Feb 13 15:33:57.154183 systemd-networkd[1649]: lxc4b2534939852: Gained IPv6LL
Feb 13 15:33:57.424005 kubelet[2282]: E0213 15:33:57.423505    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:58.423966 kubelet[2282]: E0213 15:33:58.423907    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:59.368564 kubelet[2282]: E0213 15:33:59.368505    2282 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:33:59.425080 kubelet[2282]: E0213 15:33:59.425039    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:00.128064 ntpd[1781]: Listen normally on 14 lxc4b2534939852 [fe80::9035:72ff:feba:c46a%13]:123
Feb 13 15:34:00.128569 ntpd[1781]: 13 Feb 15:34:00 ntpd[1781]: Listen normally on 14 lxc4b2534939852 [fe80::9035:72ff:feba:c46a%13]:123
Feb 13 15:34:00.425485 kubelet[2282]: E0213 15:34:00.425418    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:01.425858 kubelet[2282]: E0213 15:34:01.425765    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:02.426982 kubelet[2282]: E0213 15:34:02.426929    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:03.427201 kubelet[2282]: E0213 15:34:03.427088    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:04.427696 kubelet[2282]: E0213 15:34:04.427633    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:05.428568 kubelet[2282]: E0213 15:34:05.428448    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:05.714567 systemd[1]: run-containerd-runc-k8s.io-92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53-runc.CbupR6.mount: Deactivated successfully.
Feb 13 15:34:05.764673 containerd[1803]: time="2025-02-13T15:34:05.764612000Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:34:05.786418 containerd[1803]: time="2025-02-13T15:34:05.786297565Z" level=info msg="StopContainer for \"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53\" with timeout 2 (s)"
Feb 13 15:34:05.786805 containerd[1803]: time="2025-02-13T15:34:05.786774784Z" level=info msg="Stop container \"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53\" with signal terminated"
Feb 13 15:34:05.805357 systemd-networkd[1649]: lxc_health: Link DOWN
Feb 13 15:34:05.805368 systemd-networkd[1649]: lxc_health: Lost carrier
Feb 13 15:34:05.849954 systemd[1]: cri-containerd-92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53.scope: Deactivated successfully.
Feb 13 15:34:05.850269 systemd[1]: cri-containerd-92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53.scope: Consumed 8.684s CPU time.
Feb 13 15:34:05.893514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53-rootfs.mount: Deactivated successfully.
Feb 13 15:34:06.098532 containerd[1803]: time="2025-02-13T15:34:06.097542903Z" level=info msg="shim disconnected" id=92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53 namespace=k8s.io
Feb 13 15:34:06.098532 containerd[1803]: time="2025-02-13T15:34:06.097619607Z" level=warning msg="cleaning up after shim disconnected" id=92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53 namespace=k8s.io
Feb 13 15:34:06.098532 containerd[1803]: time="2025-02-13T15:34:06.097632829Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:06.122079 containerd[1803]: time="2025-02-13T15:34:06.122033644Z" level=info msg="StopContainer for \"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53\" returns successfully"
Feb 13 15:34:06.123183 containerd[1803]: time="2025-02-13T15:34:06.123141949Z" level=info msg="StopPodSandbox for \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\""
Feb 13 15:34:06.123302 containerd[1803]: time="2025-02-13T15:34:06.123190395Z" level=info msg="Container to stop \"6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:34:06.123302 containerd[1803]: time="2025-02-13T15:34:06.123236889Z" level=info msg="Container to stop \"8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:34:06.123302 containerd[1803]: time="2025-02-13T15:34:06.123252136Z" level=info msg="Container to stop \"b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:34:06.123302 containerd[1803]: time="2025-02-13T15:34:06.123265538Z" level=info msg="Container to stop \"6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:34:06.123302 containerd[1803]: time="2025-02-13T15:34:06.123277730Z" level=info msg="Container to stop \"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:34:06.126837 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b-shm.mount: Deactivated successfully.
Feb 13 15:34:06.137798 systemd[1]: cri-containerd-c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b.scope: Deactivated successfully.
Feb 13 15:34:06.170626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b-rootfs.mount: Deactivated successfully.
Feb 13 15:34:06.180651 containerd[1803]: time="2025-02-13T15:34:06.180593974Z" level=info msg="shim disconnected" id=c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b namespace=k8s.io
Feb 13 15:34:06.180651 containerd[1803]: time="2025-02-13T15:34:06.180634959Z" level=warning msg="cleaning up after shim disconnected" id=c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b namespace=k8s.io
Feb 13 15:34:06.180651 containerd[1803]: time="2025-02-13T15:34:06.180648298Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:06.220189 containerd[1803]: time="2025-02-13T15:34:06.220003197Z" level=info msg="TearDown network for sandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" successfully"
Feb 13 15:34:06.220189 containerd[1803]: time="2025-02-13T15:34:06.220043660Z" level=info msg="StopPodSandbox for \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" returns successfully"
Feb 13 15:34:06.320062 kubelet[2282]: I0213 15:34:06.320020    2282 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-hostproc\") pod \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") "
Feb 13 15:34:06.320062 kubelet[2282]: I0213 15:34:06.320063    2282 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-etc-cni-netd\") pod \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") "
Feb 13 15:34:06.320304 kubelet[2282]: I0213 15:34:06.320086    2282 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-lib-modules\") pod \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") "
Feb 13 15:34:06.320304 kubelet[2282]: I0213 15:34:06.320119    2282 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/583be3f0-8721-4ffd-9143-ffe6b61ebc63-clustermesh-secrets\") pod \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") "
Feb 13 15:34:06.320304 kubelet[2282]: I0213 15:34:06.320142    2282 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-host-proc-sys-net\") pod \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") "
Feb 13 15:34:06.320304 kubelet[2282]: I0213 15:34:06.320163    2282 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cilium-cgroup\") pod \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") "
Feb 13 15:34:06.320304 kubelet[2282]: I0213 15:34:06.320185    2282 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-xtables-lock\") pod \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") "
Feb 13 15:34:06.320304 kubelet[2282]: I0213 15:34:06.320216    2282 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-host-proc-sys-kernel\") pod \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") "
Feb 13 15:34:06.320632 kubelet[2282]: I0213 15:34:06.320246    2282 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cni-path\") pod \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") "
Feb 13 15:34:06.320632 kubelet[2282]: I0213 15:34:06.320274    2282 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42f6q\" (UniqueName: \"kubernetes.io/projected/583be3f0-8721-4ffd-9143-ffe6b61ebc63-kube-api-access-42f6q\") pod \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") "
Feb 13 15:34:06.320632 kubelet[2282]: I0213 15:34:06.320300    2282 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/583be3f0-8721-4ffd-9143-ffe6b61ebc63-hubble-tls\") pod \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") "
Feb 13 15:34:06.320632 kubelet[2282]: I0213 15:34:06.320327    2282 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cilium-run\") pod \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") "
Feb 13 15:34:06.320632 kubelet[2282]: I0213 15:34:06.320350    2282 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-bpf-maps\") pod \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") "
Feb 13 15:34:06.320632 kubelet[2282]: I0213 15:34:06.320388    2282 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cilium-config-path\") pod \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\" (UID: \"583be3f0-8721-4ffd-9143-ffe6b61ebc63\") "
Feb 13 15:34:06.323283 kubelet[2282]: I0213 15:34:06.319983    2282 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-hostproc" (OuterVolumeSpecName: "hostproc") pod "583be3f0-8721-4ffd-9143-ffe6b61ebc63" (UID: "583be3f0-8721-4ffd-9143-ffe6b61ebc63"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:06.323283 kubelet[2282]: I0213 15:34:06.321945    2282 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "583be3f0-8721-4ffd-9143-ffe6b61ebc63" (UID: "583be3f0-8721-4ffd-9143-ffe6b61ebc63"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:06.323283 kubelet[2282]: I0213 15:34:06.321983    2282 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cni-path" (OuterVolumeSpecName: "cni-path") pod "583be3f0-8721-4ffd-9143-ffe6b61ebc63" (UID: "583be3f0-8721-4ffd-9143-ffe6b61ebc63"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:06.323283 kubelet[2282]: I0213 15:34:06.321984    2282 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "583be3f0-8721-4ffd-9143-ffe6b61ebc63" (UID: "583be3f0-8721-4ffd-9143-ffe6b61ebc63"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:06.323283 kubelet[2282]: I0213 15:34:06.322002    2282 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "583be3f0-8721-4ffd-9143-ffe6b61ebc63" (UID: "583be3f0-8721-4ffd-9143-ffe6b61ebc63"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:06.324003 kubelet[2282]: I0213 15:34:06.323892    2282 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "583be3f0-8721-4ffd-9143-ffe6b61ebc63" (UID: "583be3f0-8721-4ffd-9143-ffe6b61ebc63"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:06.324003 kubelet[2282]: I0213 15:34:06.323970    2282 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "583be3f0-8721-4ffd-9143-ffe6b61ebc63" (UID: "583be3f0-8721-4ffd-9143-ffe6b61ebc63"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:06.324410 kubelet[2282]: I0213 15:34:06.321868    2282 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "583be3f0-8721-4ffd-9143-ffe6b61ebc63" (UID: "583be3f0-8721-4ffd-9143-ffe6b61ebc63"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:06.324580 kubelet[2282]: I0213 15:34:06.324534    2282 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "583be3f0-8721-4ffd-9143-ffe6b61ebc63" (UID: "583be3f0-8721-4ffd-9143-ffe6b61ebc63"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:06.324911 kubelet[2282]: I0213 15:34:06.324700    2282 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "583be3f0-8721-4ffd-9143-ffe6b61ebc63" (UID: "583be3f0-8721-4ffd-9143-ffe6b61ebc63"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:06.330776 kubelet[2282]: I0213 15:34:06.330730    2282 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/583be3f0-8721-4ffd-9143-ffe6b61ebc63-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "583be3f0-8721-4ffd-9143-ffe6b61ebc63" (UID: "583be3f0-8721-4ffd-9143-ffe6b61ebc63"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:34:06.331094 kubelet[2282]: I0213 15:34:06.330982    2282 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "583be3f0-8721-4ffd-9143-ffe6b61ebc63" (UID: "583be3f0-8721-4ffd-9143-ffe6b61ebc63"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:34:06.331557 kubelet[2282]: I0213 15:34:06.331529    2282 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/583be3f0-8721-4ffd-9143-ffe6b61ebc63-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "583be3f0-8721-4ffd-9143-ffe6b61ebc63" (UID: "583be3f0-8721-4ffd-9143-ffe6b61ebc63"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:34:06.332189 kubelet[2282]: I0213 15:34:06.332154    2282 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/583be3f0-8721-4ffd-9143-ffe6b61ebc63-kube-api-access-42f6q" (OuterVolumeSpecName: "kube-api-access-42f6q") pod "583be3f0-8721-4ffd-9143-ffe6b61ebc63" (UID: "583be3f0-8721-4ffd-9143-ffe6b61ebc63"). InnerVolumeSpecName "kube-api-access-42f6q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:34:06.421617 kubelet[2282]: I0213 15:34:06.421155    2282 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-xtables-lock\") on node \"172.31.29.108\" DevicePath \"\""
Feb 13 15:34:06.421888 kubelet[2282]: I0213 15:34:06.421629    2282 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-host-proc-sys-kernel\") on node \"172.31.29.108\" DevicePath \"\""
Feb 13 15:34:06.421888 kubelet[2282]: I0213 15:34:06.421652    2282 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-host-proc-sys-net\") on node \"172.31.29.108\" DevicePath \"\""
Feb 13 15:34:06.421888 kubelet[2282]: I0213 15:34:06.421666    2282 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cilium-cgroup\") on node \"172.31.29.108\" DevicePath \"\""
Feb 13 15:34:06.421888 kubelet[2282]: I0213 15:34:06.421679    2282 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cilium-run\") on node \"172.31.29.108\" DevicePath \"\""
Feb 13 15:34:06.421888 kubelet[2282]: I0213 15:34:06.421690    2282 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-bpf-maps\") on node \"172.31.29.108\" DevicePath \"\""
Feb 13 15:34:06.421888 kubelet[2282]: I0213 15:34:06.421703    2282 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cilium-config-path\") on node \"172.31.29.108\" DevicePath \"\""
Feb 13 15:34:06.421888 kubelet[2282]: I0213 15:34:06.421714    2282 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-cni-path\") on node \"172.31.29.108\" DevicePath \"\""
Feb 13 15:34:06.421888 kubelet[2282]: I0213 15:34:06.421727    2282 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-42f6q\" (UniqueName: \"kubernetes.io/projected/583be3f0-8721-4ffd-9143-ffe6b61ebc63-kube-api-access-42f6q\") on node \"172.31.29.108\" DevicePath \"\""
Feb 13 15:34:06.422144 kubelet[2282]: I0213 15:34:06.421739    2282 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/583be3f0-8721-4ffd-9143-ffe6b61ebc63-hubble-tls\") on node \"172.31.29.108\" DevicePath \"\""
Feb 13 15:34:06.422144 kubelet[2282]: I0213 15:34:06.421751    2282 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/583be3f0-8721-4ffd-9143-ffe6b61ebc63-clustermesh-secrets\") on node \"172.31.29.108\" DevicePath \"\""
Feb 13 15:34:06.422144 kubelet[2282]: I0213 15:34:06.421762    2282 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-hostproc\") on node \"172.31.29.108\" DevicePath \"\""
Feb 13 15:34:06.422144 kubelet[2282]: I0213 15:34:06.421772    2282 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-etc-cni-netd\") on node \"172.31.29.108\" DevicePath \"\""
Feb 13 15:34:06.422144 kubelet[2282]: I0213 15:34:06.421784    2282 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/583be3f0-8721-4ffd-9143-ffe6b61ebc63-lib-modules\") on node \"172.31.29.108\" DevicePath \"\""
Feb 13 15:34:06.429278 kubelet[2282]: E0213 15:34:06.429214    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:06.706984 systemd[1]: var-lib-kubelet-pods-583be3f0\x2d8721\x2d4ffd\x2d9143\x2dffe6b61ebc63-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d42f6q.mount: Deactivated successfully.
Feb 13 15:34:06.707123 systemd[1]: var-lib-kubelet-pods-583be3f0\x2d8721\x2d4ffd\x2d9143\x2dffe6b61ebc63-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:34:06.707219 systemd[1]: var-lib-kubelet-pods-583be3f0\x2d8721\x2d4ffd\x2d9143\x2dffe6b61ebc63-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 15:34:06.942435 kubelet[2282]: I0213 15:34:06.942397    2282 scope.go:117] "RemoveContainer" containerID="92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53"
Feb 13 15:34:06.949675 containerd[1803]: time="2025-02-13T15:34:06.947580561Z" level=info msg="RemoveContainer for \"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53\""
Feb 13 15:34:06.956151 containerd[1803]: time="2025-02-13T15:34:06.956077071Z" level=info msg="RemoveContainer for \"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53\" returns successfully"
Feb 13 15:34:06.958432 kubelet[2282]: I0213 15:34:06.958325    2282 scope.go:117] "RemoveContainer" containerID="6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d"
Feb 13 15:34:06.960771 containerd[1803]: time="2025-02-13T15:34:06.960730387Z" level=info msg="RemoveContainer for \"6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d\""
Feb 13 15:34:06.962065 systemd[1]: Removed slice kubepods-burstable-pod583be3f0_8721_4ffd_9143_ffe6b61ebc63.slice - libcontainer container kubepods-burstable-pod583be3f0_8721_4ffd_9143_ffe6b61ebc63.slice.
Feb 13 15:34:06.962402 systemd[1]: kubepods-burstable-pod583be3f0_8721_4ffd_9143_ffe6b61ebc63.slice: Consumed 8.790s CPU time.
Feb 13 15:34:06.965354 containerd[1803]: time="2025-02-13T15:34:06.965321570Z" level=info msg="RemoveContainer for \"6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d\" returns successfully"
Feb 13 15:34:06.965649 kubelet[2282]: I0213 15:34:06.965617    2282 scope.go:117] "RemoveContainer" containerID="b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1"
Feb 13 15:34:06.967332 containerd[1803]: time="2025-02-13T15:34:06.967285776Z" level=info msg="RemoveContainer for \"b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1\""
Feb 13 15:34:06.972547 containerd[1803]: time="2025-02-13T15:34:06.972502659Z" level=info msg="RemoveContainer for \"b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1\" returns successfully"
Feb 13 15:34:06.972764 kubelet[2282]: I0213 15:34:06.972741    2282 scope.go:117] "RemoveContainer" containerID="8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1"
Feb 13 15:34:06.974331 containerd[1803]: time="2025-02-13T15:34:06.973987316Z" level=info msg="RemoveContainer for \"8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1\""
Feb 13 15:34:06.977180 containerd[1803]: time="2025-02-13T15:34:06.977138341Z" level=info msg="RemoveContainer for \"8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1\" returns successfully"
Feb 13 15:34:06.977448 kubelet[2282]: I0213 15:34:06.977360    2282 scope.go:117] "RemoveContainer" containerID="6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090"
Feb 13 15:34:06.978836 containerd[1803]: time="2025-02-13T15:34:06.978789686Z" level=info msg="RemoveContainer for \"6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090\""
Feb 13 15:34:06.982923 containerd[1803]: time="2025-02-13T15:34:06.982723825Z" level=info msg="RemoveContainer for \"6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090\" returns successfully"
Feb 13 15:34:06.983230 kubelet[2282]: I0213 15:34:06.983205    2282 scope.go:117] "RemoveContainer" containerID="92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53"
Feb 13 15:34:06.987416 containerd[1803]: time="2025-02-13T15:34:06.987366318Z" level=error msg="ContainerStatus for \"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53\": not found"
Feb 13 15:34:06.987717 kubelet[2282]: E0213 15:34:06.987650    2282 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53\": not found" containerID="92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53"
Feb 13 15:34:06.987848 kubelet[2282]: I0213 15:34:06.987695    2282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53"} err="failed to get container status \"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53\": rpc error: code = NotFound desc = an error occurred when try to find container \"92f5f820ec5d09e26b2217bfc20fef7c8880e90713fbef66491e01f35fb19a53\": not found"
Feb 13 15:34:06.987848 kubelet[2282]: I0213 15:34:06.987803    2282 scope.go:117] "RemoveContainer" containerID="6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d"
Feb 13 15:34:06.989383 containerd[1803]: time="2025-02-13T15:34:06.989336334Z" level=error msg="ContainerStatus for \"6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d\": not found"
Feb 13 15:34:06.990665 kubelet[2282]: E0213 15:34:06.990422    2282 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d\": not found" containerID="6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d"
Feb 13 15:34:06.990665 kubelet[2282]: I0213 15:34:06.990461    2282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d"} err="failed to get container status \"6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e1392c7287a36fa25bd794828a46dbfdeeaef9bd2cbbe1738319cf74ed8477d\": not found"
Feb 13 15:34:06.990665 kubelet[2282]: I0213 15:34:06.990489    2282 scope.go:117] "RemoveContainer" containerID="b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1"
Feb 13 15:34:06.990847 containerd[1803]: time="2025-02-13T15:34:06.990721930Z" level=error msg="ContainerStatus for \"b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1\": not found"
Feb 13 15:34:06.990895 kubelet[2282]: E0213 15:34:06.990880    2282 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1\": not found" containerID="b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1"
Feb 13 15:34:06.990943 kubelet[2282]: I0213 15:34:06.990906    2282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1"} err="failed to get container status \"b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1714018924942a3b25a805543f24bb074c15fc3667ed6b0d30dde6c87b721a1\": not found"
Feb 13 15:34:06.990943 kubelet[2282]: I0213 15:34:06.990929    2282 scope.go:117] "RemoveContainer" containerID="8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1"
Feb 13 15:34:06.998233 containerd[1803]: time="2025-02-13T15:34:06.993103588Z" level=error msg="ContainerStatus for \"8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1\": not found"
Feb 13 15:34:06.999615 kubelet[2282]: E0213 15:34:06.999559    2282 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1\": not found" containerID="8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1"
Feb 13 15:34:06.999757 kubelet[2282]: I0213 15:34:06.999614    2282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1"} err="failed to get container status \"8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1\": rpc error: code = NotFound desc = an error occurred when try to find container \"8345aedf5dd1e12eddc1da7e108f629a973a248c65fcf75570756113a7210cd1\": not found"
Feb 13 15:34:06.999757 kubelet[2282]: I0213 15:34:06.999648    2282 scope.go:117] "RemoveContainer" containerID="6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090"
Feb 13 15:34:07.004178 containerd[1803]: time="2025-02-13T15:34:07.004112523Z" level=error msg="ContainerStatus for \"6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090\": not found"
Feb 13 15:34:07.004691 kubelet[2282]: E0213 15:34:07.004632    2282 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090\": not found" containerID="6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090"
Feb 13 15:34:07.004861 kubelet[2282]: I0213 15:34:07.004697    2282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090"} err="failed to get container status \"6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090\": rpc error: code = NotFound desc = an error occurred when try to find container \"6304a3599643c355935ef5045e38d201e21259d57e7a5e5344eec822e2173090\": not found"
Feb 13 15:34:07.430054 kubelet[2282]: E0213 15:34:07.429998    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:07.590571 kubelet[2282]: I0213 15:34:07.590514    2282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="583be3f0-8721-4ffd-9143-ffe6b61ebc63" path="/var/lib/kubelet/pods/583be3f0-8721-4ffd-9143-ffe6b61ebc63/volumes"
Feb 13 15:34:08.127918 ntpd[1781]: Deleting interface #11 lxc_health, fe80::480f:9bff:fe86:59f7%7#123, interface stats: received=0, sent=0, dropped=0, active_time=42 secs
Feb 13 15:34:08.128297 ntpd[1781]: 13 Feb 15:34:08 ntpd[1781]: Deleting interface #11 lxc_health, fe80::480f:9bff:fe86:59f7%7#123, interface stats: received=0, sent=0, dropped=0, active_time=42 secs
Feb 13 15:34:08.430557 kubelet[2282]: E0213 15:34:08.430498    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:09.052328 kubelet[2282]: E0213 15:34:09.052279    2282 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="583be3f0-8721-4ffd-9143-ffe6b61ebc63" containerName="mount-cgroup"
Feb 13 15:34:09.052328 kubelet[2282]: E0213 15:34:09.052309    2282 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="583be3f0-8721-4ffd-9143-ffe6b61ebc63" containerName="clean-cilium-state"
Feb 13 15:34:09.052328 kubelet[2282]: E0213 15:34:09.052319    2282 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="583be3f0-8721-4ffd-9143-ffe6b61ebc63" containerName="cilium-agent"
Feb 13 15:34:09.052328 kubelet[2282]: E0213 15:34:09.052329    2282 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="583be3f0-8721-4ffd-9143-ffe6b61ebc63" containerName="apply-sysctl-overwrites"
Feb 13 15:34:09.052328 kubelet[2282]: E0213 15:34:09.052336    2282 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="583be3f0-8721-4ffd-9143-ffe6b61ebc63" containerName="mount-bpf-fs"
Feb 13 15:34:09.052653 kubelet[2282]: I0213 15:34:09.052363    2282 memory_manager.go:354] "RemoveStaleState removing state" podUID="583be3f0-8721-4ffd-9143-ffe6b61ebc63" containerName="cilium-agent"
Feb 13 15:34:09.069968 systemd[1]: Created slice kubepods-besteffort-podadd7bd0c_131c_4f51_b9cf_0909c51cfb6b.slice - libcontainer container kubepods-besteffort-podadd7bd0c_131c_4f51_b9cf_0909c51cfb6b.slice.
Feb 13 15:34:09.122697 systemd[1]: Created slice kubepods-burstable-podde2cfda8_8348_4052_b0b8_ff60e6145189.slice - libcontainer container kubepods-burstable-podde2cfda8_8348_4052_b0b8_ff60e6145189.slice.
Feb 13 15:34:09.141835 kubelet[2282]: I0213 15:34:09.141775    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/add7bd0c-131c-4f51-b9cf-0909c51cfb6b-cilium-config-path\") pod \"cilium-operator-5d85765b45-gdrl4\" (UID: \"add7bd0c-131c-4f51-b9cf-0909c51cfb6b\") " pod="kube-system/cilium-operator-5d85765b45-gdrl4"
Feb 13 15:34:09.142025 kubelet[2282]: I0213 15:34:09.141846    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6jn9\" (UniqueName: \"kubernetes.io/projected/add7bd0c-131c-4f51-b9cf-0909c51cfb6b-kube-api-access-r6jn9\") pod \"cilium-operator-5d85765b45-gdrl4\" (UID: \"add7bd0c-131c-4f51-b9cf-0909c51cfb6b\") " pod="kube-system/cilium-operator-5d85765b45-gdrl4"
Feb 13 15:34:09.243125 kubelet[2282]: I0213 15:34:09.242966    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de2cfda8-8348-4052-b0b8-ff60e6145189-cilium-run\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.243125 kubelet[2282]: I0213 15:34:09.243018    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de2cfda8-8348-4052-b0b8-ff60e6145189-bpf-maps\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.243125 kubelet[2282]: I0213 15:34:09.243058    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de2cfda8-8348-4052-b0b8-ff60e6145189-hostproc\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.243125 kubelet[2282]: I0213 15:34:09.243084    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de2cfda8-8348-4052-b0b8-ff60e6145189-etc-cni-netd\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.243125 kubelet[2282]: I0213 15:34:09.243118    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de2cfda8-8348-4052-b0b8-ff60e6145189-lib-modules\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.243469 kubelet[2282]: I0213 15:34:09.243140    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de2cfda8-8348-4052-b0b8-ff60e6145189-clustermesh-secrets\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.243469 kubelet[2282]: I0213 15:34:09.243163    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4tmf\" (UniqueName: \"kubernetes.io/projected/de2cfda8-8348-4052-b0b8-ff60e6145189-kube-api-access-n4tmf\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.243469 kubelet[2282]: I0213 15:34:09.243203    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de2cfda8-8348-4052-b0b8-ff60e6145189-cni-path\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.243469 kubelet[2282]: I0213 15:34:09.243240    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de2cfda8-8348-4052-b0b8-ff60e6145189-cilium-cgroup\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.243469 kubelet[2282]: I0213 15:34:09.243260    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de2cfda8-8348-4052-b0b8-ff60e6145189-xtables-lock\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.243469 kubelet[2282]: I0213 15:34:09.243282    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de2cfda8-8348-4052-b0b8-ff60e6145189-cilium-config-path\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.243704 kubelet[2282]: I0213 15:34:09.243310    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de2cfda8-8348-4052-b0b8-ff60e6145189-host-proc-sys-net\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.243704 kubelet[2282]: I0213 15:34:09.243334    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de2cfda8-8348-4052-b0b8-ff60e6145189-host-proc-sys-kernel\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.243704 kubelet[2282]: I0213 15:34:09.243357    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de2cfda8-8348-4052-b0b8-ff60e6145189-hubble-tls\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.243704 kubelet[2282]: I0213 15:34:09.243377    2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/de2cfda8-8348-4052-b0b8-ff60e6145189-cilium-ipsec-secrets\") pod \"cilium-xz45x\" (UID: \"de2cfda8-8348-4052-b0b8-ff60e6145189\") " pod="kube-system/cilium-xz45x"
Feb 13 15:34:09.382200 containerd[1803]: time="2025-02-13T15:34:09.381550729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gdrl4,Uid:add7bd0c-131c-4f51-b9cf-0909c51cfb6b,Namespace:kube-system,Attempt:0,}"
Feb 13 15:34:09.408422 containerd[1803]: time="2025-02-13T15:34:09.407911741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:34:09.408422 containerd[1803]: time="2025-02-13T15:34:09.407982687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:34:09.408422 containerd[1803]: time="2025-02-13T15:34:09.408006578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:09.409214 containerd[1803]: time="2025-02-13T15:34:09.409080765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:09.430760 kubelet[2282]: E0213 15:34:09.430716    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:09.433041 systemd[1]: Started cri-containerd-50716c8af8ba41c6392a30cdbb71ae499ad2e9453d904412c9888e743ebd4581.scope - libcontainer container 50716c8af8ba41c6392a30cdbb71ae499ad2e9453d904412c9888e743ebd4581.
Feb 13 15:34:09.437715 containerd[1803]: time="2025-02-13T15:34:09.437666180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xz45x,Uid:de2cfda8-8348-4052-b0b8-ff60e6145189,Namespace:kube-system,Attempt:0,}"
Feb 13 15:34:09.477750 containerd[1803]: time="2025-02-13T15:34:09.477638683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:34:09.478127 containerd[1803]: time="2025-02-13T15:34:09.477956959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:34:09.478127 containerd[1803]: time="2025-02-13T15:34:09.478021551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:09.478764 containerd[1803]: time="2025-02-13T15:34:09.478562213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:09.506140 systemd[1]: Started cri-containerd-2120872ccd76f69aff5a27e19660a7255cc6de8b7a788046d4e9feb77921197d.scope - libcontainer container 2120872ccd76f69aff5a27e19660a7255cc6de8b7a788046d4e9feb77921197d.
Feb 13 15:34:09.516796 containerd[1803]: time="2025-02-13T15:34:09.516736657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gdrl4,Uid:add7bd0c-131c-4f51-b9cf-0909c51cfb6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"50716c8af8ba41c6392a30cdbb71ae499ad2e9453d904412c9888e743ebd4581\""
Feb 13 15:34:09.520248 containerd[1803]: time="2025-02-13T15:34:09.519547836Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 15:34:09.544607 containerd[1803]: time="2025-02-13T15:34:09.544496573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xz45x,Uid:de2cfda8-8348-4052-b0b8-ff60e6145189,Namespace:kube-system,Attempt:0,} returns sandbox id \"2120872ccd76f69aff5a27e19660a7255cc6de8b7a788046d4e9feb77921197d\""
Feb 13 15:34:09.547959 containerd[1803]: time="2025-02-13T15:34:09.547737357Z" level=info msg="CreateContainer within sandbox \"2120872ccd76f69aff5a27e19660a7255cc6de8b7a788046d4e9feb77921197d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:34:09.576288 containerd[1803]: time="2025-02-13T15:34:09.576231566Z" level=info msg="CreateContainer within sandbox \"2120872ccd76f69aff5a27e19660a7255cc6de8b7a788046d4e9feb77921197d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f4e3b17fa94d906a448b94e323eb1bfa0fc1edd8ede7a7a22dc4edc340b14e58\""
Feb 13 15:34:09.585714 containerd[1803]: time="2025-02-13T15:34:09.582661967Z" level=info msg="StartContainer for \"f4e3b17fa94d906a448b94e323eb1bfa0fc1edd8ede7a7a22dc4edc340b14e58\""
Feb 13 15:34:09.586140 kubelet[2282]: E0213 15:34:09.586097    2282 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:34:09.624071 systemd[1]: Started cri-containerd-f4e3b17fa94d906a448b94e323eb1bfa0fc1edd8ede7a7a22dc4edc340b14e58.scope - libcontainer container f4e3b17fa94d906a448b94e323eb1bfa0fc1edd8ede7a7a22dc4edc340b14e58.
Feb 13 15:34:09.660036 containerd[1803]: time="2025-02-13T15:34:09.659991258Z" level=info msg="StartContainer for \"f4e3b17fa94d906a448b94e323eb1bfa0fc1edd8ede7a7a22dc4edc340b14e58\" returns successfully"
Feb 13 15:34:09.687442 systemd[1]: cri-containerd-f4e3b17fa94d906a448b94e323eb1bfa0fc1edd8ede7a7a22dc4edc340b14e58.scope: Deactivated successfully.
Feb 13 15:34:09.732963 containerd[1803]: time="2025-02-13T15:34:09.732895525Z" level=info msg="shim disconnected" id=f4e3b17fa94d906a448b94e323eb1bfa0fc1edd8ede7a7a22dc4edc340b14e58 namespace=k8s.io
Feb 13 15:34:09.732963 containerd[1803]: time="2025-02-13T15:34:09.732967428Z" level=warning msg="cleaning up after shim disconnected" id=f4e3b17fa94d906a448b94e323eb1bfa0fc1edd8ede7a7a22dc4edc340b14e58 namespace=k8s.io
Feb 13 15:34:09.733370 containerd[1803]: time="2025-02-13T15:34:09.732980255Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:09.979036 containerd[1803]: time="2025-02-13T15:34:09.978598526Z" level=info msg="CreateContainer within sandbox \"2120872ccd76f69aff5a27e19660a7255cc6de8b7a788046d4e9feb77921197d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:34:09.999907 containerd[1803]: time="2025-02-13T15:34:09.999841964Z" level=info msg="CreateContainer within sandbox \"2120872ccd76f69aff5a27e19660a7255cc6de8b7a788046d4e9feb77921197d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5bdb827faae52e1b6c4894bfd8760701b10a53715b27fcf0841db3963d227105\""
Feb 13 15:34:10.000763 containerd[1803]: time="2025-02-13T15:34:10.000724230Z" level=info msg="StartContainer for \"5bdb827faae52e1b6c4894bfd8760701b10a53715b27fcf0841db3963d227105\""
Feb 13 15:34:10.037192 systemd[1]: Started cri-containerd-5bdb827faae52e1b6c4894bfd8760701b10a53715b27fcf0841db3963d227105.scope - libcontainer container 5bdb827faae52e1b6c4894bfd8760701b10a53715b27fcf0841db3963d227105.
Feb 13 15:34:10.085025 containerd[1803]: time="2025-02-13T15:34:10.084800987Z" level=info msg="StartContainer for \"5bdb827faae52e1b6c4894bfd8760701b10a53715b27fcf0841db3963d227105\" returns successfully"
Feb 13 15:34:10.113222 systemd[1]: cri-containerd-5bdb827faae52e1b6c4894bfd8760701b10a53715b27fcf0841db3963d227105.scope: Deactivated successfully.
Feb 13 15:34:10.208363 containerd[1803]: time="2025-02-13T15:34:10.208288954Z" level=info msg="shim disconnected" id=5bdb827faae52e1b6c4894bfd8760701b10a53715b27fcf0841db3963d227105 namespace=k8s.io
Feb 13 15:34:10.208363 containerd[1803]: time="2025-02-13T15:34:10.208342691Z" level=warning msg="cleaning up after shim disconnected" id=5bdb827faae52e1b6c4894bfd8760701b10a53715b27fcf0841db3963d227105 namespace=k8s.io
Feb 13 15:34:10.208363 containerd[1803]: time="2025-02-13T15:34:10.208356152Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:10.431131 kubelet[2282]: E0213 15:34:10.430966    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:10.980401 containerd[1803]: time="2025-02-13T15:34:10.980356908Z" level=info msg="CreateContainer within sandbox \"2120872ccd76f69aff5a27e19660a7255cc6de8b7a788046d4e9feb77921197d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:34:11.005013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19633411.mount: Deactivated successfully.
Feb 13 15:34:11.025400 containerd[1803]: time="2025-02-13T15:34:11.024843443Z" level=info msg="CreateContainer within sandbox \"2120872ccd76f69aff5a27e19660a7255cc6de8b7a788046d4e9feb77921197d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8b1d95da9f2f77ccfd2cbdb9a8ca470787ec0d0208c7e365bb76e98bd9c474f2\""
Feb 13 15:34:11.026449 containerd[1803]: time="2025-02-13T15:34:11.026415550Z" level=info msg="StartContainer for \"8b1d95da9f2f77ccfd2cbdb9a8ca470787ec0d0208c7e365bb76e98bd9c474f2\""
Feb 13 15:34:11.087886 systemd[1]: Started cri-containerd-8b1d95da9f2f77ccfd2cbdb9a8ca470787ec0d0208c7e365bb76e98bd9c474f2.scope - libcontainer container 8b1d95da9f2f77ccfd2cbdb9a8ca470787ec0d0208c7e365bb76e98bd9c474f2.
Feb 13 15:34:11.148877 containerd[1803]: time="2025-02-13T15:34:11.148101705Z" level=info msg="StartContainer for \"8b1d95da9f2f77ccfd2cbdb9a8ca470787ec0d0208c7e365bb76e98bd9c474f2\" returns successfully"
Feb 13 15:34:11.162130 systemd[1]: cri-containerd-8b1d95da9f2f77ccfd2cbdb9a8ca470787ec0d0208c7e365bb76e98bd9c474f2.scope: Deactivated successfully.
Feb 13 15:34:11.238604 containerd[1803]: time="2025-02-13T15:34:11.238346975Z" level=info msg="shim disconnected" id=8b1d95da9f2f77ccfd2cbdb9a8ca470787ec0d0208c7e365bb76e98bd9c474f2 namespace=k8s.io
Feb 13 15:34:11.238604 containerd[1803]: time="2025-02-13T15:34:11.238408110Z" level=warning msg="cleaning up after shim disconnected" id=8b1d95da9f2f77ccfd2cbdb9a8ca470787ec0d0208c7e365bb76e98bd9c474f2 namespace=k8s.io
Feb 13 15:34:11.238604 containerd[1803]: time="2025-02-13T15:34:11.238421473Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:11.433257 kubelet[2282]: E0213 15:34:11.432879    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:11.840784 kubelet[2282]: I0213 15:34:11.840643    2282 setters.go:600] "Node became not ready" node="172.31.29.108" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:34:11Z","lastTransitionTime":"2025-02-13T15:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:34:11.996956 containerd[1803]: time="2025-02-13T15:34:11.996908940Z" level=info msg="CreateContainer within sandbox \"2120872ccd76f69aff5a27e19660a7255cc6de8b7a788046d4e9feb77921197d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:34:12.018760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount176313395.mount: Deactivated successfully.
Feb 13 15:34:12.026156 containerd[1803]: time="2025-02-13T15:34:12.026108933Z" level=info msg="CreateContainer within sandbox \"2120872ccd76f69aff5a27e19660a7255cc6de8b7a788046d4e9feb77921197d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"83266ab79f95d22520fa2dfd9289a5d777fea21915236fbbc8170f75a666f617\""
Feb 13 15:34:12.027033 containerd[1803]: time="2025-02-13T15:34:12.026873973Z" level=info msg="StartContainer for \"83266ab79f95d22520fa2dfd9289a5d777fea21915236fbbc8170f75a666f617\""
Feb 13 15:34:12.084706 systemd[1]: Started cri-containerd-83266ab79f95d22520fa2dfd9289a5d777fea21915236fbbc8170f75a666f617.scope - libcontainer container 83266ab79f95d22520fa2dfd9289a5d777fea21915236fbbc8170f75a666f617.
Feb 13 15:34:12.136127 systemd[1]: cri-containerd-83266ab79f95d22520fa2dfd9289a5d777fea21915236fbbc8170f75a666f617.scope: Deactivated successfully.
Feb 13 15:34:12.142355 containerd[1803]: time="2025-02-13T15:34:12.141594409Z" level=info msg="StartContainer for \"83266ab79f95d22520fa2dfd9289a5d777fea21915236fbbc8170f75a666f617\" returns successfully"
Feb 13 15:34:12.276292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83266ab79f95d22520fa2dfd9289a5d777fea21915236fbbc8170f75a666f617-rootfs.mount: Deactivated successfully.
Feb 13 15:34:12.300847 containerd[1803]: time="2025-02-13T15:34:12.299948545Z" level=info msg="shim disconnected" id=83266ab79f95d22520fa2dfd9289a5d777fea21915236fbbc8170f75a666f617 namespace=k8s.io
Feb 13 15:34:12.300847 containerd[1803]: time="2025-02-13T15:34:12.300012809Z" level=warning msg="cleaning up after shim disconnected" id=83266ab79f95d22520fa2dfd9289a5d777fea21915236fbbc8170f75a666f617 namespace=k8s.io
Feb 13 15:34:12.300847 containerd[1803]: time="2025-02-13T15:34:12.300024133Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:12.339675 containerd[1803]: time="2025-02-13T15:34:12.339626707Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:34:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:34:12.345669 containerd[1803]: time="2025-02-13T15:34:12.345620401Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:34:12.347115 containerd[1803]: time="2025-02-13T15:34:12.346884420Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Feb 13 15:34:12.348561 containerd[1803]: time="2025-02-13T15:34:12.348319900Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:34:12.349955 containerd[1803]: time="2025-02-13T15:34:12.349897811Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.830301041s"
Feb 13 15:34:12.350053 containerd[1803]: time="2025-02-13T15:34:12.349960957Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 13 15:34:12.352913 containerd[1803]: time="2025-02-13T15:34:12.352883045Z" level=info msg="CreateContainer within sandbox \"50716c8af8ba41c6392a30cdbb71ae499ad2e9453d904412c9888e743ebd4581\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 15:34:12.379841 containerd[1803]: time="2025-02-13T15:34:12.377190562Z" level=info msg="CreateContainer within sandbox \"50716c8af8ba41c6392a30cdbb71ae499ad2e9453d904412c9888e743ebd4581\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"983236d60b2f3fdf73dc0dd7858fa1f2b923d6b8608d016d404d0acfff3ae7d9\""
Feb 13 15:34:12.380556 containerd[1803]: time="2025-02-13T15:34:12.380382006Z" level=info msg="StartContainer for \"983236d60b2f3fdf73dc0dd7858fa1f2b923d6b8608d016d404d0acfff3ae7d9\""
Feb 13 15:34:12.428047 systemd[1]: Started cri-containerd-983236d60b2f3fdf73dc0dd7858fa1f2b923d6b8608d016d404d0acfff3ae7d9.scope - libcontainer container 983236d60b2f3fdf73dc0dd7858fa1f2b923d6b8608d016d404d0acfff3ae7d9.
Feb 13 15:34:12.434056 kubelet[2282]: E0213 15:34:12.434022    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:12.482284 containerd[1803]: time="2025-02-13T15:34:12.482236269Z" level=info msg="StartContainer for \"983236d60b2f3fdf73dc0dd7858fa1f2b923d6b8608d016d404d0acfff3ae7d9\" returns successfully"
Feb 13 15:34:13.005111 containerd[1803]: time="2025-02-13T15:34:13.004889025Z" level=info msg="CreateContainer within sandbox \"2120872ccd76f69aff5a27e19660a7255cc6de8b7a788046d4e9feb77921197d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:34:13.194406 containerd[1803]: time="2025-02-13T15:34:13.194324569Z" level=info msg="CreateContainer within sandbox \"2120872ccd76f69aff5a27e19660a7255cc6de8b7a788046d4e9feb77921197d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"475b4664ef16df2edc77b472cb6136f8d902f9f87b97275dec5ef6180023e7d6\""
Feb 13 15:34:13.196696 containerd[1803]: time="2025-02-13T15:34:13.195539270Z" level=info msg="StartContainer for \"475b4664ef16df2edc77b472cb6136f8d902f9f87b97275dec5ef6180023e7d6\""
Feb 13 15:34:13.246745 systemd[1]: Started cri-containerd-475b4664ef16df2edc77b472cb6136f8d902f9f87b97275dec5ef6180023e7d6.scope - libcontainer container 475b4664ef16df2edc77b472cb6136f8d902f9f87b97275dec5ef6180023e7d6.
Feb 13 15:34:13.325977 containerd[1803]: time="2025-02-13T15:34:13.325759952Z" level=info msg="StartContainer for \"475b4664ef16df2edc77b472cb6136f8d902f9f87b97275dec5ef6180023e7d6\" returns successfully"
Feb 13 15:34:13.434962 kubelet[2282]: E0213 15:34:13.434918    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:14.120053 kubelet[2282]: I0213 15:34:14.119987    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-gdrl4" podStartSLOduration=3.287338727 podStartE2EDuration="6.119965384s" podCreationTimestamp="2025-02-13 15:34:08 +0000 UTC" firstStartedPulling="2025-02-13 15:34:09.51866276 +0000 UTC m=+71.013652565" lastFinishedPulling="2025-02-13 15:34:12.351289412 +0000 UTC m=+73.846279222" observedRunningTime="2025-02-13 15:34:13.041507507 +0000 UTC m=+74.536497332" watchObservedRunningTime="2025-02-13 15:34:14.119965384 +0000 UTC m=+75.614955210"
Feb 13 15:34:14.145866 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 15:34:14.436139 kubelet[2282]: E0213 15:34:14.436078    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:15.437072 kubelet[2282]: E0213 15:34:15.437012    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:16.442990 kubelet[2282]: E0213 15:34:16.442938    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:16.496426 systemd[1]: run-containerd-runc-k8s.io-475b4664ef16df2edc77b472cb6136f8d902f9f87b97275dec5ef6180023e7d6-runc.IZfQZp.mount: Deactivated successfully.
Feb 13 15:34:17.443463 kubelet[2282]: E0213 15:34:17.443341    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:17.955135 (udev-worker)[4895]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:34:17.955685 systemd-networkd[1649]: lxc_health: Link UP
Feb 13 15:34:17.962790 (udev-worker)[4897]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:34:17.968101 systemd-networkd[1649]: lxc_health: Gained carrier
Feb 13 15:34:18.444129 kubelet[2282]: E0213 15:34:18.443977    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:19.173073 systemd-networkd[1649]: lxc_health: Gained IPv6LL
Feb 13 15:34:19.367881 kubelet[2282]: E0213 15:34:19.367839    2282 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:19.450090 kubelet[2282]: E0213 15:34:19.444352    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:19.473010 kubelet[2282]: I0213 15:34:19.472948    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xz45x" podStartSLOduration=10.472926108 podStartE2EDuration="10.472926108s" podCreationTimestamp="2025-02-13 15:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:14.120433 +0000 UTC m=+75.615422828" watchObservedRunningTime="2025-02-13 15:34:19.472926108 +0000 UTC m=+80.967915934"
Feb 13 15:34:20.445025 kubelet[2282]: E0213 15:34:20.444938    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:21.445575 kubelet[2282]: E0213 15:34:21.445474    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:21.879231 systemd[1]: run-containerd-runc-k8s.io-475b4664ef16df2edc77b472cb6136f8d902f9f87b97275dec5ef6180023e7d6-runc.CyVNyS.mount: Deactivated successfully.
Feb 13 15:34:22.128005 ntpd[1781]: Listen normally on 15 lxc_health [fe80::f85f:d6ff:fee1:7ec%15]:123
Feb 13 15:34:22.129350 ntpd[1781]: 13 Feb 15:34:22 ntpd[1781]: Listen normally on 15 lxc_health [fe80::f85f:d6ff:fee1:7ec%15]:123
Feb 13 15:34:22.447842 kubelet[2282]: E0213 15:34:22.446670    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:23.447335 kubelet[2282]: E0213 15:34:23.447270    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:24.448218 kubelet[2282]: E0213 15:34:24.448150    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:25.449036 kubelet[2282]: E0213 15:34:25.448957    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:26.449506 kubelet[2282]: E0213 15:34:26.449463    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:27.451442 kubelet[2282]: E0213 15:34:27.451382    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:28.451610 kubelet[2282]: E0213 15:34:28.451553    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:29.451929 kubelet[2282]: E0213 15:34:29.451873    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:30.452274 kubelet[2282]: E0213 15:34:30.452220    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:31.453580 kubelet[2282]: E0213 15:34:31.453516    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:32.454531 kubelet[2282]: E0213 15:34:32.454473    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:33.455168 kubelet[2282]: E0213 15:34:33.455109    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:34.456309 kubelet[2282]: E0213 15:34:34.456264    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:35.457403 kubelet[2282]: E0213 15:34:35.457345    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:36.458385 kubelet[2282]: E0213 15:34:36.458326    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:37.459270 kubelet[2282]: E0213 15:34:37.459177    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:38.459843 kubelet[2282]: E0213 15:34:38.459785    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:39.367762 kubelet[2282]: E0213 15:34:39.367708    2282 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:39.460934 kubelet[2282]: E0213 15:34:39.460878    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:40.461716 kubelet[2282]: E0213 15:34:40.461511    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:41.320342 kubelet[2282]: E0213 15:34:41.320283    2282 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.108?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:34:41.462780 kubelet[2282]: E0213 15:34:41.462720    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:42.341628 kubelet[2282]: E0213 15:34:42.341553    2282 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T15:34:32Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T15:34:32Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T15:34:32Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T15:34:32Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":73054371},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\\\",\\\"registry.k8s.io/kube-proxy:v1.31.6\\\"],\\\"sizeBytes\\\":30228127},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286}]}}\" for node \"172.31.29.108\": Patch \"https://172.31.22.133:6443/api/v1/nodes/172.31.29.108/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:34:42.462968 kubelet[2282]: E0213 15:34:42.462915    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:43.463567 kubelet[2282]: E0213 15:34:43.463502    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:44.464729 kubelet[2282]: E0213 15:34:44.464661    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:45.465591 kubelet[2282]: E0213 15:34:45.465530    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:46.465911 kubelet[2282]: E0213 15:34:46.465850    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:47.466950 kubelet[2282]: E0213 15:34:47.466895    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:48.467136 kubelet[2282]: E0213 15:34:48.467079    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:49.467896 kubelet[2282]: E0213 15:34:49.467843    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:50.468462 kubelet[2282]: E0213 15:34:50.468407    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:51.321611 kubelet[2282]: E0213 15:34:51.321530    2282 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.108?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:34:51.469003 kubelet[2282]: E0213 15:34:51.468948    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:52.199061 update_engine[1788]: I20250213 15:34:52.198987  1788 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 15:34:52.199061 update_engine[1788]: I20250213 15:34:52.199054  1788 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 15:34:52.202796 update_engine[1788]: I20250213 15:34:52.201620  1788 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 15:34:52.204972 update_engine[1788]: I20250213 15:34:52.204658  1788 omaha_request_params.cc:62] Current group set to stable
Feb 13 15:34:52.207880 update_engine[1788]: I20250213 15:34:52.207602  1788 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 15:34:52.207880 update_engine[1788]: I20250213 15:34:52.207636  1788 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 15:34:52.207880 update_engine[1788]: I20250213 15:34:52.207664  1788 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 15:34:52.207880 update_engine[1788]: I20250213 15:34:52.207829  1788 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 15:34:52.208107 update_engine[1788]: I20250213 15:34:52.207931  1788 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 15:34:52.208107 update_engine[1788]: I20250213 15:34:52.207942  1788 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?>
Feb 13 15:34:52.208107 update_engine[1788]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1">
Feb 13 15:34:52.208107 update_engine[1788]:     <os version="Chateau" platform="CoreOS" sp="4152.2.1_x86_64"></os>
Feb 13 15:34:52.208107 update_engine[1788]:     <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4152.2.1" track="stable" bootid="{e339eb0b-3e95-4379-bcec-42397e1f3d9a}" oem="ami" oemversion="3.2.985.0-r1" alephversion="4152.2.1" machineid="ec2a51bc425daac964dc8aafa63ca30c" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" >
Feb 13 15:34:52.208107 update_engine[1788]:         <ping active="1"></ping>
Feb 13 15:34:52.208107 update_engine[1788]:         <updatecheck></updatecheck>
Feb 13 15:34:52.208107 update_engine[1788]:         <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event>
Feb 13 15:34:52.208107 update_engine[1788]:     </app>
Feb 13 15:34:52.208107 update_engine[1788]: </request>
Feb 13 15:34:52.208107 update_engine[1788]: I20250213 15:34:52.207952  1788 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:34:52.210033 locksmithd[1818]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 15:34:52.225298 update_engine[1788]: I20250213 15:34:52.225100  1788 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:34:52.225756 update_engine[1788]: I20250213 15:34:52.225711  1788 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:34:52.268537 update_engine[1788]: E20250213 15:34:52.268451  1788 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:34:52.268687 update_engine[1788]: I20250213 15:34:52.268579  1788 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 15:34:52.342244 kubelet[2282]: E0213 15:34:52.342186    2282 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.29.108\": Get \"https://172.31.22.133:6443/api/v1/nodes/172.31.29.108?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:34:52.470021 kubelet[2282]: E0213 15:34:52.469888    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:53.470191 kubelet[2282]: E0213 15:34:53.470130    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:54.470796 kubelet[2282]: E0213 15:34:54.470742    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:55.471478 kubelet[2282]: E0213 15:34:55.471417    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:56.472574 kubelet[2282]: E0213 15:34:56.472510    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:57.472969 kubelet[2282]: E0213 15:34:57.472913    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:58.473543 kubelet[2282]: E0213 15:34:58.473384    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:59.367943 kubelet[2282]: E0213 15:34:59.367894    2282 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:34:59.446595 containerd[1803]: time="2025-02-13T15:34:59.446397822Z" level=info msg="StopPodSandbox for \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\""
Feb 13 15:34:59.446595 containerd[1803]: time="2025-02-13T15:34:59.446521895Z" level=info msg="TearDown network for sandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" successfully"
Feb 13 15:34:59.446595 containerd[1803]: time="2025-02-13T15:34:59.446534104Z" level=info msg="StopPodSandbox for \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" returns successfully"
Feb 13 15:34:59.447297 containerd[1803]: time="2025-02-13T15:34:59.447002578Z" level=info msg="RemovePodSandbox for \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\""
Feb 13 15:34:59.447297 containerd[1803]: time="2025-02-13T15:34:59.447031471Z" level=info msg="Forcibly stopping sandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\""
Feb 13 15:34:59.447297 containerd[1803]: time="2025-02-13T15:34:59.447195378Z" level=info msg="TearDown network for sandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" successfully"
Feb 13 15:34:59.455548 containerd[1803]: time="2025-02-13T15:34:59.455457953Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:34:59.456073 containerd[1803]: time="2025-02-13T15:34:59.455889969Z" level=info msg="RemovePodSandbox \"c9e559538fe69643072b803c3db253a0f9050ba60826d2e696ef934b8b7a463b\" returns successfully"
Feb 13 15:34:59.473662 kubelet[2282]: E0213 15:34:59.473611    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:00.154359 kubelet[2282]: E0213 15:35:00.149422    2282 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.108?timeout=10s\": unexpected EOF"
Feb 13 15:35:00.157581 kubelet[2282]: E0213 15:35:00.157440    2282 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.108?timeout=10s\": dial tcp 172.31.22.133:6443: connect: connection reset by peer"
Feb 13 15:35:00.161494 kubelet[2282]: E0213 15:35:00.161449    2282 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.108?timeout=10s\": dial tcp 172.31.22.133:6443: connect: connection refused"
Feb 13 15:35:00.161494 kubelet[2282]: I0213 15:35:00.161494    2282 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 13 15:35:00.163608 kubelet[2282]: E0213 15:35:00.163451    2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.108?timeout=10s\": dial tcp 172.31.22.133:6443: connect: connection refused" interval="200ms"
Feb 13 15:35:00.365417 kubelet[2282]: E0213 15:35:00.365351    2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.108?timeout=10s\": dial tcp 172.31.22.133:6443: connect: connection refused" interval="400ms"
Feb 13 15:35:00.475006 kubelet[2282]: E0213 15:35:00.474759    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:00.767380 kubelet[2282]: E0213 15:35:00.767065    2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.108?timeout=10s\": dial tcp 172.31.22.133:6443: connect: connection refused" interval="800ms"
Feb 13 15:35:01.144409 kubelet[2282]: E0213 15:35:01.144274    2282 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.29.108\": Get \"https://172.31.22.133:6443/api/v1/nodes/172.31.29.108?timeout=10s\": dial tcp 172.31.22.133:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Feb 13 15:35:01.145153 kubelet[2282]: E0213 15:35:01.145115    2282 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.29.108\": Get \"https://172.31.22.133:6443/api/v1/nodes/172.31.29.108?timeout=10s\": dial tcp 172.31.22.133:6443: connect: connection refused"
Feb 13 15:35:01.146190 kubelet[2282]: E0213 15:35:01.146154    2282 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.29.108\": Get \"https://172.31.22.133:6443/api/v1/nodes/172.31.29.108?timeout=10s\": dial tcp 172.31.22.133:6443: connect: connection refused"
Feb 13 15:35:01.146190 kubelet[2282]: E0213 15:35:01.146183    2282 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
Feb 13 15:35:01.475381 kubelet[2282]: E0213 15:35:01.475319    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:02.192441 update_engine[1788]: I20250213 15:35:02.192255  1788 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:35:02.193054 update_engine[1788]: I20250213 15:35:02.192656  1788 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:35:02.193054 update_engine[1788]: I20250213 15:35:02.192970  1788 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:35:02.194355 update_engine[1788]: E20250213 15:35:02.194299  1788 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:35:02.194540 update_engine[1788]: I20250213 15:35:02.194386  1788 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 15:35:02.475798 kubelet[2282]: E0213 15:35:02.475663    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:03.476700 kubelet[2282]: E0213 15:35:03.476640    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:04.477767 kubelet[2282]: E0213 15:35:04.477709    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:05.478997 kubelet[2282]: E0213 15:35:05.478901    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:06.479276 kubelet[2282]: E0213 15:35:06.479122    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:07.479966 kubelet[2282]: E0213 15:35:07.479909    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:08.480751 kubelet[2282]: E0213 15:35:08.480593    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:09.481917 kubelet[2282]: E0213 15:35:09.481866    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:10.483072 kubelet[2282]: E0213 15:35:10.483011    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:11.483628 kubelet[2282]: E0213 15:35:11.483571    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:11.569380 kubelet[2282]: E0213 15:35:11.569318    2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.108?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Feb 13 15:35:12.194573 update_engine[1788]: I20250213 15:35:12.194474  1788 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:35:12.195074 update_engine[1788]: I20250213 15:35:12.194808  1788 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:35:12.195158 update_engine[1788]: I20250213 15:35:12.195128  1788 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:35:12.196002 update_engine[1788]: E20250213 15:35:12.195952  1788 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:35:12.196109 update_engine[1788]: I20250213 15:35:12.196033  1788 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 15:35:12.484410 kubelet[2282]: E0213 15:35:12.484277    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:13.484871 kubelet[2282]: E0213 15:35:13.484804    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:14.485253 kubelet[2282]: E0213 15:35:14.485193    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:15.485985 kubelet[2282]: E0213 15:35:15.485927    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:16.486896 kubelet[2282]: E0213 15:35:16.486842    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:17.487839 kubelet[2282]: E0213 15:35:17.487769    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:18.488194 kubelet[2282]: E0213 15:35:18.488136    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:19.368738 kubelet[2282]: E0213 15:35:19.368684    2282 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:19.488778 kubelet[2282]: E0213 15:35:19.488721    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:20.489583 kubelet[2282]: E0213 15:35:20.489526    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:21.211314 kubelet[2282]: E0213 15:35:21.210792    2282 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T15:35:11Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T15:35:11Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T15:35:11Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T15:35:11Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":73054371},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\\\",\\\"registry.k8s.io/kube-proxy:v1.31.6\\\"],\\\"sizeBytes\\\":30228127},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286}]}}\" for node \"172.31.29.108\": Patch \"https://172.31.22.133:6443/api/v1/nodes/172.31.29.108/status?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:35:21.490488 kubelet[2282]: E0213 15:35:21.490365    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:35:22.194208 update_engine[1788]: I20250213 15:35:22.194103  1788 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:35:22.194662 update_engine[1788]: I20250213 15:35:22.194428  1788 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:35:22.194737 update_engine[1788]: I20250213 15:35:22.194710  1788 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:35:22.195616 update_engine[1788]: E20250213 15:35:22.195573  1788 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:35:22.195719 update_engine[1788]: I20250213 15:35:22.195643  1788 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 15:35:22.195719 update_engine[1788]: I20250213 15:35:22.195657  1788 omaha_request_action.cc:617] Omaha request response:
Feb 13 15:35:22.195804 update_engine[1788]: E20250213 15:35:22.195745  1788 omaha_request_action.cc:636] Omaha request network transfer failed.
Feb 13 15:35:22.195804 update_engine[1788]: I20250213 15:35:22.195771  1788 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 13 15:35:22.195804 update_engine[1788]: I20250213 15:35:22.195780  1788 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 15:35:22.195804 update_engine[1788]: I20250213 15:35:22.195787  1788 update_attempter.cc:306] Processing Done.
Feb 13 15:35:22.195997 update_engine[1788]: E20250213 15:35:22.195806  1788 update_attempter.cc:619] Update failed.
Feb 13 15:35:22.195997 update_engine[1788]: I20250213 15:35:22.195832  1788 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 15:35:22.195997 update_engine[1788]: I20250213 15:35:22.195842  1788 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 15:35:22.195997 update_engine[1788]: I20250213 15:35:22.195849  1788 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 15:35:22.195997 update_engine[1788]: I20250213 15:35:22.195936  1788 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 15:35:22.195997 update_engine[1788]: I20250213 15:35:22.195966  1788 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 15:35:22.195997 update_engine[1788]: I20250213 15:35:22.195974  1788 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?>
Feb 13 15:35:22.195997 update_engine[1788]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1">
Feb 13 15:35:22.195997 update_engine[1788]:     <os version="Chateau" platform="CoreOS" sp="4152.2.1_x86_64"></os>
Feb 13 15:35:22.195997 update_engine[1788]:     <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4152.2.1" track="stable" bootid="{e339eb0b-3e95-4379-bcec-42397e1f3d9a}" oem="ami" oemversion="3.2.985.0-r1" alephversion="4152.2.1" machineid="ec2a51bc425daac964dc8aafa63ca30c" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" >
Feb 13 15:35:22.195997 update_engine[1788]:         <event eventtype="3" eventresult="0" errorcode="268437456"></event>
Feb 13 15:35:22.195997 update_engine[1788]:     </app>
Feb 13 15:35:22.195997 update_engine[1788]: </request>
Feb 13 15:35:22.195997 update_engine[1788]: I20250213 15:35:22.195984  1788 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:35:22.196749 update_engine[1788]: I20250213 15:35:22.196381  1788 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:35:22.196749 update_engine[1788]: I20250213 15:35:22.196601  1788 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:35:22.196986 locksmithd[1818]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 15:35:22.197472 update_engine[1788]: E20250213 15:35:22.197181  1788 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:35:22.197472 update_engine[1788]: I20250213 15:35:22.197242  1788 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 15:35:22.197472 update_engine[1788]: I20250213 15:35:22.197253  1788 omaha_request_action.cc:617] Omaha request response:
Feb 13 15:35:22.197472 update_engine[1788]: I20250213 15:35:22.197264  1788 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 15:35:22.197472 update_engine[1788]: I20250213 15:35:22.197273  1788 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 15:35:22.197472 update_engine[1788]: I20250213 15:35:22.197281  1788 update_attempter.cc:306] Processing Done.
Feb 13 15:35:22.197472 update_engine[1788]: I20250213 15:35:22.197291  1788 update_attempter.cc:310] Error event sent.
Feb 13 15:35:22.197472 update_engine[1788]: I20250213 15:35:22.197304  1788 update_check_scheduler.cc:74] Next update check in 45m2s
Feb 13 15:35:22.198085 locksmithd[1818]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 15:35:22.491509 kubelet[2282]: E0213 15:35:22.491361    2282 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"