Feb 13 19:25:52.097398 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:40:15 -00 2025
Feb 13 19:25:52.097447 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:25:52.097463 kernel: BIOS-provided physical RAM map:
Feb 13 19:25:52.097474 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:25:52.097485 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:25:52.097496 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:25:52.097512 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 19:25:52.097524 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 19:25:52.097535 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 19:25:52.097546 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:25:52.097558 kernel: NX (Execute Disable) protection: active
Feb 13 19:25:52.097569 kernel: APIC: Static calls initialized
Feb 13 19:25:52.097581 kernel: SMBIOS 2.7 present.
Feb 13 19:25:52.097593 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 19:25:52.097611 kernel: Hypervisor detected: KVM
Feb 13 19:25:52.097624 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:25:52.097636 kernel: kvm-clock: using sched offset of 8645171192 cycles
Feb 13 19:25:52.097650 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:25:52.097663 kernel: tsc: Detected 2499.994 MHz processor
Feb 13 19:25:52.097677 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:25:52.097690 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:25:52.097706 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 19:25:52.097719 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:25:52.097732 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:25:52.097745 kernel: Using GB pages for direct mapping
Feb 13 19:25:52.097758 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:25:52.097771 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 19:25:52.097784 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 19:25:52.097797 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:25:52.097810 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 19:25:52.097825 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 19:25:52.097838 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 19:25:52.097851 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:25:52.097864 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 19:25:52.097877 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:25:52.097890 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 19:25:52.097903 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 19:25:52.097916 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 19:25:52.097929 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 19:25:52.097945 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 19:25:52.097963 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 19:25:52.097977 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 19:25:52.097991 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 19:25:52.098004 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 19:25:52.098020 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 19:25:52.098034 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 19:25:52.098047 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 19:25:52.098061 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 19:25:52.098075 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 19:25:52.098089 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 19:25:52.098102 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 19:25:52.098116 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 19:25:52.098130 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 19:25:52.098147 kernel: Zone ranges:
Feb 13 19:25:52.098161 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:25:52.098175 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 19:25:52.098189 kernel: Normal empty
Feb 13 19:25:52.098203 kernel: Movable zone start for each node
Feb 13 19:25:52.101263 kernel: Early memory node ranges
Feb 13 19:25:52.101293 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:25:52.101314 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 19:25:52.101330 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 19:25:52.101345 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:25:52.101369 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:25:52.101384 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 19:25:52.101399 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 19:25:52.101414 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:25:52.101429 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 19:25:52.101444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:25:52.101460 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:25:52.101475 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:25:52.101490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:25:52.101508 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:25:52.101523 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:25:52.101538 kernel: TSC deadline timer available
Feb 13 19:25:52.101553 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 19:25:52.101568 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:25:52.101585 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 19:25:52.101601 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:25:52.101616 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:25:52.101632 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 19:25:52.101651 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 19:25:52.101666 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 19:25:52.101680 kernel: pcpu-alloc: [0] 0 1
Feb 13 19:25:52.101695 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:25:52.101710 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:25:52.101728 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:25:52.101744 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:25:52.101759 kernel: random: crng init done
Feb 13 19:25:52.101777 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:25:52.101792 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 19:25:52.101807 kernel: Fallback order for Node 0: 0
Feb 13 19:25:52.101822 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 13 19:25:52.101836 kernel: Policy zone: DMA32
Feb 13 19:25:52.101851 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:25:52.101866 kernel: Memory: 1930300K/2057760K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 127200K reserved, 0K cma-reserved)
Feb 13 19:25:52.101882 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:25:52.101897 kernel: Kernel/User page tables isolation: enabled
Feb 13 19:25:52.101915 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:25:52.101930 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:25:52.101945 kernel: Dynamic Preempt: voluntary
Feb 13 19:25:52.101960 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:25:52.101976 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:25:52.102098 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:25:52.102116 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:25:52.102132 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:25:52.102147 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:25:52.102166 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:25:52.102181 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:25:52.102197 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 19:25:52.102212 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:25:52.102251 kernel: Console: colour VGA+ 80x25
Feb 13 19:25:52.102266 kernel: printk: console [ttyS0] enabled
Feb 13 19:25:52.102281 kernel: ACPI: Core revision 20230628
Feb 13 19:25:52.102296 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 19:25:52.102311 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:25:52.102330 kernel: x2apic enabled
Feb 13 19:25:52.102346 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:25:52.102373 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Feb 13 19:25:52.102393 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Feb 13 19:25:52.102409 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 19:25:52.102425 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 19:25:52.102442 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:25:52.102457 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:25:52.102473 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:25:52.102489 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:25:52.102505 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 19:25:52.102521 kernel: RETBleed: Vulnerable
Feb 13 19:25:52.102537 kernel: Speculative Store Bypass: Vulnerable
Feb 13 19:25:52.102556 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:25:52.102572 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:25:52.102588 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 19:25:52.102604 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:25:52.102620 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:25:52.102636 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:25:52.102655 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 19:25:52.102671 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 19:25:52.102687 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 19:25:52.102703 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 19:25:52.102719 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 19:25:52.102735 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 19:25:52.102751 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:25:52.102767 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 19:25:52.102851 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 19:25:52.102868 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 13 19:25:52.102884 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 13 19:25:52.102904 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 19:25:52.102919 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 13 19:25:52.102936 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 19:25:52.102952 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:25:52.102967 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:25:52.102983 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:25:52.102999 kernel: landlock: Up and running.
Feb 13 19:25:52.103015 kernel: SELinux: Initializing.
Feb 13 19:25:52.103031 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:25:52.103047 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:25:52.103064 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 19:25:52.103080 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:25:52.103099 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:25:52.103115 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:25:52.103132 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 19:25:52.103148 kernel: signal: max sigframe size: 3632
Feb 13 19:25:52.103163 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:25:52.103180 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:25:52.103197 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 19:25:52.103212 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:25:52.105280 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:25:52.105300 kernel: .... node #0, CPUs: #1
Feb 13 19:25:52.105318 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 19:25:52.105336 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 19:25:52.105352 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:25:52.105368 kernel: smpboot: Max logical packages: 1
Feb 13 19:25:52.105385 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Feb 13 19:25:52.105401 kernel: devtmpfs: initialized
Feb 13 19:25:52.105417 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:25:52.105436 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:25:52.105453 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:25:52.105469 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:25:52.105485 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:25:52.105502 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:25:52.105518 kernel: audit: type=2000 audit(1739474751.534:1): state=initialized audit_enabled=0 res=1
Feb 13 19:25:52.105534 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:25:52.105550 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:25:52.105566 kernel: cpuidle: using governor menu
Feb 13 19:25:52.105585 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:25:52.105601 kernel: dca service started, version 1.12.1
Feb 13 19:25:52.105617 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:25:52.105633 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:25:52.105649 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:25:52.105665 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:25:52.105681 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:25:52.105697 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:25:52.105713 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:25:52.105787 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:25:52.105806 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:25:52.105822 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:25:52.105838 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 19:25:52.105855 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:25:52.105870 kernel: ACPI: Interpreter enabled
Feb 13 19:25:52.105886 kernel: ACPI: PM: (supports S0 S5)
Feb 13 19:25:52.105902 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:25:52.105919 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:25:52.105939 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:25:52.105955 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 19:25:52.105971 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:25:52.106272 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:25:52.106422 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 19:25:52.106557 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 19:25:52.106577 kernel: acpiphp: Slot [3] registered
Feb 13 19:25:52.106598 kernel: acpiphp: Slot [4] registered
Feb 13 19:25:52.106614 kernel: acpiphp: Slot [5] registered
Feb 13 19:25:52.106630 kernel: acpiphp: Slot [6] registered
Feb 13 19:25:52.106646 kernel: acpiphp: Slot [7] registered
Feb 13 19:25:52.106662 kernel: acpiphp: Slot [8] registered
Feb 13 19:25:52.106678 kernel: acpiphp: Slot [9] registered
Feb 13 19:25:52.106694 kernel: acpiphp: Slot [10] registered
Feb 13 19:25:52.106710 kernel: acpiphp: Slot [11] registered
Feb 13 19:25:52.106725 kernel: acpiphp: Slot [12] registered
Feb 13 19:25:52.106744 kernel: acpiphp: Slot [13] registered
Feb 13 19:25:52.106761 kernel: acpiphp: Slot [14] registered
Feb 13 19:25:52.106776 kernel: acpiphp: Slot [15] registered
Feb 13 19:25:52.106901 kernel: acpiphp: Slot [16] registered
Feb 13 19:25:52.106918 kernel: acpiphp: Slot [17] registered
Feb 13 19:25:52.106935 kernel: acpiphp: Slot [18] registered
Feb 13 19:25:52.106950 kernel: acpiphp: Slot [19] registered
Feb 13 19:25:52.106967 kernel: acpiphp: Slot [20] registered
Feb 13 19:25:52.106983 kernel: acpiphp: Slot [21] registered
Feb 13 19:25:52.106998 kernel: acpiphp: Slot [22] registered
Feb 13 19:25:52.107019 kernel: acpiphp: Slot [23] registered
Feb 13 19:25:52.107035 kernel: acpiphp: Slot [24] registered
Feb 13 19:25:52.107052 kernel: acpiphp: Slot [25] registered
Feb 13 19:25:52.107067 kernel: acpiphp: Slot [26] registered
Feb 13 19:25:52.107083 kernel: acpiphp: Slot [27] registered
Feb 13 19:25:52.107140 kernel: acpiphp: Slot [28] registered
Feb 13 19:25:52.109252 kernel: acpiphp: Slot [29] registered
Feb 13 19:25:52.109320 kernel: acpiphp: Slot [30] registered
Feb 13 19:25:52.109355 kernel: acpiphp: Slot [31] registered
Feb 13 19:25:52.109388 kernel: PCI host bridge to bus 0000:00
Feb 13 19:25:52.109595 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:25:52.109730 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:25:52.109860 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:25:52.109983 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 19:25:52.110183 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:25:52.110437 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 19:25:52.110735 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 19:25:52.110966 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 19:25:52.111183 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 19:25:52.112442 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 19:25:52.112588 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 19:25:52.112725 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 19:25:52.112870 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 19:25:52.113023 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 19:25:52.113160 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 19:25:52.113964 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 19:25:52.114242 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 19:25:52.114454 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 19:25:52.114590 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 19:25:52.114798 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:25:52.114948 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:25:52.115085 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 19:25:52.115305 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:25:52.115563 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 19:25:52.115586 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:25:52.115600 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:25:52.115615 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:25:52.115634 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:25:52.115648 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 19:25:52.115662 kernel: iommu: Default domain type: Translated
Feb 13 19:25:52.115675 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:25:52.115689 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:25:52.115702 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:25:52.115716 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:25:52.115729 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 19:25:52.115858 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 19:25:52.115990 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 19:25:52.116118 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:25:52.116134 kernel: vgaarb: loaded
Feb 13 19:25:52.116149 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 19:25:52.116162 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 19:25:52.116176 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:25:52.116190 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:25:52.116204 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:25:52.116234 kernel: pnp: PnP ACPI init
Feb 13 19:25:52.116247 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 19:25:52.116260 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:25:52.116275 kernel: NET: Registered PF_INET protocol family
Feb 13 19:25:52.116288 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:25:52.116301 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 19:25:52.116323 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:25:52.116337 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:25:52.116943 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 19:25:52.116965 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 19:25:52.116978 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 19:25:52.116992 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 19:25:52.117006 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:25:52.117019 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:25:52.117173 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:25:52.117315 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:25:52.117433 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:25:52.117554 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 19:25:52.117815 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 19:25:52.117836 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:25:52.117851 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 19:25:52.117866 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Feb 13 19:25:52.117881 kernel: clocksource: Switched to clocksource tsc
Feb 13 19:25:52.117895 kernel: Initialise system trusted keyrings
Feb 13 19:25:52.117909 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 19:25:52.117927 kernel: Key type asymmetric registered
Feb 13 19:25:52.117941 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:25:52.117954 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:25:52.117968 kernel: io scheduler mq-deadline registered
Feb 13 19:25:52.117982 kernel: io scheduler kyber registered
Feb 13 19:25:52.117996 kernel: io scheduler bfq registered
Feb 13 19:25:52.118010 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:25:52.118023 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:25:52.118037 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:25:52.118054 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:25:52.118068 kernel: i8042: Warning: Keylock active
Feb 13 19:25:52.118082 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:25:52.118096 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:25:52.118428 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 19:25:52.118559 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 19:25:52.118745 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T19:25:51 UTC (1739474751)
Feb 13 19:25:52.118869 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 19:25:52.118891 kernel: intel_pstate: CPU model not supported
Feb 13 19:25:52.118905 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:25:52.118919 kernel: Segment Routing with IPv6
Feb 13 19:25:52.118932 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:25:52.118946 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:25:52.118959 kernel: Key type dns_resolver registered
Feb 13 19:25:52.118972 kernel: IPI shorthand broadcast: enabled
Feb 13 19:25:52.118985 kernel: sched_clock: Marking stable (567001946, 214168240)->(861317917, -80147731)
Feb 13 19:25:52.118999 kernel: registered taskstats version 1
Feb 13 19:25:52.119016 kernel: Loading compiled-in X.509 certificates
Feb 13 19:25:52.119030 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6c364ddae48101e091a28279a8d953535f596d53'
Feb 13 19:25:52.119043 kernel: Key type .fscrypt registered
Feb 13 19:25:52.119056 kernel: Key type fscrypt-provisioning registered
Feb 13 19:25:52.119069 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:25:52.119083 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:25:52.119097 kernel: ima: No architecture policies found
Feb 13 19:25:52.119171 kernel: clk: Disabling unused clocks
Feb 13 19:25:52.119186 kernel: Freeing unused kernel image (initmem) memory: 43476K
Feb 13 19:25:52.119204 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 19:25:52.119255 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Feb 13 19:25:52.119269 kernel: Run /init as init process
Feb 13 19:25:52.119282 kernel: with arguments:
Feb 13 19:25:52.119296 kernel: /init
Feb 13 19:25:52.119309 kernel: with environment:
Feb 13 19:25:52.119323 kernel: HOME=/
Feb 13 19:25:52.119335 kernel: TERM=linux
Feb 13 19:25:52.119349 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:25:52.119372 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:25:52.119404 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:25:52.119425 systemd[1]: Detected virtualization amazon.
Feb 13 19:25:52.119439 systemd[1]: Detected architecture x86-64.
Feb 13 19:25:52.119454 systemd[1]: Running in initrd.
Feb 13 19:25:52.119472 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:25:52.119488 systemd[1]: Hostname set to <localhost>.
Feb 13 19:25:52.119503 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:25:52.119518 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:25:52.119533 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:25:52.119548 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:25:52.119565 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:25:52.119580 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:25:52.119597 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:25:52.119614 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:25:52.119631 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:25:52.119646 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:25:52.119661 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:25:52.119677 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:25:52.119695 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:25:52.119711 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:25:52.119769 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:25:52.119790 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:25:52.119806 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:25:52.119822 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:25:52.119837 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:25:52.119852 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:25:52.119867 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:25:52.119886 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:25:52.119901 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:25:52.119916 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:25:52.119932 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:25:52.119946 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:25:52.119962 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:25:52.119977 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:25:52.119994 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:25:52.120009 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:25:52.120028 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:25:52.120046 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:25:52.120060 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:25:52.120075 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:25:52.120095 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:25:52.120110 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:25:52.120162 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 19:25:52.120199 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:25:52.120214 kernel: Bridge firewalling registered
Feb 13 19:25:52.120260 systemd-journald[179]: Journal started
Feb 13 19:25:52.120292 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2f190b33879f7f57dc2cde53c1648d) is 4.8M, max 38.5M, 33.7M free.
Feb 13 19:25:52.060570 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 19:25:52.287489 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:25:52.113038 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 19:25:52.290749 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:25:52.296680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:25:52.302427 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:25:52.313450 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:25:52.321130 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:25:52.322364 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:25:52.338794 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:25:52.341709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:25:52.370789 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:25:52.373537 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:25:52.392876 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:25:52.418004 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:25:52.421688 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:25:52.428971 dracut-cmdline[213]: dracut-dracut-053
Feb 13 19:25:52.433562 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:25:52.490046 systemd-resolved[215]: Positive Trust Anchors:
Feb 13 19:25:52.490066 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:25:52.490133 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:25:52.495704 systemd-resolved[215]: Defaulting to hostname 'linux'.
Feb 13 19:25:52.498396 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:25:52.500266 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:25:52.583079 kernel: SCSI subsystem initialized
Feb 13 19:25:52.605262 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:25:52.628278 kernel: iscsi: registered transport (tcp)
Feb 13 19:25:52.680367 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:25:52.680462 kernel: QLogic iSCSI HBA Driver
Feb 13 19:25:52.734691 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:25:52.741459 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:25:52.775834 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:25:52.775953 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:25:52.776017 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:25:52.828250 kernel: raid6: avx512x4 gen() 11522 MB/s
Feb 13 19:25:52.845255 kernel: raid6: avx512x2 gen() 9303 MB/s
Feb 13 19:25:52.862251 kernel: raid6: avx512x1 gen() 4937 MB/s
Feb 13 19:25:52.880251 kernel: raid6: avx2x4 gen() 7034 MB/s
Feb 13 19:25:52.897247 kernel: raid6: avx2x2 gen() 8800 MB/s
Feb 13 19:25:52.914388 kernel: raid6: avx2x1 gen() 8307 MB/s
Feb 13 19:25:52.914464 kernel: raid6: using algorithm avx512x4 gen() 11522 MB/s
Feb 13 19:25:52.932522 kernel: raid6: .... xor() 5842 MB/s, rmw enabled
Feb 13 19:25:52.932652 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 19:25:52.956416 kernel: xor: automatically using best checksumming function avx
Feb 13 19:25:53.142244 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:25:53.154254 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:25:53.173446 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:25:53.194517 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Feb 13 19:25:53.203101 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:25:53.217165 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:25:53.242621 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Feb 13 19:25:53.281019 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:25:53.291571 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:25:53.371707 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:25:53.385064 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:25:53.423755 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:25:53.433797 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:25:53.435506 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:25:53.437730 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:25:53.449076 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:25:53.485691 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:25:53.525403 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:25:53.525686 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 19:25:53.525987 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:25:53.526011 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:d1:8d:72:41:47
Feb 13 19:25:53.505446 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:25:53.558192 (udev-worker)[462]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:25:53.563021 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:25:53.566239 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 19:25:53.581138 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:25:53.581204 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:25:53.581020 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:25:53.581273 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:25:53.591906 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:25:53.584872 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:25:53.587486 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:25:53.588047 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:25:53.602601 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:25:53.602635 kernel: GPT:9289727 != 16777215
Feb 13 19:25:53.602652 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:25:53.602669 kernel: GPT:9289727 != 16777215
Feb 13 19:25:53.602715 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:25:53.602742 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:25:53.591910 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:25:53.615173 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:25:53.708309 kernel: BTRFS: device fsid 60f89c25-9096-4268-99ca-ef7992742f2b devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (459)
Feb 13 19:25:53.723289 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (452)
Feb 13 19:25:53.819776 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:25:53.827467 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:25:53.874837 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:25:53.878857 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:25:53.901334 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:25:53.901503 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:25:53.928634 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:25:53.951810 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:25:53.966601 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:25:53.993667 disk-uuid[633]: Primary Header is updated.
Feb 13 19:25:53.993667 disk-uuid[633]: Secondary Entries is updated.
Feb 13 19:25:53.993667 disk-uuid[633]: Secondary Header is updated.
Feb 13 19:25:53.999748 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:25:55.015152 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:25:55.017251 disk-uuid[634]: The operation has completed successfully.
Feb 13 19:25:55.285587 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:25:55.285715 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:25:55.362744 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:25:55.371371 sh[894]: Success
Feb 13 19:25:55.398255 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 19:25:55.518345 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:25:55.535390 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:25:55.555715 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:25:55.620025 kernel: BTRFS info (device dm-0): first mount of filesystem 60f89c25-9096-4268-99ca-ef7992742f2b
Feb 13 19:25:55.620097 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:25:55.620127 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:25:55.620898 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:25:55.621693 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:25:55.648244 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:25:55.652202 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:25:55.654086 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:25:55.661647 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:25:55.671771 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:25:55.705563 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:25:55.705641 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:25:55.705661 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:25:55.719471 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:25:55.744177 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:25:55.748991 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:25:55.763036 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:25:55.776458 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:25:55.852730 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:25:55.864694 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:25:55.984885 systemd-networkd[1087]: lo: Link UP
Feb 13 19:25:55.984898 systemd-networkd[1087]: lo: Gained carrier
Feb 13 19:25:55.989592 systemd-networkd[1087]: Enumeration completed
Feb 13 19:25:55.989724 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:25:55.993258 systemd[1]: Reached target network.target - Network.
Feb 13 19:25:55.996200 systemd-networkd[1087]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:25:55.996205 systemd-networkd[1087]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:25:56.009238 systemd-networkd[1087]: eth0: Link UP
Feb 13 19:25:56.009247 systemd-networkd[1087]: eth0: Gained carrier
Feb 13 19:25:56.009264 systemd-networkd[1087]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:25:56.015375 ignition[1023]: Ignition 2.20.0
Feb 13 19:25:56.018546 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:25:56.015383 ignition[1023]: Stage: fetch-offline
Feb 13 19:25:56.015804 ignition[1023]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:25:56.022343 systemd-networkd[1087]: eth0: DHCPv4 address 172.31.19.191/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:25:56.015817 ignition[1023]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:25:56.016257 ignition[1023]: Ignition finished successfully
Feb 13 19:25:56.043041 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:25:56.080543 ignition[1097]: Ignition 2.20.0
Feb 13 19:25:56.080554 ignition[1097]: Stage: fetch
Feb 13 19:25:56.080873 ignition[1097]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:25:56.080882 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:25:56.080958 ignition[1097]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:25:56.101330 ignition[1097]: PUT result: OK
Feb 13 19:25:56.104974 ignition[1097]: parsed url from cmdline: ""
Feb 13 19:25:56.104983 ignition[1097]: no config URL provided
Feb 13 19:25:56.104992 ignition[1097]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:25:56.105004 ignition[1097]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:25:56.105021 ignition[1097]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:25:56.106226 ignition[1097]: PUT result: OK
Feb 13 19:25:56.106278 ignition[1097]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:25:56.111448 ignition[1097]: GET result: OK
Feb 13 19:25:56.111534 ignition[1097]: parsing config with SHA512: 28f37cb5dddf9e627675d5135d8aef1f433091f3f9947910b5d3379cf0e4b53c8af20c861dd87146346d160ca1692025a00e722f78de7e24ac4399c35583c108
Feb 13 19:25:56.120438 unknown[1097]: fetched base config from "system"
Feb 13 19:25:56.120458 unknown[1097]: fetched base config from "system"
Feb 13 19:25:56.120466 unknown[1097]: fetched user config from "aws"
Feb 13 19:25:56.122538 ignition[1097]: fetch: fetch complete
Feb 13 19:25:56.122546 ignition[1097]: fetch: fetch passed
Feb 13 19:25:56.128428 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:25:56.122688 ignition[1097]: Ignition finished successfully
Feb 13 19:25:56.138602 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:25:56.161960 ignition[1104]: Ignition 2.20.0
Feb 13 19:25:56.161976 ignition[1104]: Stage: kargs
Feb 13 19:25:56.162866 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:25:56.162880 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:25:56.162994 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:25:56.168063 ignition[1104]: PUT result: OK
Feb 13 19:25:56.174671 ignition[1104]: kargs: kargs passed
Feb 13 19:25:56.174921 ignition[1104]: Ignition finished successfully
Feb 13 19:25:56.179279 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:25:56.186506 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:25:56.206427 ignition[1110]: Ignition 2.20.0
Feb 13 19:25:56.206443 ignition[1110]: Stage: disks
Feb 13 19:25:56.207068 ignition[1110]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:25:56.207083 ignition[1110]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:25:56.207312 ignition[1110]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:25:56.209063 ignition[1110]: PUT result: OK
Feb 13 19:25:56.221767 ignition[1110]: disks: disks passed
Feb 13 19:25:56.222012 ignition[1110]: Ignition finished successfully
Feb 13 19:25:56.224959 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:25:56.228090 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:25:56.228193 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:25:56.233263 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:25:56.235869 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:25:56.237446 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:25:56.246382 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:25:56.301627 systemd-fsck[1118]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:25:56.305934 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:25:56.627361 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:25:56.738285 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 157595f2-1515-4117-a2d1-73fe2ed647fc r/w with ordered data mode. Quota mode: none.
Feb 13 19:25:56.739381 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:25:56.741758 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:25:56.760403 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:25:56.764379 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:25:56.767376 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:25:56.770235 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:25:56.770265 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:25:56.777279 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:25:56.783251 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1137)
Feb 13 19:25:56.785362 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:25:56.791284 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:25:56.791314 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:25:56.791327 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:25:56.805284 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:25:56.807618 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:25:56.857188 initrd-setup-root[1162]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:25:56.863194 initrd-setup-root[1169]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:25:56.869662 initrd-setup-root[1176]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:25:56.874896 initrd-setup-root[1183]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:25:56.987305 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:25:56.994331 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:25:57.001439 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:25:57.009245 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:25:57.038454 ignition[1251]: INFO : Ignition 2.20.0
Feb 13 19:25:57.039789 ignition[1251]: INFO : Stage: mount
Feb 13 19:25:57.039789 ignition[1251]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:25:57.039789 ignition[1251]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:25:57.039789 ignition[1251]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:25:57.044770 ignition[1251]: INFO : PUT result: OK
Feb 13 19:25:57.048358 ignition[1251]: INFO : mount: mount passed
Feb 13 19:25:57.049368 ignition[1251]: INFO : Ignition finished successfully
Feb 13 19:25:57.049853 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:25:57.063354 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:25:57.076192 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:25:57.352413 systemd-networkd[1087]: eth0: Gained IPv6LL
Feb 13 19:25:57.617130 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:25:57.622474 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:25:57.651244 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1263)
Feb 13 19:25:57.654587 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:25:57.654654 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:25:57.654678 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:25:57.660244 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:25:57.662731 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:25:57.716771 ignition[1279]: INFO : Ignition 2.20.0
Feb 13 19:25:57.720983 ignition[1279]: INFO : Stage: files
Feb 13 19:25:57.722970 ignition[1279]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:25:57.722970 ignition[1279]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:25:57.726674 ignition[1279]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:25:57.730820 ignition[1279]: INFO : PUT result: OK
Feb 13 19:25:57.734764 ignition[1279]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:25:57.736927 ignition[1279]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:25:57.736927 ignition[1279]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:25:57.745200 ignition[1279]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:25:57.747101 ignition[1279]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:25:57.749028 unknown[1279]: wrote ssh authorized keys file for user: core
Feb 13 19:25:57.750364 ignition[1279]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:25:57.754615 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:25:57.756866 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 19:25:57.863458 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:25:58.168824 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:25:58.171797 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:25:58.171797 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 19:25:58.617031 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:25:58.796114 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:25:58.800191 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:25:58.800191 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:25:58.800191 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:25:58.800191 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:25:58.800191 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:25:58.800191 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:25:58.800191 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:25:58.800191 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:25:58.800191 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:25:58.800191 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:25:58.800191 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:25:58.800191 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:25:58.800191 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:25:58.800191 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 19:25:59.197573 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:25:59.603834 ignition[1279]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:25:59.603834 ignition[1279]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:25:59.609933 ignition[1279]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:25:59.613310 ignition[1279]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:25:59.613310 ignition[1279]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:25:59.613310 ignition[1279]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:25:59.619285 ignition[1279]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:25:59.619285 ignition[1279]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:25:59.623429 ignition[1279]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:25:59.623429 ignition[1279]: INFO : files: files passed
Feb 13 19:25:59.623429 ignition[1279]: INFO : Ignition finished successfully
Feb 13 19:25:59.629497 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:25:59.636375 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:25:59.640557 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:25:59.642751 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:25:59.642866 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
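The op(3) through op(f) sequence above is Ignition executing the storage and systemd sections of its JSON config: fetch remote files with retries, write links and units, set presets, then record the result in /etc/.ignition-result.json. As a rough sketch of the config shape that drives such ops (Ignition spec 3.x, which Ignition 2.20.0 consumes; only the URLs and paths are copied from the log, the surrounding structure is schematic and not this node's actual config):

```python
import json

# Schematic Ignition (spec 3.x) fragment; illustrative only.
config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
            },
        ],
    },
    "systemd": {
        "units": [
            # Unit body elided; the log only shows the unit being written and enabled.
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n..."},
        ],
    },
}
print(json.dumps(config, indent=2))
```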
Feb 13 19:25:59.673927 initrd-setup-root-after-ignition[1309]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:25:59.673927 initrd-setup-root-after-ignition[1309]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:25:59.678258 initrd-setup-root-after-ignition[1313]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:25:59.682389 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:25:59.683033 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:25:59.689545 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:25:59.753311 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:25:59.753444 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:25:59.756745 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:25:59.760096 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:25:59.765418 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:25:59.772554 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:25:59.802045 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:25:59.809556 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:25:59.872002 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:25:59.878335 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:25:59.886672 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:25:59.890533 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:25:59.890713 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:25:59.900447 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:25:59.901690 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:25:59.903196 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:25:59.908865 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:25:59.912284 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:25:59.914876 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:25:59.916419 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:25:59.918951 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:25:59.922585 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:25:59.924662 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:25:59.925759 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:25:59.925936 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:25:59.930185 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:25:59.932599 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:25:59.934060 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:25:59.934290 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:25:59.952625 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:25:59.952772 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:25:59.955944 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:25:59.956353 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:25:59.961745 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:25:59.961983 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:25:59.971719 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:25:59.990382 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:25:59.991068 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:25:59.998506 ignition[1333]: INFO : Ignition 2.20.0
Feb 13 19:25:59.998506 ignition[1333]: INFO : Stage: umount
Feb 13 19:26:00.011395 ignition[1333]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:26:00.011395 ignition[1333]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:26:00.011395 ignition[1333]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:26:00.011395 ignition[1333]: INFO : PUT result: OK
Feb 13 19:26:00.011395 ignition[1333]: INFO : umount: umount passed
Feb 13 19:26:00.011395 ignition[1333]: INFO : Ignition finished successfully
Feb 13 19:26:00.001512 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:26:00.002775 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:26:00.003020 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:26:00.004680 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:26:00.004862 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:26:00.010190 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:26:00.011262 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:26:00.028808 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:26:00.028942 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:26:00.034831 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:26:00.035362 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:26:00.035410 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:26:00.036204 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:26:00.036283 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:26:00.036375 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:26:00.036406 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:26:00.036547 systemd[1]: Stopped target network.target - Network.
Feb 13 19:26:00.036737 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:26:00.036767 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:26:00.036925 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:26:00.037075 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:26:00.044341 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:26:00.045894 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:26:00.047406 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:26:00.050988 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:26:00.051045 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:26:00.053003 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:26:00.053048 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:26:00.055380 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:26:00.055525 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:26:00.057173 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:26:00.057236 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:26:00.058773 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:26:00.060741 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:26:00.065399 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:26:00.065493 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:26:00.082563 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 19:26:00.082955 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:26:00.083073 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:26:00.093503 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 19:26:00.096811 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:26:00.096884 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:26:00.109401 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:26:00.111966 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:26:00.112082 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:26:00.119098 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:26:00.119160 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:26:00.121394 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:26:00.121448 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:26:00.123343 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:26:00.123395 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:26:00.130383 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:26:00.137159 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 19:26:00.137248 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:26:00.147959 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:26:00.148098 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:26:00.152054 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:26:00.152141 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:26:00.154317 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:26:00.154389 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:26:00.158333 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:26:00.161403 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:26:00.164039 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:26:00.165212 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:26:00.172715 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:26:00.172859 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:26:00.177093 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:26:00.177166 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:26:00.196494 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:26:00.198708 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:26:00.201420 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:26:00.209415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:26:00.209508 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:26:00.218921 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 19:26:00.219017 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:26:00.219501 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:26:00.220161 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:26:00.247194 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:26:00.247348 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:26:00.257088 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:26:00.260076 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:26:00.260363 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:26:00.270466 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:26:00.286925 systemd[1]: Switching root.
Feb 13 19:26:00.327337 systemd-journald[179]: Journal stopped
Feb 13 19:26:03.206516 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:26:03.206699 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:26:03.206731 kernel: SELinux: policy capability open_perms=1
Feb 13 19:26:03.206752 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:26:03.206779 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:26:03.206803 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:26:03.206821 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:26:03.206840 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:26:03.206858 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:26:03.206878 kernel: audit: type=1403 audit(1739474760.676:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:26:03.206909 systemd[1]: Successfully loaded SELinux policy in 83.969ms.
Feb 13 19:26:03.206940 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 39.984ms.
Feb 13 19:26:03.206963 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:26:03.206984 systemd[1]: Detected virtualization amazon.
Feb 13 19:26:03.207009 systemd[1]: Detected architecture x86-64.
Feb 13 19:26:03.207030 systemd[1]: Detected first boot.
Feb 13 19:26:03.207050 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:26:03.207071 zram_generator::config[1379]: No configuration found.
Feb 13 19:26:03.207092 kernel: Guest personality initialized and is inactive
Feb 13 19:26:03.207112 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Feb 13 19:26:03.207130 kernel: Initialized host personality
Feb 13 19:26:03.207152 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 19:26:03.207173 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:26:03.207209 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 19:26:03.207250 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:26:03.207271 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:26:03.207293 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:26:03.207314 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:26:03.207337 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:26:03.207358 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:26:03.207379 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:26:03.207406 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:26:03.207428 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:26:03.207451 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:26:03.207478 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:26:03.207501 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:26:03.207525 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:26:03.207547 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:26:03.207569 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:26:03.207592 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:26:03.207620 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:26:03.207642 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:26:03.207665 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:26:03.207687 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:26:03.207711 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:26:03.207733 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:26:03.207756 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:26:03.207781 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:26:03.207803 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:26:03.207825 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:26:03.207846 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:26:03.207868 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:26:03.207889 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:26:03.207910 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 19:26:03.207933 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:26:03.207955 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:26:03.207978 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:26:03.208002 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:26:03.208030 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:26:03.208053 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:26:03.208076 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:26:03.208096 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:26:03.208118 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:26:03.208142 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:26:03.208164 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:26:03.222311 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:26:03.222450 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:26:03.222487 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:26:03.222513 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:26:03.222538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:26:03.222563 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:26:03.222589 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:26:03.222612 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:26:03.223930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:26:03.223962 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:26:03.223983 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:26:03.224003 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:26:03.224022 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:26:03.224041 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:26:03.224059 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:26:03.224078 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:26:03.224099 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:26:03.224120 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:26:03.224138 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:26:03.224157 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:26:03.224175 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:26:03.224194 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 19:26:03.224212 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:26:03.224267 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:26:03.224286 systemd[1]: Stopped verity-setup.service.
Feb 13 19:26:03.224305 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:26:03.224327 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:26:03.224346 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:26:03.224364 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:26:03.224383 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:26:03.224405 kernel: fuse: init (API version 7.39)
Feb 13 19:26:03.224424 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:26:03.224442 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:26:03.224466 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:26:03.224484 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:26:03.224504 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:26:03.224526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:26:03.224545 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:26:03.224563 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:26:03.224581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:26:03.224599 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:26:03.224617 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:26:03.224637 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:26:03.224656 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:26:03.224675 kernel: loop: module loaded
Feb 13 19:26:03.224733 systemd-journald[1460]: Collecting audit messages is disabled.
Feb 13 19:26:03.224766 kernel: ACPI: bus type drm_connector registered
Feb 13 19:26:03.224784 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:26:03.224803 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:26:03.224821 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:26:03.224839 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:26:03.224857 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:26:03.224878 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:26:03.224897 systemd-journald[1460]: Journal started
Feb 13 19:26:03.224932 systemd-journald[1460]: Runtime Journal (/run/log/journal/ec2f190b33879f7f57dc2cde53c1648d) is 4.8M, max 38.5M, 33.7M free.
Feb 13 19:26:02.384646 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:26:03.229411 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:26:02.396671 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:26:02.397162 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:26:03.229020 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:26:03.232532 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:26:03.238503 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 19:26:03.245208 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:26:03.250114 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:26:03.252585 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:26:03.281448 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:26:03.284114 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:26:03.284175 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:26:03.289212 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 19:26:03.301621 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:26:03.306756 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:26:03.308624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:26:03.319712 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:26:03.329564 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:26:03.331915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:26:03.343429 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:26:03.344782 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:26:03.351461 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:26:03.360838 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:26:03.365173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:26:03.368610 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:26:03.405260 kernel: loop0: detected capacity change from 0 to 138176
Feb 13 19:26:03.435657 systemd-journald[1460]: Time spent on flushing to /var/log/journal/ec2f190b33879f7f57dc2cde53c1648d is 201.041ms for 978 entries.
Feb 13 19:26:03.435657 systemd-journald[1460]: System Journal (/var/log/journal/ec2f190b33879f7f57dc2cde53c1648d) is 8M, max 195.6M, 187.6M free.
Feb 13 19:26:03.677718 systemd-journald[1460]: Received client request to flush runtime journal.
Feb 13 19:26:03.677808 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:26:03.677845 kernel: loop1: detected capacity change from 0 to 62832
Feb 13 19:26:03.441372 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:26:03.448000 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:26:03.466847 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 19:26:03.491696 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:26:03.500209 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:26:03.554887 udevadm[1522]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:26:03.585337 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:26:03.587008 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 19:26:03.653102 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:26:03.671902 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:26:03.688407 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:26:03.736288 kernel: loop2: detected capacity change from 0 to 205544
Feb 13 19:26:03.734896 systemd-tmpfiles[1526]: ACLs are not supported, ignoring.
Feb 13 19:26:03.734926 systemd-tmpfiles[1526]: ACLs are not supported, ignoring.
Feb 13 19:26:03.747635 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:26:03.900350 kernel: loop3: detected capacity change from 0 to 147912
Feb 13 19:26:03.986258 kernel: loop4: detected capacity change from 0 to 138176
Feb 13 19:26:04.045963 kernel: loop5: detected capacity change from 0 to 62832
Feb 13 19:26:04.078272 kernel: loop6: detected capacity change from 0 to 205544
Feb 13 19:26:04.136809 kernel: loop7: detected capacity change from 0 to 147912
Feb 13 19:26:04.198071 (sd-merge)[1536]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:26:04.210588 (sd-merge)[1536]: Merged extensions into '/usr'.
Feb 13 19:26:04.229273 systemd[1]: Reload requested from client PID 1514 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:26:04.229294 systemd[1]: Reloading...
Feb 13 19:26:04.392901 zram_generator::config[1561]: No configuration found.
Feb 13 19:26:04.720379 ldconfig[1509]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:26:04.771422 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:26:04.969011 systemd[1]: Reloading finished in 738 ms.
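The loop0 through loop7 capacity changes and the (sd-merge) lines above are systemd-sysext attaching each extension image to a loop device and merging it into /usr; the kubernetes image is discovered through the /etc/extensions/kubernetes.raw link Ignition wrote earlier. A small sketch of enumerating those images in the usual sysext search directories (illustrative only, not a tool from this system):

```python
from pathlib import Path

# Walk the directories systemd-sysext commonly searches for extension
# images or symlinks to them (assumed search set, illustrative).
for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
    root = Path(d)
    if not root.is_dir():
        continue
    for entry in sorted(root.iterdir()):
        target = entry.resolve() if entry.is_symlink() else entry
        print(f"{entry} -> {target}")
```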
Feb 13 19:26:04.997614 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:26:05.001590 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:26:05.019472 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:26:05.033816 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:26:05.079175 systemd[1]: Reload requested from client PID 1613 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:26:05.079200 systemd[1]: Reloading...
Feb 13 19:26:05.107916 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:26:05.108531 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:26:05.110367 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:26:05.110756 systemd-tmpfiles[1614]: ACLs are not supported, ignoring.
Feb 13 19:26:05.110919 systemd-tmpfiles[1614]: ACLs are not supported, ignoring.
Feb 13 19:26:05.127747 systemd-tmpfiles[1614]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:26:05.127763 systemd-tmpfiles[1614]: Skipping /boot
Feb 13 19:26:05.213244 zram_generator::config[1640]: No configuration found.
Feb 13 19:26:05.231677 systemd-tmpfiles[1614]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:26:05.231695 systemd-tmpfiles[1614]: Skipping /boot
Feb 13 19:26:05.394931 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:26:05.481390 systemd[1]: Reloading finished in 401 ms.
Feb 13 19:26:05.495263 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:26:05.508401 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:26:05.526731 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:26:05.537211 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:26:05.566665 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:26:05.581859 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:26:05.595361 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:26:05.605587 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:26:05.616768 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:26:05.618492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:26:05.632245 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:26:05.642451 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:26:05.653743 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:26:05.655647 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:26:05.655930 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:26:05.663087 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:26:05.664456 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:26:05.669750 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:26:05.673019 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:26:05.675500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:26:05.676110 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:26:05.682739 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:26:05.707097 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:26:05.707548 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:26:05.712715 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:26:05.727290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:26:05.729626 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:26:05.730081 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:26:05.732377 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:26:05.746672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:26:05.746985 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:26:05.751416 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:26:05.778620 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:26:05.782266 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:26:05.782633 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:26:05.785040 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:26:05.785379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:26:05.789141 systemd-udevd[1704]: Using default interface naming scheme 'v255'.
Feb 13 19:26:05.799343 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:26:05.799679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:26:05.806528 augenrules[1732]: No rules
Feb 13 19:26:05.808574 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:26:05.819342 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:26:05.820699 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:26:05.820754 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:26:05.820825 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:26:05.820884 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:26:05.830368 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:26:05.831610 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:26:05.834470 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:26:05.835847 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:26:05.836230 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:26:05.840714 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:26:05.840899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:26:05.843013 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:26:05.843306 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:26:05.847960 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:26:05.875913 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:26:05.879508 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:26:05.894449 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:26:05.896565 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:26:05.906445 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:26:05.908740 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:26:06.101849 systemd-networkd[1749]: lo: Link UP
Feb 13 19:26:06.101861 systemd-networkd[1749]: lo: Gained carrier
Feb 13 19:26:06.113472 (udev-worker)[1766]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:26:06.115544 systemd-resolved[1703]: Positive Trust Anchors:
Feb 13 19:26:06.115984 systemd-resolved[1703]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:26:06.116048 systemd-resolved[1703]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:26:06.124602 systemd-networkd[1749]: Enumeration completed
Feb 13 19:26:06.124738 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:26:06.129010 systemd-resolved[1703]: Defaulting to hostname 'linux'.
Feb 13 19:26:06.135933 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Feb 13 19:26:06.146682 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:26:06.148300 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:26:06.150575 systemd[1]: Reached target network.target - Network.
Feb 13 19:26:06.151628 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:26:06.188656 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Feb 13 19:26:06.203477 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:26:06.235345 systemd-networkd[1749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:26:06.235357 systemd-networkd[1749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:26:06.240266 systemd-networkd[1749]: eth0: Link UP
Feb 13 19:26:06.240659 systemd-networkd[1749]: eth0: Gained carrier
Feb 13 19:26:06.240774 systemd-networkd[1749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:26:06.250340 systemd-networkd[1749]: eth0: DHCPv4 address 172.31.19.191/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:26:06.272269 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1758)
Feb 13 19:26:06.303521 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 13 19:26:06.310341 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 19:26:06.318245 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:26:06.323279 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Feb 13 19:26:06.342245 kernel: ACPI: button: Sleep Button [SLPF]
Feb 13 19:26:06.380263 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Feb 13 19:26:06.500246 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:26:06.515599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:26:06.527102 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:26:06.543478 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
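A quick sanity check on the DHCPv4 lease above: 172.31.19.191/20 lies in the 172.31.16.0/20 network, so the gateway 172.31.16.1 that handed out the lease is on-link. Python's stdlib ipaddress module confirms the arithmetic:

```python
import ipaddress

# Address and prefix taken from the lease line above.
iface = ipaddress.ip_interface("172.31.19.191/20")
print(iface.network)                                         # 172.31.16.0/20
print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True: gateway is on-link
```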
Feb 13 19:26:06.545587 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:26:06.550859 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:26:06.565257 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:26:06.582392 lvm[1871]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:26:06.612536 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:26:06.614823 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:26:06.619641 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:26:06.640670 lvm[1875]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:26:06.684936 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:26:06.820184 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:26:06.824519 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:26:06.827347 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:26:06.829180 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:26:06.831177 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:26:06.833986 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:26:06.837259 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:26:06.849328 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:26:06.849385 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:26:06.854676 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:26:06.864404 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:26:06.876418 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:26:06.886759 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Feb 13 19:26:06.888475 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Feb 13 19:26:06.893073 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Feb 13 19:26:06.919126 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:26:06.924019 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Feb 13 19:26:06.934185 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:26:06.939721 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:26:06.945327 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:26:06.948716 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:26:06.948866 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:26:06.956430 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:26:06.971446 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 19:26:06.978492 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:26:07.021815 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:26:07.023796 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:26:07.026443 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:26:07.035383 jq[1885]: false
Feb 13 19:26:07.040449 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 19:26:07.045375 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 19:26:07.056343 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 19:26:07.065454 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:26:07.069284 dbus-daemon[1884]: [system] SELinux support is enabled
Feb 13 19:26:07.072775 dbus-daemon[1884]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1749 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 19:26:07.079420 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:26:07.088437 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:26:07.093562 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:26:07.095549 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:26:07.105693 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:26:07.110767 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:26:07.114309 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:26:07.131835 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:26:07.132588 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:26:07.142797 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:26:07.144312 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:26:07.172053 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:26:07.172201 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:26:07.179579 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:26:07.179823 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:26:07.194813 dbus-daemon[1884]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 19:26:07.227658 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 19:26:07.240259 update_engine[1897]: I20250213 19:26:07.233922 1897 main.cc:92] Flatcar Update Engine starting Feb 13 19:26:07.236911 ntpd[1890]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:04:11 UTC 2025 (1): Starting Feb 13 19:26:07.236953 ntpd[1890]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:26:07.236968 ntpd[1890]: ---------------------------------------------------- Feb 13 19:26:07.236980 ntpd[1890]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:26:07.236992 ntpd[1890]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:26:07.237002 ntpd[1890]: corporation. Support and training for ntp-4 are Feb 13 19:26:07.237014 ntpd[1890]: available at https://www.nwtime.org/support Feb 13 19:26:07.237027 ntpd[1890]: ---------------------------------------------------- Feb 13 19:26:07.256767 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:26:07.266172 ntpd[1890]: proto: precision = 0.069 usec (-24) Feb 13 19:26:07.269543 update_engine[1897]: I20250213 19:26:07.266635 1897 update_check_scheduler.cc:74] Next update check in 8m0s Feb 13 19:26:07.267314 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:26:07.272586 ntpd[1890]: basedate set to 2025-02-01 Feb 13 19:26:07.274436 jq[1898]: true Feb 13 19:26:07.272617 ntpd[1890]: gps base set to 2025-02-02 (week 2352) Feb 13 19:26:07.284144 extend-filesystems[1886]: Found loop4 Feb 13 19:26:07.284144 extend-filesystems[1886]: Found loop5 Feb 13 19:26:07.284144 extend-filesystems[1886]: Found loop6 Feb 13 19:26:07.284144 extend-filesystems[1886]: Found loop7 Feb 13 19:26:07.284144 extend-filesystems[1886]: Found nvme0n1 Feb 13 19:26:07.284144 extend-filesystems[1886]: Found nvme0n1p1 Feb 13 19:26:07.284144 extend-filesystems[1886]: Found nvme0n1p2 Feb 13 19:26:07.284144 extend-filesystems[1886]: Found nvme0n1p3 Feb 13 19:26:07.284144 extend-filesystems[1886]: Found usr Feb 13 19:26:07.284144 extend-filesystems[1886]: Found nvme0n1p4 Feb 13 19:26:07.284144 extend-filesystems[1886]: Found nvme0n1p6 Feb 13 19:26:07.284144 extend-filesystems[1886]: Found nvme0n1p7 Feb 13 19:26:07.284144 extend-filesystems[1886]: Found nvme0n1p9 Feb 13 19:26:07.284144 extend-filesystems[1886]: Checking size of /dev/nvme0n1p9 Feb 13 19:26:07.325631 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:26:07.337105 tar[1900]: linux-amd64/helm Feb 13 19:26:07.294790 ntpd[1890]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:26:07.325883 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:26:07.294946 ntpd[1890]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:26:07.295148 ntpd[1890]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:26:07.295189 ntpd[1890]: Listen normally on 3 eth0 172.31.19.191:123 Feb 13 19:26:07.295247 ntpd[1890]: Listen normally on 4 lo [::1]:123 Feb 13 19:26:07.295305 ntpd[1890]: bind(21) AF_INET6 fe80::4d1:8dff:fe72:4147%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:26:07.295326 ntpd[1890]: unable to create socket on eth0 (5) for fe80::4d1:8dff:fe72:4147%2#123 Feb 13 19:26:07.295342 ntpd[1890]: failed to init interface for address fe80::4d1:8dff:fe72:4147%2 Feb 13 19:26:07.295373 ntpd[1890]: Listening on routing socket on fd #21 for interface updates Feb 13 19:26:07.332003 ntpd[1890]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:26:07.332039 ntpd[1890]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:26:07.351846 (ntainerd)[1918]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:26:07.352038 systemd-logind[1896]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:26:07.352063 systemd-logind[1896]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 19:26:07.352092 systemd-logind[1896]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:26:07.374450 jq[1929]: true Feb 13 19:26:07.357188 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:26:07.389194 extend-filesystems[1886]: Resized partition /dev/nvme0n1p9 Feb 13 19:26:07.365398 systemd-logind[1896]: New seat seat0. Feb 13 19:26:07.368047 systemd[1]: Started systemd-logind.service - User Login Management.
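The ntpd bind failure on fe80::4d1:8dff:fe72:4147%2 is a startup-ordering effect: ntpd came up before eth0's IPv6 link-local address finished duplicate address detection, and binding an address the kernel has not yet assigned fails with EADDRNOTAVAIL ("Cannot assign requested address"). ntpd watches the routing socket and binds the address once it appears (at 19:26:10, further down). A sketch of the same failure mode, using the interface name and address from this log; an unprivileged port stands in for 123, which needs CAP_NET_BIND_SERVICE:

    # Reproducing ntpd's "Cannot assign requested address" on a
    # link-local IPv6 address that is not (yet) assigned.
    import errno
    import socket

    addr, port, ifname = "fe80::4d1:8dff:fe72:4147", 12345, "eth0"

    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        scope = socket.if_nametoindex(ifname)  # the "%2" in the log
        s.bind((addr, port, 0, scope))
        print("bound: address is assigned and DAD has completed")
    except OSError as e:
        if e.errno == errno.EADDRNOTAVAIL:
            print("EADDRNOTAVAIL: address not yet on the interface")
        else:
            raise
    finally:
        s.close()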
Feb 13 19:26:07.397095 extend-filesystems[1942]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:26:07.409638 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:26:07.432456 coreos-metadata[1883]: Feb 13 19:26:07.432 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:26:07.435411 coreos-metadata[1883]: Feb 13 19:26:07.434 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:26:07.435411 coreos-metadata[1883]: Feb 13 19:26:07.435 INFO Fetch successful Feb 13 19:26:07.435411 coreos-metadata[1883]: Feb 13 19:26:07.435 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:26:07.436161 coreos-metadata[1883]: Feb 13 19:26:07.436 INFO Fetch successful Feb 13 19:26:07.436161 coreos-metadata[1883]: Feb 13 19:26:07.436 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:26:07.436829 coreos-metadata[1883]: Feb 13 19:26:07.436 INFO Fetch successful Feb 13 19:26:07.436829 coreos-metadata[1883]: Feb 13 19:26:07.436 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:26:07.438349 coreos-metadata[1883]: Feb 13 19:26:07.438 INFO Fetch successful Feb 13 19:26:07.438349 coreos-metadata[1883]: Feb 13 19:26:07.438 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:26:07.438944 coreos-metadata[1883]: Feb 13 19:26:07.438 INFO Fetch failed with 404: resource not found Feb 13 19:26:07.438944 coreos-metadata[1883]: Feb 13 19:26:07.438 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:26:07.439538 coreos-metadata[1883]: Feb 13 19:26:07.439 INFO Fetch successful Feb 13 19:26:07.439538 coreos-metadata[1883]: Feb 13 19:26:07.439 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:26:07.440132 coreos-metadata[1883]: Feb 13 19:26:07.440 INFO Fetch successful Feb 13 19:26:07.440132 coreos-metadata[1883]: Feb 13 19:26:07.440 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:26:07.441452 coreos-metadata[1883]: Feb 13 19:26:07.441 INFO Fetch successful Feb 13 19:26:07.441452 coreos-metadata[1883]: Feb 13 19:26:07.441 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:26:07.442123 coreos-metadata[1883]: Feb 13 19:26:07.442 INFO Fetch successful Feb 13 19:26:07.442123 coreos-metadata[1883]: Feb 13 19:26:07.442 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:26:07.448239 coreos-metadata[1883]: Feb 13 19:26:07.444 INFO Fetch successful Feb 13 19:26:07.522191 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:26:07.524691 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:26:07.557104 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1763) Feb 13 19:26:07.575420 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:26:07.600096 extend-filesystems[1942]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:26:07.600096 extend-filesystems[1942]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:26:07.600096 extend-filesystems[1942]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
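The coreos-metadata fetches above are the EC2 IMDSv2 flow: PUT a session token at /latest/api/token, then GET metadata paths (here under the 2021-01-03 API version) with that token attached; the 404 on meta-data/ipv6 simply means the instance has no IPv6 address at that point. A stdlib sketch of the same exchange (only works on an EC2 instance, since 169.254.169.254 is link-local; the token TTL is illustrative):

    # IMDSv2: fetch a session token, then fetch metadata with it.
    import urllib.request

    IMDS = "http://169.254.169.254"

    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    def md(path):
        # Same API version the agent uses in the log above. Missing
        # paths (like ipv6 here) raise urllib.error.HTTPError (404).
        r = urllib.request.Request(
            f"{IMDS}/2021-01-03/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        return urllib.request.urlopen(r, timeout=2).read().decode()

    print(md("instance-id"), md("placement/availability-zone"))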
Feb 13 19:26:07.611044 extend-filesystems[1886]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:26:07.614661 bash[1955]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:26:07.602598 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:26:07.602887 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:26:07.617076 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:26:07.658513 systemd[1]: Starting sshkeys.service... Feb 13 19:26:07.683423 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:26:07.692879 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:26:07.901074 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:26:07.905033 dbus-daemon[1884]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:26:07.907590 dbus-daemon[1884]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1911 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:26:07.922909 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:26:07.996051 polkitd[2016]: Started polkitd version 121 Feb 13 19:26:08.030243 coreos-metadata[1977]: Feb 13 19:26:08.029 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:26:08.035105 coreos-metadata[1977]: Feb 13 19:26:08.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:26:08.036112 coreos-metadata[1977]: Feb 13 19:26:08.036 INFO Fetch successful Feb 13 19:26:08.036240 coreos-metadata[1977]: Feb 13 19:26:08.036 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:26:08.036695 polkitd[2016]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:26:08.036792 polkitd[2016]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:26:08.038155 coreos-metadata[1977]: Feb 13 19:26:08.038 INFO Fetch successful Feb 13 19:26:08.039688 polkitd[2016]: Finished loading, compiling and executing 2 rules Feb 13 19:26:08.039849 unknown[1977]: wrote ssh authorized keys file for user: core Feb 13 19:26:08.041493 systemd-networkd[1749]: eth0: Gained IPv6LL Feb 13 19:26:08.049084 dbus-daemon[1884]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:26:08.049249 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:26:08.052360 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:26:08.059315 polkitd[2016]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:26:08.062692 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:26:08.074239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:26:08.088038 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:26:08.097457 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:26:08.115276 locksmithd[1921]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:26:08.162686 systemd-hostnamed[1911]: Hostname set to (transient) Feb 13 19:26:08.166595 systemd-resolved[1703]: System hostname changed to 'ip-172-31-19-191'. 
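The resize numbers above work out as follows: ext4 blocks here are 4 KiB, so the root filesystem grew from 553472 blocks (~2.1 GiB) to 1489915 blocks (~5.7 GiB) when extend-filesystems expanded it to fill partition 9. A quick check:

    # Online-resize arithmetic from the log above (4 KiB ext4 blocks).
    BLOCK = 4096
    before, after = 553_472, 1_489_915

    gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"before: {gib(before):.2f} GiB")  # ~2.11 GiB
    print(f"after:  {gib(after):.2f} GiB")   # ~5.68 GiB
    print(f"grown:  {gib(after - before):.2f} GiB")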
Feb 13 19:26:08.175189 update-ssh-keys[2049]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:26:08.184174 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:26:08.191394 systemd[1]: Finished sshkeys.service. Feb 13 19:26:08.244147 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:26:08.484666 amazon-ssm-agent[2044]: Initializing new seelog logger Feb 13 19:26:08.491464 amazon-ssm-agent[2044]: New Seelog Logger Creation Complete Feb 13 19:26:08.491586 amazon-ssm-agent[2044]: 2025/02/13 19:26:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:26:08.491586 amazon-ssm-agent[2044]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:26:08.492257 amazon-ssm-agent[2044]: 2025/02/13 19:26:08 processing appconfig overrides Feb 13 19:26:08.498587 amazon-ssm-agent[2044]: 2025/02/13 19:26:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:26:08.498587 amazon-ssm-agent[2044]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:26:08.498792 amazon-ssm-agent[2044]: 2025/02/13 19:26:08 processing appconfig overrides Feb 13 19:26:08.499135 amazon-ssm-agent[2044]: 2025/02/13 19:26:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:26:08.499135 amazon-ssm-agent[2044]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:26:08.503446 amazon-ssm-agent[2044]: 2025/02/13 19:26:08 processing appconfig overrides Feb 13 19:26:08.503446 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO Proxy environment variables: Feb 13 19:26:08.511213 amazon-ssm-agent[2044]: 2025/02/13 19:26:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:26:08.511213 amazon-ssm-agent[2044]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:26:08.511387 amazon-ssm-agent[2044]: 2025/02/13 19:26:08 processing appconfig overrides Feb 13 19:26:08.559282 containerd[1918]: time="2025-02-13T19:26:08.559171570Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:26:08.602097 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO https_proxy: Feb 13 19:26:08.699586 containerd[1918]: time="2025-02-13T19:26:08.699277557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:26:08.703665 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO http_proxy: Feb 13 19:26:08.706237 containerd[1918]: time="2025-02-13T19:26:08.705166804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:26:08.706237 containerd[1918]: time="2025-02-13T19:26:08.705226196Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:26:08.708243 containerd[1918]: time="2025-02-13T19:26:08.706409735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:26:08.708243 containerd[1918]: time="2025-02-13T19:26:08.707042673Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 19:26:08.708243 containerd[1918]: time="2025-02-13T19:26:08.707073111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:26:08.708243 containerd[1918]: time="2025-02-13T19:26:08.707154736Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:26:08.708243 containerd[1918]: time="2025-02-13T19:26:08.707173480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:26:08.708243 containerd[1918]: time="2025-02-13T19:26:08.707473260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:26:08.708243 containerd[1918]: time="2025-02-13T19:26:08.707493786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:26:08.708243 containerd[1918]: time="2025-02-13T19:26:08.707510573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:26:08.708243 containerd[1918]: time="2025-02-13T19:26:08.707524498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:26:08.708243 containerd[1918]: time="2025-02-13T19:26:08.707622203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:26:08.708243 containerd[1918]: time="2025-02-13T19:26:08.707922857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:26:08.709469 containerd[1918]: time="2025-02-13T19:26:08.709429189Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:26:08.709603 containerd[1918]: time="2025-02-13T19:26:08.709583002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:26:08.710112 containerd[1918]: time="2025-02-13T19:26:08.710089784Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:26:08.711094 containerd[1918]: time="2025-02-13T19:26:08.711072509Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:26:08.721051 containerd[1918]: time="2025-02-13T19:26:08.720620439Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:26:08.721051 containerd[1918]: time="2025-02-13T19:26:08.720704187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:26:08.721051 containerd[1918]: time="2025-02-13T19:26:08.720726712Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:26:08.721051 containerd[1918]: time="2025-02-13T19:26:08.720750989Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 19:26:08.721051 containerd[1918]: time="2025-02-13T19:26:08.720773365Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:26:08.721051 containerd[1918]: time="2025-02-13T19:26:08.721037097Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:26:08.722552 containerd[1918]: time="2025-02-13T19:26:08.721386062Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:26:08.722552 containerd[1918]: time="2025-02-13T19:26:08.721526672Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:26:08.722552 containerd[1918]: time="2025-02-13T19:26:08.721550002Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:26:08.722552 containerd[1918]: time="2025-02-13T19:26:08.721572883Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:26:08.722552 containerd[1918]: time="2025-02-13T19:26:08.721595603Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:26:08.722552 containerd[1918]: time="2025-02-13T19:26:08.721615743Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:26:08.722552 containerd[1918]: time="2025-02-13T19:26:08.721634764Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:26:08.722552 containerd[1918]: time="2025-02-13T19:26:08.721654156Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:26:08.722552 containerd[1918]: time="2025-02-13T19:26:08.721674119Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:26:08.722552 containerd[1918]: time="2025-02-13T19:26:08.721758890Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:26:08.722552 containerd[1918]: time="2025-02-13T19:26:08.721780201Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:26:08.722552 containerd[1918]: time="2025-02-13T19:26:08.721795871Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:26:08.722552 containerd[1918]: time="2025-02-13T19:26:08.721825490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.722552 containerd[1918]: time="2025-02-13T19:26:08.721845757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.721875800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.721896567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.721915726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.722005357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.722022942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.722066311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.722086317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.722107553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.722172088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.722193492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.722227631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.722437524Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.722508186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.722532398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.725999 containerd[1918]: time="2025-02-13T19:26:08.722548215Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:26:08.730707 containerd[1918]: time="2025-02-13T19:26:08.722619945Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:26:08.730707 containerd[1918]: time="2025-02-13T19:26:08.722647568Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:26:08.730707 containerd[1918]: time="2025-02-13T19:26:08.722716239Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:26:08.730707 containerd[1918]: time="2025-02-13T19:26:08.722740463Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:26:08.730707 containerd[1918]: time="2025-02-13T19:26:08.722755700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.730707 containerd[1918]: time="2025-02-13T19:26:08.722773428Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:26:08.730707 containerd[1918]: time="2025-02-13T19:26:08.722834593Z" level=info msg="NRI interface is disabled by configuration." 
Feb 13 19:26:08.730707 containerd[1918]: time="2025-02-13T19:26:08.722855599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:26:08.734995 containerd[1918]: time="2025-02-13T19:26:08.734139405Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:26:08.734995 containerd[1918]: time="2025-02-13T19:26:08.734246027Z" level=info msg="Connect containerd service" Feb 13 19:26:08.734995 containerd[1918]: time="2025-02-13T19:26:08.734298795Z" level=info msg="using legacy CRI server" Feb 13 19:26:08.734995 containerd[1918]: time="2025-02-13T19:26:08.734308820Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:26:08.734995 containerd[1918]: time="2025-02-13T19:26:08.734468780Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:26:08.740419 containerd[1918]: time="2025-02-13T19:26:08.738729227Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:26:08.740419 containerd[1918]: time="2025-02-13T19:26:08.739382429Z" level=info msg="Start subscribing containerd event" Feb 13 19:26:08.740419 containerd[1918]: time="2025-02-13T19:26:08.739552160Z" level=info msg="Start recovering state" Feb 13 19:26:08.740419 containerd[1918]: time="2025-02-13T19:26:08.739635712Z" level=info msg="Start event monitor" Feb 13 19:26:08.740419 containerd[1918]: time="2025-02-13T19:26:08.739660227Z" level=info msg="Start snapshots syncer" Feb 13 19:26:08.740419 containerd[1918]: time="2025-02-13T19:26:08.739673055Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:26:08.740419 containerd[1918]: time="2025-02-13T19:26:08.739684193Z" level=info msg="Start streaming server" Feb 13 19:26:08.745245 containerd[1918]: time="2025-02-13T19:26:08.741734292Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:26:08.745245 containerd[1918]: time="2025-02-13T19:26:08.742142125Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:26:08.745245 containerd[1918]: time="2025-02-13T19:26:08.743021234Z" level=info msg="containerd successfully booted in 0.185656s" Feb 13 19:26:08.742371 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:26:08.816010 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO no_proxy: Feb 13 19:26:08.921013 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:26:09.016713 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:26:09.016861 sshd_keygen[1919]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:26:09.105621 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:26:09.117776 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO Agent will take identity from EC2 Feb 13 19:26:09.117671 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:26:09.155889 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:26:09.157009 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:26:09.168675 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:26:09.215036 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:26:09.218763 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:26:09.227479 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:26:09.242658 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:26:09.245866 systemd[1]: Reached target getty.target - Login Prompts. 
Feb 13 19:26:09.315321 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:26:09.371353 tar[1900]: linux-amd64/LICENSE Feb 13 19:26:09.372310 tar[1900]: linux-amd64/README.md Feb 13 19:26:09.383251 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:26:09.383251 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:26:09.383251 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Feb 13 19:26:09.383251 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:26:09.383251 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:26:09.383251 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO [Registrar] Starting registrar module Feb 13 19:26:09.383251 amazon-ssm-agent[2044]: 2025-02-13 19:26:08 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:26:09.383251 amazon-ssm-agent[2044]: 2025-02-13 19:26:09 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:26:09.383251 amazon-ssm-agent[2044]: 2025-02-13 19:26:09 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:26:09.383251 amazon-ssm-agent[2044]: 2025-02-13 19:26:09 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:26:09.383251 amazon-ssm-agent[2044]: 2025-02-13 19:26:09 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:26:09.392062 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:26:09.397815 systemd[1]: Started sshd@0-172.31.19.191:22-139.178.68.195:59086.service - OpenSSH per-connection server daemon (139.178.68.195:59086). Feb 13 19:26:09.401739 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:26:09.415386 amazon-ssm-agent[2044]: 2025-02-13 19:26:09 INFO [CredentialRefresher] Next credential rotation will be in 31.74166041346667 minutes Feb 13 19:26:09.622352 sshd[2127]: Accepted publickey for core from 139.178.68.195 port 59086 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:26:09.625349 sshd-session[2127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:09.650636 systemd-logind[1896]: New session 1 of user core. Feb 13 19:26:09.653516 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:26:09.662773 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:26:09.710696 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:26:09.727860 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:26:09.748881 (systemd)[2132]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:26:09.754132 systemd-logind[1896]: New session c1 of user core. Feb 13 19:26:10.028709 systemd[2132]: Queued start job for default target default.target. Feb 13 19:26:10.034593 systemd[2132]: Created slice app.slice - User Application Slice. Feb 13 19:26:10.034637 systemd[2132]: Reached target paths.target - Paths. Feb 13 19:26:10.034697 systemd[2132]: Reached target timers.target - Timers. Feb 13 19:26:10.049387 systemd[2132]: Starting dbus.socket - D-Bus User Message Bus Socket... 
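The "SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA" in the sshd lines is the OpenSSH key fingerprint: base64 of the SHA-256 digest of the raw public-key blob, with the trailing "=" padding stripped. A sketch of the formula; the blob here is built from dummy key bytes purely to demonstrate, while a real key's blob is the base64-decoded second field of its public-key line:

    # OpenSSH SHA256 fingerprint = b64(sha256(key blob)), no padding.
    import base64
    import hashlib
    import struct

    def ssh_string(b: bytes) -> bytes:
        return struct.pack(">I", len(b)) + b

    # Syntactically valid ed25519 blob with 32 dummy key bytes.
    blob = ssh_string(b"ssh-ed25519") + ssh_string(bytes(32))

    # For a real key instead:
    # blob = base64.b64decode(open("key.pub").read().split()[1])
    digest = hashlib.sha256(blob).digest()
    print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))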
Feb 13 19:26:10.064834 systemd[2132]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:26:10.064994 systemd[2132]: Reached target sockets.target - Sockets. Feb 13 19:26:10.065061 systemd[2132]: Reached target basic.target - Basic System. Feb 13 19:26:10.065112 systemd[2132]: Reached target default.target - Main User Target. Feb 13 19:26:10.065149 systemd[2132]: Startup finished in 293ms. Feb 13 19:26:10.065631 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:26:10.073577 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:26:10.237463 ntpd[1890]: Listen normally on 6 eth0 [fe80::4d1:8dff:fe72:4147%2]:123 Feb 13 19:26:10.241519 systemd[1]: Started sshd@1-172.31.19.191:22-139.178.68.195:59090.service - OpenSSH per-connection server daemon (139.178.68.195:59090). Feb 13 19:26:10.404509 amazon-ssm-agent[2044]: 2025-02-13 19:26:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:26:10.409266 sshd[2143]: Accepted publickey for core from 139.178.68.195 port 59090 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:26:10.413374 sshd-session[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:10.422859 systemd-logind[1896]: New session 2 of user core. Feb 13 19:26:10.429589 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:26:10.506846 amazon-ssm-agent[2044]: 2025-02-13 19:26:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2146) started Feb 13 19:26:10.563207 sshd[2147]: Connection closed by 139.178.68.195 port 59090 Feb 13 19:26:10.563474 sshd-session[2143]: pam_unix(sshd:session): session closed for user core Feb 13 19:26:10.569999 systemd[1]: sshd@1-172.31.19.191:22-139.178.68.195:59090.service: Deactivated successfully. Feb 13 19:26:10.572030 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:26:10.572825 systemd-logind[1896]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:26:10.574421 systemd-logind[1896]: Removed session 2. Feb 13 19:26:10.605866 systemd[1]: Started sshd@2-172.31.19.191:22-139.178.68.195:59104.service - OpenSSH per-connection server daemon (139.178.68.195:59104). Feb 13 19:26:10.607860 amazon-ssm-agent[2044]: 2025-02-13 19:26:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:26:10.771721 sshd[2162]: Accepted publickey for core from 139.178.68.195 port 59104 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:26:10.774069 sshd-session[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:10.782348 systemd-logind[1896]: New session 3 of user core. Feb 13 19:26:10.788458 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:26:10.879783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:26:10.882207 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:26:10.884243 systemd[1]: Startup finished in 747ms (kernel) + 8.895s (initrd) + 10.287s (userspace) = 19.930s.
Feb 13 19:26:10.917246 sshd[2164]: Connection closed by 139.178.68.195 port 59104 Feb 13 19:26:10.918572 sshd-session[2162]: pam_unix(sshd:session): session closed for user core Feb 13 19:26:10.923906 systemd-logind[1896]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:26:10.924888 systemd[1]: sshd@2-172.31.19.191:22-139.178.68.195:59104.service: Deactivated successfully. Feb 13 19:26:11.093319 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:26:11.098853 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:26:11.104019 systemd-logind[1896]: Removed session 3. Feb 13 19:26:12.284064 kubelet[2170]: E0213 19:26:12.284004 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:26:12.286734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:26:12.286946 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:26:12.287534 systemd[1]: kubelet.service: Consumed 986ms CPU time, 236.8M memory peak. Feb 13 19:26:15.059475 systemd-resolved[1703]: Clock change detected. Flushing caches. Feb 13 19:26:21.778737 systemd[1]: Started sshd@3-172.31.19.191:22-139.178.68.195:37422.service - OpenSSH per-connection server daemon (139.178.68.195:37422). Feb 13 19:26:21.950170 sshd[2186]: Accepted publickey for core from 139.178.68.195 port 37422 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:26:21.951611 sshd-session[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:21.958200 systemd-logind[1896]: New session 4 of user core. Feb 13 19:26:21.964278 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:26:22.083746 sshd[2188]: Connection closed by 139.178.68.195 port 37422 Feb 13 19:26:22.084697 sshd-session[2186]: pam_unix(sshd:session): session closed for user core Feb 13 19:26:22.088236 systemd[1]: sshd@3-172.31.19.191:22-139.178.68.195:37422.service: Deactivated successfully. Feb 13 19:26:22.090473 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:26:22.092539 systemd-logind[1896]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:26:22.094243 systemd-logind[1896]: Removed session 4. Feb 13 19:26:22.128439 systemd[1]: Started sshd@4-172.31.19.191:22-139.178.68.195:37426.service - OpenSSH per-connection server daemon (139.178.68.195:37426). Feb 13 19:26:22.288197 sshd[2194]: Accepted publickey for core from 139.178.68.195 port 37426 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:26:22.289581 sshd-session[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:22.295122 systemd-logind[1896]: New session 5 of user core. Feb 13 19:26:22.304295 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:26:22.416218 sshd[2196]: Connection closed by 139.178.68.195 port 37426 Feb 13 19:26:22.417342 sshd-session[2194]: pam_unix(sshd:session): session closed for user core Feb 13 19:26:22.421178 systemd[1]: sshd@4-172.31.19.191:22-139.178.68.195:37426.service: Deactivated successfully. Feb 13 19:26:22.423812 systemd[1]: session-5.scope: Deactivated successfully. 
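The kubelet failure above repeats until something provisions /var/lib/kubelet/config.yaml; on a kubeadm-managed node that file is written during kubeadm init/join, so a unit that restarts until then is normal first-boot behavior. Purely to show the shape of the missing file, a minimal KubeletConfiguration with assumed field values (JSON is valid YAML, so the sketch writes it with json):

    # Illustrative minimal KubeletConfiguration written where the
    # failing unit expects it. kubeadm normally generates this file.
    import json
    import os

    cfg = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "cgroupDriver": "systemd",  # matches SystemdCgroup=true in containerd above
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    }

    os.makedirs("/var/lib/kubelet", exist_ok=True)
    with open("/var/lib/kubelet/config.yaml", "w") as f:
        json.dump(cfg, f, indent=2)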
Feb 13 19:26:22.425569 systemd-logind[1896]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:26:22.426984 systemd-logind[1896]: Removed session 5. Feb 13 19:26:22.454591 systemd[1]: Started sshd@5-172.31.19.191:22-139.178.68.195:37436.service - OpenSSH per-connection server daemon (139.178.68.195:37436). Feb 13 19:26:22.645272 sshd[2202]: Accepted publickey for core from 139.178.68.195 port 37436 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:26:22.646755 sshd-session[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:22.651861 systemd-logind[1896]: New session 6 of user core. Feb 13 19:26:22.659275 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:26:22.779144 sshd[2204]: Connection closed by 139.178.68.195 port 37436 Feb 13 19:26:22.780374 sshd-session[2202]: pam_unix(sshd:session): session closed for user core Feb 13 19:26:22.783557 systemd[1]: sshd@5-172.31.19.191:22-139.178.68.195:37436.service: Deactivated successfully. Feb 13 19:26:22.785838 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:26:22.787739 systemd-logind[1896]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:26:22.788822 systemd-logind[1896]: Removed session 6. Feb 13 19:26:22.816839 systemd[1]: Started sshd@6-172.31.19.191:22-139.178.68.195:37438.service - OpenSSH per-connection server daemon (139.178.68.195:37438). Feb 13 19:26:23.010183 sshd[2210]: Accepted publickey for core from 139.178.68.195 port 37438 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:26:23.011600 sshd-session[2210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:23.016376 systemd-logind[1896]: New session 7 of user core. Feb 13 19:26:23.025310 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:26:23.138993 sudo[2213]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:26:23.139510 sudo[2213]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:26:23.140934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:26:23.146339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:26:23.153883 sudo[2213]: pam_unix(sudo:session): session closed for user root Feb 13 19:26:23.179967 sshd[2212]: Connection closed by 139.178.68.195 port 37438 Feb 13 19:26:23.180737 sshd-session[2210]: pam_unix(sshd:session): session closed for user core Feb 13 19:26:23.185310 systemd-logind[1896]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:26:23.185489 systemd[1]: sshd@6-172.31.19.191:22-139.178.68.195:37438.service: Deactivated successfully. Feb 13 19:26:23.189339 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:26:23.191276 systemd-logind[1896]: Removed session 7. Feb 13 19:26:23.218937 systemd[1]: Started sshd@7-172.31.19.191:22-139.178.68.195:37448.service - OpenSSH per-connection server daemon (139.178.68.195:37448). Feb 13 19:26:23.376938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:26:23.389411 sshd[2222]: Accepted publickey for core from 139.178.68.195 port 37448 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:26:23.390790 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:26:23.391742 sshd-session[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:23.400817 systemd-logind[1896]: New session 8 of user core. Feb 13 19:26:23.413362 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:26:23.489281 kubelet[2229]: E0213 19:26:23.489226 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:26:23.497590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:26:23.497794 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:26:23.498563 systemd[1]: kubelet.service: Consumed 178ms CPU time, 96.5M memory peak. Feb 13 19:26:23.534777 sudo[2238]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:26:23.535361 sudo[2238]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:26:23.541458 sudo[2238]: pam_unix(sudo:session): session closed for user root Feb 13 19:26:23.551886 sudo[2237]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:26:23.552303 sudo[2237]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:26:23.574480 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:26:23.638571 augenrules[2260]: No rules Feb 13 19:26:23.641575 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:26:23.641956 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:26:23.645212 sudo[2237]: pam_unix(sudo:session): session closed for user root Feb 13 19:26:23.668540 sshd[2234]: Connection closed by 139.178.68.195 port 37448 Feb 13 19:26:23.669509 sshd-session[2222]: pam_unix(sshd:session): session closed for user core Feb 13 19:26:23.674319 systemd[1]: sshd@7-172.31.19.191:22-139.178.68.195:37448.service: Deactivated successfully. Feb 13 19:26:23.676767 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:26:23.678033 systemd-logind[1896]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:26:23.680903 systemd-logind[1896]: Removed session 8. Feb 13 19:26:23.715163 systemd[1]: Started sshd@8-172.31.19.191:22-139.178.68.195:37454.service - OpenSSH per-connection server daemon (139.178.68.195:37454). Feb 13 19:26:23.905606 sshd[2269]: Accepted publickey for core from 139.178.68.195 port 37454 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:26:23.908682 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:26:23.915759 systemd-logind[1896]: New session 9 of user core. Feb 13 19:26:23.920317 systemd[1]: Started session-9.scope - Session 9 of User core. 
Feb 13 19:26:24.022769 sudo[2272]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:26:24.023732 sudo[2272]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:26:24.582522 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:26:24.583605 (dockerd)[2289]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:26:25.141203 dockerd[2289]: time="2025-02-13T19:26:25.141137470Z" level=info msg="Starting up" Feb 13 19:26:25.297004 dockerd[2289]: time="2025-02-13T19:26:25.296956100Z" level=info msg="Loading containers: start." Feb 13 19:26:25.533183 kernel: Initializing XFRM netlink socket Feb 13 19:26:25.576580 (udev-worker)[2311]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:26:25.672108 systemd-networkd[1749]: docker0: Link UP Feb 13 19:26:25.708946 dockerd[2289]: time="2025-02-13T19:26:25.708896876Z" level=info msg="Loading containers: done." Feb 13 19:26:25.728031 dockerd[2289]: time="2025-02-13T19:26:25.727873874Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:26:25.728233 dockerd[2289]: time="2025-02-13T19:26:25.728109462Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:26:25.728285 dockerd[2289]: time="2025-02-13T19:26:25.728251080Z" level=info msg="Daemon has completed initialization" Feb 13 19:26:25.769177 dockerd[2289]: time="2025-02-13T19:26:25.769120785Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:26:25.769649 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:26:27.098061 containerd[1918]: time="2025-02-13T19:26:27.098022039Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 19:26:27.794842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1743785150.mount: Deactivated successfully. 
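The "API listen on /run/docker.sock" line is Docker's HTTP API served on a unix socket. A stdlib-only sketch of talking to it (assumes the daemon is running and the caller may read the socket, e.g. root or the docker group; HTTP/1.0 keeps the reply unchunked):

    # Query the Docker version endpoint over the unix socket.
    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /version HTTP/1.0\r\n\r\n")

    resp = b""
    while chunk := s.recv(4096):
        resp += chunk
    s.close()

    headers, _, body = resp.partition(b"\r\n\r\n")
    print(body.decode())  # JSON: Version, ApiVersion, ...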
Feb 13 19:26:29.356404 containerd[1918]: time="2025-02-13T19:26:29.356346043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:29.358645 containerd[1918]: time="2025-02-13T19:26:29.358596059Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976588" Feb 13 19:26:29.362089 containerd[1918]: time="2025-02-13T19:26:29.360228531Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:29.364956 containerd[1918]: time="2025-02-13T19:26:29.364914469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:29.367884 containerd[1918]: time="2025-02-13T19:26:29.367837341Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 2.26977392s" Feb 13 19:26:29.368003 containerd[1918]: time="2025-02-13T19:26:29.367893138Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\"" Feb 13 19:26:29.370554 containerd[1918]: time="2025-02-13T19:26:29.370504109Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 19:26:31.606459 containerd[1918]: time="2025-02-13T19:26:31.606407012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:31.608813 containerd[1918]: time="2025-02-13T19:26:31.608097970Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708193" Feb 13 19:26:31.610296 containerd[1918]: time="2025-02-13T19:26:31.610237207Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:31.613998 containerd[1918]: time="2025-02-13T19:26:31.613936677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:31.615388 containerd[1918]: time="2025-02-13T19:26:31.615157787Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 2.244614734s" Feb 13 19:26:31.615388 containerd[1918]: time="2025-02-13T19:26:31.615197352Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\"" Feb 13 19:26:31.616455 
containerd[1918]: time="2025-02-13T19:26:31.616426954Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 19:26:33.040353 containerd[1918]: time="2025-02-13T19:26:33.040297668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:33.044856 containerd[1918]: time="2025-02-13T19:26:33.044634988Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652425" Feb 13 19:26:33.051153 containerd[1918]: time="2025-02-13T19:26:33.049963712Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:33.056521 containerd[1918]: time="2025-02-13T19:26:33.056433749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:33.058243 containerd[1918]: time="2025-02-13T19:26:33.058201640Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.441738023s" Feb 13 19:26:33.059419 containerd[1918]: time="2025-02-13T19:26:33.058391846Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\"" Feb 13 19:26:33.059933 containerd[1918]: time="2025-02-13T19:26:33.059903255Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:26:33.680378 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:26:33.691368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:26:34.000425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:26:34.014688 (kubelet)[2551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:26:34.147026 kubelet[2551]: E0213 19:26:34.146875 2551 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:26:34.151851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:26:34.153079 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:26:34.154547 systemd[1]: kubelet.service: Consumed 199ms CPU time, 97.9M memory peak. Feb 13 19:26:34.504859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2091037471.mount: Deactivated successfully. 
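Aside: the kubelet crash above is a missing /var/lib/kubelet/config.yaml, the file a kubeadm init/join would normally write before the kubelet can run; systemd then schedules the restart that bumps the unit's counter to 2. A sketch of the same existence check (the kubeadm detail is background knowledge, not stated in this log):

    # Illustrative sketch: the pre-flight condition the failing kubelet run above
    # trips over. The path comes from the error message in the log.
    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")

    if not CONFIG.exists():
        # Matches the failure above: kubelet exits 1 and systemd schedules a
        # restart, bumping the unit's restart counter each round.
        raise SystemExit(f"open {CONFIG}: no such file or directory")
    print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes)")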
Feb 13 19:26:35.241806 containerd[1918]: time="2025-02-13T19:26:35.241744542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:35.245886 containerd[1918]: time="2025-02-13T19:26:35.245621989Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108" Feb 13 19:26:35.251752 containerd[1918]: time="2025-02-13T19:26:35.250320066Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:35.255673 containerd[1918]: time="2025-02-13T19:26:35.255624757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:35.257180 containerd[1918]: time="2025-02-13T19:26:35.257143297Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 2.197198159s" Feb 13 19:26:35.257326 containerd[1918]: time="2025-02-13T19:26:35.257305128Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 19:26:35.257977 containerd[1918]: time="2025-02-13T19:26:35.257941422Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:26:35.928893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651963789.mount: Deactivated successfully. 
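Aside: the kube-proxy entries above carry enough data to estimate effective pull bandwidth (bytes read=30229108 over 2.197198159s). A sketch of the arithmetic; the regex and variable names are illustrative:

    # Illustrative sketch: derive pull throughput from the two containerd
    # entries above for kube-proxy:v1.31.6.
    import re

    line = ("stop pulling image registry.k8s.io/kube-proxy:v1.31.6: "
            "active requests=0, bytes read=30229108")
    bytes_read = int(re.search(r"bytes read=(\d+)", line).group(1))
    seconds = 2.197198159  # "... in 2.197198159s" from the Pulled entry above
    print(f"{bytes_read / seconds / 1e6:.1f} MB/s")  # ~13.8 MB/s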
Feb 13 19:26:37.327422 containerd[1918]: time="2025-02-13T19:26:37.326255952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:37.330748 containerd[1918]: time="2025-02-13T19:26:37.330685296Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 19:26:37.336365 containerd[1918]: time="2025-02-13T19:26:37.334455152Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:37.339193 containerd[1918]: time="2025-02-13T19:26:37.339157135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:37.340569 containerd[1918]: time="2025-02-13T19:26:37.340536952Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.082561653s" Feb 13 19:26:37.340817 containerd[1918]: time="2025-02-13T19:26:37.340795483Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 19:26:37.343376 containerd[1918]: time="2025-02-13T19:26:37.343345138Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:26:37.976310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551542044.mount: Deactivated successfully. 
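Aside: each Pulled entry above records both a mutable repo tag and an immutable repo digest. A sketch that splits the coredns reference (copied from the entry above) into its name and sha256 digest:

    # Illustrative sketch: an image is pinned by digest, not tag. The reference
    # below is copied verbatim from the coredns entry above.
    ref = ("registry.k8s.io/coredns/coredns@"
           "sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1")
    name, digest = ref.split("@", 1)
    algo, hexval = digest.split(":", 1)
    assert algo == "sha256" and len(hexval) == 64
    print(name, hexval[:12])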
Feb 13 19:26:37.991018 containerd[1918]: time="2025-02-13T19:26:37.990963943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:37.993127 containerd[1918]: time="2025-02-13T19:26:37.992818407Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 19:26:37.998785 containerd[1918]: time="2025-02-13T19:26:37.996576584Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:38.010227 containerd[1918]: time="2025-02-13T19:26:38.010171290Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 666.787166ms" Feb 13 19:26:38.010227 containerd[1918]: time="2025-02-13T19:26:38.010227950Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 19:26:38.011565 containerd[1918]: time="2025-02-13T19:26:38.011494634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:38.012523 containerd[1918]: time="2025-02-13T19:26:38.012492989Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 19:26:38.631332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183266438.mount: Deactivated successfully. Feb 13 19:26:39.023689 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 19:26:41.237370 containerd[1918]: time="2025-02-13T19:26:41.237310972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:41.247104 containerd[1918]: time="2025-02-13T19:26:41.243976529Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Feb 13 19:26:41.250439 containerd[1918]: time="2025-02-13T19:26:41.250391263Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:41.254899 containerd[1918]: time="2025-02-13T19:26:41.254848079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:26:41.256282 containerd[1918]: time="2025-02-13T19:26:41.256236524Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.243708807s" Feb 13 19:26:41.256282 containerd[1918]: time="2025-02-13T19:26:41.256281995Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Feb 13 19:26:43.725258 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:26:43.725593 systemd[1]: kubelet.service: Consumed 199ms CPU time, 97.9M memory peak. Feb 13 19:26:43.732453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:26:43.772217 systemd[1]: Reload requested from client PID 2696 ('systemctl') (unit session-9.scope)... Feb 13 19:26:43.772236 systemd[1]: Reloading... Feb 13 19:26:43.942109 zram_generator::config[2741]: No configuration found. Feb 13 19:26:44.111261 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:26:44.255679 systemd[1]: Reloading finished in 482 ms. Feb 13 19:26:44.310234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:26:44.324669 (kubelet)[2792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:26:44.328494 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:26:44.330164 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:26:44.330457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:26:44.330528 systemd[1]: kubelet.service: Consumed 127ms CPU time, 83.3M memory peak. Feb 13 19:26:44.336487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:26:44.539274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
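Aside: the docker.socket warning during the reload above exists because /var/run is, on systemd systems like this one, a symlink to /run, so both spellings name the same socket. A sketch of the normalization; the symlink claim is background knowledge, not something this log proves:

    # Illustrative sketch: resolve the legacy spelling systemd complains about
    # above to its canonical path.
    import os

    legacy = "/var/run/docker.sock"
    print(os.path.realpath(legacy))  # -> /run/docker.sock on such a system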
Feb 13 19:26:44.546766 (kubelet)[2804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:26:44.614281 kubelet[2804]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:26:44.614792 kubelet[2804]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:26:44.614836 kubelet[2804]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:26:44.616509 kubelet[2804]: I0213 19:26:44.616465 2804 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:26:44.989509 kubelet[2804]: I0213 19:26:44.989466 2804 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:26:44.989509 kubelet[2804]: I0213 19:26:44.989498 2804 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:26:44.989894 kubelet[2804]: I0213 19:26:44.989871 2804 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:26:45.028719 kubelet[2804]: I0213 19:26:45.028596 2804 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:26:45.030807 kubelet[2804]: E0213 19:26:45.030758 2804 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.191:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.191:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:26:45.043063 kubelet[2804]: E0213 19:26:45.043022 2804 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:26:45.043063 kubelet[2804]: I0213 19:26:45.043059 2804 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:26:45.051624 kubelet[2804]: I0213 19:26:45.049847 2804 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:26:45.059622 kubelet[2804]: I0213 19:26:45.058789 2804 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:26:45.059622 kubelet[2804]: I0213 19:26:45.059046 2804 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:26:45.059622 kubelet[2804]: I0213 19:26:45.059111 2804 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-191","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:26:45.059622 kubelet[2804]: I0213 19:26:45.059373 2804 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:26:45.060007 kubelet[2804]: I0213 19:26:45.059383 2804 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:26:45.060007 kubelet[2804]: I0213 19:26:45.059490 2804 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:26:45.062736 kubelet[2804]: I0213 19:26:45.062692 2804 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:26:45.062736 kubelet[2804]: I0213 19:26:45.062732 2804 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:26:45.062891 kubelet[2804]: I0213 19:26:45.062776 2804 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:26:45.062891 kubelet[2804]: I0213 19:26:45.062793 2804 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:26:45.066907 kubelet[2804]: W0213 19:26:45.066835 2804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-191&limit=500&resourceVersion=0": dial tcp 172.31.19.191:6443: connect: connection refused Feb 13 19:26:45.067019 kubelet[2804]: E0213 19:26:45.066909 2804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.19.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-191&limit=500&resourceVersion=0\": dial tcp 172.31.19.191:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:26:45.074791 kubelet[2804]: W0213 19:26:45.074724 2804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.191:6443: connect: connection refused Feb 13 19:26:45.075022 kubelet[2804]: E0213 19:26:45.074805 2804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.191:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:26:45.076019 kubelet[2804]: I0213 19:26:45.075657 2804 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:26:45.080340 kubelet[2804]: I0213 19:26:45.080312 2804 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:26:45.081189 kubelet[2804]: W0213 19:26:45.081165 2804 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:26:45.081956 kubelet[2804]: I0213 19:26:45.081870 2804 server.go:1269] "Started kubelet" Feb 13 19:26:45.082327 kubelet[2804]: I0213 19:26:45.082256 2804 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:26:45.084897 kubelet[2804]: I0213 19:26:45.084864 2804 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:26:45.089131 kubelet[2804]: I0213 19:26:45.089054 2804 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:26:45.093288 kubelet[2804]: I0213 19:26:45.092572 2804 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:26:45.093288 kubelet[2804]: I0213 19:26:45.092876 2804 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:26:45.102744 kubelet[2804]: I0213 19:26:45.102210 2804 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:26:45.106201 kubelet[2804]: W0213 19:26:45.105809 2804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.191:6443: connect: connection refused Feb 13 19:26:45.106201 kubelet[2804]: E0213 19:26:45.105878 2804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.191:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:26:45.106201 kubelet[2804]: E0213 19:26:45.094786 2804 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.191:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.191:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-191.1823db1e7ca51cf8 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-191,UID:ip-172-31-19-191,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-191,},FirstTimestamp:2025-02-13 19:26:45.081840888 +0000 UTC m=+0.527134327,LastTimestamp:2025-02-13 19:26:45.081840888 +0000 UTC m=+0.527134327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-191,}" Feb 13 19:26:45.106201 kubelet[2804]: E0213 19:26:45.104162 2804 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-191\" not found" Feb 13 19:26:45.108098 kubelet[2804]: E0213 19:26:45.106596 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-191?timeout=10s\": dial tcp 172.31.19.191:6443: connect: connection refused" interval="200ms" Feb 13 19:26:45.108098 kubelet[2804]: I0213 19:26:45.102741 2804 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:26:45.108098 kubelet[2804]: I0213 19:26:45.107066 2804 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:26:45.108098 kubelet[2804]: I0213 19:26:45.107328 2804 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:26:45.108098 kubelet[2804]: I0213 19:26:45.102696 2804 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:26:45.109306 kubelet[2804]: I0213 19:26:45.109289 2804 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:26:45.110112 kubelet[2804]: I0213 19:26:45.110096 2804 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:26:45.111091 kubelet[2804]: E0213 19:26:45.110460 2804 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:26:45.139602 kubelet[2804]: I0213 19:26:45.139564 2804 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:26:45.139602 kubelet[2804]: I0213 19:26:45.139590 2804 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:26:45.139602 kubelet[2804]: I0213 19:26:45.139611 2804 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:26:45.146104 kubelet[2804]: I0213 19:26:45.144170 2804 policy_none.go:49] "None policy: Start" Feb 13 19:26:45.146104 kubelet[2804]: I0213 19:26:45.144140 2804 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:26:45.147030 kubelet[2804]: I0213 19:26:45.146933 2804 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:26:45.150498 kubelet[2804]: I0213 19:26:45.150139 2804 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:26:45.150498 kubelet[2804]: I0213 19:26:45.150186 2804 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:26:45.150498 kubelet[2804]: E0213 19:26:45.150245 2804 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:26:45.152319 kubelet[2804]: I0213 19:26:45.152298 2804 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:26:45.152428 kubelet[2804]: I0213 19:26:45.152327 2804 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:26:45.158092 kubelet[2804]: W0213 19:26:45.157703 2804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.191:6443: connect: connection refused Feb 13 19:26:45.158092 kubelet[2804]: E0213 19:26:45.157779 2804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.191:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:26:45.164733 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:26:45.180965 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:26:45.184904 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:26:45.204113 kubelet[2804]: I0213 19:26:45.203207 2804 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:26:45.204113 kubelet[2804]: I0213 19:26:45.203439 2804 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:26:45.204113 kubelet[2804]: I0213 19:26:45.203453 2804 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:26:45.204113 kubelet[2804]: I0213 19:26:45.203985 2804 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:26:45.205864 kubelet[2804]: E0213 19:26:45.205834 2804 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-191\" not found" Feb 13 19:26:45.264614 systemd[1]: Created slice kubepods-burstable-pod3149e38e4a7bc9bec9a7ba1a3a3c7a83.slice - libcontainer container kubepods-burstable-pod3149e38e4a7bc9bec9a7ba1a3a3c7a83.slice. Feb 13 19:26:45.283318 systemd[1]: Created slice kubepods-burstable-pod901d9e26bbb41e146ea1288a8761375d.slice - libcontainer container kubepods-burstable-pod901d9e26bbb41e146ea1288a8761375d.slice. Feb 13 19:26:45.294609 systemd[1]: Created slice kubepods-burstable-podaccc4d91322b1f6528fed397471dc1ce.slice - libcontainer container kubepods-burstable-podaccc4d91322b1f6528fed397471dc1ce.slice. 
Feb 13 19:26:45.305597 kubelet[2804]: I0213 19:26:45.305559 2804 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-191" Feb 13 19:26:45.306195 kubelet[2804]: E0213 19:26:45.306138 2804 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.191:6443/api/v1/nodes\": dial tcp 172.31.19.191:6443: connect: connection refused" node="ip-172-31-19-191" Feb 13 19:26:45.307554 kubelet[2804]: E0213 19:26:45.307512 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-191?timeout=10s\": dial tcp 172.31.19.191:6443: connect: connection refused" interval="400ms" Feb 13 19:26:45.310714 kubelet[2804]: I0213 19:26:45.310681 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3149e38e4a7bc9bec9a7ba1a3a3c7a83-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-191\" (UID: \"3149e38e4a7bc9bec9a7ba1a3a3c7a83\") " pod="kube-system/kube-controller-manager-ip-172-31-19-191" Feb 13 19:26:45.310988 kubelet[2804]: I0213 19:26:45.310716 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/901d9e26bbb41e146ea1288a8761375d-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-191\" (UID: \"901d9e26bbb41e146ea1288a8761375d\") " pod="kube-system/kube-scheduler-ip-172-31-19-191" Feb 13 19:26:45.310988 kubelet[2804]: I0213 19:26:45.310745 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/accc4d91322b1f6528fed397471dc1ce-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-191\" (UID: \"accc4d91322b1f6528fed397471dc1ce\") " pod="kube-system/kube-apiserver-ip-172-31-19-191" Feb 13 19:26:45.310988 kubelet[2804]: I0213 19:26:45.310776 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/accc4d91322b1f6528fed397471dc1ce-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-191\" (UID: \"accc4d91322b1f6528fed397471dc1ce\") " pod="kube-system/kube-apiserver-ip-172-31-19-191" Feb 13 19:26:45.310988 kubelet[2804]: I0213 19:26:45.310804 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3149e38e4a7bc9bec9a7ba1a3a3c7a83-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-191\" (UID: \"3149e38e4a7bc9bec9a7ba1a3a3c7a83\") " pod="kube-system/kube-controller-manager-ip-172-31-19-191" Feb 13 19:26:45.310988 kubelet[2804]: I0213 19:26:45.310828 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3149e38e4a7bc9bec9a7ba1a3a3c7a83-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-191\" (UID: \"3149e38e4a7bc9bec9a7ba1a3a3c7a83\") " pod="kube-system/kube-controller-manager-ip-172-31-19-191" Feb 13 19:26:45.311629 kubelet[2804]: I0213 19:26:45.310851 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3149e38e4a7bc9bec9a7ba1a3a3c7a83-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-19-191\" (UID: \"3149e38e4a7bc9bec9a7ba1a3a3c7a83\") " pod="kube-system/kube-controller-manager-ip-172-31-19-191" Feb 13 19:26:45.311629 kubelet[2804]: I0213 19:26:45.310874 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/accc4d91322b1f6528fed397471dc1ce-ca-certs\") pod \"kube-apiserver-ip-172-31-19-191\" (UID: \"accc4d91322b1f6528fed397471dc1ce\") " pod="kube-system/kube-apiserver-ip-172-31-19-191" Feb 13 19:26:45.311629 kubelet[2804]: I0213 19:26:45.310896 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3149e38e4a7bc9bec9a7ba1a3a3c7a83-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-191\" (UID: \"3149e38e4a7bc9bec9a7ba1a3a3c7a83\") " pod="kube-system/kube-controller-manager-ip-172-31-19-191" Feb 13 19:26:45.508177 kubelet[2804]: I0213 19:26:45.508143 2804 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-191" Feb 13 19:26:45.508536 kubelet[2804]: E0213 19:26:45.508502 2804 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.191:6443/api/v1/nodes\": dial tcp 172.31.19.191:6443: connect: connection refused" node="ip-172-31-19-191" Feb 13 19:26:45.580451 containerd[1918]: time="2025-02-13T19:26:45.580331033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-191,Uid:3149e38e4a7bc9bec9a7ba1a3a3c7a83,Namespace:kube-system,Attempt:0,}" Feb 13 19:26:45.591350 containerd[1918]: time="2025-02-13T19:26:45.591272142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-191,Uid:901d9e26bbb41e146ea1288a8761375d,Namespace:kube-system,Attempt:0,}" Feb 13 19:26:45.599270 containerd[1918]: time="2025-02-13T19:26:45.599228853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-191,Uid:accc4d91322b1f6528fed397471dc1ce,Namespace:kube-system,Attempt:0,}" Feb 13 19:26:45.708055 kubelet[2804]: E0213 19:26:45.708000 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-191?timeout=10s\": dial tcp 172.31.19.191:6443: connect: connection refused" interval="800ms" Feb 13 19:26:45.911191 kubelet[2804]: I0213 19:26:45.911095 2804 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-191" Feb 13 19:26:45.911678 kubelet[2804]: E0213 19:26:45.911642 2804 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.191:6443/api/v1/nodes\": dial tcp 172.31.19.191:6443: connect: connection refused" node="ip-172-31-19-191" Feb 13 19:26:46.034852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3959050837.mount: Deactivated successfully. 
Feb 13 19:26:46.040399 containerd[1918]: time="2025-02-13T19:26:46.040353399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:26:46.042220 containerd[1918]: time="2025-02-13T19:26:46.042181337Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:26:46.044411 containerd[1918]: time="2025-02-13T19:26:46.044364510Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:26:46.045259 containerd[1918]: time="2025-02-13T19:26:46.045219580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:26:46.048544 containerd[1918]: time="2025-02-13T19:26:46.048382715Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:26:46.051759 containerd[1918]: time="2025-02-13T19:26:46.050334732Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:26:46.051759 containerd[1918]: time="2025-02-13T19:26:46.050513293Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:26:46.053113 containerd[1918]: time="2025-02-13T19:26:46.052563637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:26:46.056014 containerd[1918]: time="2025-02-13T19:26:46.055564114Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 463.964869ms" Feb 13 19:26:46.059240 containerd[1918]: time="2025-02-13T19:26:46.058656756Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 459.323593ms" Feb 13 19:26:46.062255 containerd[1918]: time="2025-02-13T19:26:46.062214825Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 478.06798ms" Feb 13 19:26:46.289380 containerd[1918]: time="2025-02-13T19:26:46.284801772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:26:46.292111 containerd[1918]: time="2025-02-13T19:26:46.292005299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:26:46.292321 containerd[1918]: time="2025-02-13T19:26:46.292291790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:26:46.292449 containerd[1918]: time="2025-02-13T19:26:46.292421901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:26:46.292770 containerd[1918]: time="2025-02-13T19:26:46.292714887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:26:46.295943 containerd[1918]: time="2025-02-13T19:26:46.293821977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:26:46.295943 containerd[1918]: time="2025-02-13T19:26:46.293875944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:26:46.295943 containerd[1918]: time="2025-02-13T19:26:46.294061136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:26:46.303127 containerd[1918]: time="2025-02-13T19:26:46.301828272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:26:46.303127 containerd[1918]: time="2025-02-13T19:26:46.301943336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:26:46.303127 containerd[1918]: time="2025-02-13T19:26:46.301969470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:26:46.303127 containerd[1918]: time="2025-02-13T19:26:46.302062411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:26:46.315101 kubelet[2804]: W0213 19:26:46.314963 2804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.191:6443: connect: connection refused Feb 13 19:26:46.315101 kubelet[2804]: E0213 19:26:46.315048 2804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.191:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:26:46.341275 systemd[1]: Started cri-containerd-87500c157d5fed7360379ca73d135226458f07d9e8448e8f7c8c1f8847f1a46f.scope - libcontainer container 87500c157d5fed7360379ca73d135226458f07d9e8448e8f7c8c1f8847f1a46f. Feb 13 19:26:46.355306 systemd[1]: Started cri-containerd-1e4f7fef9894bc2bcfac6325e3d95712731a70c7fb17a0b3e031aa127b2add44.scope - libcontainer container 1e4f7fef9894bc2bcfac6325e3d95712731a70c7fb17a0b3e031aa127b2add44. 
Feb 13 19:26:46.358784 systemd[1]: Started cri-containerd-de10b154c0cf0e891d1373c5681f498b7e640b49b32f439171bfa4d36401af13.scope - libcontainer container de10b154c0cf0e891d1373c5681f498b7e640b49b32f439171bfa4d36401af13. Feb 13 19:26:46.420397 kubelet[2804]: W0213 19:26:46.420328 2804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.191:6443: connect: connection refused Feb 13 19:26:46.420555 kubelet[2804]: E0213 19:26:46.420412 2804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.191:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:26:46.426434 containerd[1918]: time="2025-02-13T19:26:46.426188446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-191,Uid:accc4d91322b1f6528fed397471dc1ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"87500c157d5fed7360379ca73d135226458f07d9e8448e8f7c8c1f8847f1a46f\"" Feb 13 19:26:46.433579 containerd[1918]: time="2025-02-13T19:26:46.433465776Z" level=info msg="CreateContainer within sandbox \"87500c157d5fed7360379ca73d135226458f07d9e8448e8f7c8c1f8847f1a46f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:26:46.433863 kubelet[2804]: W0213 19:26:46.433798 2804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.191:6443: connect: connection refused Feb 13 19:26:46.433863 kubelet[2804]: E0213 19:26:46.433844 2804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.191:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:26:46.473421 containerd[1918]: time="2025-02-13T19:26:46.473303649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-191,Uid:3149e38e4a7bc9bec9a7ba1a3a3c7a83,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e4f7fef9894bc2bcfac6325e3d95712731a70c7fb17a0b3e031aa127b2add44\"" Feb 13 19:26:46.477761 containerd[1918]: time="2025-02-13T19:26:46.477613066Z" level=info msg="CreateContainer within sandbox \"1e4f7fef9894bc2bcfac6325e3d95712731a70c7fb17a0b3e031aa127b2add44\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:26:46.488203 containerd[1918]: time="2025-02-13T19:26:46.488128481Z" level=info msg="CreateContainer within sandbox \"87500c157d5fed7360379ca73d135226458f07d9e8448e8f7c8c1f8847f1a46f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"76ae0bc2eccb608d4ad20bc5318369301355b0a09487f6972f1ce6b6a6f01979\"" Feb 13 19:26:46.491722 containerd[1918]: time="2025-02-13T19:26:46.490642188Z" level=info msg="StartContainer for \"76ae0bc2eccb608d4ad20bc5318369301355b0a09487f6972f1ce6b6a6f01979\"" Feb 13 19:26:46.492759 containerd[1918]: time="2025-02-13T19:26:46.492725148Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-191,Uid:901d9e26bbb41e146ea1288a8761375d,Namespace:kube-system,Attempt:0,} returns sandbox id \"de10b154c0cf0e891d1373c5681f498b7e640b49b32f439171bfa4d36401af13\"" Feb 13 19:26:46.498745 containerd[1918]: time="2025-02-13T19:26:46.498707924Z" level=info msg="CreateContainer within sandbox \"de10b154c0cf0e891d1373c5681f498b7e640b49b32f439171bfa4d36401af13\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:26:46.509456 kubelet[2804]: E0213 19:26:46.509400 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-191?timeout=10s\": dial tcp 172.31.19.191:6443: connect: connection refused" interval="1.6s" Feb 13 19:26:46.524335 containerd[1918]: time="2025-02-13T19:26:46.524188016Z" level=info msg="CreateContainer within sandbox \"1e4f7fef9894bc2bcfac6325e3d95712731a70c7fb17a0b3e031aa127b2add44\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c7130dbea31f9a8a8ec655bb67151410cd995b632926f86ad535dfffc9aee6e8\"" Feb 13 19:26:46.527262 containerd[1918]: time="2025-02-13T19:26:46.527223407Z" level=info msg="StartContainer for \"c7130dbea31f9a8a8ec655bb67151410cd995b632926f86ad535dfffc9aee6e8\"" Feb 13 19:26:46.534274 systemd[1]: Started cri-containerd-76ae0bc2eccb608d4ad20bc5318369301355b0a09487f6972f1ce6b6a6f01979.scope - libcontainer container 76ae0bc2eccb608d4ad20bc5318369301355b0a09487f6972f1ce6b6a6f01979. Feb 13 19:26:46.545448 containerd[1918]: time="2025-02-13T19:26:46.545272823Z" level=info msg="CreateContainer within sandbox \"de10b154c0cf0e891d1373c5681f498b7e640b49b32f439171bfa4d36401af13\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5e97cee1cfa323bd3aec8dae503ee4a409843f8e57a6f4ffec5f44b2c59c15b1\"" Feb 13 19:26:46.547795 containerd[1918]: time="2025-02-13T19:26:46.547672258Z" level=info msg="StartContainer for \"5e97cee1cfa323bd3aec8dae503ee4a409843f8e57a6f4ffec5f44b2c59c15b1\"" Feb 13 19:26:46.576457 kubelet[2804]: W0213 19:26:46.576309 2804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-191&limit=500&resourceVersion=0": dial tcp 172.31.19.191:6443: connect: connection refused Feb 13 19:26:46.578161 kubelet[2804]: E0213 19:26:46.577058 2804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-191&limit=500&resourceVersion=0\": dial tcp 172.31.19.191:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:26:46.595331 systemd[1]: Started cri-containerd-c7130dbea31f9a8a8ec655bb67151410cd995b632926f86ad535dfffc9aee6e8.scope - libcontainer container c7130dbea31f9a8a8ec655bb67151410cd995b632926f86ad535dfffc9aee6e8. Feb 13 19:26:46.645282 systemd[1]: Started cri-containerd-5e97cee1cfa323bd3aec8dae503ee4a409843f8e57a6f4ffec5f44b2c59c15b1.scope - libcontainer container 5e97cee1cfa323bd3aec8dae503ee4a409843f8e57a6f4ffec5f44b2c59c15b1. 
Feb 13 19:26:46.656591 containerd[1918]: time="2025-02-13T19:26:46.656498264Z" level=info msg="StartContainer for \"76ae0bc2eccb608d4ad20bc5318369301355b0a09487f6972f1ce6b6a6f01979\" returns successfully" Feb 13 19:26:46.702121 containerd[1918]: time="2025-02-13T19:26:46.701445749Z" level=info msg="StartContainer for \"c7130dbea31f9a8a8ec655bb67151410cd995b632926f86ad535dfffc9aee6e8\" returns successfully" Feb 13 19:26:46.719783 kubelet[2804]: I0213 19:26:46.719040 2804 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-191" Feb 13 19:26:46.719783 kubelet[2804]: E0213 19:26:46.719600 2804 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.191:6443/api/v1/nodes\": dial tcp 172.31.19.191:6443: connect: connection refused" node="ip-172-31-19-191" Feb 13 19:26:46.738377 containerd[1918]: time="2025-02-13T19:26:46.738322813Z" level=info msg="StartContainer for \"5e97cee1cfa323bd3aec8dae503ee4a409843f8e57a6f4ffec5f44b2c59c15b1\" returns successfully" Feb 13 19:26:47.199242 kubelet[2804]: E0213 19:26:47.199202 2804 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.191:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.191:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:26:48.324452 kubelet[2804]: I0213 19:26:48.324414 2804 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-191" Feb 13 19:26:49.628640 kubelet[2804]: E0213 19:26:49.628417 2804 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-191\" not found" node="ip-172-31-19-191" Feb 13 19:26:49.790372 kubelet[2804]: I0213 19:26:49.789985 2804 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-19-191" Feb 13 19:26:50.071622 kubelet[2804]: I0213 19:26:50.071569 2804 apiserver.go:52] "Watching apiserver" Feb 13 19:26:50.107736 kubelet[2804]: I0213 19:26:50.107691 2804 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:26:52.047486 systemd[1]: Reload requested from client PID 3082 ('systemctl') (unit session-9.scope)... Feb 13 19:26:52.047855 systemd[1]: Reloading... Feb 13 19:26:52.259307 zram_generator::config[3130]: No configuration found. Feb 13 19:26:52.420992 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:26:52.591648 systemd[1]: Reloading finished in 543 ms. Feb 13 19:26:52.665002 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:26:52.683785 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:26:52.684150 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:26:52.684224 systemd[1]: kubelet.service: Consumed 876ms CPU time, 113M memory peak. Feb 13 19:26:52.693872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:26:53.015688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
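Aside: every "connection refused" against https://172.31.19.191:6443 above is the kubelet racing the API server it is itself bootstrapping; registration finally succeeds at 19:26:49 once that container is up. A sketch that polls the same endpoint until a TCP connect succeeds; the interval and attempt count are assumptions:

    # Illustrative sketch: wait for the API server port from the log to start
    # accepting connections instead of refusing them.
    import socket
    import time

    def wait_for(host: str, port: int, interval: float = 1.0, attempts: int = 30) -> bool:
        for _ in range(attempts):
            try:
                with socket.create_connection((host, port), timeout=2.0):
                    return True  # something is listening now
            except OSError:
                time.sleep(interval)  # still refused/unreachable; retry
        return False

    print(wait_for("172.31.19.191", 6443))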
Feb 13 19:26:53.018223 (kubelet)[3184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:26:53.122456 kubelet[3184]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:26:53.122456 kubelet[3184]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:26:53.122456 kubelet[3184]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:26:53.126789 kubelet[3184]: I0213 19:26:53.124044 3184 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:26:53.143643 kubelet[3184]: I0213 19:26:53.142676 3184 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:26:53.143643 kubelet[3184]: I0213 19:26:53.142707 3184 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:26:53.143643 kubelet[3184]: I0213 19:26:53.143262 3184 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:26:53.147234 kubelet[3184]: I0213 19:26:53.147210 3184 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:26:53.162313 kubelet[3184]: I0213 19:26:53.162140 3184 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:26:53.172548 kubelet[3184]: E0213 19:26:53.172506 3184 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:26:53.173116 kubelet[3184]: I0213 19:26:53.173099 3184 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:26:53.179755 kubelet[3184]: I0213 19:26:53.179695 3184 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:26:53.180031 kubelet[3184]: I0213 19:26:53.180017 3184 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:26:53.180253 kubelet[3184]: I0213 19:26:53.180227 3184 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:26:53.180554 kubelet[3184]: I0213 19:26:53.180316 3184 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-191","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:26:53.181211 kubelet[3184]: I0213 19:26:53.181189 3184 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:26:53.181329 kubelet[3184]: I0213 19:26:53.181320 3184 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:26:53.181449 kubelet[3184]: I0213 19:26:53.181440 3184 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:26:53.181660 kubelet[3184]: I0213 19:26:53.181648 3184 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:26:53.181740 kubelet[3184]: I0213 19:26:53.181733 3184 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:26:53.181800 kubelet[3184]: I0213 19:26:53.181796 3184 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:26:53.181850 kubelet[3184]: I0213 19:26:53.181845 3184 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:26:53.184905 kubelet[3184]: I0213 19:26:53.184878 3184 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:26:53.185514 kubelet[3184]: I0213 19:26:53.185495 3184 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:26:53.186104 kubelet[3184]: I0213 19:26:53.186015 3184 server.go:1269] "Started kubelet" Feb 13 19:26:53.208538 sudo[3197]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:26:53.209267 sudo[3197]: 
pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:26:53.216187 kubelet[3184]: I0213 19:26:53.212531 3184 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:26:53.225233 kubelet[3184]: I0213 19:26:53.225186 3184 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:26:53.241671 kubelet[3184]: I0213 19:26:53.241633 3184 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:26:53.243958 kubelet[3184]: I0213 19:26:53.243894 3184 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:26:53.245460 kubelet[3184]: I0213 19:26:53.245424 3184 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:26:53.246731 kubelet[3184]: I0213 19:26:53.246713 3184 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:26:53.258131 kubelet[3184]: I0213 19:26:53.258100 3184 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:26:53.258368 kubelet[3184]: E0213 19:26:53.258347 3184 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-191\" not found" Feb 13 19:26:53.258874 kubelet[3184]: I0213 19:26:53.258852 3184 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:26:53.259084 kubelet[3184]: I0213 19:26:53.259059 3184 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:26:53.278511 kubelet[3184]: I0213 19:26:53.276192 3184 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:26:53.278511 kubelet[3184]: I0213 19:26:53.276292 3184 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:26:53.287204 kubelet[3184]: E0213 19:26:53.287037 3184 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:26:53.293300 kubelet[3184]: I0213 19:26:53.293225 3184 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:26:53.332238 kubelet[3184]: I0213 19:26:53.332201 3184 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:26:53.337285 kubelet[3184]: I0213 19:26:53.336761 3184 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:26:53.337285 kubelet[3184]: I0213 19:26:53.336803 3184 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:26:53.337285 kubelet[3184]: I0213 19:26:53.336908 3184 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:26:53.337285 kubelet[3184]: E0213 19:26:53.337002 3184 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:26:53.437326 kubelet[3184]: E0213 19:26:53.437277 3184 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:26:53.461967 kubelet[3184]: I0213 19:26:53.461337 3184 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:26:53.461967 kubelet[3184]: I0213 19:26:53.461470 3184 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:26:53.461967 kubelet[3184]: I0213 19:26:53.461497 3184 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:26:53.461967 kubelet[3184]: I0213 19:26:53.461828 3184 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:26:53.461967 kubelet[3184]: I0213 19:26:53.461846 3184 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:26:53.461967 kubelet[3184]: I0213 19:26:53.461873 3184 policy_none.go:49] "None policy: Start" Feb 13 19:26:53.469730 kubelet[3184]: I0213 19:26:53.469579 3184 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:26:53.469730 kubelet[3184]: I0213 19:26:53.469620 3184 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:26:53.471090 kubelet[3184]: I0213 19:26:53.470421 3184 state_mem.go:75] "Updated machine memory state" Feb 13 19:26:53.490488 kubelet[3184]: I0213 19:26:53.489959 3184 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:26:53.490488 kubelet[3184]: I0213 19:26:53.490211 3184 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:26:53.490488 kubelet[3184]: I0213 19:26:53.490226 3184 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:26:53.491086 kubelet[3184]: I0213 19:26:53.490742 3184 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:26:53.651466 kubelet[3184]: I0213 19:26:53.651344 3184 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-191" Feb 13 19:26:53.683840 kubelet[3184]: I0213 19:26:53.682654 3184 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-19-191" Feb 13 19:26:53.683840 kubelet[3184]: I0213 19:26:53.682749 3184 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-19-191" Feb 13 19:26:53.701245 kubelet[3184]: I0213 19:26:53.701195 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/accc4d91322b1f6528fed397471dc1ce-ca-certs\") pod \"kube-apiserver-ip-172-31-19-191\" (UID: \"accc4d91322b1f6528fed397471dc1ce\") " pod="kube-system/kube-apiserver-ip-172-31-19-191" Feb 13 19:26:53.701245 kubelet[3184]: I0213 19:26:53.701245 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/accc4d91322b1f6528fed397471dc1ce-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-191\" (UID: \"accc4d91322b1f6528fed397471dc1ce\") " 
pod="kube-system/kube-apiserver-ip-172-31-19-191" Feb 13 19:26:53.701534 kubelet[3184]: I0213 19:26:53.701274 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/accc4d91322b1f6528fed397471dc1ce-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-191\" (UID: \"accc4d91322b1f6528fed397471dc1ce\") " pod="kube-system/kube-apiserver-ip-172-31-19-191" Feb 13 19:26:53.701534 kubelet[3184]: I0213 19:26:53.701297 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3149e38e4a7bc9bec9a7ba1a3a3c7a83-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-191\" (UID: \"3149e38e4a7bc9bec9a7ba1a3a3c7a83\") " pod="kube-system/kube-controller-manager-ip-172-31-19-191" Feb 13 19:26:53.701534 kubelet[3184]: I0213 19:26:53.701322 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3149e38e4a7bc9bec9a7ba1a3a3c7a83-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-191\" (UID: \"3149e38e4a7bc9bec9a7ba1a3a3c7a83\") " pod="kube-system/kube-controller-manager-ip-172-31-19-191" Feb 13 19:26:53.701534 kubelet[3184]: I0213 19:26:53.701343 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3149e38e4a7bc9bec9a7ba1a3a3c7a83-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-191\" (UID: \"3149e38e4a7bc9bec9a7ba1a3a3c7a83\") " pod="kube-system/kube-controller-manager-ip-172-31-19-191" Feb 13 19:26:53.701534 kubelet[3184]: I0213 19:26:53.701365 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3149e38e4a7bc9bec9a7ba1a3a3c7a83-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-191\" (UID: \"3149e38e4a7bc9bec9a7ba1a3a3c7a83\") " pod="kube-system/kube-controller-manager-ip-172-31-19-191" Feb 13 19:26:53.701863 kubelet[3184]: I0213 19:26:53.701386 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3149e38e4a7bc9bec9a7ba1a3a3c7a83-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-191\" (UID: \"3149e38e4a7bc9bec9a7ba1a3a3c7a83\") " pod="kube-system/kube-controller-manager-ip-172-31-19-191" Feb 13 19:26:53.701863 kubelet[3184]: I0213 19:26:53.701411 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/901d9e26bbb41e146ea1288a8761375d-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-191\" (UID: \"901d9e26bbb41e146ea1288a8761375d\") " pod="kube-system/kube-scheduler-ip-172-31-19-191" Feb 13 19:26:53.777913 update_engine[1897]: I20250213 19:26:53.776443 1897 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:26:53.892098 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3234) Feb 13 19:26:54.212330 kubelet[3184]: I0213 19:26:54.194170 3184 apiserver.go:52] "Watching apiserver" Feb 13 19:26:54.267300 kubelet[3184]: I0213 19:26:54.263376 3184 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:26:54.380162 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3225) Feb 13 19:26:54.511715 kubelet[3184]: I0213 19:26:54.510929 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-191" podStartSLOduration=1.510905256 podStartE2EDuration="1.510905256s" podCreationTimestamp="2025-02-13 19:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:26:54.486822554 +0000 UTC m=+1.447757331" watchObservedRunningTime="2025-02-13 19:26:54.510905256 +0000 UTC m=+1.471840034" Feb 13 19:26:54.570327 kubelet[3184]: I0213 19:26:54.570265 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-191" podStartSLOduration=1.570243081 podStartE2EDuration="1.570243081s" podCreationTimestamp="2025-02-13 19:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:26:54.512954318 +0000 UTC m=+1.473889086" watchObservedRunningTime="2025-02-13 19:26:54.570243081 +0000 UTC m=+1.531177858" Feb 13 19:26:54.581659 kubelet[3184]: I0213 19:26:54.577383 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-191" podStartSLOduration=1.577329282 podStartE2EDuration="1.577329282s" podCreationTimestamp="2025-02-13 19:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:26:54.574236512 +0000 UTC m=+1.535171280" watchObservedRunningTime="2025-02-13 19:26:54.577329282 +0000 UTC m=+1.538264061" Feb 13 19:26:54.843482 sudo[3197]: pam_unix(sudo:session): session closed for user root Feb 13 19:26:57.295862 sudo[2272]: pam_unix(sudo:session): session closed for user root Feb 13 19:26:57.320483 sshd[2271]: Connection closed by 139.178.68.195 port 37454 Feb 13 19:26:57.321965 sshd-session[2269]: pam_unix(sshd:session): session closed for user core Feb 13 19:26:57.329658 systemd[1]: sshd@8-172.31.19.191:22-139.178.68.195:37454.service: Deactivated successfully. Feb 13 19:26:57.333963 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:26:57.334280 systemd[1]: session-9.scope: Consumed 4.420s CPU time, 211.4M memory peak. Feb 13 19:26:57.339957 systemd-logind[1896]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:26:57.341939 systemd-logind[1896]: Removed session 9. Feb 13 19:26:58.138495 kubelet[3184]: I0213 19:26:58.138187 3184 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:26:58.141129 containerd[1918]: time="2025-02-13T19:26:58.140649272Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
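Annotation: the containerd message just above ("wait for other system components to drop the config") is the CNI handshake: the CRI keeps reporting NetworkReady=false until a network plugin writes a CNI config into containerd's conf dir, which the Cilium agent does after finishing its init chain further down. A hedged spot check, assuming the default conf dir:

    ls /etc/cni/net.d/    # expect a cilium-owned .conf/.conflist here once the agent is running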
Feb 13 19:26:58.146317 kubelet[3184]: I0213 19:26:58.146286 3184 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:26:58.156054 systemd[1]: Created slice kubepods-besteffort-pod90208bc6_8bc5_4fe1_97d4_32a70cfa68e2.slice - libcontainer container kubepods-besteffort-pod90208bc6_8bc5_4fe1_97d4_32a70cfa68e2.slice. Feb 13 19:26:58.177432 systemd[1]: Created slice kubepods-burstable-pod68331348_b861_43ac_adb9_d774f9189db4.slice - libcontainer container kubepods-burstable-pod68331348_b861_43ac_adb9_d774f9189db4.slice. Feb 13 19:26:58.242598 kubelet[3184]: I0213 19:26:58.242548 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68331348-b861-43ac-adb9-d774f9189db4-hubble-tls\") pod \"cilium-kt6pf\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " pod="kube-system/cilium-kt6pf" Feb 13 19:26:58.242598 kubelet[3184]: I0213 19:26:58.242592 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90208bc6-8bc5-4fe1-97d4-32a70cfa68e2-xtables-lock\") pod \"kube-proxy-tz9wb\" (UID: \"90208bc6-8bc5-4fe1-97d4-32a70cfa68e2\") " pod="kube-system/kube-proxy-tz9wb" Feb 13 19:26:58.242871 kubelet[3184]: I0213 19:26:58.242617 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90208bc6-8bc5-4fe1-97d4-32a70cfa68e2-lib-modules\") pod \"kube-proxy-tz9wb\" (UID: \"90208bc6-8bc5-4fe1-97d4-32a70cfa68e2\") " pod="kube-system/kube-proxy-tz9wb" Feb 13 19:26:58.242871 kubelet[3184]: I0213 19:26:58.242640 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-hostproc\") pod \"cilium-kt6pf\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " pod="kube-system/cilium-kt6pf" Feb 13 19:26:58.242871 kubelet[3184]: I0213 19:26:58.242716 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-bpf-maps\") pod \"cilium-kt6pf\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " pod="kube-system/cilium-kt6pf" Feb 13 19:26:58.242871 kubelet[3184]: I0213 19:26:58.242742 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-xtables-lock\") pod \"cilium-kt6pf\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " pod="kube-system/cilium-kt6pf" Feb 13 19:26:58.242871 kubelet[3184]: I0213 19:26:58.242769 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/90208bc6-8bc5-4fe1-97d4-32a70cfa68e2-kube-proxy\") pod \"kube-proxy-tz9wb\" (UID: \"90208bc6-8bc5-4fe1-97d4-32a70cfa68e2\") " pod="kube-system/kube-proxy-tz9wb" Feb 13 19:26:58.242871 kubelet[3184]: I0213 19:26:58.242791 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-lib-modules\") pod \"cilium-kt6pf\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " pod="kube-system/cilium-kt6pf" Feb 13 19:26:58.243152 
kubelet[3184]: I0213 19:26:58.242818 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-host-proc-sys-net\") pod \"cilium-kt6pf\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " pod="kube-system/cilium-kt6pf" Feb 13 19:26:58.243152 kubelet[3184]: I0213 19:26:58.242842 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2xmt\" (UniqueName: \"kubernetes.io/projected/68331348-b861-43ac-adb9-d774f9189db4-kube-api-access-q2xmt\") pod \"cilium-kt6pf\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " pod="kube-system/cilium-kt6pf" Feb 13 19:26:58.243152 kubelet[3184]: I0213 19:26:58.242868 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68331348-b861-43ac-adb9-d774f9189db4-clustermesh-secrets\") pod \"cilium-kt6pf\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " pod="kube-system/cilium-kt6pf" Feb 13 19:26:58.243152 kubelet[3184]: I0213 19:26:58.242894 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68331348-b861-43ac-adb9-d774f9189db4-cilium-config-path\") pod \"cilium-kt6pf\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " pod="kube-system/cilium-kt6pf" Feb 13 19:26:58.243152 kubelet[3184]: I0213 19:26:58.242917 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-cilium-run\") pod \"cilium-kt6pf\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " pod="kube-system/cilium-kt6pf" Feb 13 19:26:58.243324 kubelet[3184]: I0213 19:26:58.242942 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-cilium-cgroup\") pod \"cilium-kt6pf\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " pod="kube-system/cilium-kt6pf" Feb 13 19:26:58.243324 kubelet[3184]: I0213 19:26:58.242967 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-etc-cni-netd\") pod \"cilium-kt6pf\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " pod="kube-system/cilium-kt6pf" Feb 13 19:26:58.243324 kubelet[3184]: I0213 19:26:58.242996 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-cni-path\") pod \"cilium-kt6pf\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " pod="kube-system/cilium-kt6pf" Feb 13 19:26:58.243324 kubelet[3184]: I0213 19:26:58.243024 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvg9m\" (UniqueName: \"kubernetes.io/projected/90208bc6-8bc5-4fe1-97d4-32a70cfa68e2-kube-api-access-mvg9m\") pod \"kube-proxy-tz9wb\" (UID: \"90208bc6-8bc5-4fe1-97d4-32a70cfa68e2\") " pod="kube-system/kube-proxy-tz9wb" Feb 13 19:26:58.243324 kubelet[3184]: I0213 19:26:58.243050 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-host-proc-sys-kernel\") pod \"cilium-kt6pf\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " pod="kube-system/cilium-kt6pf" Feb 13 19:26:58.377893 kubelet[3184]: E0213 19:26:58.377351 3184 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:26:58.377893 kubelet[3184]: E0213 19:26:58.377387 3184 projected.go:194] Error preparing data for projected volume kube-api-access-mvg9m for pod kube-system/kube-proxy-tz9wb: configmap "kube-root-ca.crt" not found Feb 13 19:26:58.377893 kubelet[3184]: E0213 19:26:58.377468 3184 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/90208bc6-8bc5-4fe1-97d4-32a70cfa68e2-kube-api-access-mvg9m podName:90208bc6-8bc5-4fe1-97d4-32a70cfa68e2 nodeName:}" failed. No retries permitted until 2025-02-13 19:26:58.877439163 +0000 UTC m=+5.838373933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mvg9m" (UniqueName: "kubernetes.io/projected/90208bc6-8bc5-4fe1-97d4-32a70cfa68e2-kube-api-access-mvg9m") pod "kube-proxy-tz9wb" (UID: "90208bc6-8bc5-4fe1-97d4-32a70cfa68e2") : configmap "kube-root-ca.crt" not found Feb 13 19:26:58.384671 kubelet[3184]: E0213 19:26:58.384642 3184 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:26:58.384957 kubelet[3184]: E0213 19:26:58.384823 3184 projected.go:194] Error preparing data for projected volume kube-api-access-q2xmt for pod kube-system/cilium-kt6pf: configmap "kube-root-ca.crt" not found Feb 13 19:26:58.384957 kubelet[3184]: E0213 19:26:58.384929 3184 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/68331348-b861-43ac-adb9-d774f9189db4-kube-api-access-q2xmt podName:68331348-b861-43ac-adb9-d774f9189db4 nodeName:}" failed. No retries permitted until 2025-02-13 19:26:58.884905553 +0000 UTC m=+5.845840325 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q2xmt" (UniqueName: "kubernetes.io/projected/68331348-b861-43ac-adb9-d774f9189db4-kube-api-access-q2xmt") pod "cilium-kt6pf" (UID: "68331348-b861-43ac-adb9-d774f9189db4") : configmap "kube-root-ca.crt" not found Feb 13 19:26:59.068744 containerd[1918]: time="2025-02-13T19:26:59.068694262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tz9wb,Uid:90208bc6-8bc5-4fe1-97d4-32a70cfa68e2,Namespace:kube-system,Attempt:0,}" Feb 13 19:26:59.085924 containerd[1918]: time="2025-02-13T19:26:59.085874109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kt6pf,Uid:68331348-b861-43ac-adb9-d774f9189db4,Namespace:kube-system,Attempt:0,}" Feb 13 19:26:59.166510 containerd[1918]: time="2025-02-13T19:26:59.165988979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:26:59.173589 containerd[1918]: time="2025-02-13T19:26:59.166682412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:26:59.173589 containerd[1918]: time="2025-02-13T19:26:59.166891383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:26:59.176549 containerd[1918]: time="2025-02-13T19:26:59.168335850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:26:59.214402 systemd[1]: Started cri-containerd-9f56faf1e29693d5d7526e7a4435c75f5ff7114c39fd87056fd81aeac8e372da.scope - libcontainer container 9f56faf1e29693d5d7526e7a4435c75f5ff7114c39fd87056fd81aeac8e372da. Feb 13 19:26:59.234064 containerd[1918]: time="2025-02-13T19:26:59.232009403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:26:59.234064 containerd[1918]: time="2025-02-13T19:26:59.232098563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:26:59.234064 containerd[1918]: time="2025-02-13T19:26:59.232122365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:26:59.234064 containerd[1918]: time="2025-02-13T19:26:59.232238617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:26:59.253380 systemd[1]: Created slice kubepods-besteffort-pod739da241_b1df_40a1_96df_d17dcb018fb8.slice - libcontainer container kubepods-besteffort-pod739da241_b1df_40a1_96df_d17dcb018fb8.slice. Feb 13 19:26:59.288697 systemd[1]: Started cri-containerd-718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f.scope - libcontainer container 718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f. Feb 13 19:26:59.356865 kubelet[3184]: I0213 19:26:59.356743 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/739da241-b1df-40a1-96df-d17dcb018fb8-cilium-config-path\") pod \"cilium-operator-5d85765b45-66f48\" (UID: \"739da241-b1df-40a1-96df-d17dcb018fb8\") " pod="kube-system/cilium-operator-5d85765b45-66f48" Feb 13 19:26:59.356865 kubelet[3184]: I0213 19:26:59.356797 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9hw\" (UniqueName: \"kubernetes.io/projected/739da241-b1df-40a1-96df-d17dcb018fb8-kube-api-access-cf9hw\") pod \"cilium-operator-5d85765b45-66f48\" (UID: \"739da241-b1df-40a1-96df-d17dcb018fb8\") " pod="kube-system/cilium-operator-5d85765b45-66f48" Feb 13 19:26:59.403148 containerd[1918]: time="2025-02-13T19:26:59.402888745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kt6pf,Uid:68331348-b861-43ac-adb9-d774f9189db4,Namespace:kube-system,Attempt:0,} returns sandbox id \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\"" Feb 13 19:26:59.410154 containerd[1918]: time="2025-02-13T19:26:59.409692350Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:26:59.411602 containerd[1918]: time="2025-02-13T19:26:59.411562427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tz9wb,Uid:90208bc6-8bc5-4fe1-97d4-32a70cfa68e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f56faf1e29693d5d7526e7a4435c75f5ff7114c39fd87056fd81aeac8e372da\"" Feb 13 19:26:59.422508 containerd[1918]: time="2025-02-13T19:26:59.422331204Z" level=info msg="CreateContainer 
within sandbox \"9f56faf1e29693d5d7526e7a4435c75f5ff7114c39fd87056fd81aeac8e372da\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:26:59.481113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1574419427.mount: Deactivated successfully. Feb 13 19:26:59.488360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1436797741.mount: Deactivated successfully. Feb 13 19:26:59.495784 containerd[1918]: time="2025-02-13T19:26:59.495637854Z" level=info msg="CreateContainer within sandbox \"9f56faf1e29693d5d7526e7a4435c75f5ff7114c39fd87056fd81aeac8e372da\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3edf23dbf1d9e8593889c410750ecdc20d01fbc8ec1e7db8d6e23cc9f02d8c23\"" Feb 13 19:26:59.501324 containerd[1918]: time="2025-02-13T19:26:59.501284537Z" level=info msg="StartContainer for \"3edf23dbf1d9e8593889c410750ecdc20d01fbc8ec1e7db8d6e23cc9f02d8c23\"" Feb 13 19:26:59.551305 systemd[1]: Started cri-containerd-3edf23dbf1d9e8593889c410750ecdc20d01fbc8ec1e7db8d6e23cc9f02d8c23.scope - libcontainer container 3edf23dbf1d9e8593889c410750ecdc20d01fbc8ec1e7db8d6e23cc9f02d8c23. Feb 13 19:26:59.561008 containerd[1918]: time="2025-02-13T19:26:59.560929140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-66f48,Uid:739da241-b1df-40a1-96df-d17dcb018fb8,Namespace:kube-system,Attempt:0,}" Feb 13 19:26:59.598956 containerd[1918]: time="2025-02-13T19:26:59.598908751Z" level=info msg="StartContainer for \"3edf23dbf1d9e8593889c410750ecdc20d01fbc8ec1e7db8d6e23cc9f02d8c23\" returns successfully" Feb 13 19:26:59.613088 containerd[1918]: time="2025-02-13T19:26:59.612022520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:26:59.613863 containerd[1918]: time="2025-02-13T19:26:59.612991623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:26:59.613863 containerd[1918]: time="2025-02-13T19:26:59.613034220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:26:59.613863 containerd[1918]: time="2025-02-13T19:26:59.613199529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:26:59.637299 systemd[1]: Started cri-containerd-ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a.scope - libcontainer container ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a. 
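Annotation: each RunPodSandbox call above returns a 64-hex sandbox ID, and systemd runs it as a transient cri-containerd-<id>.scope unit. The same objects can be inspected from the node with crictl, assuming it is pointed at containerd's socket (the endpoint value below is an assumption, not logged here):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods     # sandboxes: kube-proxy-tz9wb, cilium-kt6pf, ...
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a    # containers inside them
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs -f <container-id>   # e.g. the 3edf23dbf1d9... kube-proxy ID above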
Feb 13 19:26:59.691802 containerd[1918]: time="2025-02-13T19:26:59.691751100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-66f48,Uid:739da241-b1df-40a1-96df-d17dcb018fb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a\"" Feb 13 19:27:00.493095 kubelet[3184]: I0213 19:27:00.490764 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tz9wb" podStartSLOduration=2.490741966 podStartE2EDuration="2.490741966s" podCreationTimestamp="2025-02-13 19:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:27:00.459904003 +0000 UTC m=+7.420838784" watchObservedRunningTime="2025-02-13 19:27:00.490741966 +0000 UTC m=+7.451676743" Feb 13 19:27:07.467066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110379660.mount: Deactivated successfully. Feb 13 19:27:10.901787 containerd[1918]: time="2025-02-13T19:27:10.901708604Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 19:27:10.906250 containerd[1918]: time="2025-02-13T19:27:10.905764346Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.496008112s" Feb 13 19:27:10.906250 containerd[1918]: time="2025-02-13T19:27:10.905838629Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 19:27:10.910199 containerd[1918]: time="2025-02-13T19:27:10.909860447Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:27:10.915509 containerd[1918]: time="2025-02-13T19:27:10.915469781Z" level=info msg="CreateContainer within sandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:27:10.952092 containerd[1918]: time="2025-02-13T19:27:10.950563463Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:10.952244 containerd[1918]: time="2025-02-13T19:27:10.952197406Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:11.057358 containerd[1918]: time="2025-02-13T19:27:11.057288338Z" level=info msg="CreateContainer within sandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d\"" Feb 13 19:27:11.069142 containerd[1918]: time="2025-02-13T19:27:11.069011956Z" level=info msg="StartContainer for 
\"78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d\"" Feb 13 19:27:11.520108 systemd[1]: Started cri-containerd-78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d.scope - libcontainer container 78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d. Feb 13 19:27:11.591384 containerd[1918]: time="2025-02-13T19:27:11.586757848Z" level=info msg="StartContainer for \"78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d\" returns successfully" Feb 13 19:27:11.612028 systemd[1]: cri-containerd-78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d.scope: Deactivated successfully. Feb 13 19:27:11.853045 containerd[1918]: time="2025-02-13T19:27:11.836020106Z" level=info msg="shim disconnected" id=78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d namespace=k8s.io Feb 13 19:27:11.853045 containerd[1918]: time="2025-02-13T19:27:11.852848557Z" level=warning msg="cleaning up after shim disconnected" id=78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d namespace=k8s.io Feb 13 19:27:11.853045 containerd[1918]: time="2025-02-13T19:27:11.852867377Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:27:12.016330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d-rootfs.mount: Deactivated successfully. Feb 13 19:27:12.563922 containerd[1918]: time="2025-02-13T19:27:12.563644719Z" level=info msg="CreateContainer within sandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:27:12.629023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2478018796.mount: Deactivated successfully. Feb 13 19:27:12.658781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3338824367.mount: Deactivated successfully. Feb 13 19:27:12.687839 containerd[1918]: time="2025-02-13T19:27:12.687789900Z" level=info msg="CreateContainer within sandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab\"" Feb 13 19:27:12.689218 containerd[1918]: time="2025-02-13T19:27:12.688996214Z" level=info msg="StartContainer for \"8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab\"" Feb 13 19:27:12.753585 systemd[1]: Started cri-containerd-8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab.scope - libcontainer container 8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab. Feb 13 19:27:12.828305 containerd[1918]: time="2025-02-13T19:27:12.828165616Z" level=info msg="StartContainer for \"8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab\" returns successfully" Feb 13 19:27:12.859628 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:27:12.860007 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:27:12.861488 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:27:12.880645 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:27:12.880941 systemd[1]: cri-containerd-8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab.scope: Deactivated successfully. Feb 13 19:27:12.989339 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:27:13.006249 containerd[1918]: time="2025-02-13T19:27:13.006144445Z" level=info msg="shim disconnected" id=8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab namespace=k8s.io Feb 13 19:27:13.006249 containerd[1918]: time="2025-02-13T19:27:13.006209916Z" level=warning msg="cleaning up after shim disconnected" id=8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab namespace=k8s.io Feb 13 19:27:13.006249 containerd[1918]: time="2025-02-13T19:27:13.006225173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:27:13.046741 containerd[1918]: time="2025-02-13T19:27:13.045886645Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:27:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:27:13.573141 containerd[1918]: time="2025-02-13T19:27:13.573082826Z" level=info msg="CreateContainer within sandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:27:13.652635 containerd[1918]: time="2025-02-13T19:27:13.652464324Z" level=info msg="CreateContainer within sandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51\"" Feb 13 19:27:13.657876 containerd[1918]: time="2025-02-13T19:27:13.657845185Z" level=info msg="StartContainer for \"f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51\"" Feb 13 19:27:13.749408 systemd[1]: Started cri-containerd-f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51.scope - libcontainer container f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51. Feb 13 19:27:13.822708 containerd[1918]: time="2025-02-13T19:27:13.822662062Z" level=info msg="StartContainer for \"f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51\" returns successfully" Feb 13 19:27:13.827173 systemd[1]: cri-containerd-f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51.scope: Deactivated successfully. 
Feb 13 19:27:13.838784 containerd[1918]: time="2025-02-13T19:27:13.838553333Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:13.842114 containerd[1918]: time="2025-02-13T19:27:13.841201081Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 19:27:13.844415 containerd[1918]: time="2025-02-13T19:27:13.844250929Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:27:13.848134 containerd[1918]: time="2025-02-13T19:27:13.847857482Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.937947184s" Feb 13 19:27:13.848134 containerd[1918]: time="2025-02-13T19:27:13.847903092Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 19:27:13.855105 containerd[1918]: time="2025-02-13T19:27:13.854581544Z" level=info msg="CreateContainer within sandbox \"ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:27:13.918391 containerd[1918]: time="2025-02-13T19:27:13.918334609Z" level=info msg="CreateContainer within sandbox \"ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75\"" Feb 13 19:27:13.919776 containerd[1918]: time="2025-02-13T19:27:13.919315175Z" level=info msg="StartContainer for \"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75\"" Feb 13 19:27:13.956402 systemd[1]: Started cri-containerd-9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75.scope - libcontainer container 9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75. 
Feb 13 19:27:13.988036 containerd[1918]: time="2025-02-13T19:27:13.987930258Z" level=info msg="shim disconnected" id=f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51 namespace=k8s.io Feb 13 19:27:13.988036 containerd[1918]: time="2025-02-13T19:27:13.988086666Z" level=warning msg="cleaning up after shim disconnected" id=f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51 namespace=k8s.io Feb 13 19:27:13.988036 containerd[1918]: time="2025-02-13T19:27:13.988105637Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:27:14.013471 containerd[1918]: time="2025-02-13T19:27:14.009488355Z" level=info msg="StartContainer for \"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75\" returns successfully" Feb 13 19:27:14.010588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51-rootfs.mount: Deactivated successfully. Feb 13 19:27:14.578580 containerd[1918]: time="2025-02-13T19:27:14.578460784Z" level=info msg="CreateContainer within sandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:27:14.612482 containerd[1918]: time="2025-02-13T19:27:14.607900049Z" level=info msg="CreateContainer within sandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837\"" Feb 13 19:27:14.612482 containerd[1918]: time="2025-02-13T19:27:14.609590363Z" level=info msg="StartContainer for \"80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837\"" Feb 13 19:27:14.693355 systemd[1]: Started cri-containerd-80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837.scope - libcontainer container 80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837. Feb 13 19:27:14.811701 systemd[1]: cri-containerd-80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837.scope: Deactivated successfully. Feb 13 19:27:14.815040 containerd[1918]: time="2025-02-13T19:27:14.814839751Z" level=info msg="StartContainer for \"80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837\" returns successfully" Feb 13 19:27:14.886565 containerd[1918]: time="2025-02-13T19:27:14.885922959Z" level=info msg="shim disconnected" id=80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837 namespace=k8s.io Feb 13 19:27:14.886565 containerd[1918]: time="2025-02-13T19:27:14.885986443Z" level=warning msg="cleaning up after shim disconnected" id=80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837 namespace=k8s.io Feb 13 19:27:14.886565 containerd[1918]: time="2025-02-13T19:27:14.885998295Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:27:15.029257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837-rootfs.mount: Deactivated successfully. 
Feb 13 19:27:15.606210 containerd[1918]: time="2025-02-13T19:27:15.605730887Z" level=info msg="CreateContainer within sandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:27:15.634859 kubelet[3184]: I0213 19:27:15.633828 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-66f48" podStartSLOduration=2.47572684 podStartE2EDuration="16.633808363s" podCreationTimestamp="2025-02-13 19:26:59 +0000 UTC" firstStartedPulling="2025-02-13 19:26:59.693368895 +0000 UTC m=+6.654303653" lastFinishedPulling="2025-02-13 19:27:13.851450406 +0000 UTC m=+20.812385176" observedRunningTime="2025-02-13 19:27:14.958665962 +0000 UTC m=+21.919600753" watchObservedRunningTime="2025-02-13 19:27:15.633808363 +0000 UTC m=+22.594743141" Feb 13 19:27:15.667266 containerd[1918]: time="2025-02-13T19:27:15.667218019Z" level=info msg="CreateContainer within sandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77\"" Feb 13 19:27:15.668162 containerd[1918]: time="2025-02-13T19:27:15.668129769Z" level=info msg="StartContainer for \"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77\"" Feb 13 19:27:15.717296 systemd[1]: Started cri-containerd-39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77.scope - libcontainer container 39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77. Feb 13 19:27:15.793421 containerd[1918]: time="2025-02-13T19:27:15.793367857Z" level=info msg="StartContainer for \"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77\" returns successfully" Feb 13 19:27:16.342672 kubelet[3184]: I0213 19:27:16.341529 3184 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:27:16.421589 systemd[1]: Created slice kubepods-burstable-pod05580fe3_e15e_4a5a_9aa3_0e68d913ba82.slice - libcontainer container kubepods-burstable-pod05580fe3_e15e_4a5a_9aa3_0e68d913ba82.slice. Feb 13 19:27:16.434121 systemd[1]: Created slice kubepods-burstable-pod067d199f_9a9e_4613_98b1_00bfae60d8eb.slice - libcontainer container kubepods-burstable-pod067d199f_9a9e_4613_98b1_00bfae60d8eb.slice. 
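Annotation: the container sequence run in sandbox 718a3ec2a3d4... between 19:27:10 and 19:27:15 (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then cilium-agent) matches the init-container chain of a Cilium 1.12 DaemonSet pod: each init step runs and exits (the cri-containerd scope deactivations and shim-disconnect lines above) before the long-running agent starts, after which the node flips to Ready ("Fast updating node status as it just became ready"). Assuming an admin kubeconfig is available, the order can be read back from the pod spec:

    kubectl -n kube-system get pod cilium-kt6pf \
      -o jsonpath='{range .spec.initContainers[*]}{.name}{"\n"}{end}'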
Feb 13 19:27:16.542179 kubelet[3184]: I0213 19:27:16.542120 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f8tk\" (UniqueName: \"kubernetes.io/projected/05580fe3-e15e-4a5a-9aa3-0e68d913ba82-kube-api-access-4f8tk\") pod \"coredns-6f6b679f8f-8r22k\" (UID: \"05580fe3-e15e-4a5a-9aa3-0e68d913ba82\") " pod="kube-system/coredns-6f6b679f8f-8r22k" Feb 13 19:27:16.542179 kubelet[3184]: I0213 19:27:16.542185 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl9mz\" (UniqueName: \"kubernetes.io/projected/067d199f-9a9e-4613-98b1-00bfae60d8eb-kube-api-access-dl9mz\") pod \"coredns-6f6b679f8f-btknb\" (UID: \"067d199f-9a9e-4613-98b1-00bfae60d8eb\") " pod="kube-system/coredns-6f6b679f8f-btknb" Feb 13 19:27:16.542404 kubelet[3184]: I0213 19:27:16.542212 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/067d199f-9a9e-4613-98b1-00bfae60d8eb-config-volume\") pod \"coredns-6f6b679f8f-btknb\" (UID: \"067d199f-9a9e-4613-98b1-00bfae60d8eb\") " pod="kube-system/coredns-6f6b679f8f-btknb" Feb 13 19:27:16.542404 kubelet[3184]: I0213 19:27:16.542241 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05580fe3-e15e-4a5a-9aa3-0e68d913ba82-config-volume\") pod \"coredns-6f6b679f8f-8r22k\" (UID: \"05580fe3-e15e-4a5a-9aa3-0e68d913ba82\") " pod="kube-system/coredns-6f6b679f8f-8r22k" Feb 13 19:27:16.728842 containerd[1918]: time="2025-02-13T19:27:16.728797662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8r22k,Uid:05580fe3-e15e-4a5a-9aa3-0e68d913ba82,Namespace:kube-system,Attempt:0,}" Feb 13 19:27:16.740231 containerd[1918]: time="2025-02-13T19:27:16.740011572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-btknb,Uid:067d199f-9a9e-4613-98b1-00bfae60d8eb,Namespace:kube-system,Attempt:0,}" Feb 13 19:27:22.051729 systemd-networkd[1749]: cilium_host: Link UP Feb 13 19:27:22.051913 systemd-networkd[1749]: cilium_net: Link UP Feb 13 19:27:22.054231 systemd-networkd[1749]: cilium_net: Gained carrier Feb 13 19:27:22.055785 systemd-networkd[1749]: cilium_host: Gained carrier Feb 13 19:27:22.154087 (udev-worker)[4160]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:27:22.154088 (udev-worker)[4193]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:27:22.324288 (udev-worker)[4204]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:27:22.336544 systemd-networkd[1749]: cilium_vxlan: Link UP Feb 13 19:27:22.336555 systemd-networkd[1749]: cilium_vxlan: Gained carrier Feb 13 19:27:22.502302 systemd-networkd[1749]: cilium_host: Gained IPv6LL Feb 13 19:27:22.654374 systemd-networkd[1749]: cilium_net: Gained IPv6LL Feb 13 19:27:23.999402 systemd-networkd[1749]: cilium_vxlan: Gained IPv6LL Feb 13 19:27:24.671189 kernel: NET: Registered PF_ALG protocol family Feb 13 19:27:25.910210 (udev-worker)[4203]: Network interface NamePolicy= disabled on kernel command line. 
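Annotation: the links that systemd-networkd brings up next are Cilium's datapath devices: cilium_host/cilium_net are a veth pair anchoring host-to-pod traffic, cilium_vxlan carries the overlay, and each lxc* interface below is the host end of a pod veth (the kernel's "eth0: renamed from tmpXXXX" lines are the container ends being renamed inside pod namespaces). A quick hedged check of the same state from a shell:

    ip -br link show type veth     # cilium_host/cilium_net plus the per-pod lxc* devices
    ip -d link show cilium_vxlan   # VXLAN attributes of the overlay device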
Feb 13 19:27:25.914546 systemd-networkd[1749]: lxc_health: Link UP Feb 13 19:27:25.918427 systemd-networkd[1749]: lxc_health: Gained carrier Feb 13 19:27:26.413549 systemd-networkd[1749]: lxc8ca656e3f96c: Link UP Feb 13 19:27:26.419101 kernel: eth0: renamed from tmpb8674 Feb 13 19:27:26.431405 systemd-networkd[1749]: lxc8ca656e3f96c: Gained carrier Feb 13 19:27:26.470179 kernel: eth0: renamed from tmpc6294 Feb 13 19:27:26.474478 systemd-networkd[1749]: lxc487492cf8a08: Link UP Feb 13 19:27:26.477727 systemd-networkd[1749]: lxc487492cf8a08: Gained carrier Feb 13 19:27:27.122482 kubelet[3184]: I0213 19:27:27.122397 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kt6pf" podStartSLOduration=17.622306081 podStartE2EDuration="29.122370551s" podCreationTimestamp="2025-02-13 19:26:58 +0000 UTC" firstStartedPulling="2025-02-13 19:26:59.408364304 +0000 UTC m=+6.369299067" lastFinishedPulling="2025-02-13 19:27:10.908428779 +0000 UTC m=+17.869363537" observedRunningTime="2025-02-13 19:27:16.623049134 +0000 UTC m=+23.583983914" watchObservedRunningTime="2025-02-13 19:27:27.122370551 +0000 UTC m=+34.083305329" Feb 13 19:27:27.518346 systemd-networkd[1749]: lxc_health: Gained IPv6LL Feb 13 19:27:27.710605 systemd-networkd[1749]: lxc8ca656e3f96c: Gained IPv6LL Feb 13 19:27:27.966607 systemd-networkd[1749]: lxc487492cf8a08: Gained IPv6LL Feb 13 19:27:30.059395 ntpd[1890]: Listen normally on 7 cilium_host 192.168.0.24:123 Feb 13 19:27:30.060115 ntpd[1890]: Listen normally on 8 cilium_net [fe80::8e6:efff:fe73:f9b2%4]:123 Feb 13 19:27:30.060200 ntpd[1890]: Listen normally on 9 cilium_host [fe80::c4c5:e0ff:fe84:1934%5]:123 Feb 13 19:27:30.060261 ntpd[1890]: Listen normally on 10 cilium_vxlan [fe80::787a:15ff:feba:deb7%6]:123 Feb 13 19:27:30.060308 ntpd[1890]: Listen normally on 11 lxc_health [fe80::6ce8:9dff:fe14:1a%8]:123 Feb 13 19:27:30.060351 ntpd[1890]: Listen normally on 12 lxc8ca656e3f96c [fe80::c089:c0ff:fe59:89a1%10]:123 Feb 13 19:27:30.060391 ntpd[1890]: Listen normally on 13 lxc487492cf8a08 [fe80::b0f3:e7ff:fe5d:91dc%12]:123 Feb 13 19:27:31.670512 containerd[1918]: time="2025-02-13T19:27:31.667638734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:31.670512 containerd[1918]: time="2025-02-13T19:27:31.667838260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:31.670512 containerd[1918]: time="2025-02-13T19:27:31.667865479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:31.670512 containerd[1918]: time="2025-02-13T19:27:31.668149184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:31.679277 containerd[1918]: time="2025-02-13T19:27:31.676164801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:27:31.679277 containerd[1918]: time="2025-02-13T19:27:31.676251848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:27:31.679277 containerd[1918]: time="2025-02-13T19:27:31.676270203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:31.679277 containerd[1918]: time="2025-02-13T19:27:31.676394568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:27:31.774325 systemd[1]: Started cri-containerd-b86744ee9ca9ed35497dbc5af58aab7a7c0b95d3c21711a49de468d079427d6d.scope - libcontainer container b86744ee9ca9ed35497dbc5af58aab7a7c0b95d3c21711a49de468d079427d6d. Feb 13 19:27:31.777361 systemd[1]: Started cri-containerd-c6294a6d0d1571986495693e57685c5bc03f99d4aae98e1ea9088ffd99b185d3.scope - libcontainer container c6294a6d0d1571986495693e57685c5bc03f99d4aae98e1ea9088ffd99b185d3. Feb 13 19:27:31.890180 containerd[1918]: time="2025-02-13T19:27:31.889997913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-btknb,Uid:067d199f-9a9e-4613-98b1-00bfae60d8eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b86744ee9ca9ed35497dbc5af58aab7a7c0b95d3c21711a49de468d079427d6d\"" Feb 13 19:27:31.899855 containerd[1918]: time="2025-02-13T19:27:31.899659968Z" level=info msg="CreateContainer within sandbox \"b86744ee9ca9ed35497dbc5af58aab7a7c0b95d3c21711a49de468d079427d6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:27:31.934989 containerd[1918]: time="2025-02-13T19:27:31.934534126Z" level=info msg="CreateContainer within sandbox \"b86744ee9ca9ed35497dbc5af58aab7a7c0b95d3c21711a49de468d079427d6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"186e578a18c6efd3cc7f36e8e9e59f57ebbd6d839419b447b09978fb046d1b3b\"" Feb 13 19:27:31.937843 containerd[1918]: time="2025-02-13T19:27:31.937502139Z" level=info msg="StartContainer for \"186e578a18c6efd3cc7f36e8e9e59f57ebbd6d839419b447b09978fb046d1b3b\"" Feb 13 19:27:31.965913 containerd[1918]: time="2025-02-13T19:27:31.965585131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8r22k,Uid:05580fe3-e15e-4a5a-9aa3-0e68d913ba82,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6294a6d0d1571986495693e57685c5bc03f99d4aae98e1ea9088ffd99b185d3\"" Feb 13 19:27:31.975815 containerd[1918]: time="2025-02-13T19:27:31.975114519Z" level=info msg="CreateContainer within sandbox \"c6294a6d0d1571986495693e57685c5bc03f99d4aae98e1ea9088ffd99b185d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:27:31.996757 systemd[1]: Started cri-containerd-186e578a18c6efd3cc7f36e8e9e59f57ebbd6d839419b447b09978fb046d1b3b.scope - 
libcontainer container 186e578a18c6efd3cc7f36e8e9e59f57ebbd6d839419b447b09978fb046d1b3b. Feb 13 19:27:32.008142 containerd[1918]: time="2025-02-13T19:27:32.008103082Z" level=info msg="CreateContainer within sandbox \"c6294a6d0d1571986495693e57685c5bc03f99d4aae98e1ea9088ffd99b185d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1a75137ff7a8ec938451045b3f60662fee6a164e95de9f70e94393d7f4dad8db\"" Feb 13 19:27:32.009478 containerd[1918]: time="2025-02-13T19:27:32.009381479Z" level=info msg="StartContainer for \"1a75137ff7a8ec938451045b3f60662fee6a164e95de9f70e94393d7f4dad8db\"" Feb 13 19:27:32.095487 systemd[1]: Started cri-containerd-1a75137ff7a8ec938451045b3f60662fee6a164e95de9f70e94393d7f4dad8db.scope - libcontainer container 1a75137ff7a8ec938451045b3f60662fee6a164e95de9f70e94393d7f4dad8db. Feb 13 19:27:32.100280 containerd[1918]: time="2025-02-13T19:27:32.100236921Z" level=info msg="StartContainer for \"186e578a18c6efd3cc7f36e8e9e59f57ebbd6d839419b447b09978fb046d1b3b\" returns successfully" Feb 13 19:27:32.132487 containerd[1918]: time="2025-02-13T19:27:32.132424883Z" level=info msg="StartContainer for \"1a75137ff7a8ec938451045b3f60662fee6a164e95de9f70e94393d7f4dad8db\" returns successfully" Feb 13 19:27:32.700348 kubelet[3184]: I0213 19:27:32.699625 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-btknb" podStartSLOduration=33.699602659 podStartE2EDuration="33.699602659s" podCreationTimestamp="2025-02-13 19:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:27:32.698642944 +0000 UTC m=+39.659577722" watchObservedRunningTime="2025-02-13 19:27:32.699602659 +0000 UTC m=+39.660537436" Feb 13 19:27:32.719380 kubelet[3184]: I0213 19:27:32.719134 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8r22k" podStartSLOduration=33.719118439 podStartE2EDuration="33.719118439s" podCreationTimestamp="2025-02-13 19:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:27:32.718050728 +0000 UTC m=+39.678985507" watchObservedRunningTime="2025-02-13 19:27:32.719118439 +0000 UTC m=+39.680053217" Feb 13 19:27:41.582358 systemd[1]: Started sshd@9-172.31.19.191:22-139.178.68.195:56890.service - OpenSSH per-connection server daemon (139.178.68.195:56890). Feb 13 19:27:41.881982 sshd[4734]: Accepted publickey for core from 139.178.68.195 port 56890 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:27:41.886335 sshd-session[4734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:41.931272 systemd-logind[1896]: New session 10 of user core. Feb 13 19:27:41.938404 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:27:42.838902 sshd[4736]: Connection closed by 139.178.68.195 port 56890 Feb 13 19:27:42.839668 sshd-session[4734]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:42.843138 systemd[1]: sshd@9-172.31.19.191:22-139.178.68.195:56890.service: Deactivated successfully. Feb 13 19:27:42.845678 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:27:42.847583 systemd-logind[1896]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:27:42.848927 systemd-logind[1896]: Removed session 10. 
Feb 13 19:27:47.891543 systemd[1]: Started sshd@10-172.31.19.191:22-139.178.68.195:46070.service - OpenSSH per-connection server daemon (139.178.68.195:46070). Feb 13 19:27:48.105112 sshd[4749]: Accepted publickey for core from 139.178.68.195 port 46070 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:27:48.107455 sshd-session[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:48.116874 systemd-logind[1896]: New session 11 of user core. Feb 13 19:27:48.128579 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:27:48.409832 sshd[4751]: Connection closed by 139.178.68.195 port 46070 Feb 13 19:27:48.410950 sshd-session[4749]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:48.416288 systemd[1]: sshd@10-172.31.19.191:22-139.178.68.195:46070.service: Deactivated successfully. Feb 13 19:27:48.421644 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:27:48.424532 systemd-logind[1896]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:27:48.427864 systemd-logind[1896]: Removed session 11. Feb 13 19:27:53.453220 systemd[1]: Started sshd@11-172.31.19.191:22-139.178.68.195:46076.service - OpenSSH per-connection server daemon (139.178.68.195:46076). Feb 13 19:27:53.643041 sshd[4767]: Accepted publickey for core from 139.178.68.195 port 46076 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:27:53.644979 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:53.653032 systemd-logind[1896]: New session 12 of user core. Feb 13 19:27:53.655573 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:27:53.889930 sshd[4769]: Connection closed by 139.178.68.195 port 46076 Feb 13 19:27:53.891964 sshd-session[4767]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:53.897315 systemd[1]: sshd@11-172.31.19.191:22-139.178.68.195:46076.service: Deactivated successfully. Feb 13 19:27:53.900793 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:27:53.904122 systemd-logind[1896]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:27:53.907397 systemd-logind[1896]: Removed session 12. Feb 13 19:27:58.930540 systemd[1]: Started sshd@12-172.31.19.191:22-139.178.68.195:35066.service - OpenSSH per-connection server daemon (139.178.68.195:35066). Feb 13 19:27:59.112050 sshd[4782]: Accepted publickey for core from 139.178.68.195 port 35066 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:27:59.113657 sshd-session[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:59.119944 systemd-logind[1896]: New session 13 of user core. Feb 13 19:27:59.127363 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:27:59.390602 sshd[4784]: Connection closed by 139.178.68.195 port 35066 Feb 13 19:27:59.392312 sshd-session[4782]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:59.396270 systemd-logind[1896]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:27:59.397482 systemd[1]: sshd@12-172.31.19.191:22-139.178.68.195:35066.service: Deactivated successfully. Feb 13 19:27:59.401045 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:27:59.405162 systemd-logind[1896]: Removed session 13. 
Feb 13 19:27:59.428528 systemd[1]: Started sshd@13-172.31.19.191:22-139.178.68.195:35080.service - OpenSSH per-connection server daemon (139.178.68.195:35080). Feb 13 19:27:59.609841 sshd[4798]: Accepted publickey for core from 139.178.68.195 port 35080 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:27:59.611608 sshd-session[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:59.618293 systemd-logind[1896]: New session 14 of user core. Feb 13 19:27:59.627322 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:27:59.922017 sshd[4800]: Connection closed by 139.178.68.195 port 35080 Feb 13 19:27:59.924656 sshd-session[4798]: pam_unix(sshd:session): session closed for user core Feb 13 19:27:59.936142 systemd-logind[1896]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:27:59.937767 systemd[1]: sshd@13-172.31.19.191:22-139.178.68.195:35080.service: Deactivated successfully. Feb 13 19:27:59.948991 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:27:59.976953 systemd-logind[1896]: Removed session 14. Feb 13 19:27:59.985548 systemd[1]: Started sshd@14-172.31.19.191:22-139.178.68.195:35090.service - OpenSSH per-connection server daemon (139.178.68.195:35090). Feb 13 19:28:00.203648 sshd[4810]: Accepted publickey for core from 139.178.68.195 port 35090 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:28:00.207716 sshd-session[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:00.215737 systemd-logind[1896]: New session 15 of user core. Feb 13 19:28:00.220783 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:28:00.489229 sshd[4813]: Connection closed by 139.178.68.195 port 35090 Feb 13 19:28:00.490542 sshd-session[4810]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:00.497059 systemd[1]: sshd@14-172.31.19.191:22-139.178.68.195:35090.service: Deactivated successfully. Feb 13 19:28:00.500889 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:28:00.502938 systemd-logind[1896]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:28:00.505584 systemd-logind[1896]: Removed session 15. Feb 13 19:28:05.528898 systemd[1]: Started sshd@15-172.31.19.191:22-139.178.68.195:35094.service - OpenSSH per-connection server daemon (139.178.68.195:35094). Feb 13 19:28:05.754242 sshd[4828]: Accepted publickey for core from 139.178.68.195 port 35094 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:28:05.756347 sshd-session[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:05.766346 systemd-logind[1896]: New session 16 of user core. Feb 13 19:28:05.775792 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:28:06.034297 sshd[4830]: Connection closed by 139.178.68.195 port 35094 Feb 13 19:28:06.035593 sshd-session[4828]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:06.040155 systemd-logind[1896]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:28:06.042518 systemd[1]: sshd@15-172.31.19.191:22-139.178.68.195:35094.service: Deactivated successfully. Feb 13 19:28:06.046931 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:28:06.050271 systemd-logind[1896]: Removed session 16. 
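Each SSH connection above follows the same fixed pattern: systemd starts a per-connection sshd@…service unit, sshd accepts the publickey, pam_unix opens the session, systemd-logind allocates "New session N", and teardown mirrors it ("Removed session N", unit deactivated). That regularity makes session lifetimes easy to extract from a journal dump; a stdlib-only sketch, assuming one journal entry per line as journalctl emits them (the regexes match the exact message formats above, and the year-less journal timestamps are fine for same-day deltas):

```python
import re
import sys
from datetime import datetime

OPEN_RE  = re.compile(r"^(\w{3} +\d+ [\d:.]+) .*New session (\d+) of user (\S+)\.")
CLOSE_RE = re.compile(r"^(\w{3} +\d+ [\d:.]+) .*Removed session (\d+)\.")

def ts(stamp: str) -> datetime:
    # Journal short timestamps carry no year; acceptable for same-day durations.
    return datetime.strptime(stamp, "%b %d %H:%M:%S.%f")

opened = {}
for line in sys.stdin:
    if m := OPEN_RE.match(line):
        opened[m[2]] = (ts(m[1]), m[3])
    elif (m := CLOSE_RE.match(line)) and m[2] in opened:
        start, user = opened.pop(m[2])
        print(f"session {m[2]} ({user}): {(ts(m[1]) - start).total_seconds():.1f}s")
```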
Feb 13 19:28:11.079802 systemd[1]: Started sshd@16-172.31.19.191:22-139.178.68.195:58856.service - OpenSSH per-connection server daemon (139.178.68.195:58856). Feb 13 19:28:11.264226 sshd[4842]: Accepted publickey for core from 139.178.68.195 port 58856 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:28:11.266309 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:11.273794 systemd-logind[1896]: New session 17 of user core. Feb 13 19:28:11.281406 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:28:11.532529 sshd[4844]: Connection closed by 139.178.68.195 port 58856 Feb 13 19:28:11.533773 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:11.538360 systemd-logind[1896]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:28:11.539598 systemd[1]: sshd@16-172.31.19.191:22-139.178.68.195:58856.service: Deactivated successfully. Feb 13 19:28:11.542416 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:28:11.543937 systemd-logind[1896]: Removed session 17. Feb 13 19:28:11.569483 systemd[1]: Started sshd@17-172.31.19.191:22-139.178.68.195:58860.service - OpenSSH per-connection server daemon (139.178.68.195:58860). Feb 13 19:28:11.744221 sshd[4856]: Accepted publickey for core from 139.178.68.195 port 58860 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:28:11.745833 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:11.751445 systemd-logind[1896]: New session 18 of user core. Feb 13 19:28:11.760289 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:28:16.664236 sshd[4858]: Connection closed by 139.178.68.195 port 58860 Feb 13 19:28:16.704507 systemd[1]: Started sshd@18-172.31.19.191:22-139.178.68.195:36986.service - OpenSSH per-connection server daemon (139.178.68.195:36986). Feb 13 19:28:16.723541 sshd-session[4856]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:16.734785 systemd[1]: sshd@17-172.31.19.191:22-139.178.68.195:58860.service: Deactivated successfully. Feb 13 19:28:16.739602 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:28:16.746748 systemd-logind[1896]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:28:16.748460 systemd-logind[1896]: Removed session 18. Feb 13 19:28:16.915062 sshd[4866]: Accepted publickey for core from 139.178.68.195 port 36986 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:28:16.916890 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:16.923371 systemd-logind[1896]: New session 19 of user core. Feb 13 19:28:16.931345 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:28:19.622099 sshd[4871]: Connection closed by 139.178.68.195 port 36986 Feb 13 19:28:19.625091 sshd-session[4866]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:19.633896 systemd[1]: sshd@18-172.31.19.191:22-139.178.68.195:36986.service: Deactivated successfully. Feb 13 19:28:19.639674 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:28:19.646929 systemd-logind[1896]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:28:19.669253 systemd[1]: Started sshd@19-172.31.19.191:22-139.178.68.195:36996.service - OpenSSH per-connection server daemon (139.178.68.195:36996). 
Feb 13 19:28:19.670837 systemd-logind[1896]: Removed session 19. Feb 13 19:28:19.859439 sshd[4887]: Accepted publickey for core from 139.178.68.195 port 36996 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:28:19.861950 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:19.873352 systemd-logind[1896]: New session 20 of user core. Feb 13 19:28:19.879308 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:28:20.310616 sshd[4890]: Connection closed by 139.178.68.195 port 36996 Feb 13 19:28:20.313708 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:20.319267 systemd[1]: sshd@19-172.31.19.191:22-139.178.68.195:36996.service: Deactivated successfully. Feb 13 19:28:20.323054 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:28:20.324806 systemd-logind[1896]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:28:20.326381 systemd-logind[1896]: Removed session 20. Feb 13 19:28:20.347586 systemd[1]: Started sshd@20-172.31.19.191:22-139.178.68.195:37006.service - OpenSSH per-connection server daemon (139.178.68.195:37006). Feb 13 19:28:20.517582 sshd[4899]: Accepted publickey for core from 139.178.68.195 port 37006 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:28:20.520148 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:20.525601 systemd-logind[1896]: New session 21 of user core. Feb 13 19:28:20.531297 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:28:20.759607 sshd[4901]: Connection closed by 139.178.68.195 port 37006 Feb 13 19:28:20.761700 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:20.769774 systemd-logind[1896]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:28:20.770780 systemd[1]: sshd@20-172.31.19.191:22-139.178.68.195:37006.service: Deactivated successfully. Feb 13 19:28:20.773899 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:28:20.775522 systemd-logind[1896]: Removed session 21. Feb 13 19:28:25.839271 systemd[1]: Started sshd@21-172.31.19.191:22-139.178.68.195:37014.service - OpenSSH per-connection server daemon (139.178.68.195:37014). Feb 13 19:28:26.039312 sshd[4913]: Accepted publickey for core from 139.178.68.195 port 37014 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:28:26.040976 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:26.050242 systemd-logind[1896]: New session 22 of user core. Feb 13 19:28:26.058301 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:28:26.263755 sshd[4915]: Connection closed by 139.178.68.195 port 37014 Feb 13 19:28:26.264624 sshd-session[4913]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:26.269600 systemd[1]: sshd@21-172.31.19.191:22-139.178.68.195:37014.service: Deactivated successfully. Feb 13 19:28:26.274499 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:28:26.276649 systemd-logind[1896]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:28:26.278753 systemd-logind[1896]: Removed session 22. Feb 13 19:28:31.306503 systemd[1]: Started sshd@22-172.31.19.191:22-139.178.68.195:47500.service - OpenSSH per-connection server daemon (139.178.68.195:47500). 
Feb 13 19:28:31.541664 sshd[4930]: Accepted publickey for core from 139.178.68.195 port 47500 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:28:31.545051 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:31.569056 systemd-logind[1896]: New session 23 of user core. Feb 13 19:28:31.576276 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:28:31.905001 sshd[4934]: Connection closed by 139.178.68.195 port 47500 Feb 13 19:28:31.908204 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:31.915192 systemd-logind[1896]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:28:31.915972 systemd[1]: sshd@22-172.31.19.191:22-139.178.68.195:47500.service: Deactivated successfully. Feb 13 19:28:31.924140 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:28:31.928222 systemd-logind[1896]: Removed session 23. Feb 13 19:28:36.951190 systemd[1]: Started sshd@23-172.31.19.191:22-139.178.68.195:49952.service - OpenSSH per-connection server daemon (139.178.68.195:49952). Feb 13 19:28:37.137646 sshd[4946]: Accepted publickey for core from 139.178.68.195 port 49952 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:28:37.139553 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:37.152281 systemd-logind[1896]: New session 24 of user core. Feb 13 19:28:37.157394 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:28:37.370093 sshd[4948]: Connection closed by 139.178.68.195 port 49952 Feb 13 19:28:37.370993 sshd-session[4946]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:37.374570 systemd[1]: sshd@23-172.31.19.191:22-139.178.68.195:49952.service: Deactivated successfully. Feb 13 19:28:37.377286 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:28:37.379669 systemd-logind[1896]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:28:37.381437 systemd-logind[1896]: Removed session 24. Feb 13 19:28:42.410283 systemd[1]: Started sshd@24-172.31.19.191:22-139.178.68.195:49962.service - OpenSSH per-connection server daemon (139.178.68.195:49962). Feb 13 19:28:42.588065 sshd[4960]: Accepted publickey for core from 139.178.68.195 port 49962 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:28:42.589839 sshd-session[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:42.594957 systemd-logind[1896]: New session 25 of user core. Feb 13 19:28:42.604308 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:28:42.815734 sshd[4962]: Connection closed by 139.178.68.195 port 49962 Feb 13 19:28:42.816777 sshd-session[4960]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:42.821435 systemd-logind[1896]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:28:42.822824 systemd[1]: sshd@24-172.31.19.191:22-139.178.68.195:49962.service: Deactivated successfully. Feb 13 19:28:42.825636 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:28:42.826946 systemd-logind[1896]: Removed session 25. Feb 13 19:28:42.863505 systemd[1]: Started sshd@25-172.31.19.191:22-139.178.68.195:49976.service - OpenSSH per-connection server daemon (139.178.68.195:49976). 
Feb 13 19:28:43.084919 sshd[4974]: Accepted publickey for core from 139.178.68.195 port 49976 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:28:43.086356 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:43.091474 systemd-logind[1896]: New session 26 of user core. Feb 13 19:28:43.100490 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:28:46.724954 containerd[1918]: time="2025-02-13T19:28:46.724880247Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:28:46.744494 containerd[1918]: time="2025-02-13T19:28:46.744453215Z" level=info msg="StopContainer for \"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77\" with timeout 2 (s)" Feb 13 19:28:46.744974 containerd[1918]: time="2025-02-13T19:28:46.744938449Z" level=info msg="StopContainer for \"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75\" with timeout 30 (s)" Feb 13 19:28:46.753555 containerd[1918]: time="2025-02-13T19:28:46.753516311Z" level=info msg="Stop container \"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77\" with signal terminated" Feb 13 19:28:46.754958 containerd[1918]: time="2025-02-13T19:28:46.754892161Z" level=info msg="Stop container \"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75\" with signal terminated" Feb 13 19:28:46.776132 systemd-networkd[1749]: lxc_health: Link DOWN Feb 13 19:28:46.776142 systemd-networkd[1749]: lxc_health: Lost carrier Feb 13 19:28:46.796940 systemd[1]: cri-containerd-9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75.scope: Deactivated successfully. Feb 13 19:28:46.799872 systemd[1]: cri-containerd-39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77.scope: Deactivated successfully. Feb 13 19:28:46.800574 systemd[1]: cri-containerd-39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77.scope: Consumed 8.652s CPU time, 195.9M memory peak, 72.1M read from disk, 13.3M written to disk. Feb 13 19:28:46.871058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75-rootfs.mount: Deactivated successfully. Feb 13 19:28:46.893025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77-rootfs.mount: Deactivated successfully. 
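The two StopContainer calls above show the CRI stop contract: the runtime delivers the configured stop signal (SIGTERM here, per "with signal terminated") and, if the container is still running when the timeout lapses (2 s for the cilium-agent container, 30 s for the operator), escalates to SIGKILL. A self-contained sketch of that escalation against an ordinary subprocess:

```python
import signal
import subprocess

def stop_with_timeout(proc: subprocess.Popen, grace: float) -> int:
    """SIGTERM first; SIGKILL if the grace period lapses (CRI StopContainer-style)."""
    proc.send_signal(signal.SIGTERM)
    try:
        proc.wait(timeout=grace)
    except subprocess.TimeoutExpired:
        proc.kill()          # the escalation path, taken only past the timeout
        proc.wait()
    return proc.returncode

p = subprocess.Popen(["sleep", "60"])
print(stop_with_timeout(p, grace=2.0))  # -15: sleep exits on SIGTERM within the grace period
```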
Feb 13 19:28:46.899501 containerd[1918]: time="2025-02-13T19:28:46.899339534Z" level=info msg="shim disconnected" id=9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75 namespace=k8s.io Feb 13 19:28:46.899501 containerd[1918]: time="2025-02-13T19:28:46.899497395Z" level=warning msg="cleaning up after shim disconnected" id=9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75 namespace=k8s.io Feb 13 19:28:46.899769 containerd[1918]: time="2025-02-13T19:28:46.899510157Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:28:46.903473 containerd[1918]: time="2025-02-13T19:28:46.903092949Z" level=info msg="shim disconnected" id=39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77 namespace=k8s.io Feb 13 19:28:46.903473 containerd[1918]: time="2025-02-13T19:28:46.903154179Z" level=warning msg="cleaning up after shim disconnected" id=39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77 namespace=k8s.io Feb 13 19:28:46.903473 containerd[1918]: time="2025-02-13T19:28:46.903166942Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:28:46.941530 containerd[1918]: time="2025-02-13T19:28:46.941216652Z" level=info msg="StopContainer for \"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75\" returns successfully" Feb 13 19:28:46.942164 containerd[1918]: time="2025-02-13T19:28:46.941866945Z" level=info msg="StopPodSandbox for \"ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a\"" Feb 13 19:28:46.946249 containerd[1918]: time="2025-02-13T19:28:46.946155861Z" level=info msg="Container to stop \"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:28:46.950509 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a-shm.mount: Deactivated successfully. 
Feb 13 19:28:46.958302 containerd[1918]: time="2025-02-13T19:28:46.958257131Z" level=info msg="StopContainer for \"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77\" returns successfully" Feb 13 19:28:46.958955 containerd[1918]: time="2025-02-13T19:28:46.958927331Z" level=info msg="StopPodSandbox for \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\"" Feb 13 19:28:46.959230 containerd[1918]: time="2025-02-13T19:28:46.958965579Z" level=info msg="Container to stop \"80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:28:46.959230 containerd[1918]: time="2025-02-13T19:28:46.958982046Z" level=info msg="Container to stop \"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:28:46.959230 containerd[1918]: time="2025-02-13T19:28:46.958998100Z" level=info msg="Container to stop \"78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:28:46.959230 containerd[1918]: time="2025-02-13T19:28:46.959013659Z" level=info msg="Container to stop \"8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:28:46.959230 containerd[1918]: time="2025-02-13T19:28:46.959026073Z" level=info msg="Container to stop \"f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:28:46.966721 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f-shm.mount: Deactivated successfully. Feb 13 19:28:46.971582 systemd[1]: cri-containerd-ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a.scope: Deactivated successfully. Feb 13 19:28:46.981929 systemd[1]: cri-containerd-718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f.scope: Deactivated successfully. 
Feb 13 19:28:47.053028 containerd[1918]: time="2025-02-13T19:28:47.052150054Z" level=info msg="shim disconnected" id=ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a namespace=k8s.io Feb 13 19:28:47.053028 containerd[1918]: time="2025-02-13T19:28:47.052220975Z" level=warning msg="cleaning up after shim disconnected" id=ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a namespace=k8s.io Feb 13 19:28:47.053028 containerd[1918]: time="2025-02-13T19:28:47.052232261Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:28:47.058943 containerd[1918]: time="2025-02-13T19:28:47.058050119Z" level=info msg="shim disconnected" id=718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f namespace=k8s.io Feb 13 19:28:47.059925 containerd[1918]: time="2025-02-13T19:28:47.058919178Z" level=warning msg="cleaning up after shim disconnected" id=718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f namespace=k8s.io Feb 13 19:28:47.059925 containerd[1918]: time="2025-02-13T19:28:47.059863814Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:28:47.087976 containerd[1918]: time="2025-02-13T19:28:47.087933237Z" level=info msg="TearDown network for sandbox \"ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a\" successfully" Feb 13 19:28:47.088312 containerd[1918]: time="2025-02-13T19:28:47.088143966Z" level=info msg="StopPodSandbox for \"ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a\" returns successfully" Feb 13 19:28:47.097138 containerd[1918]: time="2025-02-13T19:28:47.097037561Z" level=info msg="TearDown network for sandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" successfully" Feb 13 19:28:47.097138 containerd[1918]: time="2025-02-13T19:28:47.097097261Z" level=info msg="StopPodSandbox for \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" returns successfully" Feb 13 19:28:47.259304 kubelet[3184]: I0213 19:28:47.258386 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-hostproc\") pod \"68331348-b861-43ac-adb9-d774f9189db4\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " Feb 13 19:28:47.259304 kubelet[3184]: I0213 19:28:47.258488 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-etc-cni-netd\") pod \"68331348-b861-43ac-adb9-d774f9189db4\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " Feb 13 19:28:47.259304 kubelet[3184]: I0213 19:28:47.258521 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-bpf-maps\") pod \"68331348-b861-43ac-adb9-d774f9189db4\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " Feb 13 19:28:47.259304 kubelet[3184]: I0213 19:28:47.258544 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-cilium-cgroup\") pod \"68331348-b861-43ac-adb9-d774f9189db4\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " Feb 13 19:28:47.259304 kubelet[3184]: I0213 19:28:47.258570 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-lib-modules\") pod \"68331348-b861-43ac-adb9-d774f9189db4\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " Feb 13 19:28:47.259304 kubelet[3184]: I0213 19:28:47.258604 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2xmt\" (UniqueName: \"kubernetes.io/projected/68331348-b861-43ac-adb9-d774f9189db4-kube-api-access-q2xmt\") pod \"68331348-b861-43ac-adb9-d774f9189db4\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " Feb 13 19:28:47.260190 kubelet[3184]: I0213 19:28:47.258628 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-cni-path\") pod \"68331348-b861-43ac-adb9-d774f9189db4\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " Feb 13 19:28:47.260190 kubelet[3184]: I0213 19:28:47.258655 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/739da241-b1df-40a1-96df-d17dcb018fb8-cilium-config-path\") pod \"739da241-b1df-40a1-96df-d17dcb018fb8\" (UID: \"739da241-b1df-40a1-96df-d17dcb018fb8\") " Feb 13 19:28:47.260190 kubelet[3184]: I0213 19:28:47.258681 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-host-proc-sys-kernel\") pod \"68331348-b861-43ac-adb9-d774f9189db4\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " Feb 13 19:28:47.260190 kubelet[3184]: I0213 19:28:47.258707 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68331348-b861-43ac-adb9-d774f9189db4-hubble-tls\") pod \"68331348-b861-43ac-adb9-d774f9189db4\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " Feb 13 19:28:47.260190 kubelet[3184]: I0213 19:28:47.258731 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-host-proc-sys-net\") pod \"68331348-b861-43ac-adb9-d774f9189db4\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " Feb 13 19:28:47.260190 kubelet[3184]: I0213 19:28:47.258983 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf9hw\" (UniqueName: \"kubernetes.io/projected/739da241-b1df-40a1-96df-d17dcb018fb8-kube-api-access-cf9hw\") pod \"739da241-b1df-40a1-96df-d17dcb018fb8\" (UID: \"739da241-b1df-40a1-96df-d17dcb018fb8\") " Feb 13 19:28:47.260465 kubelet[3184]: I0213 19:28:47.259015 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-xtables-lock\") pod \"68331348-b861-43ac-adb9-d774f9189db4\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " Feb 13 19:28:47.260465 kubelet[3184]: I0213 19:28:47.259042 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68331348-b861-43ac-adb9-d774f9189db4-cilium-config-path\") pod \"68331348-b861-43ac-adb9-d774f9189db4\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " Feb 13 19:28:47.260465 kubelet[3184]: I0213 19:28:47.259084 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/68331348-b861-43ac-adb9-d774f9189db4-clustermesh-secrets\") pod \"68331348-b861-43ac-adb9-d774f9189db4\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " Feb 13 19:28:47.260465 kubelet[3184]: I0213 19:28:47.259108 3184 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-cilium-run\") pod \"68331348-b861-43ac-adb9-d774f9189db4\" (UID: \"68331348-b861-43ac-adb9-d774f9189db4\") " Feb 13 19:28:47.282241 kubelet[3184]: I0213 19:28:47.277081 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "68331348-b861-43ac-adb9-d774f9189db4" (UID: "68331348-b861-43ac-adb9-d774f9189db4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:28:47.282241 kubelet[3184]: I0213 19:28:47.282225 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-hostproc" (OuterVolumeSpecName: "hostproc") pod "68331348-b861-43ac-adb9-d774f9189db4" (UID: "68331348-b861-43ac-adb9-d774f9189db4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:28:47.282451 kubelet[3184]: I0213 19:28:47.282280 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "68331348-b861-43ac-adb9-d774f9189db4" (UID: "68331348-b861-43ac-adb9-d774f9189db4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:28:47.282451 kubelet[3184]: I0213 19:28:47.282305 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "68331348-b861-43ac-adb9-d774f9189db4" (UID: "68331348-b861-43ac-adb9-d774f9189db4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:28:47.282451 kubelet[3184]: I0213 19:28:47.282322 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "68331348-b861-43ac-adb9-d774f9189db4" (UID: "68331348-b861-43ac-adb9-d774f9189db4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:28:47.282451 kubelet[3184]: I0213 19:28:47.282343 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "68331348-b861-43ac-adb9-d774f9189db4" (UID: "68331348-b861-43ac-adb9-d774f9189db4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:28:47.285427 kubelet[3184]: I0213 19:28:47.277037 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "68331348-b861-43ac-adb9-d774f9189db4" (UID: "68331348-b861-43ac-adb9-d774f9189db4"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:28:47.300381 kubelet[3184]: I0213 19:28:47.299959 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "68331348-b861-43ac-adb9-d774f9189db4" (UID: "68331348-b861-43ac-adb9-d774f9189db4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:28:47.300669 kubelet[3184]: I0213 19:28:47.300641 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-cni-path" (OuterVolumeSpecName: "cni-path") pod "68331348-b861-43ac-adb9-d774f9189db4" (UID: "68331348-b861-43ac-adb9-d774f9189db4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:28:47.306860 kubelet[3184]: I0213 19:28:47.306545 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "68331348-b861-43ac-adb9-d774f9189db4" (UID: "68331348-b861-43ac-adb9-d774f9189db4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:28:47.315042 kubelet[3184]: I0213 19:28:47.312290 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68331348-b861-43ac-adb9-d774f9189db4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "68331348-b861-43ac-adb9-d774f9189db4" (UID: "68331348-b861-43ac-adb9-d774f9189db4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:28:47.318206 kubelet[3184]: I0213 19:28:47.318156 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68331348-b861-43ac-adb9-d774f9189db4-kube-api-access-q2xmt" (OuterVolumeSpecName: "kube-api-access-q2xmt") pod "68331348-b861-43ac-adb9-d774f9189db4" (UID: "68331348-b861-43ac-adb9-d774f9189db4"). InnerVolumeSpecName "kube-api-access-q2xmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:28:47.319907 kubelet[3184]: I0213 19:28:47.319863 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/739da241-b1df-40a1-96df-d17dcb018fb8-kube-api-access-cf9hw" (OuterVolumeSpecName: "kube-api-access-cf9hw") pod "739da241-b1df-40a1-96df-d17dcb018fb8" (UID: "739da241-b1df-40a1-96df-d17dcb018fb8"). InnerVolumeSpecName "kube-api-access-cf9hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:28:47.325966 kubelet[3184]: I0213 19:28:47.325284 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68331348-b861-43ac-adb9-d774f9189db4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "68331348-b861-43ac-adb9-d774f9189db4" (UID: "68331348-b861-43ac-adb9-d774f9189db4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:28:47.325966 kubelet[3184]: I0213 19:28:47.325309 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68331348-b861-43ac-adb9-d774f9189db4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "68331348-b861-43ac-adb9-d774f9189db4" (UID: "68331348-b861-43ac-adb9-d774f9189db4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:28:47.325966 kubelet[3184]: I0213 19:28:47.325933 3184 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739da241-b1df-40a1-96df-d17dcb018fb8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "739da241-b1df-40a1-96df-d17dcb018fb8" (UID: "739da241-b1df-40a1-96df-d17dcb018fb8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:28:47.361990 systemd[1]: Removed slice kubepods-besteffort-pod739da241_b1df_40a1_96df_d17dcb018fb8.slice - libcontainer container kubepods-besteffort-pod739da241_b1df_40a1_96df_d17dcb018fb8.slice. Feb 13 19:28:47.366397 kubelet[3184]: I0213 19:28:47.366063 3184 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-bpf-maps\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366397 kubelet[3184]: I0213 19:28:47.366139 3184 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-cilium-cgroup\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366397 kubelet[3184]: I0213 19:28:47.366157 3184 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-lib-modules\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366397 kubelet[3184]: I0213 19:28:47.366171 3184 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-q2xmt\" (UniqueName: \"kubernetes.io/projected/68331348-b861-43ac-adb9-d774f9189db4-kube-api-access-q2xmt\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366397 kubelet[3184]: I0213 19:28:47.366184 3184 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-cni-path\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366397 kubelet[3184]: I0213 19:28:47.366198 3184 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/739da241-b1df-40a1-96df-d17dcb018fb8-cilium-config-path\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366397 kubelet[3184]: I0213 19:28:47.366213 3184 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-host-proc-sys-kernel\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366397 kubelet[3184]: I0213 19:28:47.366227 3184 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68331348-b861-43ac-adb9-d774f9189db4-hubble-tls\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366750 kubelet[3184]: I0213 19:28:47.366244 3184 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-host-proc-sys-net\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366750 kubelet[3184]: I0213 19:28:47.366256 3184 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cf9hw\" (UniqueName: \"kubernetes.io/projected/739da241-b1df-40a1-96df-d17dcb018fb8-kube-api-access-cf9hw\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366750 kubelet[3184]: I0213 
19:28:47.366269 3184 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-xtables-lock\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366853 kubelet[3184]: I0213 19:28:47.366282 3184 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68331348-b861-43ac-adb9-d774f9189db4-cilium-config-path\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366911 kubelet[3184]: I0213 19:28:47.366859 3184 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68331348-b861-43ac-adb9-d774f9189db4-clustermesh-secrets\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366911 kubelet[3184]: I0213 19:28:47.366875 3184 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-cilium-run\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366911 kubelet[3184]: I0213 19:28:47.366887 3184 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-hostproc\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.366911 kubelet[3184]: I0213 19:28:47.366900 3184 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68331348-b861-43ac-adb9-d774f9189db4-etc-cni-netd\") on node \"ip-172-31-19-191\" DevicePath \"\"" Feb 13 19:28:47.369138 systemd[1]: Removed slice kubepods-burstable-pod68331348_b861_43ac_adb9_d774f9189db4.slice - libcontainer container kubepods-burstable-pod68331348_b861_43ac_adb9_d774f9189db4.slice. Feb 13 19:28:47.369519 systemd[1]: kubepods-burstable-pod68331348_b861_43ac_adb9_d774f9189db4.slice: Consumed 8.767s CPU time, 196.2M memory peak, 72.1M read from disk, 13.3M written to disk. Feb 13 19:28:47.681681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a-rootfs.mount: Deactivated successfully. Feb 13 19:28:47.681898 systemd[1]: var-lib-kubelet-pods-739da241\x2db1df\x2d40a1\x2d96df\x2dd17dcb018fb8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcf9hw.mount: Deactivated successfully. Feb 13 19:28:47.681999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f-rootfs.mount: Deactivated successfully. Feb 13 19:28:47.682256 systemd[1]: var-lib-kubelet-pods-68331348\x2db861\x2d43ac\x2dadb9\x2dd774f9189db4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq2xmt.mount: Deactivated successfully. Feb 13 19:28:47.682355 systemd[1]: var-lib-kubelet-pods-68331348\x2db861\x2d43ac\x2dadb9\x2dd774f9189db4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:28:47.682541 systemd[1]: var-lib-kubelet-pods-68331348\x2db861\x2d43ac\x2dadb9\x2dd774f9189db4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
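The var-lib-kubelet-… mount unit names above are systemd unit-name escaping, not corruption: the leading "/" is dropped, interior "/" become "-", and bytes outside [a-zA-Z0-9:_.] are hex-escaped, so "-" appears as \x2d and "~" as \x7e. A minimal sketch covering only the cases visible in this journal (the authoritative rules live in systemd-escape(1)):

```python
def escape_path(path: str) -> str:
    # Simplified systemd path escaping: enough to reproduce the kubelet
    # volume mount-unit names above, not a full systemd-escape replacement.
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")
    return "".join(out) + ".mount"

print(escape_path("/var/lib/kubelet/pods/68331348-b861-43ac-adb9-d774f9189db4"
                  "/volumes/kubernetes.io~secret/clustermesh-secrets"))
# -> var-lib-kubelet-pods-68331348\x2db861\x2d43ac\x2dadb9\x2dd774f9189db4-
#    volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount
```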
Feb 13 19:28:47.937125 kubelet[3184]: I0213 19:28:47.936901 3184 scope.go:117] "RemoveContainer" containerID="9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75" Feb 13 19:28:47.985050 containerd[1918]: time="2025-02-13T19:28:47.984736219Z" level=info msg="RemoveContainer for \"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75\"" Feb 13 19:28:48.045713 containerd[1918]: time="2025-02-13T19:28:48.045657327Z" level=info msg="RemoveContainer for \"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75\" returns successfully" Feb 13 19:28:48.046232 kubelet[3184]: I0213 19:28:48.046202 3184 scope.go:117] "RemoveContainer" containerID="9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75" Feb 13 19:28:48.046546 containerd[1918]: time="2025-02-13T19:28:48.046506636Z" level=error msg="ContainerStatus for \"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75\": not found" Feb 13 19:28:48.078901 kubelet[3184]: E0213 19:28:48.078732 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75\": not found" containerID="9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75" Feb 13 19:28:48.105396 kubelet[3184]: I0213 19:28:48.078809 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75"} err="failed to get container status \"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75\": rpc error: code = NotFound desc = an error occurred when try to find container \"9bda3e8270b514cf8309b67c64b6404930e94cf521618b3a21cf19b0be661f75\": not found" Feb 13 19:28:48.106103 kubelet[3184]: I0213 19:28:48.105401 3184 scope.go:117] "RemoveContainer" containerID="39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77" Feb 13 19:28:48.110534 containerd[1918]: time="2025-02-13T19:28:48.109658503Z" level=info msg="RemoveContainer for \"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77\"" Feb 13 19:28:48.116991 containerd[1918]: time="2025-02-13T19:28:48.116938207Z" level=info msg="RemoveContainer for \"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77\" returns successfully" Feb 13 19:28:48.117600 kubelet[3184]: I0213 19:28:48.117257 3184 scope.go:117] "RemoveContainer" containerID="80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837" Feb 13 19:28:48.120446 containerd[1918]: time="2025-02-13T19:28:48.120413576Z" level=info msg="RemoveContainer for \"80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837\"" Feb 13 19:28:48.127117 containerd[1918]: time="2025-02-13T19:28:48.127014448Z" level=info msg="RemoveContainer for \"80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837\" returns successfully" Feb 13 19:28:48.127613 kubelet[3184]: I0213 19:28:48.127488 3184 scope.go:117] "RemoveContainer" containerID="f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51" Feb 13 19:28:48.128712 containerd[1918]: time="2025-02-13T19:28:48.128676739Z" level=info msg="RemoveContainer for \"f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51\"" Feb 13 19:28:48.137212 containerd[1918]: time="2025-02-13T19:28:48.137165268Z" level=info 
msg="RemoveContainer for \"f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51\" returns successfully" Feb 13 19:28:48.137506 kubelet[3184]: I0213 19:28:48.137429 3184 scope.go:117] "RemoveContainer" containerID="8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab" Feb 13 19:28:48.138889 containerd[1918]: time="2025-02-13T19:28:48.138835776Z" level=info msg="RemoveContainer for \"8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab\"" Feb 13 19:28:48.144816 containerd[1918]: time="2025-02-13T19:28:48.144772514Z" level=info msg="RemoveContainer for \"8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab\" returns successfully" Feb 13 19:28:48.145103 kubelet[3184]: I0213 19:28:48.145026 3184 scope.go:117] "RemoveContainer" containerID="78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d" Feb 13 19:28:48.146959 containerd[1918]: time="2025-02-13T19:28:48.146912292Z" level=info msg="RemoveContainer for \"78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d\"" Feb 13 19:28:48.194128 containerd[1918]: time="2025-02-13T19:28:48.189318528Z" level=info msg="RemoveContainer for \"78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d\" returns successfully" Feb 13 19:28:48.194252 kubelet[3184]: I0213 19:28:48.193997 3184 scope.go:117] "RemoveContainer" containerID="39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77" Feb 13 19:28:48.195006 containerd[1918]: time="2025-02-13T19:28:48.194676429Z" level=error msg="ContainerStatus for \"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77\": not found" Feb 13 19:28:48.195137 kubelet[3184]: E0213 19:28:48.194972 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77\": not found" containerID="39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77" Feb 13 19:28:48.195677 kubelet[3184]: I0213 19:28:48.195226 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77"} err="failed to get container status \"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77\": rpc error: code = NotFound desc = an error occurred when try to find container \"39e9e3a625da0f0e5ae0a569e27df031d729ebb1dcdc46b32861c620f0908d77\": not found" Feb 13 19:28:48.195677 kubelet[3184]: I0213 19:28:48.195259 3184 scope.go:117] "RemoveContainer" containerID="80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837" Feb 13 19:28:48.197301 containerd[1918]: time="2025-02-13T19:28:48.196992511Z" level=error msg="ContainerStatus for \"80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837\": not found" Feb 13 19:28:48.197454 kubelet[3184]: E0213 19:28:48.197169 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837\": not found" containerID="80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837" 
Feb 13 19:28:48.197454 kubelet[3184]: I0213 19:28:48.197199 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837"} err="failed to get container status \"80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837\": rpc error: code = NotFound desc = an error occurred when try to find container \"80009165c125f089b2c05ea5e8c555aa0427fed5d42755bd9b36634ee9da5837\": not found" Feb 13 19:28:48.197454 kubelet[3184]: I0213 19:28:48.197223 3184 scope.go:117] "RemoveContainer" containerID="f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51" Feb 13 19:28:48.199033 containerd[1918]: time="2025-02-13T19:28:48.198014076Z" level=error msg="ContainerStatus for \"f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51\": not found" Feb 13 19:28:48.199129 kubelet[3184]: E0213 19:28:48.198588 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51\": not found" containerID="f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51" Feb 13 19:28:48.199129 kubelet[3184]: I0213 19:28:48.198640 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51"} err="failed to get container status \"f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9b914a4670fea0a1f29ab36e6deb8f81b327b12f19498f1bffeb9541593af51\": not found" Feb 13 19:28:48.199129 kubelet[3184]: I0213 19:28:48.198664 3184 scope.go:117] "RemoveContainer" containerID="8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab" Feb 13 19:28:48.199858 containerd[1918]: time="2025-02-13T19:28:48.199593974Z" level=error msg="ContainerStatus for \"8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab\": not found" Feb 13 19:28:48.200351 kubelet[3184]: E0213 19:28:48.200179 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab\": not found" containerID="8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab" Feb 13 19:28:48.200351 kubelet[3184]: I0213 19:28:48.200210 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab"} err="failed to get container status \"8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c8cf9c5eadc161c8493351dc2622b508594ca4758d8666008e96a4c0d3506ab\": not found" Feb 13 19:28:48.201137 kubelet[3184]: I0213 19:28:48.200680 3184 scope.go:117] "RemoveContainer" containerID="78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d" Feb 13 19:28:48.201265 containerd[1918]: 
time="2025-02-13T19:28:48.201230349Z" level=error msg="ContainerStatus for \"78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d\": not found" Feb 13 19:28:48.201586 kubelet[3184]: E0213 19:28:48.201484 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d\": not found" containerID="78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d" Feb 13 19:28:48.201586 kubelet[3184]: I0213 19:28:48.201512 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d"} err="failed to get container status \"78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d\": rpc error: code = NotFound desc = an error occurred when try to find container \"78f6048cc176308038b34cf596634ba6570859c1a7daf8c6d674ccdab54f1c2d\": not found" Feb 13 19:28:48.518666 sshd[4976]: Connection closed by 139.178.68.195 port 49976 Feb 13 19:28:48.521221 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:48.527830 systemd[1]: sshd@25-172.31.19.191:22-139.178.68.195:49976.service: Deactivated successfully. Feb 13 19:28:48.531670 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:28:48.533221 systemd-logind[1896]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:28:48.534920 systemd-logind[1896]: Removed session 26. Feb 13 19:28:48.558900 systemd[1]: Started sshd@26-172.31.19.191:22-139.178.68.195:55748.service - OpenSSH per-connection server daemon (139.178.68.195:55748). Feb 13 19:28:48.667722 kubelet[3184]: E0213 19:28:48.667669 3184 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:28:48.758173 sshd[5133]: Accepted publickey for core from 139.178.68.195 port 55748 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA Feb 13 19:28:48.759913 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:48.765652 systemd-logind[1896]: New session 27 of user core. Feb 13 19:28:48.781351 systemd[1]: Started session-27.scope - Session 27 of User core. 
Feb 13 19:28:49.059785 ntpd[1890]: Deleting interface #11 lxc_health, fe80::6ce8:9dff:fe14:1a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs
Feb 13 19:28:49.061136 ntpd[1890]: 13 Feb 19:28:49 ntpd[1890]: Deleting interface #11 lxc_health, fe80::6ce8:9dff:fe14:1a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs
Feb 13 19:28:49.341035 kubelet[3184]: I0213 19:28:49.340920 3184 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68331348-b861-43ac-adb9-d774f9189db4" path="/var/lib/kubelet/pods/68331348-b861-43ac-adb9-d774f9189db4/volumes"
Feb 13 19:28:49.345281 kubelet[3184]: I0213 19:28:49.345240 3184 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="739da241-b1df-40a1-96df-d17dcb018fb8" path="/var/lib/kubelet/pods/739da241-b1df-40a1-96df-d17dcb018fb8/volumes"
Feb 13 19:28:50.504499 sshd[5135]: Connection closed by 139.178.68.195 port 55748
Feb 13 19:28:50.513336 sshd-session[5133]: pam_unix(sshd:session): session closed for user core
Feb 13 19:28:50.520041 systemd-logind[1896]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:28:50.521608 systemd[1]: sshd@26-172.31.19.191:22-139.178.68.195:55748.service: Deactivated successfully.
Feb 13 19:28:50.527440 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:28:50.548280 systemd-logind[1896]: Removed session 27.
Feb 13 19:28:50.556495 systemd[1]: Started sshd@27-172.31.19.191:22-139.178.68.195:55754.service - OpenSSH per-connection server daemon (139.178.68.195:55754).
Feb 13 19:28:50.579612 kubelet[3184]: E0213 19:28:50.579530 3184 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68331348-b861-43ac-adb9-d774f9189db4" containerName="mount-cgroup"
Feb 13 19:28:50.580522 kubelet[3184]: E0213 19:28:50.579632 3184 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68331348-b861-43ac-adb9-d774f9189db4" containerName="apply-sysctl-overwrites"
Feb 13 19:28:50.580522 kubelet[3184]: E0213 19:28:50.579684 3184 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68331348-b861-43ac-adb9-d774f9189db4" containerName="mount-bpf-fs"
Feb 13 19:28:50.580522 kubelet[3184]: E0213 19:28:50.579714 3184 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="739da241-b1df-40a1-96df-d17dcb018fb8" containerName="cilium-operator"
Feb 13 19:28:50.580522 kubelet[3184]: E0213 19:28:50.579763 3184 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68331348-b861-43ac-adb9-d774f9189db4" containerName="clean-cilium-state"
Feb 13 19:28:50.580522 kubelet[3184]: E0213 19:28:50.579776 3184 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68331348-b861-43ac-adb9-d774f9189db4" containerName="cilium-agent"
Feb 13 19:28:50.580522 kubelet[3184]: I0213 19:28:50.580137 3184 memory_manager.go:354] "RemoveStaleState removing state" podUID="68331348-b861-43ac-adb9-d774f9189db4" containerName="cilium-agent"
Feb 13 19:28:50.580522 kubelet[3184]: I0213 19:28:50.580201 3184 memory_manager.go:354] "RemoveStaleState removing state" podUID="739da241-b1df-40a1-96df-d17dcb018fb8" containerName="cilium-operator"
Feb 13 19:28:50.651495 systemd[1]: Created slice kubepods-burstable-pod1d3d8ba3_bcf1_4475_86c0_3bf8670f3280.slice - libcontainer container kubepods-burstable-pod1d3d8ba3_bcf1_4475_86c0_3bf8670f3280.slice.
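The "Cleaned up orphaned pod volumes dir" records show the kubelet pruning /var/lib/kubelet/pods/<podUID>/volumes once the deleted Cilium pods no longer own any mounted volumes. A hedged, stdlib-only sketch of how such leftover per-pod volume directories can be enumerated (the path layout is taken from the records; the check against the API server that the kubelet also performs is elided):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Each entry under /var/lib/kubelet/pods is named after a pod UID; the
	// kubelet removes the whole volumes dir once nothing is mounted in it.
	entries, err := os.ReadDir("/var/lib/kubelet/pods")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		vols := filepath.Join("/var/lib/kubelet/pods", e.Name(), "volumes")
		if _, err := os.Stat(vols); err == nil {
			fmt.Println("pod", e.Name(), "still has volumes dir:", vols)
		}
	}
}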
Feb 13 19:28:50.724181 kubelet[3184]: I0213 19:28:50.724137 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-lib-modules\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.724433 kubelet[3184]: I0213 19:28:50.724192 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-host-proc-sys-net\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.724433 kubelet[3184]: I0213 19:28:50.724223 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-xtables-lock\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.724433 kubelet[3184]: I0213 19:28:50.724247 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-cilium-ipsec-secrets\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.724433 kubelet[3184]: I0213 19:28:50.724314 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpszl\" (UniqueName: \"kubernetes.io/projected/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-kube-api-access-rpszl\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.724433 kubelet[3184]: I0213 19:28:50.724347 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-clustermesh-secrets\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.724787 kubelet[3184]: I0213 19:28:50.724381 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-cilium-cgroup\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.724787 kubelet[3184]: I0213 19:28:50.724409 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-cni-path\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.724787 kubelet[3184]: I0213 19:28:50.724431 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-etc-cni-netd\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.724787 kubelet[3184]: I0213 19:28:50.724454 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-cilium-run\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.724787 kubelet[3184]: I0213 19:28:50.724501 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-hubble-tls\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.724787 kubelet[3184]: I0213 19:28:50.724589 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-bpf-maps\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.724968 kubelet[3184]: I0213 19:28:50.724636 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-hostproc\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.724968 kubelet[3184]: I0213 19:28:50.724659 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-cilium-config-path\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.724968 kubelet[3184]: I0213 19:28:50.724716 3184 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d3d8ba3-bcf1-4475-86c0-3bf8670f3280-host-proc-sys-kernel\") pod \"cilium-5hhrb\" (UID: \"1d3d8ba3-bcf1-4475-86c0-3bf8670f3280\") " pod="kube-system/cilium-5hhrb"
Feb 13 19:28:50.788564 sshd[5144]: Accepted publickey for core from 139.178.68.195 port 55754 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:28:50.792205 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:28:50.801291 systemd-logind[1896]: New session 28 of user core.
Feb 13 19:28:50.812343 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 19:28:50.951235 sshd[5147]: Connection closed by 139.178.68.195 port 55754
Feb 13 19:28:50.953141 sshd-session[5144]: pam_unix(sshd:session): session closed for user core
Feb 13 19:28:50.958580 systemd[1]: sshd@27-172.31.19.191:22-139.178.68.195:55754.service: Deactivated successfully.
Feb 13 19:28:50.962890 containerd[1918]: time="2025-02-13T19:28:50.962845176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5hhrb,Uid:1d3d8ba3-bcf1-4475-86c0-3bf8670f3280,Namespace:kube-system,Attempt:0,}"
Feb 13 19:28:50.966910 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:28:50.969915 systemd-logind[1896]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:28:50.972217 systemd-logind[1896]: Removed session 28.
Feb 13 19:28:50.995279 systemd[1]: Started sshd@28-172.31.19.191:22-139.178.68.195:55760.service - OpenSSH per-connection server daemon (139.178.68.195:55760).
Feb 13 19:28:51.025652 containerd[1918]: time="2025-02-13T19:28:51.025324238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:28:51.025652 containerd[1918]: time="2025-02-13T19:28:51.025413617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:28:51.025652 containerd[1918]: time="2025-02-13T19:28:51.025431475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:28:51.025652 containerd[1918]: time="2025-02-13T19:28:51.025546642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:28:51.079641 systemd[1]: Started cri-containerd-53ef8553ab3afdd49c722e18dd779819b91604cb3f1acf14e49213e3c7d84ec1.scope - libcontainer container 53ef8553ab3afdd49c722e18dd779819b91604cb3f1acf14e49213e3c7d84ec1.
Feb 13 19:28:51.123874 containerd[1918]: time="2025-02-13T19:28:51.123610337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5hhrb,Uid:1d3d8ba3-bcf1-4475-86c0-3bf8670f3280,Namespace:kube-system,Attempt:0,} returns sandbox id \"53ef8553ab3afdd49c722e18dd779819b91604cb3f1acf14e49213e3c7d84ec1\""
Feb 13 19:28:51.129113 containerd[1918]: time="2025-02-13T19:28:51.128322176Z" level=info msg="CreateContainer within sandbox \"53ef8553ab3afdd49c722e18dd779819b91604cb3f1acf14e49213e3c7d84ec1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:28:51.196411 containerd[1918]: time="2025-02-13T19:28:51.196202315Z" level=info msg="CreateContainer within sandbox \"53ef8553ab3afdd49c722e18dd779819b91604cb3f1acf14e49213e3c7d84ec1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"40ae56529800cb46fdfd9ef95c98157a14b175e5d2ca8c693854a3202d555a2f\""
Feb 13 19:28:51.198845 containerd[1918]: time="2025-02-13T19:28:51.197501507Z" level=info msg="StartContainer for \"40ae56529800cb46fdfd9ef95c98157a14b175e5d2ca8c693854a3202d555a2f\""
Feb 13 19:28:51.212022 sshd[5159]: Accepted publickey for core from 139.178.68.195 port 55760 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:28:51.214726 sshd-session[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:28:51.230550 systemd-logind[1896]: New session 29 of user core.
Feb 13 19:28:51.241115 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 19:28:51.270330 systemd[1]: Started cri-containerd-40ae56529800cb46fdfd9ef95c98157a14b175e5d2ca8c693854a3202d555a2f.scope - libcontainer container 40ae56529800cb46fdfd9ef95c98157a14b175e5d2ca8c693854a3202d555a2f.
Feb 13 19:28:51.310035 containerd[1918]: time="2025-02-13T19:28:51.309977690Z" level=info msg="StartContainer for \"40ae56529800cb46fdfd9ef95c98157a14b175e5d2ca8c693854a3202d555a2f\" returns successfully"
Feb 13 19:28:51.329843 systemd[1]: cri-containerd-40ae56529800cb46fdfd9ef95c98157a14b175e5d2ca8c693854a3202d555a2f.scope: Deactivated successfully.
Feb 13 19:28:51.330848 systemd[1]: cri-containerd-40ae56529800cb46fdfd9ef95c98157a14b175e5d2ca8c693854a3202d555a2f.scope: Consumed 24ms CPU time, 9.9M memory peak, 3.2M read from disk.
Feb 13 19:28:51.415694 containerd[1918]: time="2025-02-13T19:28:51.415504213Z" level=info msg="shim disconnected" id=40ae56529800cb46fdfd9ef95c98157a14b175e5d2ca8c693854a3202d555a2f namespace=k8s.io
Feb 13 19:28:51.415694 containerd[1918]: time="2025-02-13T19:28:51.415692275Z" level=warning msg="cleaning up after shim disconnected" id=40ae56529800cb46fdfd9ef95c98157a14b175e5d2ca8c693854a3202d555a2f namespace=k8s.io
Feb 13 19:28:51.416410 containerd[1918]: time="2025-02-13T19:28:51.415705887Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:28:51.962603 containerd[1918]: time="2025-02-13T19:28:51.962405176Z" level=info msg="CreateContainer within sandbox \"53ef8553ab3afdd49c722e18dd779819b91604cb3f1acf14e49213e3c7d84ec1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:28:52.021330 containerd[1918]: time="2025-02-13T19:28:52.021279766Z" level=info msg="CreateContainer within sandbox \"53ef8553ab3afdd49c722e18dd779819b91604cb3f1acf14e49213e3c7d84ec1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5267df9b92ff3dbad3378fa3346ff5eedd4ad57e3c45d418ee37f00b3af5a0ff\""
Feb 13 19:28:52.028908 containerd[1918]: time="2025-02-13T19:28:52.024754254Z" level=info msg="StartContainer for \"5267df9b92ff3dbad3378fa3346ff5eedd4ad57e3c45d418ee37f00b3af5a0ff\""
Feb 13 19:28:52.122302 systemd[1]: run-containerd-runc-k8s.io-5267df9b92ff3dbad3378fa3346ff5eedd4ad57e3c45d418ee37f00b3af5a0ff-runc.4sv8Ok.mount: Deactivated successfully.
Feb 13 19:28:52.143306 systemd[1]: Started cri-containerd-5267df9b92ff3dbad3378fa3346ff5eedd4ad57e3c45d418ee37f00b3af5a0ff.scope - libcontainer container 5267df9b92ff3dbad3378fa3346ff5eedd4ad57e3c45d418ee37f00b3af5a0ff.
Feb 13 19:28:52.194842 containerd[1918]: time="2025-02-13T19:28:52.194792064Z" level=info msg="StartContainer for \"5267df9b92ff3dbad3378fa3346ff5eedd4ad57e3c45d418ee37f00b3af5a0ff\" returns successfully"
Feb 13 19:28:52.213517 systemd[1]: cri-containerd-5267df9b92ff3dbad3378fa3346ff5eedd4ad57e3c45d418ee37f00b3af5a0ff.scope: Deactivated successfully.
Feb 13 19:28:52.215880 systemd[1]: cri-containerd-5267df9b92ff3dbad3378fa3346ff5eedd4ad57e3c45d418ee37f00b3af5a0ff.scope: Consumed 22ms CPU time, 7.4M memory peak, 2.2M read from disk.
Feb 13 19:28:52.266017 containerd[1918]: time="2025-02-13T19:28:52.265951317Z" level=info msg="shim disconnected" id=5267df9b92ff3dbad3378fa3346ff5eedd4ad57e3c45d418ee37f00b3af5a0ff namespace=k8s.io
Feb 13 19:28:52.266017 containerd[1918]: time="2025-02-13T19:28:52.266011009Z" level=warning msg="cleaning up after shim disconnected" id=5267df9b92ff3dbad3378fa3346ff5eedd4ad57e3c45d418ee37f00b3af5a0ff namespace=k8s.io
Feb 13 19:28:52.266017 containerd[1918]: time="2025-02-13T19:28:52.266023301Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:28:52.338253 kubelet[3184]: E0213 19:28:52.338163 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-8r22k" podUID="05580fe3-e15e-4a5a-9aa3-0e68d913ba82"
Feb 13 19:28:52.837379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5267df9b92ff3dbad3378fa3346ff5eedd4ad57e3c45d418ee37f00b3af5a0ff-rootfs.mount: Deactivated successfully.
Feb 13 19:28:52.967508 containerd[1918]: time="2025-02-13T19:28:52.967448456Z" level=info msg="CreateContainer within sandbox \"53ef8553ab3afdd49c722e18dd779819b91604cb3f1acf14e49213e3c7d84ec1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:28:53.003030 containerd[1918]: time="2025-02-13T19:28:53.002982265Z" level=info msg="CreateContainer within sandbox \"53ef8553ab3afdd49c722e18dd779819b91604cb3f1acf14e49213e3c7d84ec1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"360c84af829c20b7e938142c7cdb5ae28710eabfed80ca18fa029f4627d6918d\""
Feb 13 19:28:53.003848 containerd[1918]: time="2025-02-13T19:28:53.003810352Z" level=info msg="StartContainer for \"360c84af829c20b7e938142c7cdb5ae28710eabfed80ca18fa029f4627d6918d\""
Feb 13 19:28:53.057335 systemd[1]: Started cri-containerd-360c84af829c20b7e938142c7cdb5ae28710eabfed80ca18fa029f4627d6918d.scope - libcontainer container 360c84af829c20b7e938142c7cdb5ae28710eabfed80ca18fa029f4627d6918d.
Feb 13 19:28:53.104081 containerd[1918]: time="2025-02-13T19:28:53.103719878Z" level=info msg="StartContainer for \"360c84af829c20b7e938142c7cdb5ae28710eabfed80ca18fa029f4627d6918d\" returns successfully"
Feb 13 19:28:53.111868 systemd[1]: cri-containerd-360c84af829c20b7e938142c7cdb5ae28710eabfed80ca18fa029f4627d6918d.scope: Deactivated successfully.
Feb 13 19:28:53.177767 containerd[1918]: time="2025-02-13T19:28:53.177638312Z" level=info msg="shim disconnected" id=360c84af829c20b7e938142c7cdb5ae28710eabfed80ca18fa029f4627d6918d namespace=k8s.io
Feb 13 19:28:53.177767 containerd[1918]: time="2025-02-13T19:28:53.177717559Z" level=warning msg="cleaning up after shim disconnected" id=360c84af829c20b7e938142c7cdb5ae28710eabfed80ca18fa029f4627d6918d namespace=k8s.io
Feb 13 19:28:53.177767 containerd[1918]: time="2025-02-13T19:28:53.177732877Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:28:53.354507 containerd[1918]: time="2025-02-13T19:28:53.354336523Z" level=info msg="StopPodSandbox for \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\""
Feb 13 19:28:53.354889 containerd[1918]: time="2025-02-13T19:28:53.354481982Z" level=info msg="TearDown network for sandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" successfully"
Feb 13 19:28:53.354889 containerd[1918]: time="2025-02-13T19:28:53.354839300Z" level=info msg="StopPodSandbox for \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" returns successfully"
Feb 13 19:28:53.355279 containerd[1918]: time="2025-02-13T19:28:53.355258468Z" level=info msg="RemovePodSandbox for \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\""
Feb 13 19:28:53.355339 containerd[1918]: time="2025-02-13T19:28:53.355304293Z" level=info msg="Forcibly stopping sandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\""
Feb 13 19:28:53.355433 containerd[1918]: time="2025-02-13T19:28:53.355381978Z" level=info msg="TearDown network for sandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" successfully"
Feb 13 19:28:53.365920 containerd[1918]: time="2025-02-13T19:28:53.365872410Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:28:53.366082 containerd[1918]: time="2025-02-13T19:28:53.365955128Z" level=info msg="RemovePodSandbox \"718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f\" returns successfully"
Feb 13 19:28:53.366560 containerd[1918]: time="2025-02-13T19:28:53.366531055Z" level=info msg="StopPodSandbox for \"ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a\""
Feb 13 19:28:53.366655 containerd[1918]: time="2025-02-13T19:28:53.366622509Z" level=info msg="TearDown network for sandbox \"ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a\" successfully"
Feb 13 19:28:53.366655 containerd[1918]: time="2025-02-13T19:28:53.366636957Z" level=info msg="StopPodSandbox for \"ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a\" returns successfully"
Feb 13 19:28:53.367500 containerd[1918]: time="2025-02-13T19:28:53.367424775Z" level=info msg="RemovePodSandbox for \"ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a\""
Feb 13 19:28:53.367500 containerd[1918]: time="2025-02-13T19:28:53.367492354Z" level=info msg="Forcibly stopping sandbox \"ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a\""
Feb 13 19:28:53.367737 containerd[1918]: time="2025-02-13T19:28:53.367560782Z" level=info msg="TearDown network for sandbox \"ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a\" successfully"
Feb 13 19:28:53.372602 containerd[1918]: time="2025-02-13T19:28:53.372560877Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:28:53.372761 containerd[1918]: time="2025-02-13T19:28:53.372624103Z" level=info msg="RemovePodSandbox \"ecda7cc0b2f19b1aa238e5597ccd3ed045c128d7834aa7c5873ed2ef557e397a\" returns successfully"
Feb 13 19:28:53.669508 kubelet[3184]: E0213 19:28:53.669359 3184 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:28:53.839411 systemd[1]: run-containerd-runc-k8s.io-360c84af829c20b7e938142c7cdb5ae28710eabfed80ca18fa029f4627d6918d-runc.bLHX0K.mount: Deactivated successfully.
Feb 13 19:28:53.840975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-360c84af829c20b7e938142c7cdb5ae28710eabfed80ca18fa029f4627d6918d-rootfs.mount: Deactivated successfully.
Feb 13 19:28:53.975427 containerd[1918]: time="2025-02-13T19:28:53.974907896Z" level=info msg="CreateContainer within sandbox \"53ef8553ab3afdd49c722e18dd779819b91604cb3f1acf14e49213e3c7d84ec1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:28:54.038095 containerd[1918]: time="2025-02-13T19:28:54.034679286Z" level=info msg="CreateContainer within sandbox \"53ef8553ab3afdd49c722e18dd779819b91604cb3f1acf14e49213e3c7d84ec1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"98887d4210bcad4faeddc6b9697eec1c9a6135b2e6c47cb5483af0ff9ca6bf66\""
Feb 13 19:28:54.040420 containerd[1918]: time="2025-02-13T19:28:54.038964062Z" level=info msg="StartContainer for \"98887d4210bcad4faeddc6b9697eec1c9a6135b2e6c47cb5483af0ff9ca6bf66\""
Feb 13 19:28:54.093519 systemd[1]: Started cri-containerd-98887d4210bcad4faeddc6b9697eec1c9a6135b2e6c47cb5483af0ff9ca6bf66.scope - libcontainer container 98887d4210bcad4faeddc6b9697eec1c9a6135b2e6c47cb5483af0ff9ca6bf66.
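The StopPodSandbox/RemovePodSandbox records above are the kubelet's sandbox garbage collection removing the old pods' pause sandboxes; the "Failed to get podSandbox status ... not found" warnings are benign, since the runtime had already forgotten the sandbox. A minimal sketch of the same two CRI calls, under the same socket-path and client assumptions as the earlier sketch:

package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client := runtimev1.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	id := "718a3ec2a3d4ddc2eb04259d6dcc8e439974bf5d5cd381ceb839a11572e6933f"
	// Stop first; an error here for an already-gone sandbox corresponds to
	// the "Forcibly stopping sandbox" path in the records above.
	_, _ = client.StopPodSandbox(ctx, &runtimev1.StopPodSandboxRequest{PodSandboxId: id})
	// Then remove; this succeeds even after the runtime has forgotten the
	// sandbox, matching "RemovePodSandbox ... returns successfully".
	_, _ = client.RemovePodSandbox(ctx, &runtimev1.RemovePodSandboxRequest{PodSandboxId: id})
}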
Feb 13 19:28:54.133426 systemd[1]: cri-containerd-98887d4210bcad4faeddc6b9697eec1c9a6135b2e6c47cb5483af0ff9ca6bf66.scope: Deactivated successfully.
Feb 13 19:28:54.141764 containerd[1918]: time="2025-02-13T19:28:54.141722955Z" level=info msg="StartContainer for \"98887d4210bcad4faeddc6b9697eec1c9a6135b2e6c47cb5483af0ff9ca6bf66\" returns successfully"
Feb 13 19:28:54.199014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98887d4210bcad4faeddc6b9697eec1c9a6135b2e6c47cb5483af0ff9ca6bf66-rootfs.mount: Deactivated successfully.
Feb 13 19:28:54.211286 containerd[1918]: time="2025-02-13T19:28:54.211159099Z" level=info msg="shim disconnected" id=98887d4210bcad4faeddc6b9697eec1c9a6135b2e6c47cb5483af0ff9ca6bf66 namespace=k8s.io
Feb 13 19:28:54.211286 containerd[1918]: time="2025-02-13T19:28:54.211276328Z" level=warning msg="cleaning up after shim disconnected" id=98887d4210bcad4faeddc6b9697eec1c9a6135b2e6c47cb5483af0ff9ca6bf66 namespace=k8s.io
Feb 13 19:28:54.211286 containerd[1918]: time="2025-02-13T19:28:54.211293300Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:28:54.338307 kubelet[3184]: E0213 19:28:54.338158 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-8r22k" podUID="05580fe3-e15e-4a5a-9aa3-0e68d913ba82"
Feb 13 19:28:55.038487 containerd[1918]: time="2025-02-13T19:28:55.031987352Z" level=info msg="CreateContainer within sandbox \"53ef8553ab3afdd49c722e18dd779819b91604cb3f1acf14e49213e3c7d84ec1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:28:55.080855 containerd[1918]: time="2025-02-13T19:28:55.080806633Z" level=info msg="CreateContainer within sandbox \"53ef8553ab3afdd49c722e18dd779819b91604cb3f1acf14e49213e3c7d84ec1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"42849df09889be17b8400675c3bbeac24ac824fdc26242b5459d4d2b4541a801\""
Feb 13 19:28:55.082874 containerd[1918]: time="2025-02-13T19:28:55.081603382Z" level=info msg="StartContainer for \"42849df09889be17b8400675c3bbeac24ac824fdc26242b5459d4d2b4541a801\""
Feb 13 19:28:55.136726 systemd[1]: run-containerd-runc-k8s.io-42849df09889be17b8400675c3bbeac24ac824fdc26242b5459d4d2b4541a801-runc.951sp5.mount: Deactivated successfully.
Feb 13 19:28:55.149355 systemd[1]: Started cri-containerd-42849df09889be17b8400675c3bbeac24ac824fdc26242b5459d4d2b4541a801.scope - libcontainer container 42849df09889be17b8400675c3bbeac24ac824fdc26242b5459d4d2b4541a801.
Feb 13 19:28:55.206465 containerd[1918]: time="2025-02-13T19:28:55.206385654Z" level=info msg="StartContainer for \"42849df09889be17b8400675c3bbeac24ac824fdc26242b5459d4d2b4541a801\" returns successfully"
Feb 13 19:28:55.339663 kubelet[3184]: E0213 19:28:55.338663 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-btknb" podUID="067d199f-9a9e-4613-98b1-00bfae60d8eb"
Feb 13 19:28:56.239248 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 19:28:56.338096 kubelet[3184]: E0213 19:28:56.338002 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-8r22k" podUID="05580fe3-e15e-4a5a-9aa3-0e68d913ba82"
Feb 13 19:28:57.194655 kubelet[3184]: I0213 19:28:57.194602 3184 setters.go:600] "Node became not ready" node="ip-172-31-19-191" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:28:57Z","lastTransitionTime":"2025-02-13T19:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:28:57.338039 kubelet[3184]: E0213 19:28:57.337478 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-btknb" podUID="067d199f-9a9e-4613-98b1-00bfae60d8eb"
Feb 13 19:28:58.339223 kubelet[3184]: E0213 19:28:58.338366 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-8r22k" podUID="05580fe3-e15e-4a5a-9aa3-0e68d913ba82"
Feb 13 19:28:58.395065 systemd[1]: run-containerd-runc-k8s.io-42849df09889be17b8400675c3bbeac24ac824fdc26242b5459d4d2b4541a801-runc.nUZ8X3.mount: Deactivated successfully.
Feb 13 19:28:59.939272 systemd-networkd[1749]: lxc_health: Link UP
Feb 13 19:28:59.939673 systemd-networkd[1749]: lxc_health: Gained carrier
Feb 13 19:28:59.946136 (udev-worker)[6006]: Network interface NamePolicy= disabled on kernel command line.
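The lxc_health link-up records above, and the ntpd "Listen normally" records that follow, show daemons reacting to interface addresses appearing as Cilium brings its health interface up: ntpd opens and closes a 123/udp listener per address. A stdlib-only sketch of enumerating the same data (link-local IPv6 addresses per interface) that such daemons act on:

package main

import (
	"fmt"
	"net"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		addrs, err := ifc.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			// ntpd binds a listener to each such address, e.g.
			// lxc_health [fe80::...%14]:123 in the record below.
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.IsLinkLocalUnicast() {
				fmt.Printf("%s %s\n", ifc.Name, ipnet.IP)
			}
		}
	}
}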
Feb 13 19:29:01.015647 kubelet[3184]: I0213 19:29:01.015557 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5hhrb" podStartSLOduration=11.003864254 podStartE2EDuration="11.003864254s" podCreationTimestamp="2025-02-13 19:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:28:56.105108971 +0000 UTC m=+123.066043750" watchObservedRunningTime="2025-02-13 19:29:01.003864254 +0000 UTC m=+127.964799042"
Feb 13 19:29:01.086333 systemd-networkd[1749]: lxc_health: Gained IPv6LL
Feb 13 19:29:04.059494 ntpd[1890]: Listen normally on 14 lxc_health [fe80::7cf6:5cff:fed3:b11c%14]:123
Feb 13 19:29:04.060097 ntpd[1890]: 13 Feb 19:29:04 ntpd[1890]: Listen normally on 14 lxc_health [fe80::7cf6:5cff:fed3:b11c%14]:123
Feb 13 19:29:05.709779 systemd[1]: run-containerd-runc-k8s.io-42849df09889be17b8400675c3bbeac24ac824fdc26242b5459d4d2b4541a801-runc.SrvV7y.mount: Deactivated successfully.
Feb 13 19:29:05.902213 sshd[5216]: Connection closed by 139.178.68.195 port 55760
Feb 13 19:29:05.908712 sshd-session[5159]: pam_unix(sshd:session): session closed for user core
Feb 13 19:29:05.919504 systemd[1]: sshd@28-172.31.19.191:22-139.178.68.195:55760.service: Deactivated successfully.
Feb 13 19:29:05.925374 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 19:29:05.929136 systemd-logind[1896]: Session 29 logged out. Waiting for processes to exit.
Feb 13 19:29:05.933686 systemd-logind[1896]: Removed session 29.
Feb 13 19:29:22.194970 systemd[1]: cri-containerd-c7130dbea31f9a8a8ec655bb67151410cd995b632926f86ad535dfffc9aee6e8.scope: Deactivated successfully.
Feb 13 19:29:22.196330 systemd[1]: cri-containerd-c7130dbea31f9a8a8ec655bb67151410cd995b632926f86ad535dfffc9aee6e8.scope: Consumed 3.051s CPU time, 67.4M memory peak, 20.2M read from disk.
Feb 13 19:29:22.233180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7130dbea31f9a8a8ec655bb67151410cd995b632926f86ad535dfffc9aee6e8-rootfs.mount: Deactivated successfully.
Feb 13 19:29:22.264731 containerd[1918]: time="2025-02-13T19:29:22.264062947Z" level=info msg="shim disconnected" id=c7130dbea31f9a8a8ec655bb67151410cd995b632926f86ad535dfffc9aee6e8 namespace=k8s.io
Feb 13 19:29:22.264731 containerd[1918]: time="2025-02-13T19:29:22.264716051Z" level=warning msg="cleaning up after shim disconnected" id=c7130dbea31f9a8a8ec655bb67151410cd995b632926f86ad535dfffc9aee6e8 namespace=k8s.io
Feb 13 19:29:22.264731 containerd[1918]: time="2025-02-13T19:29:22.264730878Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:29:23.165936 kubelet[3184]: I0213 19:29:23.165248 3184 scope.go:117] "RemoveContainer" containerID="c7130dbea31f9a8a8ec655bb67151410cd995b632926f86ad535dfffc9aee6e8"
Feb 13 19:29:23.168882 containerd[1918]: time="2025-02-13T19:29:23.168848813Z" level=info msg="CreateContainer within sandbox \"1e4f7fef9894bc2bcfac6325e3d95712731a70c7fb17a0b3e031aa127b2add44\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 19:29:23.203171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1424262579.mount: Deactivated successfully.
Feb 13 19:29:23.206818 containerd[1918]: time="2025-02-13T19:29:23.206772052Z" level=info msg="CreateContainer within sandbox \"1e4f7fef9894bc2bcfac6325e3d95712731a70c7fb17a0b3e031aa127b2add44\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"49c60e17831082a6746f5ce83aafe669f79e514723c81b1c8b40832e4b81e73c\""
Feb 13 19:29:23.207380 containerd[1918]: time="2025-02-13T19:29:23.207350089Z" level=info msg="StartContainer for \"49c60e17831082a6746f5ce83aafe669f79e514723c81b1c8b40832e4b81e73c\""
Feb 13 19:29:23.306823 systemd[1]: Started cri-containerd-49c60e17831082a6746f5ce83aafe669f79e514723c81b1c8b40832e4b81e73c.scope - libcontainer container 49c60e17831082a6746f5ce83aafe669f79e514723c81b1c8b40832e4b81e73c.
Feb 13 19:29:23.392746 containerd[1918]: time="2025-02-13T19:29:23.392699190Z" level=info msg="StartContainer for \"49c60e17831082a6746f5ce83aafe669f79e514723c81b1c8b40832e4b81e73c\" returns successfully"
Feb 13 19:29:26.762415 kubelet[3184]: E0213 19:29:26.761943 3184 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-191?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:29:27.684899 systemd[1]: cri-containerd-5e97cee1cfa323bd3aec8dae503ee4a409843f8e57a6f4ffec5f44b2c59c15b1.scope: Deactivated successfully.
Feb 13 19:29:27.685732 systemd[1]: cri-containerd-5e97cee1cfa323bd3aec8dae503ee4a409843f8e57a6f4ffec5f44b2c59c15b1.scope: Consumed 1.904s CPU time, 27.4M memory peak, 10.1M read from disk.
Feb 13 19:29:27.748350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e97cee1cfa323bd3aec8dae503ee4a409843f8e57a6f4ffec5f44b2c59c15b1-rootfs.mount: Deactivated successfully.
Feb 13 19:29:27.776430 containerd[1918]: time="2025-02-13T19:29:27.776341028Z" level=info msg="shim disconnected" id=5e97cee1cfa323bd3aec8dae503ee4a409843f8e57a6f4ffec5f44b2c59c15b1 namespace=k8s.io
Feb 13 19:29:27.776430 containerd[1918]: time="2025-02-13T19:29:27.776411643Z" level=warning msg="cleaning up after shim disconnected" id=5e97cee1cfa323bd3aec8dae503ee4a409843f8e57a6f4ffec5f44b2c59c15b1 namespace=k8s.io
Feb 13 19:29:27.776430 containerd[1918]: time="2025-02-13T19:29:27.776428571Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:29:28.185855 kubelet[3184]: I0213 19:29:28.185822 3184 scope.go:117] "RemoveContainer" containerID="5e97cee1cfa323bd3aec8dae503ee4a409843f8e57a6f4ffec5f44b2c59c15b1"
Feb 13 19:29:28.188700 containerd[1918]: time="2025-02-13T19:29:28.188654846Z" level=info msg="CreateContainer within sandbox \"de10b154c0cf0e891d1373c5681f498b7e640b49b32f439171bfa4d36401af13\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 19:29:28.225529 containerd[1918]: time="2025-02-13T19:29:28.225434462Z" level=info msg="CreateContainer within sandbox \"de10b154c0cf0e891d1373c5681f498b7e640b49b32f439171bfa4d36401af13\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"cde2908d4c16c993f4b6e45cc2d06787772d652e385358e6f976ad29f1a76699\""
Feb 13 19:29:28.227853 containerd[1918]: time="2025-02-13T19:29:28.226492728Z" level=info msg="StartContainer for \"cde2908d4c16c993f4b6e45cc2d06787772d652e385358e6f976ad29f1a76699\""
Feb 13 19:29:28.313288 systemd[1]: Started cri-containerd-cde2908d4c16c993f4b6e45cc2d06787772d652e385358e6f976ad29f1a76699.scope - libcontainer container cde2908d4c16c993f4b6e45cc2d06787772d652e385358e6f976ad29f1a76699.
Feb 13 19:29:28.395341 containerd[1918]: time="2025-02-13T19:29:28.395267715Z" level=info msg="StartContainer for \"cde2908d4c16c993f4b6e45cc2d06787772d652e385358e6f976ad29f1a76699\" returns successfully"
Feb 13 19:29:36.762578 kubelet[3184]: E0213 19:29:36.762207 3184 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-191?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
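The repeated "Failed to update lease" errors are the kubelet's node heartbeat: a PUT of the coordination.k8s.io Lease kube-node-lease/ip-172-31-19-191 timing out against the API server at 172.31.19.191:6443 while the control-plane containers above were being restarted. A hedged client-go sketch of the same renewal (in-cluster config and the 10s timeout are assumptions to mirror the log's ?timeout=10s):

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running on the node, in-cluster
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same object the failing PUT targets: kube-node-lease/ip-172-31-19-191.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").
		Get(ctx, "ip-172-31-19-191", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	if _, err := cs.CoordinationV1().Leases("kube-node-lease").
		Update(ctx, lease, metav1.UpdateOptions{}); err != nil {
		// A timeout here is what the kubelet records as "Failed to update lease".
		panic(err)
	}
}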