Feb 13 15:36:54.001887 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 15:36:54.001930 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:36:54.005670 kernel: BIOS-provided physical RAM map:
Feb 13 15:36:54.005683 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:36:54.005694 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:36:54.005704 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:36:54.005722 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 15:36:54.005733 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 15:36:54.005744 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 15:36:54.005754 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:36:54.005877 kernel: NX (Execute Disable) protection: active
Feb 13 15:36:54.005892 kernel: APIC: Static calls initialized
Feb 13 15:36:54.005903 kernel: SMBIOS 2.7 present.
Feb 13 15:36:54.005914 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 15:36:54.005933 kernel: Hypervisor detected: KVM
Feb 13 15:36:54.005945 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:36:54.006017 kernel: kvm-clock: using sched offset of 7948248690 cycles
Feb 13 15:36:54.006032 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:36:54.006043 kernel: tsc: Detected 2500.004 MHz processor
Feb 13 15:36:54.006111 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:36:54.006126 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:36:54.006143 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 15:36:54.006155 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 15:36:54.006167 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:36:54.006179 kernel: Using GB pages for direct mapping
Feb 13 15:36:54.006191 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:36:54.006203 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 15:36:54.006215 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 15:36:54.006226 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:36:54.006238 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 15:36:54.006253 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 15:36:54.006264 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:36:54.006276 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:36:54.006287 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 15:36:54.006298 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:36:54.006311 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 15:36:54.006322 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 15:36:54.006334 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:36:54.006345 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 15:36:54.006360 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 15:36:54.006377 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 15:36:54.006389 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 15:36:54.006401 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 15:36:54.006413 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 15:36:54.006428 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 15:36:54.006440 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 15:36:54.006452 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 15:36:54.006464 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 15:36:54.006476 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:36:54.006489 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:36:54.006501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 15:36:54.006514 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 15:36:54.006526 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 15:36:54.006540 kernel: Zone ranges:
Feb 13 15:36:54.006553 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:36:54.006565 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 15:36:54.006595 kernel: Normal empty
Feb 13 15:36:54.006608 kernel: Movable zone start for each node
Feb 13 15:36:54.006626 kernel: Early memory node ranges
Feb 13 15:36:54.006638 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 15:36:54.006652 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 15:36:54.006666 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 15:36:54.006684 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:36:54.006698 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 15:36:54.006713 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 15:36:54.006727 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 15:36:54.006850 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:36:54.006868 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 15:36:54.006884 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:36:54.006899 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:36:54.006914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:36:54.006928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:36:54.006947 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:36:54.006962 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:36:54.006977 kernel: TSC deadline timer available
Feb 13 15:36:54.008222 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
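[Editor's note: the BIOS-e820 map at the top of this log can be totaled mechanically. A minimal Python sketch; the regex and the sample lines are copied from the entries above, the function name is mine:]

import re

E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(dmesg_lines):
    """Sum the sizes of all ranges the firmware marked 'usable'."""
    total = 0
    for line in dmesg_lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1  # e820 ranges are inclusive
    return total

log = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable",
]
print(usable_bytes(log) / 2**20, "MiB")

[On this map that comes to about 2009.6 MiB, in line with the 2057760K total the kernel reports below.]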
Feb 13 15:36:54.008240 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:36:54.008254 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 15:36:54.008268 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:36:54.008282 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:36:54.008296 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:36:54.008363 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:36:54.008380 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:36:54.008394 kernel: pcpu-alloc: [0] 0 1
Feb 13 15:36:54.008407 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:36:54.008421 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:36:54.008437 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:36:54.008452 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:36:54.008466 kernel: random: crng init done
Feb 13 15:36:54.008484 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:36:54.008497 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:36:54.008511 kernel: Fallback order for Node 0: 0
Feb 13 15:36:54.008525 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 13 15:36:54.008539 kernel: Policy zone: DMA32
Feb 13 15:36:54.008552 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:36:54.008567 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved)
Feb 13 15:36:54.008580 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:36:54.008657 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:36:54.008674 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:36:54.008689 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:36:54.008702 kernel: Dynamic Preempt: voluntary
Feb 13 15:36:54.008716 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:36:54.008732 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:36:54.008745 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:36:54.008759 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:36:54.008773 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:36:54.008787 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:36:54.008804 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:36:54.008818 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:36:54.008832 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:36:54.008846 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
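[Editor's note: the "Kernel command line:" entry above differs from the bootloader's copy by the two parameters dracut prepends (rootflags=rw mount.usrflags=ro), so keys can legitimately repeat. A sketch of a parser that tolerates that; naive whitespace splitting, quoted values are not handled:]

def parse_cmdline(cmdline: str) -> dict[str, list[str]]:
    """Split a kernel command line into key -> list of values, in order.
    Bare flags (no '=') map to an empty string."""
    params: dict[str, list[str]] = {}
    for tok in cmdline.split():
        key, sep, val = tok.partition("=")
        params.setdefault(key, []).append(val if sep else "")
    return params

# e.g. with fragments of the line logged above:
p = parse_cmdline("rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
                  "root=LABEL=ROOT rootflags=rw")
assert p["rootflags"] == ["rw", "rw"]   # dracut's copy plus the bootloader's
assert p["root"] == ["LABEL=ROOT"]      # partition() splits on the first '=' only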
Feb 13 15:36:54.008860 kernel: Console: colour VGA+ 80x25
Feb 13 15:36:54.008874 kernel: printk: console [ttyS0] enabled
Feb 13 15:36:54.008888 kernel: ACPI: Core revision 20230628
Feb 13 15:36:54.008902 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 15:36:54.008916 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:36:54.008932 kernel: x2apic enabled
Feb 13 15:36:54.008946 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:36:54.008971 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Feb 13 15:36:54.008999 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Feb 13 15:36:54.009014 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 15:36:54.009028 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 15:36:54.009043 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:36:54.009057 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:36:54.009071 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:36:54.009085 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:36:54.009100 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 15:36:54.009115 kernel: RETBleed: Vulnerable
Feb 13 15:36:54.009133 kernel: Speculative Store Bypass: Vulnerable
Feb 13 15:36:54.009147 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:36:54.009220 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:36:54.009279 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 15:36:54.009296 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:36:54.009312 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:36:54.009327 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:36:54.009346 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 15:36:54.009365 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 15:36:54.009386 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 15:36:54.009414 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 15:36:54.009435 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 15:36:54.009451 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 15:36:54.009467 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:36:54.009483 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 15:36:54.009499 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 15:36:54.009515 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 13 15:36:54.009569 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 13 15:36:54.009585 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 15:36:54.009601 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 13 15:36:54.009644 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 15:36:54.009662 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:36:54.009678 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:36:54.009856 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:36:54.009874 kernel: landlock: Up and running.
Feb 13 15:36:54.009891 kernel: SELinux: Initializing.
Feb 13 15:36:54.009907 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:36:54.009923 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:36:54.009939 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 15:36:54.010017 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:36:54.010036 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:36:54.010052 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:36:54.010069 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 15:36:54.010085 kernel: signal: max sigframe size: 3632
Feb 13 15:36:54.010101 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:36:54.010118 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:36:54.010135 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:36:54.010151 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:36:54.010172 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:36:54.010188 kernel: .... node #0, CPUs: #1
Feb 13 15:36:54.010206 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 15:36:54.010223 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:36:54.010239 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:36:54.010255 kernel: smpboot: Max logical packages: 1
Feb 13 15:36:54.010271 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Feb 13 15:36:54.010287 kernel: devtmpfs: initialized
Feb 13 15:36:54.010306 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:36:54.010322 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:36:54.010339 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:36:54.010355 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:36:54.010371 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:36:54.010387 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:36:54.010403 kernel: audit: type=2000 audit(1739461013.695:1): state=initialized audit_enabled=0 res=1
Feb 13 15:36:54.010419 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:36:54.010435 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:36:54.010454 kernel: cpuidle: using governor menu
Feb 13 15:36:54.010470 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:36:54.010486 kernel: dca service started, version 1.12.1
Feb 13 15:36:54.010502 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:36:54.010518 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
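[Editor's note: the mitigation status printed above (Spectre V2 retpolines; RETBleed, MDS and MMIO Stale Data reported vulnerable) is also exposed at runtime under /sys/devices/system/cpu/vulnerabilities. A small Python sketch, assuming a kernel recent enough to populate that directory:]

from pathlib import Path

def cpu_vulnerabilities() -> dict[str, str]:
    """Return {vulnerability: status} from sysfs; the same information
    the boot log prints during CPU bring-up."""
    base = Path("/sys/devices/system/cpu/vulnerabilities")
    return {f.name: f.read_text().strip() for f in sorted(base.iterdir())}

for name, status in cpu_vulnerabilities().items():
    flag = "!!" if status.startswith("Vulnerable") else "  "
    print(f"{flag} {name}: {status}")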
Feb 13 15:36:54.010534 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:36:54.010550 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:36:54.010567 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:36:54.010583 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:36:54.010601 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:36:54.010617 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:36:54.010633 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:36:54.010650 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:36:54.010666 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 15:36:54.010682 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:36:54.010800 kernel: ACPI: Interpreter enabled
Feb 13 15:36:54.010835 kernel: ACPI: PM: (supports S0 S5)
Feb 13 15:36:54.010852 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:36:54.010869 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:36:54.010889 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:36:54.010905 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 15:36:54.010922 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:36:54.021917 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:36:54.022147 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:36:54.022289 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:36:54.022307 kernel: acpiphp: Slot [3] registered
Feb 13 15:36:54.022327 kernel: acpiphp: Slot [4] registered
Feb 13 15:36:54.022341 kernel: acpiphp: Slot [5] registered
Feb 13 15:36:54.022355 kernel: acpiphp: Slot [6] registered
Feb 13 15:36:54.022370 kernel: acpiphp: Slot [7] registered
Feb 13 15:36:54.022385 kernel: acpiphp: Slot [8] registered
Feb 13 15:36:54.022401 kernel: acpiphp: Slot [9] registered
Feb 13 15:36:54.022417 kernel: acpiphp: Slot [10] registered
Feb 13 15:36:54.022433 kernel: acpiphp: Slot [11] registered
Feb 13 15:36:54.022450 kernel: acpiphp: Slot [12] registered
Feb 13 15:36:54.022470 kernel: acpiphp: Slot [13] registered
Feb 13 15:36:54.022487 kernel: acpiphp: Slot [14] registered
Feb 13 15:36:54.022503 kernel: acpiphp: Slot [15] registered
Feb 13 15:36:54.022519 kernel: acpiphp: Slot [16] registered
Feb 13 15:36:54.022535 kernel: acpiphp: Slot [17] registered
Feb 13 15:36:54.022552 kernel: acpiphp: Slot [18] registered
Feb 13 15:36:54.022569 kernel: acpiphp: Slot [19] registered
Feb 13 15:36:54.022585 kernel: acpiphp: Slot [20] registered
Feb 13 15:36:54.022602 kernel: acpiphp: Slot [21] registered
Feb 13 15:36:54.022618 kernel: acpiphp: Slot [22] registered
Feb 13 15:36:54.022638 kernel: acpiphp: Slot [23] registered
Feb 13 15:36:54.022655 kernel: acpiphp: Slot [24] registered
Feb 13 15:36:54.022672 kernel: acpiphp: Slot [25] registered
Feb 13 15:36:54.022688 kernel: acpiphp: Slot [26] registered
Feb 13 15:36:54.022705 kernel: acpiphp: Slot [27] registered
Feb 13 15:36:54.022721 kernel: acpiphp: Slot [28] registered
Feb 13 15:36:54.022738 kernel: acpiphp: Slot [29] registered
Feb 13 15:36:54.022755 kernel: acpiphp: Slot [30] registered
Feb 13 15:36:54.022771 kernel: acpiphp: Slot [31] registered
Feb 13 15:36:54.022792 kernel: PCI host bridge to bus 0000:00
Feb 13 15:36:54.022963 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:36:54.027158 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:36:54.027333 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:36:54.027453 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 15:36:54.027687 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:36:54.027902 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:36:54.028283 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 15:36:54.028503 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 15:36:54.028676 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 15:36:54.028835 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 15:36:54.033048 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 15:36:54.033309 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 15:36:54.033501 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 15:36:54.033791 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 15:36:54.034057 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 15:36:54.034251 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 15:36:54.034452 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 15:36:54.034586 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 15:36:54.034724 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 15:36:54.034866 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:36:54.037226 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:36:54.037559 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 15:36:54.037791 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:36:54.039107 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 15:36:54.039135 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:36:54.039150 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:36:54.039173 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:36:54.039188 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:36:54.039202 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:36:54.039216 kernel: iommu: Default domain type: Translated
Feb 13 15:36:54.039230 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:36:54.039244 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:36:54.039258 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:36:54.039272 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 15:36:54.039286 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 15:36:54.039427 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 15:36:54.039614 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 15:36:54.039742 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:36:54.039760 kernel: vgaarb: loaded
Feb 13 15:36:54.039774 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 15:36:54.039789 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 15:36:54.039802 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:36:54.039816 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:36:54.039830 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:36:54.039849 kernel: pnp: PnP ACPI init
Feb 13 15:36:54.039869 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 15:36:54.039883 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:36:54.039897 kernel: NET: Registered PF_INET protocol family
Feb 13 15:36:54.039912 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:36:54.039925 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 15:36:54.039939 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:36:54.039954 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:36:54.039971 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 15:36:54.042021 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 15:36:54.042042 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:36:54.042057 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:36:54.042071 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:36:54.042085 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:36:54.042288 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:36:54.042414 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:36:54.042531 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:36:54.042741 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 15:36:54.042886 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:36:54.042905 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:36:54.042920 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:36:54.042936 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Feb 13 15:36:54.042950 kernel: clocksource: Switched to clocksource tsc
Feb 13 15:36:54.042964 kernel: Initialise system trusted keyrings
Feb 13 15:36:54.042978 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 15:36:54.043020 kernel: Key type asymmetric registered
Feb 13 15:36:54.043032 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:36:54.043046 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:36:54.043059 kernel: io scheduler mq-deadline registered
Feb 13 15:36:54.043073 kernel: io scheduler kyber registered
Feb 13 15:36:54.043087 kernel: io scheduler bfq registered
Feb 13 15:36:54.043100 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:36:54.043114 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:36:54.043128 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:36:54.043146 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:36:54.043159 kernel: i8042: Warning: Keylock active
Feb 13 15:36:54.043172 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:36:54.043185 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:36:54.043332 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 15:36:54.043452 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 15:36:54.043571 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:36:53 UTC (1739461013)
Feb 13 15:36:54.043689 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 15:36:54.043709 kernel: intel_pstate: CPU model not supported
Feb 13 15:36:54.043723 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:36:54.043737 kernel: Segment Routing with IPv6
Feb 13 15:36:54.043750 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:36:54.043764 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:36:54.043778 kernel: Key type dns_resolver registered
Feb 13 15:36:54.043790 kernel: IPI shorthand broadcast: enabled
Feb 13 15:36:54.043805 kernel: sched_clock: Marking stable (507001687, 198620163)->(801196870, -95575020)
Feb 13 15:36:54.043818 kernel: registered taskstats version 1
Feb 13 15:36:54.043835 kernel: Loading compiled-in X.509 certificates
Feb 13 15:36:54.043849 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:36:54.043870 kernel: Key type .fscrypt registered
Feb 13 15:36:54.043884 kernel: Key type fscrypt-provisioning registered
Feb 13 15:36:54.043898 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:36:54.043912 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:36:54.043925 kernel: ima: No architecture policies found
Feb 13 15:36:54.043938 kernel: clk: Disabling unused clocks
Feb 13 15:36:54.043951 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 15:36:54.043968 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 15:36:54.046016 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 15:36:54.046037 kernel: Run /init as init process
Feb 13 15:36:54.046052 kernel: with arguments:
Feb 13 15:36:54.046067 kernel: /init
Feb 13 15:36:54.046080 kernel: with environment:
Feb 13 15:36:54.046093 kernel: HOME=/
Feb 13 15:36:54.046106 kernel: TERM=linux
Feb 13 15:36:54.046119 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:36:54.046145 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:36:54.046175 systemd[1]: Detected virtualization amazon.
Feb 13 15:36:54.046193 systemd[1]: Detected architecture x86-64.
Feb 13 15:36:54.046208 systemd[1]: Running in initrd.
Feb 13 15:36:54.046222 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:36:54.046240 systemd[1]: Hostname set to <localhost>.
Feb 13 15:36:54.046255 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:36:54.046270 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:36:54.046285 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:36:54.046300 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:36:54.046317 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:36:54.046332 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:36:54.046347 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:36:54.046365 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:36:54.046382 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:36:54.046398 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:36:54.046413 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:36:54.046428 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:36:54.046443 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:36:54.046461 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:36:54.046476 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:36:54.046491 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:36:54.046523 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:36:54.046539 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:36:54.046554 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:36:54.046569 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:36:54.046584 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:36:54.046599 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:36:54.046618 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:36:54.046633 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:36:54.046648 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:36:54.046663 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:36:54.046678 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:36:54.046693 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:36:54.046708 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:36:54.046727 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:36:54.046776 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 15:36:54.046813 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:36:54.046829 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:36:54.046844 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:36:54.046859 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:36:54.046875 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:36:54.046897 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:36:54.046916 systemd-journald[179]: Journal started
Feb 13 15:36:54.046950 systemd-journald[179]: Runtime Journal (/run/log/journal/ec29e67964546584e5a9f7b8f8f5808b) is 4.8M, max 38.6M, 33.7M free.
Feb 13 15:36:54.013008 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 15:36:54.160606 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:36:54.160642 kernel: Bridge firewalling registered
Feb 13 15:36:54.077544 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 15:36:54.165002 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:36:54.165967 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:36:54.167385 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:36:54.169718 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:36:54.181247 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:36:54.186170 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:36:54.188036 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:36:54.211031 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:36:54.229376 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:36:54.248966 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:36:54.253191 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:36:54.262255 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:36:54.264003 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:36:54.277186 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:36:54.296698 dracut-cmdline[212]: dracut-dracut-053
Feb 13 15:36:54.301561 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:36:54.366894 systemd-resolved[215]: Positive Trust Anchors:
Feb 13 15:36:54.367029 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:36:54.367095 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:36:54.372564 systemd-resolved[215]: Defaulting to hostname 'linux'.
Feb 13 15:36:54.374474 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:36:54.376237 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:36:54.459009 kernel: SCSI subsystem initialized
Feb 13 15:36:54.470011 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:36:54.484118 kernel: iscsi: registered transport (tcp)
Feb 13 15:36:54.520579 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:36:54.520661 kernel: QLogic iSCSI HBA Driver
Feb 13 15:36:54.598168 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:36:54.607796 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:36:54.642320 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:36:54.642390 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:36:54.643672 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:36:54.695011 kernel: raid6: avx512x4 gen() 12676 MB/s
Feb 13 15:36:54.712306 kernel: raid6: avx512x2 gen() 8351 MB/s
Feb 13 15:36:54.730173 kernel: raid6: avx512x1 gen() 12997 MB/s
Feb 13 15:36:54.747014 kernel: raid6: avx2x4 gen() 11497 MB/s
Feb 13 15:36:54.764036 kernel: raid6: avx2x2 gen() 11639 MB/s
Feb 13 15:36:54.781016 kernel: raid6: avx2x1 gen() 9150 MB/s
Feb 13 15:36:54.781109 kernel: raid6: using algorithm avx512x1 gen() 12997 MB/s
Feb 13 15:36:54.798240 kernel: raid6: .... xor() 14201 MB/s, rmw enabled
Feb 13 15:36:54.798513 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 15:36:54.838021 kernel: xor: automatically using best checksumming function avx
Feb 13 15:36:55.105012 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:36:55.117612 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:36:55.132461 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:36:55.182363 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Feb 13 15:36:55.191522 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:36:55.202213 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:36:55.231492 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Feb 13 15:36:55.272647 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:36:55.278218 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:36:55.340442 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:36:55.353140 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:36:55.399439 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:36:55.401554 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:36:55.404827 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:36:55.407253 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:36:55.416199 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:36:55.445709 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:36:55.466597 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:36:55.466805 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 15:36:55.467003 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:05:df:73:28:4f
Feb 13 15:36:55.457856 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:36:55.476000 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:36:55.497627 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:36:55.515417 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:36:55.515513 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:36:55.518125 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:36:55.529549 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:36:55.529780 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 15:36:55.522131 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:36:55.522219 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:36:55.523628 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:36:55.541191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:36:55.544512 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:36:55.544738 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:36:55.550002 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:36:55.554001 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:36:55.554060 kernel: GPT:9289727 != 16777215
Feb 13 15:36:55.554079 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:36:55.555469 kernel: GPT:9289727 != 16777215
Feb 13 15:36:55.555516 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:36:55.555536 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:36:55.650014 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (450)
Feb 13 15:36:55.660011 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (461)
Feb 13 15:36:55.717454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:36:55.730852 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:36:55.790403 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:36:55.791959 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:36:55.799953 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:36:55.807134 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:36:55.810074 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:36:55.818593 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:36:55.835218 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:36:55.843647 disk-uuid[632]: Primary Header is updated.
Feb 13 15:36:55.843647 disk-uuid[632]: Secondary Entries is updated.
Feb 13 15:36:55.843647 disk-uuid[632]: Secondary Header is updated.
Feb 13 15:36:55.849009 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:36:56.861046 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:36:56.861910 disk-uuid[633]: The operation has completed successfully.
Feb 13 15:36:57.005125 systemd[1]: disk-uuid.service: Deactivated successfully.
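[Editor's note: the repeated "GPT:9289727 != 16777215" warnings above mean the primary GPT header still describes the original image size while the EBS volume is larger; disk-uuid.service then rewrites the secondary header and entries, which is why the partitions rescan cleanly afterwards. A hedged sketch of the same check; it assumes a 512-byte-sector block device, needs read access to it, and the function name is mine:]

import struct

def gpt_backup_mismatch(dev_path: str) -> tuple[int, int]:
    """Return (backup LBA recorded in the primary GPT header,
    actual last LBA of the disk). Unequal values reproduce the
    'GPT:x != y' complaint seen in this log."""
    with open(dev_path, "rb") as disk:
        disk.seek(0, 2)                    # seek to end to size the device
        last_lba = disk.tell() // 512 - 1  # assumes 512-byte sectors
        disk.seek(512)                     # primary GPT header sits at LBA 1
        hdr = disk.read(92)
    if hdr[:8] != b"EFI PART":
        raise ValueError("no GPT signature at LBA 1")
    backup_lba = struct.unpack_from("<Q", hdr, 32)[0]  # AlternateLBA field
    return backup_lba, last_lba

# e.g. gpt_backup_mismatch("/dev/nvme0n1") would give (9289727, 16777215) here.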
Feb 13 15:36:57.005260 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:36:57.036185 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:36:57.050764 sh[893]: Success
Feb 13 15:36:57.065005 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:36:57.187941 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:36:57.208127 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:36:57.210243 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:36:57.258420 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2
Feb 13 15:36:57.258482 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:36:57.258502 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:36:57.258528 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:36:57.259040 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:36:57.346046 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:36:57.364822 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:36:57.368722 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:36:57.375275 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:36:57.379379 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:36:57.399709 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:36:57.399775 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:36:57.399795 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:36:57.408855 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:36:57.421201 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:36:57.423196 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:36:57.431796 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:36:57.440272 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:36:57.574569 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:36:57.592007 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:36:57.655881 systemd-networkd[1085]: lo: Link UP
Feb 13 15:36:57.657063 systemd-networkd[1085]: lo: Gained carrier
Feb 13 15:36:57.658722 systemd-networkd[1085]: Enumeration completed
Feb 13 15:36:57.660789 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:36:57.662162 systemd[1]: Reached target network.target - Network.
Feb 13 15:36:57.665855 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:36:57.665860 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:36:57.677409 systemd-networkd[1085]: eth0: Link UP
Feb 13 15:36:57.677419 systemd-networkd[1085]: eth0: Gained carrier
Feb 13 15:36:57.677434 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:36:57.697655 systemd-networkd[1085]: eth0: DHCPv4 address 172.31.28.54/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:36:57.722909 ignition[1006]: Ignition 2.20.0
Feb 13 15:36:57.723028 ignition[1006]: Stage: fetch-offline
Feb 13 15:36:57.723335 ignition[1006]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:36:57.723346 ignition[1006]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:36:57.723721 ignition[1006]: Ignition finished successfully
Feb 13 15:36:57.729320 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:36:57.738215 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:36:57.767051 ignition[1094]: Ignition 2.20.0
Feb 13 15:36:57.767065 ignition[1094]: Stage: fetch
Feb 13 15:36:57.767466 ignition[1094]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:36:57.767480 ignition[1094]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:36:57.767600 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:36:57.786964 ignition[1094]: PUT result: OK
Feb 13 15:36:57.789820 ignition[1094]: parsed url from cmdline: ""
Feb 13 15:36:57.789923 ignition[1094]: no config URL provided
Feb 13 15:36:57.789933 ignition[1094]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:36:57.789945 ignition[1094]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:36:57.789962 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:36:57.792218 ignition[1094]: PUT result: OK
Feb 13 15:36:57.793121 ignition[1094]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:36:57.796205 ignition[1094]: GET result: OK
Feb 13 15:36:57.797030 ignition[1094]: parsing config with SHA512: de0ee9f55002297bae6ab21048cc5452af6e85257a236ef53ce841f1266ea7eace8d88ca219501a59bbeddffae5e650e4b196ce55dfa22d266e267c93ff1abfe
Feb 13 15:36:57.804129 unknown[1094]: fetched base config from "system"
Feb 13 15:36:57.804146 unknown[1094]: fetched base config from "system"
Feb 13 15:36:57.804749 ignition[1094]: fetch: fetch complete
Feb 13 15:36:57.804153 unknown[1094]: fetched user config from "aws"
Feb 13 15:36:57.804755 ignition[1094]: fetch: fetch passed
Feb 13 15:36:57.807622 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:36:57.804862 ignition[1094]: Ignition finished successfully
Feb 13 15:36:57.816370 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:36:57.841580 ignition[1101]: Ignition 2.20.0
Feb 13 15:36:57.841597 ignition[1101]: Stage: kargs
Feb 13 15:36:57.842051 ignition[1101]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:36:57.842064 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:36:57.842562 ignition[1101]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:36:57.846139 ignition[1101]: PUT result: OK
Feb 13 15:36:57.851991 ignition[1101]: kargs: kargs passed
Feb 13 15:36:57.852073 ignition[1101]: Ignition finished successfully
Feb 13 15:36:57.854616 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:36:57.864168 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
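[Editor's note: the fetch stage above talks to the EC2 instance metadata service using the IMDSv2 token dance: a PUT to /latest/api/token, then GETs carrying the token. A self-contained sketch of that exchange; the endpoints and header names are the standard IMDSv2 ones and the user-data path is the one this log shows, while the 21600-second TTL is an arbitrary choice, not taken from the log:]

import urllib.request

IMDS = "http://169.254.169.254"

def fetch_user_data() -> bytes:
    """PUT for a session token, then GET user-data with the token attached."""
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    with urllib.request.urlopen(req, timeout=2) as resp:
        token = resp.read().decode()
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read()

[Note that each Ignition stage in this log re-requests a token ("PUT ... attempt #1") rather than caching one across stages; the sketch does a single round trip.]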
Feb 13 15:36:57.884533 ignition[1107]: Ignition 2.20.0
Feb 13 15:36:57.884548 ignition[1107]: Stage: disks
Feb 13 15:36:57.885116 ignition[1107]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:36:57.885129 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:36:57.885258 ignition[1107]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:36:57.886530 ignition[1107]: PUT result: OK
Feb 13 15:36:57.892342 ignition[1107]: disks: disks passed
Feb 13 15:36:57.892413 ignition[1107]: Ignition finished successfully
Feb 13 15:36:57.895189 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:36:57.897533 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:36:57.898659 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:36:57.901096 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:36:57.903558 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:36:57.905618 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:36:57.913226 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:36:57.945082 systemd-fsck[1115]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:36:57.949935 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:36:57.955086 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:36:58.092046 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none.
Feb 13 15:36:58.093304 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:36:58.098047 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:36:58.116101 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:36:58.119887 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:36:58.122137 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:36:58.122191 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:36:58.122218 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:36:58.135056 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:36:58.140008 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1134)
Feb 13 15:36:58.146152 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:36:58.146284 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:36:58.146351 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:36:58.148287 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:36:58.158003 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:36:58.161788 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:36:58.402075 initrd-setup-root[1158]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:36:58.442860 initrd-setup-root[1165]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:36:58.453631 initrd-setup-root[1172]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:36:58.472002 initrd-setup-root[1179]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:36:58.661460 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:36:58.669117 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:36:58.681397 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:36:58.690741 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:36:58.691851 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:36:58.746458 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:36:58.751745 ignition[1246]: INFO : Ignition 2.20.0
Feb 13 15:36:58.751745 ignition[1246]: INFO : Stage: mount
Feb 13 15:36:58.754734 ignition[1246]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:36:58.754734 ignition[1246]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:36:58.754734 ignition[1246]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:36:58.754734 ignition[1246]: INFO : PUT result: OK
Feb 13 15:36:58.760979 ignition[1246]: INFO : mount: mount passed
Feb 13 15:36:58.762069 ignition[1246]: INFO : Ignition finished successfully
Feb 13 15:36:58.764272 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:36:58.771130 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:36:58.786220 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:36:58.811262 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1258)
Feb 13 15:36:58.811327 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:36:58.813954 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:36:58.814035 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:36:58.831043 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:36:58.837939 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:36:58.867736 ignition[1275]: INFO : Ignition 2.20.0
Feb 13 15:36:58.867736 ignition[1275]: INFO : Stage: files
Feb 13 15:36:58.870025 ignition[1275]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:36:58.870025 ignition[1275]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:36:58.870025 ignition[1275]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:36:58.874241 ignition[1275]: INFO : PUT result: OK
Feb 13 15:36:58.877022 ignition[1275]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:36:58.879413 ignition[1275]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:36:58.879413 ignition[1275]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:36:58.904025 ignition[1275]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:36:58.906104 ignition[1275]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:36:58.906104 ignition[1275]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:36:58.904726 unknown[1275]: wrote ssh authorized keys file for user: core
Feb 13 15:36:58.910378 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:36:58.910378 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 15:36:58.992372 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:36:59.239202 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:36:59.245330 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:36:59.245330 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 15:36:59.346182 systemd-networkd[1085]: eth0: Gained IPv6LL
Feb 13 15:36:59.735543 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:36:59.858695 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:36:59.858695 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:36:59.866831 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:36:59.866831 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:36:59.866831 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:36:59.866831 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:36:59.866831 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:36:59.866831 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:36:59.866831 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:36:59.866831 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:36:59.866831 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:36:59.866831 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:36:59.866831 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:36:59.866831 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:36:59.866831 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 13 15:36:59.954536 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:37:00.521463 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:37:00.521463 ignition[1275]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:37:00.525791 ignition[1275]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:37:00.525791 ignition[1275]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:37:00.525791 ignition[1275]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:37:00.525791 ignition[1275]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:37:00.525791 ignition[1275]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:37:00.525791 ignition[1275]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:37:00.525791 ignition[1275]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:37:00.525791 ignition[1275]: INFO : files: files passed
Feb 13 15:37:00.525791 ignition[1275]: INFO : Ignition finished successfully
Feb 13 15:37:00.529796 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:37:00.540565 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:37:00.560364 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:37:00.562067 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:37:00.562167 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
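[Editor's note: the files stage wrote helm-v3.13.2-linux-amd64.tar.gz to /sysroot/opt, installed prepare-helm.service, and preset it to enabled; later in this log that unit starts as "Unpack helm to /opt/bin". A sketch of the unpack step under those assumptions, with paths taken from the log; the real unit is presumably a shell ExecStart, not Python.]

    import pathlib
    import shutil
    import tarfile

    def unpack_helm(archive="/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    dest="/opt/bin"):
        # The helm release tarball contains linux-amd64/helm (see the
        # "tar[1892]: linux-amd64/helm" line later in this log).
        with tarfile.open(archive) as tar:
            tar.extract(tar.getmember("linux-amd64/helm"), path="/tmp")
        pathlib.Path(dest).mkdir(parents=True, exist_ok=True)
        shutil.move("/tmp/linux-amd64/helm", f"{dest}/helm")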
Feb 13 15:37:00.583446 initrd-setup-root-after-ignition[1304]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:37:00.583446 initrd-setup-root-after-ignition[1304]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:37:00.588417 initrd-setup-root-after-ignition[1308]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:37:00.590616 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:37:00.593365 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:37:00.600340 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:37:00.663937 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:37:00.665930 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:37:00.670285 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:37:00.672894 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:37:00.674107 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:37:00.683160 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:37:00.730516 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:37:00.738214 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:37:00.758823 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:37:00.762141 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:37:00.770381 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:37:00.776692 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:37:00.777137 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:37:00.781129 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:37:00.783388 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:37:00.785805 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:37:00.799098 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:37:00.807024 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:37:00.809648 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:37:00.811077 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:37:00.821325 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:37:00.833670 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:37:00.846261 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:37:00.846918 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:37:00.847161 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:37:00.855323 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:37:00.855949 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:37:00.870961 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:37:00.872222 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:37:00.879312 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:37:00.879445 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:37:00.881120 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:37:00.881275 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:37:00.886755 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:37:00.886915 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:37:00.902713 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:37:00.904545 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:37:00.904924 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:37:00.915337 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:37:00.918278 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:37:00.918516 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:37:00.923216 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:37:00.923544 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:37:00.940399 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:37:00.940553 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:37:00.948536 ignition[1328]: INFO : Ignition 2.20.0
Feb 13 15:37:00.948536 ignition[1328]: INFO : Stage: umount
Feb 13 15:37:00.950929 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:37:00.950929 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:37:00.950929 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:37:00.955690 ignition[1328]: INFO : PUT result: OK
Feb 13 15:37:00.958857 ignition[1328]: INFO : umount: umount passed
Feb 13 15:37:00.960463 ignition[1328]: INFO : Ignition finished successfully
Feb 13 15:37:00.964594 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:37:00.964962 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:37:00.966736 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:37:00.966802 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:37:00.967875 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:37:00.967932 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:37:00.969537 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:37:00.969593 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:37:00.972752 systemd[1]: Stopped target network.target - Network.
Feb 13 15:37:00.976752 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:37:00.976832 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:37:00.991720 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:37:00.996688 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:37:01.002634 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:37:01.015170 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:37:01.016386 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:37:01.019068 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:37:01.019137 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:37:01.022284 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:37:01.022348 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:37:01.025525 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:37:01.025596 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:37:01.033164 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:37:01.033257 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:37:01.043552 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:37:01.045127 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:37:01.049646 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:37:01.050734 systemd-networkd[1085]: eth0: DHCPv6 lease lost
Feb 13 15:37:01.052212 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:37:01.052357 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:37:01.054441 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:37:01.054587 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:37:01.065180 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:37:01.065260 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:37:01.072272 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:37:01.073725 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:37:01.073829 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:37:01.075356 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:37:01.075425 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:37:01.076846 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:37:01.076901 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:37:01.079078 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:37:01.079148 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:37:01.093349 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:37:01.115145 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:37:01.116412 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:37:01.118134 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:37:01.118194 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:37:01.125395 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:37:01.125465 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:37:01.141570 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:37:01.141878 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:37:01.147580 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:37:01.147690 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:37:01.149929 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:37:01.150023 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:37:01.157533 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:37:01.160054 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:37:01.160129 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:37:01.161548 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:37:01.161622 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:37:01.169661 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:37:01.169790 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:37:01.171251 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:37:01.171391 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:37:01.172911 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:37:01.173011 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:37:01.178760 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:37:01.181091 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:37:01.181191 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:37:01.191298 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:37:01.221862 systemd[1]: Switching root.
Feb 13 15:37:01.274504 systemd-journald[179]: Journal stopped
Feb 13 15:37:03.415871 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:37:03.415969 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:37:03.418145 kernel: SELinux: policy capability open_perms=1
Feb 13 15:37:03.418179 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:37:03.418199 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:37:03.418219 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:37:03.418240 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:37:03.418269 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:37:03.418289 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:37:03.418312 kernel: audit: type=1403 audit(1739461021.695:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:37:03.418335 systemd[1]: Successfully loaded SELinux policy in 64.162ms.
Feb 13 15:37:03.418378 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 33.719ms.
Feb 13 15:37:03.418410 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:37:03.418433 systemd[1]: Detected virtualization amazon.
Feb 13 15:37:03.418455 systemd[1]: Detected architecture x86-64.
Feb 13 15:37:03.418481 systemd[1]: Detected first boot.
Feb 13 15:37:03.418504 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:37:03.418527 zram_generator::config[1370]: No configuration found.
Feb 13 15:37:03.418624 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:37:03.418654 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:37:03.418677 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:37:03.418700 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:37:03.418723 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:37:03.418746 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:37:03.418773 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:37:03.418799 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:37:03.418821 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:37:03.418843 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:37:03.418865 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:37:03.418887 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:37:03.418909 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:37:03.418933 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:37:03.418957 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:37:03.419001 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:37:03.419023 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:37:03.419045 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:37:03.419064 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:37:03.419085 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:37:03.419105 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:37:03.419180 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:37:03.419205 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:37:03.419231 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:37:03.419253 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:37:03.419275 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:37:03.419297 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:37:03.419320 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:37:03.419342 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:37:03.419364 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:37:03.419388 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:37:03.419415 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:37:03.419438 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:37:03.419461 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:37:03.419484 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:37:03.419506 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:37:03.419529 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:37:03.419553 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:37:03.419575 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:37:03.419598 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:37:03.419623 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:37:03.419647 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:37:03.419670 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:37:03.419692 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:37:03.419713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:37:03.419735 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:37:03.419755 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:37:03.419776 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:37:03.419809 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:37:03.419830 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:37:03.419849 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:37:03.419872 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:37:03.419894 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:37:03.419914 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:37:03.419936 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:37:03.419957 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:37:03.419977 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:37:03.427322 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:37:03.427353 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:37:03.427373 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:37:03.427394 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:37:03.427414 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:37:03.427433 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:37:03.427453 systemd[1]: Stopped verity-setup.service.
Feb 13 15:37:03.427474 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:37:03.427493 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:37:03.427519 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:37:03.427539 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:37:03.427559 kernel: loop: module loaded
Feb 13 15:37:03.427580 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:37:03.427600 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:37:03.427622 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:37:03.427643 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:37:03.427662 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:37:03.427682 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:37:03.427702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:37:03.427722 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:37:03.427742 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:37:03.427762 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:37:03.427793 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:37:03.427813 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:37:03.427834 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:37:03.427858 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:37:03.427879 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:37:03.427899 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:37:03.427923 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:37:03.427944 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:37:03.427965 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:37:03.427999 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:37:03.428018 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:37:03.428036 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:37:03.428055 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:37:03.428112 systemd-journald[1445]: Collecting audit messages is disabled.
Feb 13 15:37:03.428162 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:37:03.428188 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:37:03.428208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:37:03.428228 systemd-journald[1445]: Journal started
Feb 13 15:37:03.428268 systemd-journald[1445]: Runtime Journal (/run/log/journal/ec29e67964546584e5a9f7b8f8f5808b) is 4.8M, max 38.6M, 33.7M free.
Feb 13 15:37:02.900226 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:37:02.936425 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 15:37:02.936812 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:37:03.446106 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:37:03.446167 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:37:03.454130 kernel: fuse: init (API version 7.39)
Feb 13 15:37:03.479078 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:37:03.479545 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:37:03.479585 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:37:03.476473 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:37:03.477053 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:37:03.517104 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:37:03.574586 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:37:03.609535 kernel: ACPI: bus type drm_connector registered
Feb 13 15:37:03.578193 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:37:03.580642 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:37:03.583297 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:37:03.603562 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:37:03.632429 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:37:03.634653 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:37:03.634868 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:37:03.655666 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:37:03.690466 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:37:03.705245 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:37:03.707215 kernel: loop0: detected capacity change from 0 to 62848
Feb 13 15:37:03.707587 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:37:03.719972 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:37:03.729449 systemd-journald[1445]: Time spent on flushing to /var/log/journal/ec29e67964546584e5a9f7b8f8f5808b is 61.351ms for 968 entries.
Feb 13 15:37:03.729449 systemd-journald[1445]: System Journal (/var/log/journal/ec29e67964546584e5a9f7b8f8f5808b) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:37:03.803188 systemd-journald[1445]: Received client request to flush runtime journal.
Feb 13 15:37:03.803252 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:37:03.725652 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:37:03.726813 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:37:03.807641 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:37:03.814190 kernel: loop1: detected capacity change from 0 to 210664
Feb 13 15:37:03.813430 udevadm[1506]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:37:03.889816 systemd[1]: Finished systemd-sysusers.service - Create System Users.
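[Editor's note: journald reports 61.351 ms to flush 968 entries to /var/log/journal, i.e. roughly 63 µs per entry, or a sustained rate of about 15,800 entries per second. The arithmetic, as a quick check:]

    entries, ms = 968, 61.351
    print(ms / entries)           # ~0.0634 ms spent per entry
    print(entries / (ms / 1000))  # ~15778 entries/s flush throughput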
Feb 13 15:37:03.901657 kernel: loop2: detected capacity change from 0 to 138184
Feb 13 15:37:03.903154 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:37:03.970659 systemd-tmpfiles[1517]: ACLs are not supported, ignoring.
Feb 13 15:37:03.971126 systemd-tmpfiles[1517]: ACLs are not supported, ignoring.
Feb 13 15:37:03.980627 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:37:03.994020 kernel: loop3: detected capacity change from 0 to 140992
Feb 13 15:37:04.130038 kernel: loop4: detected capacity change from 0 to 62848
Feb 13 15:37:04.159147 kernel: loop5: detected capacity change from 0 to 210664
Feb 13 15:37:04.234031 kernel: loop6: detected capacity change from 0 to 138184
Feb 13 15:37:04.266011 kernel: loop7: detected capacity change from 0 to 140992
Feb 13 15:37:04.304303 (sd-merge)[1524]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 15:37:04.304897 (sd-merge)[1524]: Merged extensions into '/usr'.
Feb 13 15:37:04.312203 systemd[1]: Reloading requested from client PID 1467 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:37:04.312340 systemd[1]: Reloading...
Feb 13 15:37:04.398009 zram_generator::config[1547]: No configuration found.
Feb 13 15:37:04.721307 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:37:04.872726 systemd[1]: Reloading finished in 559 ms.
Feb 13 15:37:04.908165 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:37:04.917477 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:37:04.936484 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:37:04.964223 systemd[1]: Reloading requested from client PID 1598 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:37:04.964246 systemd[1]: Reloading...
Feb 13 15:37:04.964264 systemd-tmpfiles[1599]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:37:04.965138 systemd-tmpfiles[1599]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:37:04.967897 systemd-tmpfiles[1599]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:37:04.969709 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Feb 13 15:37:04.969799 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Feb 13 15:37:04.979088 systemd-tmpfiles[1599]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:37:04.979103 systemd-tmpfiles[1599]: Skipping /boot
Feb 13 15:37:04.996446 systemd-tmpfiles[1599]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:37:04.996548 systemd-tmpfiles[1599]: Skipping /boot
Feb 13 15:37:05.132013 zram_generator::config[1627]: No configuration found.
Feb 13 15:37:05.331806 ldconfig[1460]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:37:05.382556 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:37:05.447364 systemd[1]: Reloading finished in 482 ms.
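[Editor's note: the (sd-merge) lines show systemd-sysext overlaying the named extension images onto /usr; the 'kubernetes' image is the one Ignition linked at /etc/extensions/kubernetes.raw earlier in this log. A hedged sketch of enumerating candidate images the way systemd-sysext does; /etc/extensions, /run/extensions and /var/lib/extensions are the search directories documented in systemd-sysext(8), but confirm against your systemd version.]

    from pathlib import Path

    SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extensions():
        # Both raw disk images (*.raw) and plain directory trees count
        # as extension images.
        found = []
        for d in map(Path, SYSEXT_DIRS):
            if d.is_dir():
                found += [p.name for p in d.iterdir()
                          if p.suffix == ".raw" or p.is_dir()]
        return sorted(found)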
Feb 13 15:37:05.463732 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:37:05.465479 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:37:05.470506 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:37:05.494243 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:37:05.501341 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:37:05.515351 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:37:05.519842 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:37:05.531853 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:37:05.537597 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:37:05.554596 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:37:05.554967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:37:05.565050 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:37:05.577163 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:37:05.592608 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:37:05.594193 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:37:05.594409 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:37:05.595783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:37:05.598710 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:37:05.619438 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:37:05.622635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:37:05.622871 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:37:05.641135 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:37:05.641668 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:37:05.650639 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:37:05.659510 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:37:05.662295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:37:05.662571 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:37:05.664226 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:37:05.664532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:37:05.677270 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:37:05.684587 systemd-udevd[1687]: Using default interface naming scheme 'v255'.
Feb 13 15:37:05.690217 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:37:05.692548 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:37:05.693073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:37:05.724412 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:37:05.735611 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:37:05.737656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:37:05.737766 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:37:05.747171 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:37:05.750124 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:37:05.751020 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:37:05.761242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:37:05.761746 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:37:05.764722 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:37:05.767404 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:37:05.772851 augenrules[1718]: No rules
Feb 13 15:37:05.770640 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:37:05.772102 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:37:05.783666 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:37:05.789491 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:37:05.791035 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:37:05.801885 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:37:05.802206 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:37:05.803972 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:37:05.815580 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:37:05.818936 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:37:05.833829 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:37:05.846279 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:37:05.875212 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:37:05.878195 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:37:06.076442 systemd-networkd[1742]: lo: Link UP
Feb 13 15:37:06.076457 systemd-networkd[1742]: lo: Gained carrier
Feb 13 15:37:06.082315 systemd-networkd[1742]: Enumeration completed
Feb 13 15:37:06.083114 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:37:06.106061 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:37:06.119559 systemd-resolved[1685]: Positive Trust Anchors:
Feb 13 15:37:06.119590 systemd-resolved[1685]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:37:06.119641 systemd-resolved[1685]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:37:06.122342 (udev-worker)[1749]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:37:06.125801 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:37:06.134268 systemd-resolved[1685]: Defaulting to hostname 'linux'.
Feb 13 15:37:06.142672 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:37:06.144102 systemd[1]: Reached target network.target - Network.
Feb 13 15:37:06.145209 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:37:06.169049 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1743)
Feb 13 15:37:06.259925 systemd-networkd[1742]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:37:06.259938 systemd-networkd[1742]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:37:06.264879 systemd-networkd[1742]: eth0: Link UP
Feb 13 15:37:06.265089 systemd-networkd[1742]: eth0: Gained carrier
Feb 13 15:37:06.265115 systemd-networkd[1742]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:37:06.274008 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 15:37:06.275187 systemd-networkd[1742]: eth0: DHCPv4 address 172.31.28.54/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:37:06.282428 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:37:06.282500 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Feb 13 15:37:06.289032 kernel: ACPI: button: Sleep Button [SLPF]
Feb 13 15:37:06.317013 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 13 15:37:06.399009 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Feb 13 15:37:06.465019 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:37:06.479828 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:37:06.494401 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
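[Editor's note: the DHCPv4 lease puts eth0 at 172.31.28.54/20 with gateway 172.31.16.1; both sit in the 172.31.16.0/20 block, which is easy to verify with the standard library:]

    import ipaddress

    iface = ipaddress.ip_interface("172.31.28.54/20")
    print(iface.network)  # 172.31.16.0/20
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True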
Feb 13 15:37:06.504478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:37:06.516267 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:37:06.518877 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:37:06.531276 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:37:06.564042 lvm[1853]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:37:06.601495 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:37:06.603533 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:37:06.612273 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:37:06.623131 lvm[1856]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:37:06.659925 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:37:06.798935 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:37:06.812072 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:37:06.816102 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:37:06.818510 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:37:06.820629 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:37:06.821990 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:37:06.826661 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:37:06.828752 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:37:06.828801 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:37:06.834004 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:37:06.839579 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:37:06.844442 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:37:06.854878 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:37:06.856800 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:37:06.858425 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:37:06.859380 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:37:06.861023 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:37:06.861107 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:37:06.866277 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:37:06.873197 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 15:37:06.879227 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:37:06.885134 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:37:06.899534 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:37:06.902158 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:37:06.904189 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:37:06.940168 jq[1866]: false Feb 13 15:37:06.943303 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 15:37:06.953184 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:37:06.958925 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:37:07.002195 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:37:07.024265 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:37:07.054621 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:37:07.063567 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:37:07.065752 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:37:07.073352 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:37:07.084888 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:37:07.093701 dbus-daemon[1865]: [system] SELinux support is enabled Feb 13 15:37:07.109291 dbus-daemon[1865]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1742 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 15:37:07.115955 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:37:07.116227 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:37:07.116441 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:37:07.122922 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:37:07.124037 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:37:07.131184 jq[1885]: true Feb 13 15:37:07.132548 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:37:07.132843 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:37:07.135626 ntpd[1869]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:05 UTC 2025 (1): Starting Feb 13 15:37:07.139639 ntpd[1869]: 13 Feb 15:37:07 ntpd[1869]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:05 UTC 2025 (1): Starting Feb 13 15:37:07.139639 ntpd[1869]: 13 Feb 15:37:07 ntpd[1869]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:37:07.139639 ntpd[1869]: 13 Feb 15:37:07 ntpd[1869]: ---------------------------------------------------- Feb 13 15:37:07.139639 ntpd[1869]: 13 Feb 15:37:07 ntpd[1869]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:37:07.139639 ntpd[1869]: 13 Feb 15:37:07 ntpd[1869]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:37:07.139639 ntpd[1869]: 13 Feb 15:37:07 ntpd[1869]: corporation. 
Support and training for ntp-4 are Feb 13 15:37:07.139639 ntpd[1869]: 13 Feb 15:37:07 ntpd[1869]: available at https://www.nwtime.org/support Feb 13 15:37:07.139639 ntpd[1869]: 13 Feb 15:37:07 ntpd[1869]: ---------------------------------------------------- Feb 13 15:37:07.135661 ntpd[1869]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:37:07.135672 ntpd[1869]: ---------------------------------------------------- Feb 13 15:37:07.135681 ntpd[1869]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:37:07.135691 ntpd[1869]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:37:07.135700 ntpd[1869]: corporation. Support and training for ntp-4 are Feb 13 15:37:07.135710 ntpd[1869]: available at https://www.nwtime.org/support Feb 13 15:37:07.135720 ntpd[1869]: ---------------------------------------------------- Feb 13 15:37:07.162190 ntpd[1869]: 13 Feb 15:37:07 ntpd[1869]: proto: precision = 0.092 usec (-23) Feb 13 15:37:07.160710 ntpd[1869]: proto: precision = 0.092 usec (-23) Feb 13 15:37:07.184151 ntpd[1869]: 13 Feb 15:37:07 ntpd[1869]: basedate set to 2025-02-01 Feb 13 15:37:07.184151 ntpd[1869]: 13 Feb 15:37:07 ntpd[1869]: gps base set to 2025-02-02 (week 2352) Feb 13 15:37:07.180254 ntpd[1869]: basedate set to 2025-02-01 Feb 13 15:37:07.180282 ntpd[1869]: gps base set to 2025-02-02 (week 2352) Feb 13 15:37:07.199921 jq[1894]: true Feb 13 15:37:07.200395 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:37:07.202157 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:37:07.202198 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:37:07.203657 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:37:07.203690 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 15:37:07.219174 dbus-daemon[1865]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 15:37:07.219739 ntpd[1869]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:37:07.219803 ntpd[1869]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:37:07.222152 ntpd[1869]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:37:07.222196 ntpd[1869]: Listen normally on 3 eth0 172.31.28.54:123
Feb 13 15:37:07.222237 ntpd[1869]: Listen normally on 4 lo [::1]:123
Feb 13 15:37:07.222284 ntpd[1869]: bind(21) AF_INET6 fe80::405:dfff:fe73:284f%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:37:07.222308 ntpd[1869]: unable to create socket on eth0 (5) for fe80::405:dfff:fe73:284f%2#123
Feb 13 15:37:07.222322 ntpd[1869]: failed to init interface for address fe80::405:dfff:fe73:284f%2
Feb 13 15:37:07.222355 ntpd[1869]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:37:07.237010 extend-filesystems[1867]: Found loop4
Feb 13 15:37:07.237010 extend-filesystems[1867]: Found loop5
Feb 13 15:37:07.237010 extend-filesystems[1867]: Found loop6
Feb 13 15:37:07.237010 extend-filesystems[1867]: Found loop7
Feb 13 15:37:07.237010 extend-filesystems[1867]: Found nvme0n1
Feb 13 15:37:07.237010 extend-filesystems[1867]: Found nvme0n1p1
Feb 13 15:37:07.237010 extend-filesystems[1867]: Found nvme0n1p2
Feb 13 15:37:07.237010 extend-filesystems[1867]: Found nvme0n1p3
Feb 13 15:37:07.237010 extend-filesystems[1867]: Found usr
Feb 13 15:37:07.237010 extend-filesystems[1867]: Found nvme0n1p4
Feb 13 15:37:07.237010 extend-filesystems[1867]: Found nvme0n1p6
Feb 13 15:37:07.237010 extend-filesystems[1867]: Found nvme0n1p7
Feb 13 15:37:07.237010 extend-filesystems[1867]: Found nvme0n1p9
Feb 13 15:37:07.237010 extend-filesystems[1867]: Checking size of /dev/nvme0n1p9
Feb 13 15:37:07.233297 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 15:37:07.296457 coreos-metadata[1864]: Feb 13 15:37:07.288 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 15:37:07.296457 coreos-metadata[1864]: Feb 13 15:37:07.291 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 15:37:07.296457 coreos-metadata[1864]: Feb 13 15:37:07.293 INFO Fetch successful
Feb 13 15:37:07.296457 coreos-metadata[1864]: Feb 13 15:37:07.294 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 15:37:07.297026 tar[1892]: linux-amd64/helm
Feb 13 15:37:07.297234 update_engine[1884]: I20250213 15:37:07.255822 1884 main.cc:92] Flatcar Update Engine starting
Feb 13 15:37:07.241269 ntpd[1869]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:37:07.252228 (ntainerd)[1895]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:37:07.241304 ntpd[1869]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:37:07.290496 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 15:37:07.298508 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:37:07.314250 coreos-metadata[1864]: Feb 13 15:37:07.304 INFO Fetch successful
Feb 13 15:37:07.314250 coreos-metadata[1864]: Feb 13 15:37:07.304 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 15:37:07.314250 coreos-metadata[1864]: Feb 13 15:37:07.305 INFO Fetch successful
Feb 13 15:37:07.314250 coreos-metadata[1864]: Feb 13 15:37:07.305 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 15:37:07.314250 coreos-metadata[1864]: Feb 13 15:37:07.313 INFO Fetch successful
Feb 13 15:37:07.314468 update_engine[1884]: I20250213 15:37:07.301179 1884 update_check_scheduler.cc:74] Next update check in 6m34s
Feb 13 15:37:07.312919 systemd[1]: Started locksmithd.service - Cluster reboot manager.
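The coreos-metadata fetches above use IMDSv2: a PUT to /latest/api/token returns a session token that must accompany every subsequent metadata read. Reproduced by hand it looks roughly like this (a sketch using curl; the 21600-second TTL is an arbitrary choice):

    # request a session token, then present it on each metadata fetch
    TOKEN=$(curl -sX PUT "http://169.254.169.254/latest/api/token" \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        "http://169.254.169.254/2021-01-03/meta-data/instance-id"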
Feb 13 15:37:07.314846 coreos-metadata[1864]: Feb 13 15:37:07.314 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 15:37:07.327037 coreos-metadata[1864]: Feb 13 15:37:07.319 INFO Fetch failed with 404: resource not found Feb 13 15:37:07.327037 coreos-metadata[1864]: Feb 13 15:37:07.319 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 15:37:07.327037 coreos-metadata[1864]: Feb 13 15:37:07.319 INFO Fetch successful Feb 13 15:37:07.327037 coreos-metadata[1864]: Feb 13 15:37:07.319 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 15:37:07.327037 coreos-metadata[1864]: Feb 13 15:37:07.320 INFO Fetch successful Feb 13 15:37:07.327037 coreos-metadata[1864]: Feb 13 15:37:07.320 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 15:37:07.327037 coreos-metadata[1864]: Feb 13 15:37:07.323 INFO Fetch successful Feb 13 15:37:07.327037 coreos-metadata[1864]: Feb 13 15:37:07.323 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 15:37:07.327037 coreos-metadata[1864]: Feb 13 15:37:07.324 INFO Fetch successful Feb 13 15:37:07.327037 coreos-metadata[1864]: Feb 13 15:37:07.324 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 15:37:07.327037 coreos-metadata[1864]: Feb 13 15:37:07.325 INFO Fetch successful Feb 13 15:37:07.327631 extend-filesystems[1867]: Resized partition /dev/nvme0n1p9 Feb 13 15:37:07.328778 extend-filesystems[1921]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:37:07.339036 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 15:37:07.438051 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:37:07.439998 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:37:07.480809 systemd-logind[1876]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:37:07.480839 systemd-logind[1876]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 15:37:07.480862 systemd-logind[1876]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:37:07.484670 systemd-logind[1876]: New seat seat0. Feb 13 15:37:07.489602 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:37:07.504023 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 15:37:07.521365 extend-filesystems[1921]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 15:37:07.521365 extend-filesystems[1921]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:37:07.521365 extend-filesystems[1921]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 15:37:07.528058 extend-filesystems[1867]: Resized filesystem in /dev/nvme0n1p9 Feb 13 15:37:07.529293 bash[1940]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:37:07.521852 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:37:07.522380 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:37:07.529811 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:37:07.566511 systemd[1]: Starting sshkeys.service... Feb 13 15:37:07.636188 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
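The extend-filesystems run above is an ordinary online ext4 grow: the kernel lines show /dev/nvme0n1p9 going from 553472 to 1489915 4k blocks while mounted at /. Done by hand, the equivalent is roughly (a sketch; growpart from cloud-utils is an assumption, any partition-resizing tool works):

    # grow partition 9 to fill the disk, then grow the mounted filesystem
    growpart /dev/nvme0n1 9
    resize2fs /dev/nvme0n1p9    # ext4 supports online grow; / stays mounted rw

Note also the 404 on meta-data/ipv6 above: IMDS returns 404 for attributes the instance simply does not have, and coreos-metadata treats that as non-fatal.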
Feb 13 15:37:07.669102 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:37:07.687011 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1739) Feb 13 15:37:07.831165 dbus-daemon[1865]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:37:07.834121 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 15:37:07.837190 dbus-daemon[1865]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1911 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 15:37:07.857587 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 15:37:07.863914 coreos-metadata[1951]: Feb 13 15:37:07.863 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:37:07.871507 coreos-metadata[1951]: Feb 13 15:37:07.871 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 15:37:07.881152 coreos-metadata[1951]: Feb 13 15:37:07.880 INFO Fetch successful Feb 13 15:37:07.881152 coreos-metadata[1951]: Feb 13 15:37:07.880 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 15:37:07.885566 coreos-metadata[1951]: Feb 13 15:37:07.884 INFO Fetch successful Feb 13 15:37:07.890852 unknown[1951]: wrote ssh authorized keys file for user: core Feb 13 15:37:07.959428 polkitd[1976]: Started polkitd version 121 Feb 13 15:37:07.964054 update-ssh-keys[1989]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:37:07.965386 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:37:07.975738 systemd[1]: Finished sshkeys.service. Feb 13 15:37:07.991973 polkitd[1976]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 15:37:07.992147 polkitd[1976]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 15:37:07.999236 polkitd[1976]: Finished loading, compiling and executing 2 rules Feb 13 15:37:08.000191 dbus-daemon[1865]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 15:37:08.000483 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 15:37:08.001266 polkitd[1976]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 15:37:08.082387 locksmithd[1917]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:37:08.104789 systemd-hostnamed[1911]: Hostname set to (transient) Feb 13 15:37:08.104911 systemd-resolved[1685]: System hostname changed to 'ip-172-31-28-54'. 
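locksmithd comes up with strategy="reboot" while update-engine sits idle with its next check scheduled; both can be interrogated from a shell. A sketch, assuming the stock Flatcar client tools:

    update_engine_client -status    # e.g. CURRENT_OP=UPDATE_STATUS_IDLE
    locksmithctl status             # reboot-lock semaphore and current holders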
Feb 13 15:37:08.136784 ntpd[1869]: bind(24) AF_INET6 fe80::405:dfff:fe73:284f%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:37:08.136826 ntpd[1869]: unable to create socket on eth0 (6) for fe80::405:dfff:fe73:284f%2#123
Feb 13 15:37:08.136842 ntpd[1869]: failed to init interface for address fe80::405:dfff:fe73:284f%2
Feb 13 15:37:08.182847 systemd-networkd[1742]: eth0: Gained IPv6LL
Feb 13 15:37:08.193645 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:37:08.198553 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:37:08.212342 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 15:37:08.220311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:37:08.225344 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:37:08.284305 containerd[1895]: time="2025-02-13T15:37:08.284218248Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:37:08.401661 containerd[1895]: time="2025-02-13T15:37:08.401515624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:37:08.411451 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:37:08.415158 containerd[1895]: time="2025-02-13T15:37:08.414227646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:37:08.415158 containerd[1895]: time="2025-02-13T15:37:08.414278813Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:37:08.415158 containerd[1895]: time="2025-02-13T15:37:08.414321280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:37:08.415158 containerd[1895]: time="2025-02-13T15:37:08.414657252Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:37:08.415158 containerd[1895]: time="2025-02-13T15:37:08.414681819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:37:08.415158 containerd[1895]: time="2025-02-13T15:37:08.414784677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:37:08.415158 containerd[1895]: time="2025-02-13T15:37:08.414802280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:37:08.417128 containerd[1895]: time="2025-02-13T15:37:08.415698779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..."
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:37:08.417128 containerd[1895]: time="2025-02-13T15:37:08.416947756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:37:08.417128 containerd[1895]: time="2025-02-13T15:37:08.416978067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:37:08.417128 containerd[1895]: time="2025-02-13T15:37:08.417068761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:37:08.417488 containerd[1895]: time="2025-02-13T15:37:08.417469067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:37:08.422964 containerd[1895]: time="2025-02-13T15:37:08.421600834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:37:08.422964 containerd[1895]: time="2025-02-13T15:37:08.421805855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:37:08.422964 containerd[1895]: time="2025-02-13T15:37:08.421830882Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:37:08.422964 containerd[1895]: time="2025-02-13T15:37:08.421947581Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:37:08.422964 containerd[1895]: time="2025-02-13T15:37:08.422046539Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:37:08.443335 sshd_keygen[1891]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:37:08.445268 containerd[1895]: time="2025-02-13T15:37:08.445026018Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:37:08.445268 containerd[1895]: time="2025-02-13T15:37:08.445100463Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:37:08.445268 containerd[1895]: time="2025-02-13T15:37:08.445123621Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:37:08.445268 containerd[1895]: time="2025-02-13T15:37:08.445146308Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:37:08.445268 containerd[1895]: time="2025-02-13T15:37:08.445169340Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:37:08.445806 containerd[1895]: time="2025-02-13T15:37:08.445727968Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:37:08.464053 containerd[1895]: time="2025-02-13T15:37:08.463821361Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:37:08.464446 containerd[1895]: time="2025-02-13T15:37:08.464277167Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 15:37:08.464446 containerd[1895]: time="2025-02-13T15:37:08.464309406Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:37:08.464446 containerd[1895]: time="2025-02-13T15:37:08.464333587Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:37:08.464446 containerd[1895]: time="2025-02-13T15:37:08.464354893Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:37:08.464446 containerd[1895]: time="2025-02-13T15:37:08.464372406Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:37:08.464446 containerd[1895]: time="2025-02-13T15:37:08.464389159Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:37:08.464446 containerd[1895]: time="2025-02-13T15:37:08.464407546Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:37:08.464446 containerd[1895]: time="2025-02-13T15:37:08.464428619Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:37:08.465054 containerd[1895]: time="2025-02-13T15:37:08.464452026Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:37:08.465054 containerd[1895]: time="2025-02-13T15:37:08.464470362Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:37:08.465054 containerd[1895]: time="2025-02-13T15:37:08.464487205Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:37:08.465054 containerd[1895]: time="2025-02-13T15:37:08.464515015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.465054 containerd[1895]: time="2025-02-13T15:37:08.464535145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.465054 containerd[1895]: time="2025-02-13T15:37:08.464553878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.465054 containerd[1895]: time="2025-02-13T15:37:08.464572569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.465054 containerd[1895]: time="2025-02-13T15:37:08.464590386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.465054 containerd[1895]: time="2025-02-13T15:37:08.464609410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.465054 containerd[1895]: time="2025-02-13T15:37:08.464626690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.465054 containerd[1895]: time="2025-02-13T15:37:08.464646945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.465054 containerd[1895]: time="2025-02-13T15:37:08.464665215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 15:37:08.465054 containerd[1895]: time="2025-02-13T15:37:08.464747946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.465054 containerd[1895]: time="2025-02-13T15:37:08.464885014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.465565 containerd[1895]: time="2025-02-13T15:37:08.464907427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.465565 containerd[1895]: time="2025-02-13T15:37:08.464926170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.465565 containerd[1895]: time="2025-02-13T15:37:08.464948606Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:37:08.468459 containerd[1895]: time="2025-02-13T15:37:08.468027554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.468459 containerd[1895]: time="2025-02-13T15:37:08.468080130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.468459 containerd[1895]: time="2025-02-13T15:37:08.468101638Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:37:08.468459 containerd[1895]: time="2025-02-13T15:37:08.468182557Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:37:08.468459 containerd[1895]: time="2025-02-13T15:37:08.468209989Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:37:08.468459 containerd[1895]: time="2025-02-13T15:37:08.468226580Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:37:08.468459 containerd[1895]: time="2025-02-13T15:37:08.468244907Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:37:08.468459 containerd[1895]: time="2025-02-13T15:37:08.468259960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:37:08.468459 containerd[1895]: time="2025-02-13T15:37:08.468283023Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:37:08.468459 containerd[1895]: time="2025-02-13T15:37:08.468298460Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:37:08.468459 containerd[1895]: time="2025-02-13T15:37:08.468313972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:37:08.469322 containerd[1895]: time="2025-02-13T15:37:08.468739336Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:37:08.469322 containerd[1895]: time="2025-02-13T15:37:08.468884970Z" level=info msg="Connect containerd service" Feb 13 15:37:08.469322 containerd[1895]: time="2025-02-13T15:37:08.468937658Z" level=info msg="using legacy CRI server" Feb 13 15:37:08.469322 containerd[1895]: time="2025-02-13T15:37:08.468948551Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:37:08.472450 amazon-ssm-agent[2056]: Initializing new seelog logger Feb 13 15:37:08.472732 amazon-ssm-agent[2056]: New Seelog Logger Creation Complete Feb 13 15:37:08.472732 amazon-ssm-agent[2056]: 2025/02/13 15:37:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:37:08.472732 amazon-ssm-agent[2056]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
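The giant CRI line above is containerd logging its effective configuration, not a file on disk; the notable settings are the overlayfs snapshotter, SystemdCgroup:true on the runc runtime (cgroup management delegated to systemd), and SandboxImage registry.k8s.io/pause:3.8. As a containerd 1.7 config.toml fragment that corresponds to roughly the following (a sketch, written here as a heredoc to the default path):

    cat <<'EOF' >/etc/containerd/config.toml
    # mirrors the options visible in the CRI config dump above
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
    EOF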
Feb 13 15:37:08.474289 containerd[1895]: time="2025-02-13T15:37:08.474201105Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:37:08.481365 containerd[1895]: time="2025-02-13T15:37:08.474892241Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:37:08.481365 containerd[1895]: time="2025-02-13T15:37:08.475321763Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:37:08.481365 containerd[1895]: time="2025-02-13T15:37:08.475652237Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:37:08.481365 containerd[1895]: time="2025-02-13T15:37:08.475715668Z" level=info msg="Start subscribing containerd event" Feb 13 15:37:08.481365 containerd[1895]: time="2025-02-13T15:37:08.475765921Z" level=info msg="Start recovering state" Feb 13 15:37:08.481365 containerd[1895]: time="2025-02-13T15:37:08.475846108Z" level=info msg="Start event monitor" Feb 13 15:37:08.481365 containerd[1895]: time="2025-02-13T15:37:08.475865825Z" level=info msg="Start snapshots syncer" Feb 13 15:37:08.481365 containerd[1895]: time="2025-02-13T15:37:08.475879436Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:37:08.481365 containerd[1895]: time="2025-02-13T15:37:08.475890784Z" level=info msg="Start streaming server" Feb 13 15:37:08.481365 containerd[1895]: time="2025-02-13T15:37:08.475959695Z" level=info msg="containerd successfully booted in 0.193171s" Feb 13 15:37:08.481873 amazon-ssm-agent[2056]: 2025/02/13 15:37:08 processing appconfig overrides Feb 13 15:37:08.481873 amazon-ssm-agent[2056]: 2025/02/13 15:37:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:37:08.481873 amazon-ssm-agent[2056]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:37:08.481873 amazon-ssm-agent[2056]: 2025/02/13 15:37:08 processing appconfig overrides Feb 13 15:37:08.481873 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO Proxy environment variables: Feb 13 15:37:08.476407 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:37:08.483228 amazon-ssm-agent[2056]: 2025/02/13 15:37:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:37:08.483228 amazon-ssm-agent[2056]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:37:08.483228 amazon-ssm-agent[2056]: 2025/02/13 15:37:08 processing appconfig overrides Feb 13 15:37:08.492498 amazon-ssm-agent[2056]: 2025/02/13 15:37:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:37:08.492498 amazon-ssm-agent[2056]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:37:08.492498 amazon-ssm-agent[2056]: 2025/02/13 15:37:08 processing appconfig overrides Feb 13 15:37:08.536838 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:37:08.551153 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:37:08.566410 systemd[1]: Started sshd@0-172.31.28.54:22-139.178.89.65:54856.service - OpenSSH per-connection server daemon (139.178.89.65:54856). Feb 13 15:37:08.584126 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO https_proxy: Feb 13 15:37:08.599494 systemd[1]: issuegen.service: Deactivated successfully. 
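The "failed to load cni during init" error above is expected at this stage: /etc/cni/net.d is empty until a CNI plugin or kubeadm drops a config there, and containerd retries later. A minimal bridge conflist that would satisfy the loader looks like this (a sketch; the name, bridge device, and subnet are arbitrary illustrative values):

    cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "examplenet",
      "plugins": [{
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
      }]
    }
    EOF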
Feb 13 15:37:08.599877 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:37:08.622389 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:37:08.670824 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:37:08.687424 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:37:08.692105 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO http_proxy: Feb 13 15:37:08.702110 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:37:08.704138 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:37:08.790069 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO no_proxy: Feb 13 15:37:08.877925 sshd[2097]: Accepted publickey for core from 139.178.89.65 port 54856 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:37:08.882267 sshd-session[2097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:08.894121 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO Checking if agent identity type OnPrem can be assumed Feb 13 15:37:08.914121 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:37:08.928372 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:37:08.936315 systemd-logind[1876]: New session 1 of user core. Feb 13 15:37:08.971568 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:37:08.986525 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:37:08.993012 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO Checking if agent identity type EC2 can be assumed Feb 13 15:37:09.014299 (systemd)[2109]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:37:09.095696 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO Agent will take identity from EC2 Feb 13 15:37:09.195363 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:37:09.276672 tar[1892]: linux-amd64/LICENSE Feb 13 15:37:09.276672 tar[1892]: linux-amd64/README.md Feb 13 15:37:09.294783 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:37:09.325521 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:37:09.356578 systemd[2109]: Queued start job for default target default.target. Feb 13 15:37:09.362568 systemd[2109]: Created slice app.slice - User Application Slice. Feb 13 15:37:09.362613 systemd[2109]: Reached target paths.target - Paths. Feb 13 15:37:09.362636 systemd[2109]: Reached target timers.target - Timers. Feb 13 15:37:09.377156 systemd[2109]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:37:09.391479 systemd[2109]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:37:09.391631 systemd[2109]: Reached target sockets.target - Sockets. Feb 13 15:37:09.391654 systemd[2109]: Reached target basic.target - Basic System. Feb 13 15:37:09.391705 systemd[2109]: Reached target default.target - Main User Target. Feb 13 15:37:09.391753 systemd[2109]: Startup finished in 360ms. Feb 13 15:37:09.391870 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:37:09.395145 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:37:09.402224 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 15:37:09.494516 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 15:37:09.569246 systemd[1]: Started sshd@1-172.31.28.54:22-139.178.89.65:54858.service - OpenSSH per-connection server daemon (139.178.89.65:54858). Feb 13 15:37:09.596546 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Feb 13 15:37:09.697090 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 15:37:09.797615 sshd[2124]: Accepted publickey for core from 139.178.89.65 port 54858 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:37:09.799047 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 15:37:09.799554 sshd-session[2124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:09.802910 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO [Registrar] Starting registrar module Feb 13 15:37:09.802910 amazon-ssm-agent[2056]: 2025-02-13 15:37:08 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 15:37:09.802910 amazon-ssm-agent[2056]: 2025-02-13 15:37:09 INFO [EC2Identity] EC2 registration was successful. Feb 13 15:37:09.802910 amazon-ssm-agent[2056]: 2025-02-13 15:37:09 INFO [CredentialRefresher] credentialRefresher has started Feb 13 15:37:09.803222 amazon-ssm-agent[2056]: 2025-02-13 15:37:09 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 15:37:09.803222 amazon-ssm-agent[2056]: 2025-02-13 15:37:09 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 15:37:09.807945 systemd-logind[1876]: New session 2 of user core. Feb 13 15:37:09.811172 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:37:09.898670 amazon-ssm-agent[2056]: 2025-02-13 15:37:09 INFO [CredentialRefresher] Next credential rotation will be in 30.86666025805 minutes Feb 13 15:37:09.938791 sshd[2126]: Connection closed by 139.178.89.65 port 54858 Feb 13 15:37:09.939429 sshd-session[2124]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:09.944789 systemd[1]: sshd@1-172.31.28.54:22-139.178.89.65:54858.service: Deactivated successfully. Feb 13 15:37:09.948556 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:37:09.949759 systemd-logind[1876]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:37:09.951757 systemd-logind[1876]: Removed session 2. Feb 13 15:37:09.970314 systemd[1]: Started sshd@2-172.31.28.54:22-139.178.89.65:54870.service - OpenSSH per-connection server daemon (139.178.89.65:54870). Feb 13 15:37:10.165925 sshd[2131]: Accepted publickey for core from 139.178.89.65 port 54870 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:37:10.167825 sshd-session[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:10.173842 systemd-logind[1876]: New session 3 of user core. Feb 13 15:37:10.179172 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:37:10.321372 sshd[2133]: Connection closed by 139.178.89.65 port 54870 Feb 13 15:37:10.323143 sshd-session[2131]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:10.326341 systemd[1]: sshd@2-172.31.28.54:22-139.178.89.65:54870.service: Deactivated successfully. Feb 13 15:37:10.328873 systemd[1]: session-3.scope: Deactivated successfully. 
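The short-lived SSH sessions above are logind-tracked logins opening and closing in quick succession; the same state is visible through logind's own CLI:

    loginctl list-sessions            # one row per active session: id, uid, user, seat
    loginctl show-session 3 -p State  # inspect a single session by id while it exists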
Feb 13 15:37:10.330702 systemd-logind[1876]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:37:10.332031 systemd-logind[1876]: Removed session 3.
Feb 13 15:37:10.365131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:37:10.367841 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:37:10.369924 systemd[1]: Startup finished in 646ms (kernel) + 7.975s (initrd) + 8.735s (userspace) = 17.357s.
Feb 13 15:37:10.493012 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:37:10.820373 amazon-ssm-agent[2056]: 2025-02-13 15:37:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 15:37:10.925045 amazon-ssm-agent[2056]: 2025-02-13 15:37:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2152) started
Feb 13 15:37:11.023454 amazon-ssm-agent[2056]: 2025-02-13 15:37:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 15:37:11.137063 ntpd[1869]: Listen normally on 7 eth0 [fe80::405:dfff:fe73:284f%2]:123
Feb 13 15:37:11.525403 kubelet[2142]: E0213 15:37:11.525269 2142 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:37:11.528639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:37:11.528839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:37:11.529320 systemd[1]: kubelet.service: Consumed 1.057s CPU time.
Feb 13 15:37:15.058422 systemd-resolved[1685]: Clock change detected. Flushing caches.
Feb 13 15:37:21.283482 systemd[1]: Started sshd@3-172.31.28.54:22-139.178.89.65:50224.service - OpenSSH per-connection server daemon (139.178.89.65:50224).
Feb 13 15:37:21.463168 sshd[2165]: Accepted publickey for core from 139.178.89.65 port 50224 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:37:21.465419 sshd-session[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:21.472465 systemd-logind[1876]: New session 4 of user core.
Feb 13 15:37:21.486348 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:37:21.613507 sshd[2167]: Connection closed by 139.178.89.65 port 50224
Feb 13 15:37:21.614412 sshd-session[2165]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:21.620237 systemd[1]: sshd@3-172.31.28.54:22-139.178.89.65:50224.service: Deactivated successfully.
Feb 13 15:37:21.622251 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:37:21.626570 systemd-logind[1876]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:37:21.627952 systemd-logind[1876]: Removed session 4.
Feb 13 15:37:21.657281 systemd[1]: Started sshd@4-172.31.28.54:22-139.178.89.65:50226.service - OpenSSH per-connection server daemon (139.178.89.65:50226).
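The kubelet crash loop above is benign on a node that has not been bootstrapped yet: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, so kubelet restarts until that happens. Purely for illustration, a minimal hand-rolled KubeletConfiguration of the expected shape (a sketch; cgroupDriver: systemd matches the SystemdCgroup=true runc option logged earlier):

    cat <<'EOF' >/var/lib/kubelet/config.yaml
    # illustrative only - kubeadm generates the real file during bootstrap
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF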
Feb 13 15:37:21.863943 sshd[2172]: Accepted publickey for core from 139.178.89.65 port 50226 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:37:21.865693 sshd-session[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:21.871369 systemd-logind[1876]: New session 5 of user core. Feb 13 15:37:21.883308 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:37:21.998413 sshd[2174]: Connection closed by 139.178.89.65 port 50226 Feb 13 15:37:21.999240 sshd-session[2172]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:22.005007 systemd-logind[1876]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:37:22.006168 systemd[1]: sshd@4-172.31.28.54:22-139.178.89.65:50226.service: Deactivated successfully. Feb 13 15:37:22.010780 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:37:22.016977 systemd-logind[1876]: Removed session 5. Feb 13 15:37:22.049528 systemd[1]: Started sshd@5-172.31.28.54:22-139.178.89.65:50234.service - OpenSSH per-connection server daemon (139.178.89.65:50234). Feb 13 15:37:22.243692 sshd[2179]: Accepted publickey for core from 139.178.89.65 port 50234 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:37:22.245028 sshd-session[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:22.251944 systemd-logind[1876]: New session 6 of user core. Feb 13 15:37:22.261282 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:37:22.383148 sshd[2181]: Connection closed by 139.178.89.65 port 50234 Feb 13 15:37:22.384361 sshd-session[2179]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:22.388360 systemd[1]: sshd@5-172.31.28.54:22-139.178.89.65:50234.service: Deactivated successfully. Feb 13 15:37:22.390901 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:37:22.392631 systemd-logind[1876]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:37:22.401968 systemd-logind[1876]: Removed session 6. Feb 13 15:37:22.435482 systemd[1]: Started sshd@6-172.31.28.54:22-139.178.89.65:50250.service - OpenSSH per-connection server daemon (139.178.89.65:50250). Feb 13 15:37:22.583246 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:37:22.595650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:37:22.624978 sshd[2186]: Accepted publickey for core from 139.178.89.65 port 50250 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:37:22.625634 sshd-session[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:22.639651 systemd-logind[1876]: New session 7 of user core. Feb 13 15:37:22.641990 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:37:22.769364 sudo[2192]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:37:22.772279 sudo[2192]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:37:22.788625 sudo[2192]: pam_unix(sudo:session): session closed for user root Feb 13 15:37:22.813867 sshd[2191]: Connection closed by 139.178.89.65 port 50250 Feb 13 15:37:22.816820 sshd-session[2186]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:22.823686 systemd[1]: sshd@6-172.31.28.54:22-139.178.89.65:50250.service: Deactivated successfully. Feb 13 15:37:22.828715 systemd[1]: session-7.scope: Deactivated successfully. 
Feb 13 15:37:22.830462 systemd-logind[1876]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:37:22.831977 systemd-logind[1876]: Removed session 7. Feb 13 15:37:22.864490 systemd[1]: Started sshd@7-172.31.28.54:22-139.178.89.65:50258.service - OpenSSH per-connection server daemon (139.178.89.65:50258). Feb 13 15:37:22.892295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:37:22.895819 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:37:22.966953 kubelet[2203]: E0213 15:37:22.966868 2203 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:37:22.973788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:37:22.974381 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:37:23.055450 sshd[2199]: Accepted publickey for core from 139.178.89.65 port 50258 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:37:23.057182 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:23.064126 systemd-logind[1876]: New session 8 of user core. Feb 13 15:37:23.081374 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:37:23.193803 sudo[2214]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:37:23.195087 sudo[2214]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:37:23.199857 sudo[2214]: pam_unix(sudo:session): session closed for user root Feb 13 15:37:23.210143 sudo[2213]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:37:23.210694 sudo[2213]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:37:23.232523 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:37:23.279004 augenrules[2236]: No rules Feb 13 15:37:23.280760 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:37:23.281115 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:37:23.282718 sudo[2213]: pam_unix(sudo:session): session closed for user root Feb 13 15:37:23.305293 sshd[2212]: Connection closed by 139.178.89.65 port 50258 Feb 13 15:37:23.306247 sshd-session[2199]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:23.311826 systemd-logind[1876]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:37:23.312982 systemd[1]: sshd@7-172.31.28.54:22-139.178.89.65:50258.service: Deactivated successfully. Feb 13 15:37:23.315872 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:37:23.317143 systemd-logind[1876]: Removed session 8. Feb 13 15:37:23.339033 systemd[1]: Started sshd@8-172.31.28.54:22-139.178.89.65:50264.service - OpenSSH per-connection server daemon (139.178.89.65:50264). 
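The sudo commands above remove the shipped audit rule files and restart audit-rules, after which augenrules reports "No rules". The same pair of tools behind that unit can be driven directly (a sketch):

    augenrules --load    # merge /etc/audit/rules.d/*.rules and load the result
    auditctl -l          # list rules currently active in the kernel (here: none)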
Feb 13 15:37:23.548095 sshd[2244]: Accepted publickey for core from 139.178.89.65 port 50264 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:37:23.550645 sshd-session[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:23.558117 systemd-logind[1876]: New session 9 of user core. Feb 13 15:37:23.566440 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:37:23.675436 sudo[2247]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:37:23.675916 sudo[2247]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:37:24.150870 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:37:24.152567 (dockerd)[2266]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:37:24.542333 dockerd[2266]: time="2025-02-13T15:37:24.541146921Z" level=info msg="Starting up" Feb 13 15:37:24.673350 dockerd[2266]: time="2025-02-13T15:37:24.673302449Z" level=info msg="Loading containers: start." Feb 13 15:37:24.884326 kernel: Initializing XFRM netlink socket Feb 13 15:37:24.943260 (udev-worker)[2372]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:37:25.012973 systemd-networkd[1742]: docker0: Link UP Feb 13 15:37:25.047029 dockerd[2266]: time="2025-02-13T15:37:25.046986789Z" level=info msg="Loading containers: done." Feb 13 15:37:25.071649 dockerd[2266]: time="2025-02-13T15:37:25.071211412Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:37:25.071649 dockerd[2266]: time="2025-02-13T15:37:25.071319360Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:37:25.071649 dockerd[2266]: time="2025-02-13T15:37:25.071439832Z" level=info msg="Daemon has completed initialization" Feb 13 15:37:25.113362 dockerd[2266]: time="2025-02-13T15:37:25.113287379Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:37:25.113334 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:37:26.338383 containerd[1895]: time="2025-02-13T15:37:26.338328966Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:37:27.012511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3595078528.mount: Deactivated successfully. 
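Once dockerd logs "API listen on /run/docker.sock", the daemon answers its HTTP API over that Unix socket, which is what the docker CLI uses under the hood. A quick probe (a sketch relying on curl's --unix-socket support):

    curl -s --unix-socket /run/docker.sock http://localhost/version
    docker version    # the equivalent through the CLI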
Feb 13 15:37:29.702137 containerd[1895]: time="2025-02-13T15:37:29.702084151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:29.703371 containerd[1895]: time="2025-02-13T15:37:29.703323760Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 15:37:29.705415 containerd[1895]: time="2025-02-13T15:37:29.704125218Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:29.706832 containerd[1895]: time="2025-02-13T15:37:29.706796824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:29.708342 containerd[1895]: time="2025-02-13T15:37:29.708302911Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 3.369921634s" Feb 13 15:37:29.708438 containerd[1895]: time="2025-02-13T15:37:29.708342169Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 15:37:29.735638 containerd[1895]: time="2025-02-13T15:37:29.735550982Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 15:37:32.540856 containerd[1895]: time="2025-02-13T15:37:32.540805027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:32.542941 containerd[1895]: time="2025-02-13T15:37:32.542748345Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 15:37:32.545804 containerd[1895]: time="2025-02-13T15:37:32.545307915Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:32.553341 containerd[1895]: time="2025-02-13T15:37:32.553294996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:32.557654 containerd[1895]: time="2025-02-13T15:37:32.556769661Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 2.821172687s" Feb 13 15:37:32.557654 containerd[1895]: time="2025-02-13T15:37:32.556820575Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 
15:37:32.584900 containerd[1895]: time="2025-02-13T15:37:32.584855140Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 15:37:33.224271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:37:33.237356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:37:33.460757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:37:33.467109 (kubelet)[2533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:37:33.519788 kubelet[2533]: E0213 15:37:33.519626 2533 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:37:33.522517 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:37:33.522712 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:37:34.884508 containerd[1895]: time="2025-02-13T15:37:34.884456900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:34.886182 containerd[1895]: time="2025-02-13T15:37:34.886132697Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 15:37:34.887205 containerd[1895]: time="2025-02-13T15:37:34.887091819Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:34.890289 containerd[1895]: time="2025-02-13T15:37:34.890233950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:34.891866 containerd[1895]: time="2025-02-13T15:37:34.891492619Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 2.306396606s" Feb 13 15:37:34.891866 containerd[1895]: time="2025-02-13T15:37:34.891547527Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 15:37:34.924408 containerd[1895]: time="2025-02-13T15:37:34.924369027Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:37:36.135330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1815675784.mount: Deactivated successfully. 
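The PullImage/Pulled pairs above come from containerd's CRI plugin; the same images can be pulled or listed directly with the bundled ctr client, remembering that CRI-managed images live in the k8s.io namespace (a sketch):

    ctr --namespace k8s.io images pull registry.k8s.io/kube-scheduler:v1.30.10
    ctr --namespace k8s.io images ls -q | head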
Feb 13 15:37:36.826128 containerd[1895]: time="2025-02-13T15:37:36.826078933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:36.828399 containerd[1895]: time="2025-02-13T15:37:36.828006882Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 15:37:36.830941 containerd[1895]: time="2025-02-13T15:37:36.830749612Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:36.841671 containerd[1895]: time="2025-02-13T15:37:36.841611304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:36.846286 containerd[1895]: time="2025-02-13T15:37:36.844702660Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 1.920288411s" Feb 13 15:37:36.846286 containerd[1895]: time="2025-02-13T15:37:36.844751467Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 15:37:36.921247 containerd[1895]: time="2025-02-13T15:37:36.921208760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:37:37.545015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3399553133.mount: Deactivated successfully. 
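The ImageCreate/Pulled pairs above are containerd's CRI image service at work: each pull logs the tag, the immutable repo digest, and the wall-clock duration, and the var-lib-containerd-tmpmounts units systemd deactivates afterwards are containerd's temporary layer mounts being cleaned up after unpack. A minimal sketch of the same kind of pull through containerd's Go client, assuming the default socket path and the "k8s.io" namespace that CRI-managed images live in (image ref taken from the log; error handling kept short):

    package main

    import (
    	"context"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Default containerd socket on a stock node; an assumption here.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Same image the log shows being pulled; WithPullUnpack also
    	// unpacks the layers into a snapshot so a container can start.
    	img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.30.10",
    		containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("pulled %s", img.Name())
    }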
Feb 13 15:37:38.903669 containerd[1895]: time="2025-02-13T15:37:38.903612679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:38.905757 containerd[1895]: time="2025-02-13T15:37:38.905492388Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:37:38.908233 containerd[1895]: time="2025-02-13T15:37:38.907828979Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:38.912292 containerd[1895]: time="2025-02-13T15:37:38.912250963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:38.913813 containerd[1895]: time="2025-02-13T15:37:38.913697244Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.991492316s" Feb 13 15:37:38.913967 containerd[1895]: time="2025-02-13T15:37:38.913943087Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:37:38.941500 containerd[1895]: time="2025-02-13T15:37:38.941464344Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:37:39.061499 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 15:37:39.460974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422255445.mount: Deactivated successfully. 
Feb 13 15:37:39.472306 containerd[1895]: time="2025-02-13T15:37:39.472198269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:39.474284 containerd[1895]: time="2025-02-13T15:37:39.474070248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 15:37:39.477828 containerd[1895]: time="2025-02-13T15:37:39.476392551Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:39.481188 containerd[1895]: time="2025-02-13T15:37:39.479948864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:39.481188 containerd[1895]: time="2025-02-13T15:37:39.480780647Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 539.276997ms" Feb 13 15:37:39.481188 containerd[1895]: time="2025-02-13T15:37:39.480814665Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 15:37:39.509546 containerd[1895]: time="2025-02-13T15:37:39.509513488Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:37:40.074600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704426399.mount: Deactivated successfully. Feb 13 15:37:43.376946 containerd[1895]: time="2025-02-13T15:37:43.376881705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:43.379319 containerd[1895]: time="2025-02-13T15:37:43.378908973Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 15:37:43.381964 containerd[1895]: time="2025-02-13T15:37:43.381366908Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:43.385852 containerd[1895]: time="2025-02-13T15:37:43.385807155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:37:43.387804 containerd[1895]: time="2025-02-13T15:37:43.387760033Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.878214917s" Feb 13 15:37:43.388004 containerd[1895]: time="2025-02-13T15:37:43.387808364Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 15:37:43.773011 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
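The kubelet exits within a second of each start (run.go:74 at 15:37:33 above and again at 15:37:44 below) because /var/lib/kubelet/config.yaml does not exist yet; kubeadm writes that file during init, so a crash-loop while the control-plane images are still being pre-pulled is expected rather than a fault. A sketch of the check that fails, with a hypothetical minimal config of the kind kubeadm eventually drops there (field values are assumptions chosen to match this log, not recovered from the node):

    package main

    import (
    	"fmt"
    	"os"
    )

    const kubeletConfigPath = "/var/lib/kubelet/config.yaml"

    // Hypothetical minimal KubeletConfiguration; kubeadm generates the
    // real one. cgroupDriver matches the "CgroupDriver":"systemd" in the
    // container-manager dump later in this log, and staticPodPath matches
    // the "Adding static pod path" line.
    const exampleConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    `

    func main() {
    	if _, err := os.Stat(kubeletConfigPath); os.IsNotExist(err) {
    		// The exact failure in the log: "open /var/lib/kubelet/config.yaml:
    		// no such file or directory" -> kubelet exits 1, systemd restarts it.
    		fmt.Println("kubelet config missing; expected before kubeadm init runs")
    		return
    	}
    	fmt.Println("kubelet config present")
    	_ = exampleConfig // illustration only; written by kubeadm on real nodes
    }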
Feb 13 15:37:43.779086 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:37:44.072623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:37:44.083630 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:37:44.174456 kubelet[2697]: E0213 15:37:44.174385 2697 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:37:44.179153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:37:44.180002 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:37:47.540067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:37:47.547418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:37:47.584355 systemd[1]: Reloading requested from client PID 2752 ('systemctl') (unit session-9.scope)... Feb 13 15:37:47.584374 systemd[1]: Reloading... Feb 13 15:37:47.738076 zram_generator::config[2793]: No configuration found. Feb 13 15:37:47.919017 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:37:48.052656 systemd[1]: Reloading finished in 467 ms. Feb 13 15:37:48.144144 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:37:48.144263 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:37:48.144710 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:37:48.150681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:37:48.343712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:37:48.359627 (kubelet)[2852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:37:48.426916 kubelet[2852]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:37:48.428188 kubelet[2852]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:37:48.428188 kubelet[2852]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:37:48.433108 kubelet[2852]: I0213 15:37:48.432273 2852 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:37:48.740207 kubelet[2852]: I0213 15:37:48.739992 2852 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:37:48.740207 kubelet[2852]: I0213 15:37:48.740017 2852 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:37:48.740986 kubelet[2852]: I0213 15:37:48.740956 2852 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:37:48.768904 kubelet[2852]: I0213 15:37:48.768857 2852 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:37:48.770820 kubelet[2852]: E0213 15:37:48.770787 2852 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.28.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:48.800712 kubelet[2852]: I0213 15:37:48.800675 2852 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:37:48.801059 kubelet[2852]: I0213 15:37:48.801006 2852 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:37:48.801274 kubelet[2852]: I0213 15:37:48.801067 2852 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-54","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:37:48.802332 kubelet[2852]: I0213 15:37:48.802307 2852 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:37:48.802332 kubelet[2852]: I0213 15:37:48.802336 2852 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:37:48.802507 kubelet[2852]: I0213 15:37:48.802489 2852 state_mem.go:36] "Initialized new in-memory state 
store" Feb 13 15:37:48.803776 kubelet[2852]: I0213 15:37:48.803753 2852 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:37:48.803864 kubelet[2852]: I0213 15:37:48.803779 2852 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:37:48.803864 kubelet[2852]: I0213 15:37:48.803809 2852 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:37:48.803864 kubelet[2852]: I0213 15:37:48.803827 2852 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:37:48.811573 kubelet[2852]: W0213 15:37:48.811034 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:48.811573 kubelet[2852]: E0213 15:37:48.811151 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:48.811573 kubelet[2852]: W0213 15:37:48.811233 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-54&limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:48.811573 kubelet[2852]: E0213 15:37:48.811274 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-54&limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:48.812573 kubelet[2852]: I0213 15:37:48.812549 2852 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:37:48.815478 kubelet[2852]: I0213 15:37:48.815442 2852 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:37:48.815586 kubelet[2852]: W0213 15:37:48.815522 2852 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 15:37:48.816105 kubelet[2852]: I0213 15:37:48.816083 2852 server.go:1264] "Started kubelet" Feb 13 15:37:48.822622 kubelet[2852]: I0213 15:37:48.822573 2852 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:37:48.825748 kubelet[2852]: I0213 15:37:48.824794 2852 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:37:48.827493 kubelet[2852]: I0213 15:37:48.827324 2852 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:37:48.827775 kubelet[2852]: I0213 15:37:48.827753 2852 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:37:48.828075 kubelet[2852]: E0213 15:37:48.827936 2852 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.54:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.54:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-54.1823cea043670d93 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-54,UID:ip-172-31-28-54,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-54,},FirstTimestamp:2025-02-13 15:37:48.816059795 +0000 UTC m=+0.450245321,LastTimestamp:2025-02-13 15:37:48.816059795 +0000 UTC m=+0.450245321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-54,}" Feb 13 15:37:48.830296 kubelet[2852]: I0213 15:37:48.830277 2852 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:37:48.830508 kubelet[2852]: I0213 15:37:48.830490 2852 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:37:48.835265 kubelet[2852]: I0213 15:37:48.835242 2852 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:37:48.837081 kubelet[2852]: I0213 15:37:48.837021 2852 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:37:48.839353 kubelet[2852]: E0213 15:37:48.839321 2852 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-54?timeout=10s\": dial tcp 172.31.28.54:6443: connect: connection refused" interval="200ms" Feb 13 15:37:48.840880 kubelet[2852]: I0213 15:37:48.840855 2852 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:37:48.841136 kubelet[2852]: I0213 15:37:48.841117 2852 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:37:48.842979 kubelet[2852]: W0213 15:37:48.842905 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:48.842979 kubelet[2852]: E0213 15:37:48.842978 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.28.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:48.844319 kubelet[2852]: I0213 15:37:48.843686 2852 
factory.go:221] Registration of the containerd container factory successfully Feb 13 15:37:48.849684 kubelet[2852]: E0213 15:37:48.849653 2852 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:37:48.896876 kubelet[2852]: I0213 15:37:48.896824 2852 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:37:48.900315 kubelet[2852]: I0213 15:37:48.900280 2852 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:37:48.900619 kubelet[2852]: I0213 15:37:48.900490 2852 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:37:48.900619 kubelet[2852]: I0213 15:37:48.900526 2852 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:37:48.900756 kubelet[2852]: E0213 15:37:48.900594 2852 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:37:48.908232 kubelet[2852]: W0213 15:37:48.908121 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:48.908232 kubelet[2852]: E0213 15:37:48.908190 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.28.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:48.912285 kubelet[2852]: I0213 15:37:48.912256 2852 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:37:48.912285 kubelet[2852]: I0213 15:37:48.912276 2852 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:37:48.912471 kubelet[2852]: I0213 15:37:48.912296 2852 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:37:48.914686 kubelet[2852]: I0213 15:37:48.914660 2852 policy_none.go:49] "None policy: Start" Feb 13 15:37:48.915956 kubelet[2852]: I0213 15:37:48.915800 2852 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:37:48.916230 kubelet[2852]: I0213 15:37:48.915970 2852 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:37:48.926111 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:37:48.936613 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:37:48.943863 kubelet[2852]: I0213 15:37:48.943531 2852 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-54" Feb 13 15:37:48.943698 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
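Before admitting any pods, the kubelet creates the standard QoS cgroup hierarchy under systemd (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice above), matching the "CgroupsPerQOS":true and "CgroupDriver":"systemd" settings in the container-manager dump. A simplified restatement of the QoS classification rule that decides which slice a pod lands in; this is a sketch of the documented rules, not kubelet code:

    package main

    import "fmt"

    // qosClass restates the Kubernetes QoS rules: Guaranteed when every
    // container sets requests equal to limits, BestEffort when nothing
    // sets requests or limits, Burstable otherwise.
    func qosClass(requestsSet, limitsSet, allEqual bool) string {
    	switch {
    	case requestsSet && limitsSet && allEqual:
    		return "Guaranteed" // pod slice goes directly under kubepods.slice
    	case !requestsSet && !limitsSet:
    		return "BestEffort" // kubepods-besteffort.slice
    	default:
    		return "Burstable" // kubepods-burstable.slice
    	}
    }

    func main() {
    	// The static control-plane pods below land in kubepods-burstable-*
    	// slices, consistent with requests set without matching limits.
    	fmt.Println(qosClass(true, false, false))
    }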
Feb 13 15:37:48.945019 kubelet[2852]: E0213 15:37:48.943918 2852 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.54:6443/api/v1/nodes\": dial tcp 172.31.28.54:6443: connect: connection refused" node="ip-172-31-28-54" Feb 13 15:37:48.952421 kubelet[2852]: I0213 15:37:48.952341 2852 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:37:48.952849 kubelet[2852]: I0213 15:37:48.952806 2852 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:37:48.952959 kubelet[2852]: I0213 15:37:48.952941 2852 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:37:48.956970 kubelet[2852]: E0213 15:37:48.956936 2852 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-54\" not found" Feb 13 15:37:49.007653 kubelet[2852]: I0213 15:37:49.004433 2852 topology_manager.go:215] "Topology Admit Handler" podUID="dadb99872011cc8df8c049ea9c525f3d" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-28-54" Feb 13 15:37:49.008946 kubelet[2852]: I0213 15:37:49.008916 2852 topology_manager.go:215] "Topology Admit Handler" podUID="458bb572f2f9b589e0c20a5e48cf170d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-28-54" Feb 13 15:37:49.010794 kubelet[2852]: I0213 15:37:49.010760 2852 topology_manager.go:215] "Topology Admit Handler" podUID="bd96a96f9f6c00c144b7e246e200ff5a" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-28-54" Feb 13 15:37:49.021417 systemd[1]: Created slice kubepods-burstable-poddadb99872011cc8df8c049ea9c525f3d.slice - libcontainer container kubepods-burstable-poddadb99872011cc8df8c049ea9c525f3d.slice. Feb 13 15:37:49.037554 systemd[1]: Created slice kubepods-burstable-pod458bb572f2f9b589e0c20a5e48cf170d.slice - libcontainer container kubepods-burstable-pod458bb572f2f9b589e0c20a5e48cf170d.slice. 
Feb 13 15:37:49.038598 kubelet[2852]: I0213 15:37:49.038562 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dadb99872011cc8df8c049ea9c525f3d-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-54\" (UID: \"dadb99872011cc8df8c049ea9c525f3d\") " pod="kube-system/kube-apiserver-ip-172-31-28-54" Feb 13 15:37:49.038752 kubelet[2852]: I0213 15:37:49.038601 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/458bb572f2f9b589e0c20a5e48cf170d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-54\" (UID: \"458bb572f2f9b589e0c20a5e48cf170d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-54" Feb 13 15:37:49.038752 kubelet[2852]: I0213 15:37:49.038630 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/458bb572f2f9b589e0c20a5e48cf170d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-54\" (UID: \"458bb572f2f9b589e0c20a5e48cf170d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-54" Feb 13 15:37:49.038752 kubelet[2852]: I0213 15:37:49.038724 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/458bb572f2f9b589e0c20a5e48cf170d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-54\" (UID: \"458bb572f2f9b589e0c20a5e48cf170d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-54" Feb 13 15:37:49.039097 kubelet[2852]: I0213 15:37:49.038753 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd96a96f9f6c00c144b7e246e200ff5a-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-54\" (UID: \"bd96a96f9f6c00c144b7e246e200ff5a\") " pod="kube-system/kube-scheduler-ip-172-31-28-54" Feb 13 15:37:49.039097 kubelet[2852]: I0213 15:37:49.038778 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dadb99872011cc8df8c049ea9c525f3d-ca-certs\") pod \"kube-apiserver-ip-172-31-28-54\" (UID: \"dadb99872011cc8df8c049ea9c525f3d\") " pod="kube-system/kube-apiserver-ip-172-31-28-54" Feb 13 15:37:49.039097 kubelet[2852]: I0213 15:37:49.038806 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dadb99872011cc8df8c049ea9c525f3d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-54\" (UID: \"dadb99872011cc8df8c049ea9c525f3d\") " pod="kube-system/kube-apiserver-ip-172-31-28-54" Feb 13 15:37:49.039097 kubelet[2852]: I0213 15:37:49.038829 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/458bb572f2f9b589e0c20a5e48cf170d-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-54\" (UID: \"458bb572f2f9b589e0c20a5e48cf170d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-54" Feb 13 15:37:49.039097 kubelet[2852]: I0213 15:37:49.038860 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/458bb572f2f9b589e0c20a5e48cf170d-flexvolume-dir\") pod 
\"kube-controller-manager-ip-172-31-28-54\" (UID: \"458bb572f2f9b589e0c20a5e48cf170d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-54" Feb 13 15:37:49.040618 kubelet[2852]: E0213 15:37:49.040537 2852 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-54?timeout=10s\": dial tcp 172.31.28.54:6443: connect: connection refused" interval="400ms" Feb 13 15:37:49.047276 systemd[1]: Created slice kubepods-burstable-podbd96a96f9f6c00c144b7e246e200ff5a.slice - libcontainer container kubepods-burstable-podbd96a96f9f6c00c144b7e246e200ff5a.slice. Feb 13 15:37:49.146837 kubelet[2852]: I0213 15:37:49.146439 2852 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-54" Feb 13 15:37:49.146977 kubelet[2852]: E0213 15:37:49.146896 2852 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.54:6443/api/v1/nodes\": dial tcp 172.31.28.54:6443: connect: connection refused" node="ip-172-31-28-54" Feb 13 15:37:49.333108 containerd[1895]: time="2025-02-13T15:37:49.333066386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-54,Uid:dadb99872011cc8df8c049ea9c525f3d,Namespace:kube-system,Attempt:0,}" Feb 13 15:37:49.350971 containerd[1895]: time="2025-02-13T15:37:49.350922326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-54,Uid:458bb572f2f9b589e0c20a5e48cf170d,Namespace:kube-system,Attempt:0,}" Feb 13 15:37:49.351976 containerd[1895]: time="2025-02-13T15:37:49.351691385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-54,Uid:bd96a96f9f6c00c144b7e246e200ff5a,Namespace:kube-system,Attempt:0,}" Feb 13 15:37:49.441133 kubelet[2852]: E0213 15:37:49.440983 2852 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-54?timeout=10s\": dial tcp 172.31.28.54:6443: connect: connection refused" interval="800ms" Feb 13 15:37:49.554100 kubelet[2852]: I0213 15:37:49.554061 2852 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-54" Feb 13 15:37:49.554513 kubelet[2852]: E0213 15:37:49.554475 2852 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.54:6443/api/v1/nodes\": dial tcp 172.31.28.54:6443: connect: connection refused" node="ip-172-31-28-54" Feb 13 15:37:49.630720 kubelet[2852]: W0213 15:37:49.630414 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-54&limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:49.630720 kubelet[2852]: E0213 15:37:49.630485 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-54&limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:49.716348 kubelet[2852]: W0213 15:37:49.716282 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused 
Feb 13 15:37:49.716348 kubelet[2852]: E0213 15:37:49.716354 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.28.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:49.820828 kubelet[2852]: W0213 15:37:49.820769 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:49.820828 kubelet[2852]: E0213 15:37:49.820817 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.28.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:49.830718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1050626064.mount: Deactivated successfully. Feb 13 15:37:49.849204 containerd[1895]: time="2025-02-13T15:37:49.849147413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:37:49.855635 containerd[1895]: time="2025-02-13T15:37:49.855572747Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:37:49.857613 containerd[1895]: time="2025-02-13T15:37:49.857564594Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:37:49.859934 containerd[1895]: time="2025-02-13T15:37:49.859886943Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:37:49.863538 containerd[1895]: time="2025-02-13T15:37:49.863475924Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:37:49.865877 containerd[1895]: time="2025-02-13T15:37:49.865835608Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:37:49.868295 containerd[1895]: time="2025-02-13T15:37:49.868252153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:37:49.869295 containerd[1895]: time="2025-02-13T15:37:49.869257286Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 536.086754ms" Feb 13 15:37:49.869993 containerd[1895]: time="2025-02-13T15:37:49.869952332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:37:49.873597 containerd[1895]: 
time="2025-02-13T15:37:49.873551900Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 521.797588ms" Feb 13 15:37:49.878037 containerd[1895]: time="2025-02-13T15:37:49.877990758Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.96312ms" Feb 13 15:37:50.219424 containerd[1895]: time="2025-02-13T15:37:50.213434511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:37:50.219424 containerd[1895]: time="2025-02-13T15:37:50.219145317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:37:50.219424 containerd[1895]: time="2025-02-13T15:37:50.219192196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:50.220080 containerd[1895]: time="2025-02-13T15:37:50.219404157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:50.232859 containerd[1895]: time="2025-02-13T15:37:50.232377438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:37:50.232859 containerd[1895]: time="2025-02-13T15:37:50.232451706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:37:50.232859 containerd[1895]: time="2025-02-13T15:37:50.232478116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:50.232859 containerd[1895]: time="2025-02-13T15:37:50.232589778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:50.235235 containerd[1895]: time="2025-02-13T15:37:50.232806569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:37:50.235235 containerd[1895]: time="2025-02-13T15:37:50.234105647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:37:50.235235 containerd[1895]: time="2025-02-13T15:37:50.234135759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:50.235235 containerd[1895]: time="2025-02-13T15:37:50.234238942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:37:50.242340 kubelet[2852]: E0213 15:37:50.242283 2852 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-54?timeout=10s\": dial tcp 172.31.28.54:6443: connect: connection refused" interval="1.6s" Feb 13 15:37:50.270291 systemd[1]: Started cri-containerd-ee910a2663f7c8ee3b174a3e21c106dfba4cb7a5242daa1aa498eba3f4f1d750.scope - libcontainer container ee910a2663f7c8ee3b174a3e21c106dfba4cb7a5242daa1aa498eba3f4f1d750. Feb 13 15:37:50.279314 systemd[1]: Started cri-containerd-5239bf091720f949bce1aee1e51e7758c69807063cf12bbf196cdb72cd54cec0.scope - libcontainer container 5239bf091720f949bce1aee1e51e7758c69807063cf12bbf196cdb72cd54cec0. Feb 13 15:37:50.302679 systemd[1]: Started cri-containerd-b5c163438d6864df2c9222a2898d86b40173a54ffd3fbd06c717373b54dd485d.scope - libcontainer container b5c163438d6864df2c9222a2898d86b40173a54ffd3fbd06c717373b54dd485d. Feb 13 15:37:50.358321 kubelet[2852]: I0213 15:37:50.357957 2852 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-54" Feb 13 15:37:50.358321 kubelet[2852]: E0213 15:37:50.358289 2852 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.54:6443/api/v1/nodes\": dial tcp 172.31.28.54:6443: connect: connection refused" node="ip-172-31-28-54" Feb 13 15:37:50.386920 kubelet[2852]: W0213 15:37:50.386508 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:50.386920 kubelet[2852]: E0213 15:37:50.386871 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:50.429512 containerd[1895]: time="2025-02-13T15:37:50.427769337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-54,Uid:bd96a96f9f6c00c144b7e246e200ff5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee910a2663f7c8ee3b174a3e21c106dfba4cb7a5242daa1aa498eba3f4f1d750\"" Feb 13 15:37:50.444533 containerd[1895]: time="2025-02-13T15:37:50.444366147Z" level=info msg="CreateContainer within sandbox \"ee910a2663f7c8ee3b174a3e21c106dfba4cb7a5242daa1aa498eba3f4f1d750\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:37:50.448217 containerd[1895]: time="2025-02-13T15:37:50.448186070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-54,Uid:dadb99872011cc8df8c049ea9c525f3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5c163438d6864df2c9222a2898d86b40173a54ffd3fbd06c717373b54dd485d\"" Feb 13 15:37:50.452241 containerd[1895]: time="2025-02-13T15:37:50.452109837Z" level=info msg="CreateContainer within sandbox \"b5c163438d6864df2c9222a2898d86b40173a54ffd3fbd06c717373b54dd485d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:37:50.454465 containerd[1895]: time="2025-02-13T15:37:50.454346956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-54,Uid:458bb572f2f9b589e0c20a5e48cf170d,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"5239bf091720f949bce1aee1e51e7758c69807063cf12bbf196cdb72cd54cec0\"" Feb 13 15:37:50.462481 containerd[1895]: time="2025-02-13T15:37:50.462269046Z" level=info msg="CreateContainer within sandbox \"5239bf091720f949bce1aee1e51e7758c69807063cf12bbf196cdb72cd54cec0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:37:50.513133 containerd[1895]: time="2025-02-13T15:37:50.512529517Z" level=info msg="CreateContainer within sandbox \"b5c163438d6864df2c9222a2898d86b40173a54ffd3fbd06c717373b54dd485d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"528e5b3911aacae9bf9ba3f796bc2f9850240110314985014af91a99650c8ad6\"" Feb 13 15:37:50.515191 containerd[1895]: time="2025-02-13T15:37:50.513841833Z" level=info msg="StartContainer for \"528e5b3911aacae9bf9ba3f796bc2f9850240110314985014af91a99650c8ad6\"" Feb 13 15:37:50.522662 containerd[1895]: time="2025-02-13T15:37:50.522622581Z" level=info msg="CreateContainer within sandbox \"ee910a2663f7c8ee3b174a3e21c106dfba4cb7a5242daa1aa498eba3f4f1d750\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"72673319e4abda4a63ac35b1bfcf904ca2e8623f70145724951e792dcef71850\"" Feb 13 15:37:50.524004 containerd[1895]: time="2025-02-13T15:37:50.523969203Z" level=info msg="StartContainer for \"72673319e4abda4a63ac35b1bfcf904ca2e8623f70145724951e792dcef71850\"" Feb 13 15:37:50.537169 containerd[1895]: time="2025-02-13T15:37:50.537065089Z" level=info msg="CreateContainer within sandbox \"5239bf091720f949bce1aee1e51e7758c69807063cf12bbf196cdb72cd54cec0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"167d52cfb8aa7a2801f0abfd8263ec141e25760950b42e444b8382c8bb53d1ac\"" Feb 13 15:37:50.538565 containerd[1895]: time="2025-02-13T15:37:50.538419577Z" level=info msg="StartContainer for \"167d52cfb8aa7a2801f0abfd8263ec141e25760950b42e444b8382c8bb53d1ac\"" Feb 13 15:37:50.616626 systemd[1]: Started cri-containerd-528e5b3911aacae9bf9ba3f796bc2f9850240110314985014af91a99650c8ad6.scope - libcontainer container 528e5b3911aacae9bf9ba3f796bc2f9850240110314985014af91a99650c8ad6. Feb 13 15:37:50.628299 systemd[1]: Started cri-containerd-72673319e4abda4a63ac35b1bfcf904ca2e8623f70145724951e792dcef71850.scope - libcontainer container 72673319e4abda4a63ac35b1bfcf904ca2e8623f70145724951e792dcef71850. Feb 13 15:37:50.657256 systemd[1]: Started cri-containerd-167d52cfb8aa7a2801f0abfd8263ec141e25760950b42e444b8382c8bb53d1ac.scope - libcontainer container 167d52cfb8aa7a2801f0abfd8263ec141e25760950b42e444b8382c8bb53d1ac. 
Feb 13 15:37:50.716077 containerd[1895]: time="2025-02-13T15:37:50.715635479Z" level=info msg="StartContainer for \"528e5b3911aacae9bf9ba3f796bc2f9850240110314985014af91a99650c8ad6\" returns successfully" Feb 13 15:37:50.769186 containerd[1895]: time="2025-02-13T15:37:50.768650707Z" level=info msg="StartContainer for \"72673319e4abda4a63ac35b1bfcf904ca2e8623f70145724951e792dcef71850\" returns successfully" Feb 13 15:37:50.802426 containerd[1895]: time="2025-02-13T15:37:50.802337397Z" level=info msg="StartContainer for \"167d52cfb8aa7a2801f0abfd8263ec141e25760950b42e444b8382c8bb53d1ac\" returns successfully" Feb 13 15:37:50.946533 kubelet[2852]: E0213 15:37:50.946250 2852 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.28.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:51.297961 kubelet[2852]: W0213 15:37:51.297824 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-54&limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:51.297961 kubelet[2852]: E0213 15:37:51.297909 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-54&limit=500&resourceVersion=0": dial tcp 172.31.28.54:6443: connect: connection refused Feb 13 15:37:51.962113 kubelet[2852]: I0213 15:37:51.961702 2852 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-54" Feb 13 15:37:53.937986 update_engine[1884]: I20250213 15:37:53.937096 1884 update_attempter.cc:509] Updating boot flags... Feb 13 15:37:54.077187 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3138) Feb 13 15:37:54.428418 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3140) Feb 13 15:37:54.775072 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3140) Feb 13 15:37:55.270564 kubelet[2852]: E0213 15:37:55.270502 2852 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-54\" not found" node="ip-172-31-28-54" Feb 13 15:37:55.475724 kubelet[2852]: I0213 15:37:55.473992 2852 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-28-54" Feb 13 15:37:55.496943 kubelet[2852]: E0213 15:37:55.496894 2852 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-28-54\" not found" Feb 13 15:37:55.813854 kubelet[2852]: I0213 15:37:55.813806 2852 apiserver.go:52] "Watching apiserver" Feb 13 15:37:55.835768 kubelet[2852]: I0213 15:37:55.835722 2852 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:37:57.862029 systemd[1]: Reloading requested from client PID 3392 ('systemctl') (unit session-9.scope)... Feb 13 15:37:57.862070 systemd[1]: Reloading... Feb 13 15:37:58.037070 zram_generator::config[3435]: No configuration found. 
Feb 13 15:37:58.291517 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:37:58.457808 systemd[1]: Reloading finished in 595 ms. Feb 13 15:37:58.506283 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:37:58.509314 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:37:58.509578 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:37:58.518450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:37:58.842520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:37:58.854526 (kubelet)[3490]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:37:58.945905 kubelet[3490]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:37:58.945905 kubelet[3490]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:37:58.945905 kubelet[3490]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:37:58.947516 kubelet[3490]: I0213 15:37:58.947460 3490 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:37:58.954068 kubelet[3490]: I0213 15:37:58.954021 3490 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:37:58.954068 kubelet[3490]: I0213 15:37:58.954064 3490 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:37:58.954386 kubelet[3490]: I0213 15:37:58.954363 3490 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:37:58.955992 kubelet[3490]: I0213 15:37:58.955922 3490 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:37:58.958772 kubelet[3490]: I0213 15:37:58.958744 3490 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:37:58.971938 kubelet[3490]: I0213 15:37:58.971886 3490 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:37:58.972826 kubelet[3490]: I0213 15:37:58.972773 3490 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:37:58.973292 kubelet[3490]: I0213 15:37:58.972897 3490 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-54","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:37:58.973692 kubelet[3490]: I0213 15:37:58.973509 3490 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:37:58.973692 kubelet[3490]: I0213 15:37:58.973531 3490 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:37:58.973692 kubelet[3490]: I0213 15:37:58.973588 3490 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:37:58.973834 kubelet[3490]: I0213 15:37:58.973825 3490 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:37:58.974912 kubelet[3490]: I0213 15:37:58.974131 3490 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:37:58.974912 kubelet[3490]: I0213 15:37:58.974765 3490 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:37:58.974912 kubelet[3490]: I0213 15:37:58.974791 3490 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:37:58.976173 kubelet[3490]: I0213 15:37:58.976074 3490 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:37:58.980646 kubelet[3490]: I0213 15:37:58.979397 3490 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:37:58.981920 kubelet[3490]: I0213 15:37:58.981441 3490 server.go:1264] "Started kubelet" Feb 13 15:37:58.985249 kubelet[3490]: I0213 15:37:58.985210 3490 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:37:58.997301 kubelet[3490]: I0213 15:37:58.997247 3490 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:37:59.003178 kubelet[3490]: I0213 15:37:59.003149 3490 server.go:455] "Adding debug 
handlers to kubelet server" Feb 13 15:37:59.008229 kubelet[3490]: I0213 15:37:59.008092 3490 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:37:59.012662 kubelet[3490]: I0213 15:37:59.012528 3490 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:37:59.019147 kubelet[3490]: I0213 15:37:59.019088 3490 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:37:59.026132 kubelet[3490]: I0213 15:37:59.025924 3490 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:37:59.026513 kubelet[3490]: I0213 15:37:59.026314 3490 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:37:59.037054 kubelet[3490]: I0213 15:37:59.035714 3490 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:37:59.037054 kubelet[3490]: I0213 15:37:59.035818 3490 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:37:59.036560 sudo[3506]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:37:59.036971 sudo[3506]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:37:59.052141 kubelet[3490]: I0213 15:37:59.051594 3490 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:37:59.052141 kubelet[3490]: I0213 15:37:59.051974 3490 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:37:59.058379 kubelet[3490]: E0213 15:37:59.058194 3490 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:37:59.067568 kubelet[3490]: I0213 15:37:59.067272 3490 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:37:59.069680 kubelet[3490]: I0213 15:37:59.069534 3490 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:37:59.069680 kubelet[3490]: I0213 15:37:59.069569 3490 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:37:59.080744 kubelet[3490]: E0213 15:37:59.080538 3490 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:37:59.122108 kubelet[3490]: I0213 15:37:59.120306 3490 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-54" Feb 13 15:37:59.155773 kubelet[3490]: I0213 15:37:59.155738 3490 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-28-54" Feb 13 15:37:59.155901 kubelet[3490]: I0213 15:37:59.155818 3490 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-28-54" Feb 13 15:37:59.175170 kubelet[3490]: I0213 15:37:59.175139 3490 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:37:59.175170 kubelet[3490]: I0213 15:37:59.175165 3490 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:37:59.175429 kubelet[3490]: I0213 15:37:59.175189 3490 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:37:59.175429 kubelet[3490]: I0213 15:37:59.175422 3490 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:37:59.175559 kubelet[3490]: I0213 15:37:59.175438 3490 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:37:59.175559 kubelet[3490]: I0213 15:37:59.175466 3490 policy_none.go:49] "None policy: Start" Feb 13 15:37:59.177053 kubelet[3490]: I0213 15:37:59.176933 3490 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:37:59.177053 kubelet[3490]: I0213 15:37:59.176971 3490 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:37:59.177586 kubelet[3490]: I0213 15:37:59.177406 3490 state_mem.go:75] "Updated machine memory state" Feb 13 15:37:59.181501 kubelet[3490]: E0213 15:37:59.181472 3490 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:37:59.189250 kubelet[3490]: I0213 15:37:59.188978 3490 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:37:59.189898 kubelet[3490]: I0213 15:37:59.189773 3490 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:37:59.197182 kubelet[3490]: I0213 15:37:59.196085 3490 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:37:59.381785 kubelet[3490]: I0213 15:37:59.381629 3490 topology_manager.go:215] "Topology Admit Handler" podUID="bd96a96f9f6c00c144b7e246e200ff5a" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-28-54" Feb 13 15:37:59.392852 kubelet[3490]: I0213 15:37:59.388952 3490 topology_manager.go:215] "Topology Admit Handler" podUID="dadb99872011cc8df8c049ea9c525f3d" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-28-54" Feb 13 15:37:59.392852 kubelet[3490]: I0213 15:37:59.390665 3490 topology_manager.go:215] "Topology Admit Handler" podUID="458bb572f2f9b589e0c20a5e48cf170d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-28-54" Feb 13 15:37:59.409318 kubelet[3490]: E0213 15:37:59.408990 3490 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-28-54\" already exists" 
pod="kube-system/kube-scheduler-ip-172-31-28-54" Feb 13 15:37:59.438175 kubelet[3490]: I0213 15:37:59.437665 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dadb99872011cc8df8c049ea9c525f3d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-54\" (UID: \"dadb99872011cc8df8c049ea9c525f3d\") " pod="kube-system/kube-apiserver-ip-172-31-28-54" Feb 13 15:37:59.438175 kubelet[3490]: I0213 15:37:59.437716 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/458bb572f2f9b589e0c20a5e48cf170d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-54\" (UID: \"458bb572f2f9b589e0c20a5e48cf170d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-54" Feb 13 15:37:59.438175 kubelet[3490]: I0213 15:37:59.437748 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/458bb572f2f9b589e0c20a5e48cf170d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-54\" (UID: \"458bb572f2f9b589e0c20a5e48cf170d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-54" Feb 13 15:37:59.438175 kubelet[3490]: I0213 15:37:59.437773 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/458bb572f2f9b589e0c20a5e48cf170d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-54\" (UID: \"458bb572f2f9b589e0c20a5e48cf170d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-54" Feb 13 15:37:59.438175 kubelet[3490]: I0213 15:37:59.437799 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/458bb572f2f9b589e0c20a5e48cf170d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-54\" (UID: \"458bb572f2f9b589e0c20a5e48cf170d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-54" Feb 13 15:37:59.438649 kubelet[3490]: I0213 15:37:59.437825 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd96a96f9f6c00c144b7e246e200ff5a-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-54\" (UID: \"bd96a96f9f6c00c144b7e246e200ff5a\") " pod="kube-system/kube-scheduler-ip-172-31-28-54" Feb 13 15:37:59.438649 kubelet[3490]: I0213 15:37:59.437850 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dadb99872011cc8df8c049ea9c525f3d-ca-certs\") pod \"kube-apiserver-ip-172-31-28-54\" (UID: \"dadb99872011cc8df8c049ea9c525f3d\") " pod="kube-system/kube-apiserver-ip-172-31-28-54" Feb 13 15:37:59.438649 kubelet[3490]: I0213 15:37:59.437959 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dadb99872011cc8df8c049ea9c525f3d-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-54\" (UID: \"dadb99872011cc8df8c049ea9c525f3d\") " pod="kube-system/kube-apiserver-ip-172-31-28-54" Feb 13 15:37:59.438649 kubelet[3490]: I0213 15:37:59.437987 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/458bb572f2f9b589e0c20a5e48cf170d-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-54\" (UID: \"458bb572f2f9b589e0c20a5e48cf170d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-54" Feb 13 15:37:59.946242 sudo[3506]: pam_unix(sudo:session): session closed for user root Feb 13 15:37:59.987660 kubelet[3490]: I0213 15:37:59.987600 3490 apiserver.go:52] "Watching apiserver" Feb 13 15:38:00.027997 kubelet[3490]: I0213 15:38:00.027899 3490 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:38:00.132920 kubelet[3490]: E0213 15:38:00.132883 3490 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-28-54\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-54" Feb 13 15:38:00.191064 kubelet[3490]: I0213 15:38:00.188818 3490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-54" podStartSLOduration=1.188799324 podStartE2EDuration="1.188799324s" podCreationTimestamp="2025-02-13 15:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:00.188789217 +0000 UTC m=+1.325429653" watchObservedRunningTime="2025-02-13 15:38:00.188799324 +0000 UTC m=+1.325439790" Feb 13 15:38:00.218335 kubelet[3490]: I0213 15:38:00.218181 3490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-54" podStartSLOduration=1.2181592270000001 podStartE2EDuration="1.218159227s" podCreationTimestamp="2025-02-13 15:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:00.217685336 +0000 UTC m=+1.354325790" watchObservedRunningTime="2025-02-13 15:38:00.218159227 +0000 UTC m=+1.354799677" Feb 13 15:38:00.244819 kubelet[3490]: I0213 15:38:00.242505 3490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-54" podStartSLOduration=2.24248429 podStartE2EDuration="2.24248429s" podCreationTimestamp="2025-02-13 15:37:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:00.236849906 +0000 UTC m=+1.373490340" watchObservedRunningTime="2025-02-13 15:38:00.24248429 +0000 UTC m=+1.379124744" Feb 13 15:38:02.801651 sudo[2247]: pam_unix(sudo:session): session closed for user root Feb 13 15:38:02.824572 sshd[2246]: Connection closed by 139.178.89.65 port 50264 Feb 13 15:38:02.826659 sshd-session[2244]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:02.832538 systemd[1]: sshd@8-172.31.28.54:22-139.178.89.65:50264.service: Deactivated successfully. Feb 13 15:38:02.834668 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:38:02.834879 systemd[1]: session-9.scope: Consumed 5.865s CPU time, 185.3M memory peak, 0B memory swap peak. Feb 13 15:38:02.839310 systemd-logind[1876]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:38:02.840997 systemd-logind[1876]: Removed session 9. 
Feb 13 15:38:11.387808 kubelet[3490]: I0213 15:38:11.387742 3490 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:38:11.389716 containerd[1895]: time="2025-02-13T15:38:11.389669849Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:38:11.390669 kubelet[3490]: I0213 15:38:11.389901 3490 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:38:11.546330 kubelet[3490]: I0213 15:38:11.545645 3490 topology_manager.go:215] "Topology Admit Handler" podUID="5d223cde-f3bf-4ee1-bc2f-a559aa73b93f" podNamespace="kube-system" podName="kube-proxy-sb55v" Feb 13 15:38:11.569838 kubelet[3490]: I0213 15:38:11.567442 3490 topology_manager.go:215] "Topology Admit Handler" podUID="9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" podNamespace="kube-system" podName="cilium-mj9zm" Feb 13 15:38:11.578486 systemd[1]: Created slice kubepods-besteffort-pod5d223cde_f3bf_4ee1_bc2f_a559aa73b93f.slice - libcontainer container kubepods-besteffort-pod5d223cde_f3bf_4ee1_bc2f_a559aa73b93f.slice. Feb 13 15:38:11.598850 systemd[1]: Created slice kubepods-burstable-pod9c371ca4_6e15_4e8a_92e1_58b1edc7bca8.slice - libcontainer container kubepods-burstable-pod9c371ca4_6e15_4e8a_92e1_58b1edc7bca8.slice. Feb 13 15:38:11.646766 kubelet[3490]: I0213 15:38:11.646288 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-xtables-lock\") pod \"cilium-mj9zm\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " pod="kube-system/cilium-mj9zm" Feb 13 15:38:11.646766 kubelet[3490]: I0213 15:38:11.646324 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frbzc\" (UniqueName: \"kubernetes.io/projected/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-kube-api-access-frbzc\") pod \"cilium-mj9zm\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " pod="kube-system/cilium-mj9zm" Feb 13 15:38:11.646766 kubelet[3490]: I0213 15:38:11.646343 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-host-proc-sys-net\") pod \"cilium-mj9zm\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " pod="kube-system/cilium-mj9zm" Feb 13 15:38:11.646766 kubelet[3490]: I0213 15:38:11.646361 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d223cde-f3bf-4ee1-bc2f-a559aa73b93f-lib-modules\") pod \"kube-proxy-sb55v\" (UID: \"5d223cde-f3bf-4ee1-bc2f-a559aa73b93f\") " pod="kube-system/kube-proxy-sb55v" Feb 13 15:38:11.646766 kubelet[3490]: I0213 15:38:11.646377 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-host-proc-sys-kernel\") pod \"cilium-mj9zm\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " pod="kube-system/cilium-mj9zm" Feb 13 15:38:11.647293 kubelet[3490]: I0213 15:38:11.646391 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cilium-cgroup\") pod \"cilium-mj9zm\" (UID: 
\"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " pod="kube-system/cilium-mj9zm" Feb 13 15:38:11.647293 kubelet[3490]: I0213 15:38:11.646405 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-etc-cni-netd\") pod \"cilium-mj9zm\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " pod="kube-system/cilium-mj9zm" Feb 13 15:38:11.647293 kubelet[3490]: I0213 15:38:11.646420 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmb5s\" (UniqueName: \"kubernetes.io/projected/5d223cde-f3bf-4ee1-bc2f-a559aa73b93f-kube-api-access-bmb5s\") pod \"kube-proxy-sb55v\" (UID: \"5d223cde-f3bf-4ee1-bc2f-a559aa73b93f\") " pod="kube-system/kube-proxy-sb55v" Feb 13 15:38:11.647293 kubelet[3490]: I0213 15:38:11.646434 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5d223cde-f3bf-4ee1-bc2f-a559aa73b93f-kube-proxy\") pod \"kube-proxy-sb55v\" (UID: \"5d223cde-f3bf-4ee1-bc2f-a559aa73b93f\") " pod="kube-system/kube-proxy-sb55v" Feb 13 15:38:11.647293 kubelet[3490]: I0213 15:38:11.646449 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-bpf-maps\") pod \"cilium-mj9zm\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " pod="kube-system/cilium-mj9zm" Feb 13 15:38:11.647293 kubelet[3490]: I0213 15:38:11.646462 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-hubble-tls\") pod \"cilium-mj9zm\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " pod="kube-system/cilium-mj9zm" Feb 13 15:38:11.647566 kubelet[3490]: I0213 15:38:11.646475 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-hostproc\") pod \"cilium-mj9zm\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " pod="kube-system/cilium-mj9zm" Feb 13 15:38:11.647566 kubelet[3490]: I0213 15:38:11.646491 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-clustermesh-secrets\") pod \"cilium-mj9zm\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " pod="kube-system/cilium-mj9zm" Feb 13 15:38:11.647566 kubelet[3490]: I0213 15:38:11.646505 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cilium-config-path\") pod \"cilium-mj9zm\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " pod="kube-system/cilium-mj9zm" Feb 13 15:38:11.647566 kubelet[3490]: I0213 15:38:11.646521 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cni-path\") pod \"cilium-mj9zm\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " pod="kube-system/cilium-mj9zm" Feb 13 15:38:11.647566 kubelet[3490]: I0213 15:38:11.646543 3490 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-lib-modules\") pod \"cilium-mj9zm\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " pod="kube-system/cilium-mj9zm" Feb 13 15:38:11.647566 kubelet[3490]: I0213 15:38:11.646561 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d223cde-f3bf-4ee1-bc2f-a559aa73b93f-xtables-lock\") pod \"kube-proxy-sb55v\" (UID: \"5d223cde-f3bf-4ee1-bc2f-a559aa73b93f\") " pod="kube-system/kube-proxy-sb55v" Feb 13 15:38:11.647967 kubelet[3490]: I0213 15:38:11.646578 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cilium-run\") pod \"cilium-mj9zm\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " pod="kube-system/cilium-mj9zm" Feb 13 15:38:11.890441 containerd[1895]: time="2025-02-13T15:38:11.890384685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sb55v,Uid:5d223cde-f3bf-4ee1-bc2f-a559aa73b93f,Namespace:kube-system,Attempt:0,}" Feb 13 15:38:11.904995 containerd[1895]: time="2025-02-13T15:38:11.904880083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mj9zm,Uid:9c371ca4-6e15-4e8a-92e1-58b1edc7bca8,Namespace:kube-system,Attempt:0,}" Feb 13 15:38:12.006329 containerd[1895]: time="2025-02-13T15:38:12.006197466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:38:12.006329 containerd[1895]: time="2025-02-13T15:38:12.006266992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:38:12.006329 containerd[1895]: time="2025-02-13T15:38:12.006290031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:12.007383 containerd[1895]: time="2025-02-13T15:38:12.006929958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:12.011064 containerd[1895]: time="2025-02-13T15:38:12.010606651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:38:12.011064 containerd[1895]: time="2025-02-13T15:38:12.010669211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:38:12.011064 containerd[1895]: time="2025-02-13T15:38:12.010692124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:12.011064 containerd[1895]: time="2025-02-13T15:38:12.010787245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:12.051276 systemd[1]: Started cri-containerd-1e2566a2e0e7bb1e06035397c628af475ea75898df1921034eb37352b1b88ebb.scope - libcontainer container 1e2566a2e0e7bb1e06035397c628af475ea75898df1921034eb37352b1b88ebb. 
Feb 13 15:38:12.058775 systemd[1]: Started cri-containerd-a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5.scope - libcontainer container a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5. Feb 13 15:38:12.129187 containerd[1895]: time="2025-02-13T15:38:12.127850681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mj9zm,Uid:9c371ca4-6e15-4e8a-92e1-58b1edc7bca8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\"" Feb 13 15:38:12.136092 containerd[1895]: time="2025-02-13T15:38:12.136050589Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:38:12.174513 containerd[1895]: time="2025-02-13T15:38:12.174240465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sb55v,Uid:5d223cde-f3bf-4ee1-bc2f-a559aa73b93f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e2566a2e0e7bb1e06035397c628af475ea75898df1921034eb37352b1b88ebb\"" Feb 13 15:38:12.180370 containerd[1895]: time="2025-02-13T15:38:12.180330464Z" level=info msg="CreateContainer within sandbox \"1e2566a2e0e7bb1e06035397c628af475ea75898df1921034eb37352b1b88ebb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:38:12.230763 containerd[1895]: time="2025-02-13T15:38:12.230710424Z" level=info msg="CreateContainer within sandbox \"1e2566a2e0e7bb1e06035397c628af475ea75898df1921034eb37352b1b88ebb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a83d059d0c73d61d3f837d898fd0cd5bdaabe7923f40897dadc4a4fca45c24df\"" Feb 13 15:38:12.233404 containerd[1895]: time="2025-02-13T15:38:12.232651741Z" level=info msg="StartContainer for \"a83d059d0c73d61d3f837d898fd0cd5bdaabe7923f40897dadc4a4fca45c24df\"" Feb 13 15:38:12.323250 systemd[1]: Started cri-containerd-a83d059d0c73d61d3f837d898fd0cd5bdaabe7923f40897dadc4a4fca45c24df.scope - libcontainer container a83d059d0c73d61d3f837d898fd0cd5bdaabe7923f40897dadc4a4fca45c24df. Feb 13 15:38:12.390749 containerd[1895]: time="2025-02-13T15:38:12.390706705Z" level=info msg="StartContainer for \"a83d059d0c73d61d3f837d898fd0cd5bdaabe7923f40897dadc4a4fca45c24df\" returns successfully" Feb 13 15:38:12.509114 kubelet[3490]: I0213 15:38:12.508745 3490 topology_manager.go:215] "Topology Admit Handler" podUID="b637a6ad-d5ef-4210-bf0d-674684d142f0" podNamespace="kube-system" podName="cilium-operator-599987898-tprsm" Feb 13 15:38:12.523705 systemd[1]: Created slice kubepods-besteffort-podb637a6ad_d5ef_4210_bf0d_674684d142f0.slice - libcontainer container kubepods-besteffort-podb637a6ad_d5ef_4210_bf0d_674684d142f0.slice. 
Feb 13 15:38:12.556786 kubelet[3490]: I0213 15:38:12.556751 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b637a6ad-d5ef-4210-bf0d-674684d142f0-cilium-config-path\") pod \"cilium-operator-599987898-tprsm\" (UID: \"b637a6ad-d5ef-4210-bf0d-674684d142f0\") " pod="kube-system/cilium-operator-599987898-tprsm" Feb 13 15:38:12.556971 kubelet[3490]: I0213 15:38:12.556834 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4ll2\" (UniqueName: \"kubernetes.io/projected/b637a6ad-d5ef-4210-bf0d-674684d142f0-kube-api-access-h4ll2\") pod \"cilium-operator-599987898-tprsm\" (UID: \"b637a6ad-d5ef-4210-bf0d-674684d142f0\") " pod="kube-system/cilium-operator-599987898-tprsm" Feb 13 15:38:12.831500 containerd[1895]: time="2025-02-13T15:38:12.831457173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-tprsm,Uid:b637a6ad-d5ef-4210-bf0d-674684d142f0,Namespace:kube-system,Attempt:0,}" Feb 13 15:38:12.936755 containerd[1895]: time="2025-02-13T15:38:12.936479625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:38:12.936755 containerd[1895]: time="2025-02-13T15:38:12.936551746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:38:12.936755 containerd[1895]: time="2025-02-13T15:38:12.936563432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:12.936755 containerd[1895]: time="2025-02-13T15:38:12.936655148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:12.971345 systemd[1]: Started cri-containerd-bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f.scope - libcontainer container bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f. Feb 13 15:38:13.037340 containerd[1895]: time="2025-02-13T15:38:13.037115000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-tprsm,Uid:b637a6ad-d5ef-4210-bf0d-674684d142f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f\"" Feb 13 15:38:19.903994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3066591321.mount: Deactivated successfully. 
Feb 13 15:38:23.090180 containerd[1895]: time="2025-02-13T15:38:23.090129277Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:38:23.094453 containerd[1895]: time="2025-02-13T15:38:23.094397321Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 15:38:23.095074 containerd[1895]: time="2025-02-13T15:38:23.094804794Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:38:23.097312 containerd[1895]: time="2025-02-13T15:38:23.097113533Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.961012203s" Feb 13 15:38:23.097312 containerd[1895]: time="2025-02-13T15:38:23.097157261Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 15:38:23.099491 containerd[1895]: time="2025-02-13T15:38:23.099438038Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:38:23.100709 containerd[1895]: time="2025-02-13T15:38:23.100680492Z" level=info msg="CreateContainer within sandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:38:23.236979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296099671.mount: Deactivated successfully. Feb 13 15:38:23.258763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2847643035.mount: Deactivated successfully. Feb 13 15:38:23.294661 containerd[1895]: time="2025-02-13T15:38:23.294621620Z" level=info msg="CreateContainer within sandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152\"" Feb 13 15:38:23.295551 containerd[1895]: time="2025-02-13T15:38:23.295522935Z" level=info msg="StartContainer for \"2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152\"" Feb 13 15:38:23.454937 systemd[1]: Started cri-containerd-2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152.scope - libcontainer container 2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152. Feb 13 15:38:23.498562 containerd[1895]: time="2025-02-13T15:38:23.498300088Z" level=info msg="StartContainer for \"2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152\" returns successfully" Feb 13 15:38:23.509759 systemd[1]: cri-containerd-2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152.scope: Deactivated successfully. 
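The pull records above carry both a byte count and a wall-clock duration, which is enough for a rough effective-throughput estimate for the cilium image download:

```python
# Back-of-the-envelope: effective throughput of the cilium image pull,
# using "bytes read=166730503" and the "in 10.961012203s" duration above.
bytes_read = 166_730_503
seconds = 10.961012203

print(f"{bytes_read / seconds / 2**20:.1f} MiB/s")  # ~14.5 MiB/s
```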
Feb 13 15:38:23.670021 containerd[1895]: time="2025-02-13T15:38:23.653445990Z" level=info msg="shim disconnected" id=2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152 namespace=k8s.io Feb 13 15:38:23.670290 containerd[1895]: time="2025-02-13T15:38:23.670024490Z" level=warning msg="cleaning up after shim disconnected" id=2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152 namespace=k8s.io Feb 13 15:38:23.670290 containerd[1895]: time="2025-02-13T15:38:23.670053846Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:23.694632 containerd[1895]: time="2025-02-13T15:38:23.694579292Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:38:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:38:24.228350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152-rootfs.mount: Deactivated successfully. Feb 13 15:38:24.286546 kubelet[3490]: I0213 15:38:24.275805 3490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sb55v" podStartSLOduration=13.275764763 podStartE2EDuration="13.275764763s" podCreationTimestamp="2025-02-13 15:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:13.187707631 +0000 UTC m=+14.324348084" watchObservedRunningTime="2025-02-13 15:38:24.275764763 +0000 UTC m=+25.412405213" Feb 13 15:38:24.312896 containerd[1895]: time="2025-02-13T15:38:24.312856359Z" level=info msg="CreateContainer within sandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:38:24.345179 containerd[1895]: time="2025-02-13T15:38:24.344795328Z" level=info msg="CreateContainer within sandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4\"" Feb 13 15:38:24.347290 containerd[1895]: time="2025-02-13T15:38:24.347207654Z" level=info msg="StartContainer for \"5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4\"" Feb 13 15:38:24.409592 systemd[1]: run-containerd-runc-k8s.io-5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4-runc.rGdu1z.mount: Deactivated successfully. Feb 13 15:38:24.418283 systemd[1]: Started cri-containerd-5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4.scope - libcontainer container 5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4. Feb 13 15:38:24.462479 containerd[1895]: time="2025-02-13T15:38:24.462428385Z" level=info msg="StartContainer for \"5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4\" returns successfully" Feb 13 15:38:24.485791 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:38:24.486264 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:38:24.486350 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:38:24.494583 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:38:24.494894 systemd[1]: cri-containerd-5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4.scope: Deactivated successfully. 
Feb 13 15:38:24.559611 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:38:24.588116 containerd[1895]: time="2025-02-13T15:38:24.588025976Z" level=info msg="shim disconnected" id=5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4 namespace=k8s.io Feb 13 15:38:24.588116 containerd[1895]: time="2025-02-13T15:38:24.588108208Z" level=warning msg="cleaning up after shim disconnected" id=5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4 namespace=k8s.io Feb 13 15:38:24.588116 containerd[1895]: time="2025-02-13T15:38:24.588121776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:25.234450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4-rootfs.mount: Deactivated successfully. Feb 13 15:38:25.272525 containerd[1895]: time="2025-02-13T15:38:25.272103667Z" level=info msg="CreateContainer within sandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:38:25.366505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3408533631.mount: Deactivated successfully. Feb 13 15:38:25.374407 containerd[1895]: time="2025-02-13T15:38:25.374349317Z" level=info msg="CreateContainer within sandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9\"" Feb 13 15:38:25.375488 containerd[1895]: time="2025-02-13T15:38:25.375450730Z" level=info msg="StartContainer for \"e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9\"" Feb 13 15:38:25.446462 systemd[1]: Started cri-containerd-e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9.scope - libcontainer container e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9. Feb 13 15:38:25.498517 containerd[1895]: time="2025-02-13T15:38:25.497895296Z" level=info msg="StartContainer for \"e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9\" returns successfully" Feb 13 15:38:25.499912 systemd[1]: cri-containerd-e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9.scope: Deactivated successfully. Feb 13 15:38:25.555919 containerd[1895]: time="2025-02-13T15:38:25.555853533Z" level=info msg="shim disconnected" id=e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9 namespace=k8s.io Feb 13 15:38:25.555919 containerd[1895]: time="2025-02-13T15:38:25.555913118Z" level=warning msg="cleaning up after shim disconnected" id=e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9 namespace=k8s.io Feb 13 15:38:25.555919 containerd[1895]: time="2025-02-13T15:38:25.555924707Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:26.227235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9-rootfs.mount: Deactivated successfully. 
Feb 13 15:38:26.261737 containerd[1895]: time="2025-02-13T15:38:26.261475994Z" level=info msg="CreateContainer within sandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:38:26.302172 containerd[1895]: time="2025-02-13T15:38:26.302128089Z" level=info msg="CreateContainer within sandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2\"" Feb 13 15:38:26.304273 containerd[1895]: time="2025-02-13T15:38:26.303661311Z" level=info msg="StartContainer for \"662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2\"" Feb 13 15:38:26.370366 systemd[1]: Started cri-containerd-662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2.scope - libcontainer container 662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2. Feb 13 15:38:26.410863 systemd[1]: cri-containerd-662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2.scope: Deactivated successfully. Feb 13 15:38:26.422948 containerd[1895]: time="2025-02-13T15:38:26.422828164Z" level=info msg="StartContainer for \"662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2\" returns successfully" Feb 13 15:38:26.461102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2-rootfs.mount: Deactivated successfully. Feb 13 15:38:26.470326 containerd[1895]: time="2025-02-13T15:38:26.470266956Z" level=info msg="shim disconnected" id=662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2 namespace=k8s.io Feb 13 15:38:26.470326 containerd[1895]: time="2025-02-13T15:38:26.470324921Z" level=warning msg="cleaning up after shim disconnected" id=662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2 namespace=k8s.io Feb 13 15:38:26.470719 containerd[1895]: time="2025-02-13T15:38:26.470336217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:27.269467 containerd[1895]: time="2025-02-13T15:38:27.269370351Z" level=info msg="CreateContainer within sandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:38:27.348586 containerd[1895]: time="2025-02-13T15:38:27.348487084Z" level=info msg="CreateContainer within sandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32\"" Feb 13 15:38:27.354007 containerd[1895]: time="2025-02-13T15:38:27.350720923Z" level=info msg="StartContainer for \"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32\"" Feb 13 15:38:27.405276 systemd[1]: Started cri-containerd-e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32.scope - libcontainer container e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32. 
Feb 13 15:38:27.452147 containerd[1895]: time="2025-02-13T15:38:27.452085767Z" level=info msg="StartContainer for \"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32\" returns successfully" Feb 13 15:38:27.740720 kubelet[3490]: I0213 15:38:27.740636 3490 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:38:27.847875 kubelet[3490]: I0213 15:38:27.847799 3490 topology_manager.go:215] "Topology Admit Handler" podUID="94d08a61-6e75-4a4f-bc16-dbca7d74e47d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-btgwp" Feb 13 15:38:27.863867 systemd[1]: Created slice kubepods-burstable-pod94d08a61_6e75_4a4f_bc16_dbca7d74e47d.slice - libcontainer container kubepods-burstable-pod94d08a61_6e75_4a4f_bc16_dbca7d74e47d.slice. Feb 13 15:38:27.876228 kubelet[3490]: I0213 15:38:27.872931 3490 topology_manager.go:215] "Topology Admit Handler" podUID="bdbf5c34-c43b-4a1b-8fdb-12d4df9c8784" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v6vct" Feb 13 15:38:27.888914 systemd[1]: Created slice kubepods-burstable-podbdbf5c34_c43b_4a1b_8fdb_12d4df9c8784.slice - libcontainer container kubepods-burstable-podbdbf5c34_c43b_4a1b_8fdb_12d4df9c8784.slice. Feb 13 15:38:27.944080 kubelet[3490]: I0213 15:38:27.943575 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwf9t\" (UniqueName: \"kubernetes.io/projected/94d08a61-6e75-4a4f-bc16-dbca7d74e47d-kube-api-access-mwf9t\") pod \"coredns-7db6d8ff4d-btgwp\" (UID: \"94d08a61-6e75-4a4f-bc16-dbca7d74e47d\") " pod="kube-system/coredns-7db6d8ff4d-btgwp" Feb 13 15:38:27.944080 kubelet[3490]: I0213 15:38:27.943651 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zk48\" (UniqueName: \"kubernetes.io/projected/bdbf5c34-c43b-4a1b-8fdb-12d4df9c8784-kube-api-access-5zk48\") pod \"coredns-7db6d8ff4d-v6vct\" (UID: \"bdbf5c34-c43b-4a1b-8fdb-12d4df9c8784\") " pod="kube-system/coredns-7db6d8ff4d-v6vct" Feb 13 15:38:27.944080 kubelet[3490]: I0213 15:38:27.943687 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdbf5c34-c43b-4a1b-8fdb-12d4df9c8784-config-volume\") pod \"coredns-7db6d8ff4d-v6vct\" (UID: \"bdbf5c34-c43b-4a1b-8fdb-12d4df9c8784\") " pod="kube-system/coredns-7db6d8ff4d-v6vct" Feb 13 15:38:27.944080 kubelet[3490]: I0213 15:38:27.943720 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94d08a61-6e75-4a4f-bc16-dbca7d74e47d-config-volume\") pod \"coredns-7db6d8ff4d-btgwp\" (UID: \"94d08a61-6e75-4a4f-bc16-dbca7d74e47d\") " pod="kube-system/coredns-7db6d8ff4d-btgwp" Feb 13 15:38:28.173430 containerd[1895]: time="2025-02-13T15:38:28.173390198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-btgwp,Uid:94d08a61-6e75-4a4f-bc16-dbca7d74e47d,Namespace:kube-system,Attempt:0,}" Feb 13 15:38:28.200939 containerd[1895]: time="2025-02-13T15:38:28.200770791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v6vct,Uid:bdbf5c34-c43b-4a1b-8fdb-12d4df9c8784,Namespace:kube-system,Attempt:0,}" Feb 13 15:38:28.454898 kubelet[3490]: I0213 15:38:28.452897 3490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mj9zm" podStartSLOduration=6.4885282459999996 podStartE2EDuration="17.452875712s" 
podCreationTimestamp="2025-02-13 15:38:11 +0000 UTC" firstStartedPulling="2025-02-13 15:38:12.134104252 +0000 UTC m=+13.270744695" lastFinishedPulling="2025-02-13 15:38:23.098451727 +0000 UTC m=+24.235092161" observedRunningTime="2025-02-13 15:38:28.443425634 +0000 UTC m=+29.580066088" watchObservedRunningTime="2025-02-13 15:38:28.452875712 +0000 UTC m=+29.589516157" Feb 13 15:38:33.374583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2310215310.mount: Deactivated successfully. Feb 13 15:38:34.268802 containerd[1895]: time="2025-02-13T15:38:34.268752308Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:38:34.269875 containerd[1895]: time="2025-02-13T15:38:34.269826721Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:38:34.272379 containerd[1895]: time="2025-02-13T15:38:34.270774556Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:38:34.272379 containerd[1895]: time="2025-02-13T15:38:34.272231290Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 11.172755106s" Feb 13 15:38:34.272379 containerd[1895]: time="2025-02-13T15:38:34.272269087Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:38:34.275611 containerd[1895]: time="2025-02-13T15:38:34.275582088Z" level=info msg="CreateContainer within sandbox \"bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:38:34.300433 containerd[1895]: time="2025-02-13T15:38:34.300389129Z" level=info msg="CreateContainer within sandbox \"bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542\"" Feb 13 15:38:34.302642 containerd[1895]: time="2025-02-13T15:38:34.301462841Z" level=info msg="StartContainer for \"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542\"" Feb 13 15:38:34.348338 systemd[1]: Started cri-containerd-6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542.scope - libcontainer container 6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542. 
Feb 13 15:38:34.391863 containerd[1895]: time="2025-02-13T15:38:34.391797805Z" level=info msg="StartContainer for \"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542\" returns successfully" Feb 13 15:38:38.170374 systemd-networkd[1742]: cilium_host: Link UP Feb 13 15:38:38.171180 systemd-networkd[1742]: cilium_net: Link UP Feb 13 15:38:38.171443 systemd-networkd[1742]: cilium_net: Gained carrier Feb 13 15:38:38.171780 systemd-networkd[1742]: cilium_host: Gained carrier Feb 13 15:38:38.175032 (udev-worker)[4311]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:38:38.175777 (udev-worker)[4313]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:38:38.479097 systemd-networkd[1742]: cilium_vxlan: Link UP Feb 13 15:38:38.479920 systemd-networkd[1742]: cilium_vxlan: Gained carrier Feb 13 15:38:38.701009 systemd-networkd[1742]: cilium_host: Gained IPv6LL Feb 13 15:38:38.893150 systemd-networkd[1742]: cilium_net: Gained IPv6LL Feb 13 15:38:39.293495 kernel: NET: Registered PF_ALG protocol family Feb 13 15:38:39.852411 systemd-networkd[1742]: cilium_vxlan: Gained IPv6LL Feb 13 15:38:40.420187 systemd-networkd[1742]: lxc_health: Link UP Feb 13 15:38:40.427899 systemd-networkd[1742]: lxc_health: Gained carrier Feb 13 15:38:40.938102 (udev-worker)[4338]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:38:40.945290 systemd-networkd[1742]: lxcbf04735de200: Link UP Feb 13 15:38:40.959236 systemd-networkd[1742]: lxc1aea2235b227: Link UP Feb 13 15:38:40.965096 kernel: eth0: renamed from tmp5c0f2 Feb 13 15:38:40.973759 kernel: eth0: renamed from tmp2ee4a Feb 13 15:38:40.980162 (udev-worker)[4337]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:38:40.981569 systemd-networkd[1742]: lxc1aea2235b227: Gained carrier Feb 13 15:38:40.981893 systemd-networkd[1742]: lxcbf04735de200: Gained carrier Feb 13 15:38:41.943069 kubelet[3490]: I0213 15:38:41.941731 3490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-tprsm" podStartSLOduration=8.7068615 podStartE2EDuration="29.941708641s" podCreationTimestamp="2025-02-13 15:38:12 +0000 UTC" firstStartedPulling="2025-02-13 15:38:13.038675906 +0000 UTC m=+14.175316353" lastFinishedPulling="2025-02-13 15:38:34.273523059 +0000 UTC m=+35.410163494" observedRunningTime="2025-02-13 15:38:35.531035221 +0000 UTC m=+36.667675676" watchObservedRunningTime="2025-02-13 15:38:41.941708641 +0000 UTC m=+43.078349093" Feb 13 15:38:42.156298 systemd-networkd[1742]: lxcbf04735de200: Gained IPv6LL Feb 13 15:38:42.220261 systemd-networkd[1742]: lxc_health: Gained IPv6LL Feb 13 15:38:42.988551 systemd-networkd[1742]: lxc1aea2235b227: Gained IPv6LL Feb 13 15:38:45.058540 ntpd[1869]: Listen normally on 8 cilium_host 192.168.0.4:123 Feb 13 15:38:45.058665 ntpd[1869]: Listen normally on 9 cilium_net [fe80::ecf2:cdff:fe4d:a6fe%4]:123 Feb 13 15:38:45.058735 ntpd[1869]: Listen normally on 10 cilium_host [fe80::94da:28ff:fec8:5273%5]:123 Feb 13 15:38:45.058779 ntpd[1869]: Listen normally on 11 cilium_vxlan [fe80::54e2:8dff:fe01:e42a%6]:123 Feb 13 15:38:45.058818 ntpd[1869]: Listen normally on 12 lxc_health [fe80::45f:21ff:feba:213c%8]:123 Feb 13 15:38:45.058859 ntpd[1869]: Listen normally on 13 lxcbf04735de200 [fe80::f88d:ccff:fe0d:d520%10]:123 Feb 13 15:38:45.058898 ntpd[1869]: Listen normally on 14 lxc1aea2235b227 [fe80::e435:35ff:fe65:631e%12]:123 Feb 13 15:38:45.270169 systemd[1]: Started sshd@9-172.31.28.54:22-139.178.89.65:52234.service - OpenSSH per-connection server daemon (139.178.89.65:52234). Feb 13 15:38:45.521098 sshd[4682]: Accepted publickey for core from 139.178.89.65 port 52234 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:38:45.524002 sshd-session[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:45.559091 systemd-logind[1876]: New session 10 of user core. Feb 13 15:38:45.564430 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:38:46.777250 sshd[4684]: Connection closed by 139.178.89.65 port 52234 Feb 13 15:38:46.785377 sshd-session[4682]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:46.790088 systemd-logind[1876]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:38:46.791307 systemd[1]: sshd@9-172.31.28.54:22-139.178.89.65:52234.service: Deactivated successfully. Feb 13 15:38:46.795882 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:38:46.800823 systemd-logind[1876]: Removed session 10. Feb 13 15:38:47.713751 containerd[1895]: time="2025-02-13T15:38:47.713605679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:38:47.713751 containerd[1895]: time="2025-02-13T15:38:47.713699396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:38:47.717902 containerd[1895]: time="2025-02-13T15:38:47.715503440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:47.723188 containerd[1895]: time="2025-02-13T15:38:47.723094933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:47.734140 containerd[1895]: time="2025-02-13T15:38:47.733641329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:38:47.734140 containerd[1895]: time="2025-02-13T15:38:47.733743378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:38:47.738316 containerd[1895]: time="2025-02-13T15:38:47.735110013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:47.738316 containerd[1895]: time="2025-02-13T15:38:47.735269353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:38:47.860569 systemd[1]: run-containerd-runc-k8s.io-2ee4ab89dd62eda9ccf96171c23f028bb6325f8c307330643b7cdb074bc1e424-runc.2G18xW.mount: Deactivated successfully. Feb 13 15:38:47.900511 systemd[1]: Started cri-containerd-2ee4ab89dd62eda9ccf96171c23f028bb6325f8c307330643b7cdb074bc1e424.scope - libcontainer container 2ee4ab89dd62eda9ccf96171c23f028bb6325f8c307330643b7cdb074bc1e424. Feb 13 15:38:47.922834 systemd[1]: Started cri-containerd-5c0f28d69376a23b2eacb002bd74f8296897533fe4962be6f646e46c0d361557.scope - libcontainer container 5c0f28d69376a23b2eacb002bd74f8296897533fe4962be6f646e46c0d361557. Feb 13 15:38:48.086328 containerd[1895]: time="2025-02-13T15:38:48.086079522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v6vct,Uid:bdbf5c34-c43b-4a1b-8fdb-12d4df9c8784,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c0f28d69376a23b2eacb002bd74f8296897533fe4962be6f646e46c0d361557\"" Feb 13 15:38:48.095899 containerd[1895]: time="2025-02-13T15:38:48.095585225Z" level=info msg="CreateContainer within sandbox \"5c0f28d69376a23b2eacb002bd74f8296897533fe4962be6f646e46c0d361557\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:38:48.152063 containerd[1895]: time="2025-02-13T15:38:48.151342190Z" level=info msg="CreateContainer within sandbox \"5c0f28d69376a23b2eacb002bd74f8296897533fe4962be6f646e46c0d361557\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"904506682c6f2e780c06aaa531929040fadc8c0600e102b3670fd4a591fbb697\"" Feb 13 15:38:48.152559 containerd[1895]: time="2025-02-13T15:38:48.152524040Z" level=info msg="StartContainer for \"904506682c6f2e780c06aaa531929040fadc8c0600e102b3670fd4a591fbb697\"" Feb 13 15:38:48.163506 containerd[1895]: time="2025-02-13T15:38:48.163432122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-btgwp,Uid:94d08a61-6e75-4a4f-bc16-dbca7d74e47d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ee4ab89dd62eda9ccf96171c23f028bb6325f8c307330643b7cdb074bc1e424\"" Feb 13 15:38:48.171859 containerd[1895]: time="2025-02-13T15:38:48.171009527Z" level=info msg="CreateContainer within sandbox \"2ee4ab89dd62eda9ccf96171c23f028bb6325f8c307330643b7cdb074bc1e424\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:38:48.199465 containerd[1895]: time="2025-02-13T15:38:48.199420180Z" level=info msg="CreateContainer within sandbox \"2ee4ab89dd62eda9ccf96171c23f028bb6325f8c307330643b7cdb074bc1e424\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d0f335dcb1dc20e649185cfd586b71f94b127a7537cd726414f048d16c10835\"" Feb 13 15:38:48.200703 containerd[1895]: time="2025-02-13T15:38:48.200578381Z" level=info msg="StartContainer for \"9d0f335dcb1dc20e649185cfd586b71f94b127a7537cd726414f048d16c10835\"" Feb 13 15:38:48.222275 systemd[1]: Started cri-containerd-904506682c6f2e780c06aaa531929040fadc8c0600e102b3670fd4a591fbb697.scope - libcontainer container 904506682c6f2e780c06aaa531929040fadc8c0600e102b3670fd4a591fbb697. Feb 13 15:38:48.272306 systemd[1]: Started cri-containerd-9d0f335dcb1dc20e649185cfd586b71f94b127a7537cd726414f048d16c10835.scope - libcontainer container 9d0f335dcb1dc20e649185cfd586b71f94b127a7537cd726414f048d16c10835. 
Feb 13 15:38:48.302398 containerd[1895]: time="2025-02-13T15:38:48.301622690Z" level=info msg="StartContainer for \"904506682c6f2e780c06aaa531929040fadc8c0600e102b3670fd4a591fbb697\" returns successfully" Feb 13 15:38:48.331738 containerd[1895]: time="2025-02-13T15:38:48.331628806Z" level=info msg="StartContainer for \"9d0f335dcb1dc20e649185cfd586b71f94b127a7537cd726414f048d16c10835\" returns successfully" Feb 13 15:38:48.454036 kubelet[3490]: I0213 15:38:48.453904 3490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-v6vct" podStartSLOduration=36.453885808 podStartE2EDuration="36.453885808s" podCreationTimestamp="2025-02-13 15:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:48.452796321 +0000 UTC m=+49.589436786" watchObservedRunningTime="2025-02-13 15:38:48.453885808 +0000 UTC m=+49.590526263" Feb 13 15:38:48.473875 kubelet[3490]: I0213 15:38:48.472993 3490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-btgwp" podStartSLOduration=36.472970903 podStartE2EDuration="36.472970903s" podCreationTimestamp="2025-02-13 15:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:48.472310886 +0000 UTC m=+49.608951341" watchObservedRunningTime="2025-02-13 15:38:48.472970903 +0000 UTC m=+49.609611361" Feb 13 15:38:51.829514 systemd[1]: Started sshd@10-172.31.28.54:22-139.178.89.65:52246.service - OpenSSH per-connection server daemon (139.178.89.65:52246). Feb 13 15:38:52.124166 sshd[4876]: Accepted publickey for core from 139.178.89.65 port 52246 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:38:52.125733 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:52.132864 systemd-logind[1876]: New session 11 of user core. Feb 13 15:38:52.137277 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:38:52.621387 sshd[4878]: Connection closed by 139.178.89.65 port 52246 Feb 13 15:38:52.622335 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:52.625574 systemd[1]: sshd@10-172.31.28.54:22-139.178.89.65:52246.service: Deactivated successfully. Feb 13 15:38:52.628345 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:38:52.630817 systemd-logind[1876]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:38:52.632308 systemd-logind[1876]: Removed session 11. Feb 13 15:38:57.656432 systemd[1]: Started sshd@11-172.31.28.54:22-139.178.89.65:36236.service - OpenSSH per-connection server daemon (139.178.89.65:36236). Feb 13 15:38:57.832669 sshd[4890]: Accepted publickey for core from 139.178.89.65 port 36236 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:38:57.834188 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:57.841293 systemd-logind[1876]: New session 12 of user core. Feb 13 15:38:57.847262 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:38:58.055084 sshd[4892]: Connection closed by 139.178.89.65 port 36236 Feb 13 15:38:58.056730 sshd-session[4890]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:58.061319 systemd[1]: sshd@11-172.31.28.54:22-139.178.89.65:36236.service: Deactivated successfully. 
Feb 13 15:38:58.064226 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:38:58.065322 systemd-logind[1876]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:38:58.066485 systemd-logind[1876]: Removed session 12. Feb 13 15:39:03.099895 systemd[1]: Started sshd@12-172.31.28.54:22-139.178.89.65:36244.service - OpenSSH per-connection server daemon (139.178.89.65:36244). Feb 13 15:39:03.295625 sshd[4906]: Accepted publickey for core from 139.178.89.65 port 36244 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:03.297429 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:03.303126 systemd-logind[1876]: New session 13 of user core. Feb 13 15:39:03.307249 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:39:03.533891 sshd[4908]: Connection closed by 139.178.89.65 port 36244 Feb 13 15:39:03.535591 sshd-session[4906]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:03.539999 systemd-logind[1876]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:39:03.540822 systemd[1]: sshd@12-172.31.28.54:22-139.178.89.65:36244.service: Deactivated successfully. Feb 13 15:39:03.544164 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:39:03.545857 systemd-logind[1876]: Removed session 13. Feb 13 15:39:03.572524 systemd[1]: Started sshd@13-172.31.28.54:22-139.178.89.65:36252.service - OpenSSH per-connection server daemon (139.178.89.65:36252). Feb 13 15:39:03.753539 sshd[4920]: Accepted publickey for core from 139.178.89.65 port 36252 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:03.755502 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:03.763299 systemd-logind[1876]: New session 14 of user core. Feb 13 15:39:03.768257 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:39:04.173364 sshd[4922]: Connection closed by 139.178.89.65 port 36252 Feb 13 15:39:04.174925 sshd-session[4920]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:04.183231 systemd-logind[1876]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:39:04.186266 systemd[1]: sshd@13-172.31.28.54:22-139.178.89.65:36252.service: Deactivated successfully. Feb 13 15:39:04.193254 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:39:04.217840 systemd-logind[1876]: Removed session 14. Feb 13 15:39:04.224480 systemd[1]: Started sshd@14-172.31.28.54:22-139.178.89.65:36256.service - OpenSSH per-connection server daemon (139.178.89.65:36256). Feb 13 15:39:04.421474 sshd[4930]: Accepted publickey for core from 139.178.89.65 port 36256 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:04.426258 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:04.442156 systemd-logind[1876]: New session 15 of user core. Feb 13 15:39:04.448346 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:39:04.713910 sshd[4932]: Connection closed by 139.178.89.65 port 36256 Feb 13 15:39:04.715544 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:04.731460 systemd[1]: sshd@14-172.31.28.54:22-139.178.89.65:36256.service: Deactivated successfully. Feb 13 15:39:04.734579 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:39:04.735882 systemd-logind[1876]: Session 15 logged out. 
Waiting for processes to exit. Feb 13 15:39:04.737564 systemd-logind[1876]: Removed session 15. Feb 13 15:39:09.752394 systemd[1]: Started sshd@15-172.31.28.54:22-139.178.89.65:33368.service - OpenSSH per-connection server daemon (139.178.89.65:33368). Feb 13 15:39:09.941010 sshd[4944]: Accepted publickey for core from 139.178.89.65 port 33368 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:09.943393 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:09.950564 systemd-logind[1876]: New session 16 of user core. Feb 13 15:39:09.959272 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:39:10.188814 sshd[4946]: Connection closed by 139.178.89.65 port 33368 Feb 13 15:39:10.190524 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:10.195588 systemd[1]: sshd@15-172.31.28.54:22-139.178.89.65:33368.service: Deactivated successfully. Feb 13 15:39:10.200137 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:39:10.201676 systemd-logind[1876]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:39:10.203656 systemd-logind[1876]: Removed session 16. Feb 13 15:39:15.238438 systemd[1]: Started sshd@16-172.31.28.54:22-139.178.89.65:34902.service - OpenSSH per-connection server daemon (139.178.89.65:34902). Feb 13 15:39:15.418931 sshd[4962]: Accepted publickey for core from 139.178.89.65 port 34902 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:15.420909 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:15.435558 systemd-logind[1876]: New session 17 of user core. Feb 13 15:39:15.444984 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:39:15.654030 sshd[4964]: Connection closed by 139.178.89.65 port 34902 Feb 13 15:39:15.655646 sshd-session[4962]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:15.659977 systemd[1]: sshd@16-172.31.28.54:22-139.178.89.65:34902.service: Deactivated successfully. Feb 13 15:39:15.663546 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:39:15.664636 systemd-logind[1876]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:39:15.667227 systemd-logind[1876]: Removed session 17. Feb 13 15:39:20.690032 systemd[1]: Started sshd@17-172.31.28.54:22-139.178.89.65:34916.service - OpenSSH per-connection server daemon (139.178.89.65:34916). Feb 13 15:39:20.881546 sshd[4975]: Accepted publickey for core from 139.178.89.65 port 34916 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:20.882708 sshd-session[4975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:20.889607 systemd-logind[1876]: New session 18 of user core. Feb 13 15:39:20.895235 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:39:21.108489 sshd[4977]: Connection closed by 139.178.89.65 port 34916 Feb 13 15:39:21.110268 sshd-session[4975]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:21.113756 systemd[1]: sshd@17-172.31.28.54:22-139.178.89.65:34916.service: Deactivated successfully. Feb 13 15:39:21.116614 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:39:21.119536 systemd-logind[1876]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:39:21.121101 systemd-logind[1876]: Removed session 18. 
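The repeating blocks in this stretch are systemd socket-activated SSH: each accepted TCP connection spawns a per-connection sshd@.service instance whose name encodes a counter plus the local and peer endpoints, e.g. sshd@17-172.31.28.54:22-139.178.89.65:34916.service. A small sketch reconstructing that instance name, with the pattern inferred from these journal lines rather than stated anywhere in them:

```go
package main

import "fmt"

// Build the per-connection unit name in the form seen in this log:
// sshd@<n>-<local addr>:<port>-<peer addr>:<port>.service
// (pattern inferred from the journal lines above).
func sshdUnitName(n int, localAddr string, localPort int, peerAddr string, peerPort int) string {
	return fmt.Sprintf("sshd@%d-%s:%d-%s:%d.service", n, localAddr, localPort, peerAddr, peerPort)
}

func main() {
	fmt.Println(sshdUnitName(17, "172.31.28.54", 22, "139.178.89.65", 34916))
	// sshd@17-172.31.28.54:22-139.178.89.65:34916.service
}
```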
Feb 13 15:39:21.139588 systemd[1]: Started sshd@18-172.31.28.54:22-139.178.89.65:34924.service - OpenSSH per-connection server daemon (139.178.89.65:34924). Feb 13 15:39:21.312431 sshd[4988]: Accepted publickey for core from 139.178.89.65 port 34924 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:21.316842 sshd-session[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:21.326158 systemd-logind[1876]: New session 19 of user core. Feb 13 15:39:21.335312 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:39:21.954180 sshd[4990]: Connection closed by 139.178.89.65 port 34924 Feb 13 15:39:21.956006 sshd-session[4988]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:21.961517 systemd[1]: sshd@18-172.31.28.54:22-139.178.89.65:34924.service: Deactivated successfully. Feb 13 15:39:21.964113 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:39:21.966122 systemd-logind[1876]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:39:21.967696 systemd-logind[1876]: Removed session 19. Feb 13 15:39:21.991424 systemd[1]: Started sshd@19-172.31.28.54:22-139.178.89.65:34930.service - OpenSSH per-connection server daemon (139.178.89.65:34930). Feb 13 15:39:22.213644 sshd[4999]: Accepted publickey for core from 139.178.89.65 port 34930 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:22.215481 sshd-session[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:22.222254 systemd-logind[1876]: New session 20 of user core. Feb 13 15:39:22.227974 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:39:24.527731 sshd[5001]: Connection closed by 139.178.89.65 port 34930 Feb 13 15:39:24.529254 sshd-session[4999]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:24.542542 systemd[1]: sshd@19-172.31.28.54:22-139.178.89.65:34930.service: Deactivated successfully. Feb 13 15:39:24.555885 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:39:24.567457 systemd-logind[1876]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:39:24.613595 systemd[1]: Started sshd@20-172.31.28.54:22-139.178.89.65:54492.service - OpenSSH per-connection server daemon (139.178.89.65:54492). Feb 13 15:39:24.615175 systemd-logind[1876]: Removed session 20. Feb 13 15:39:24.799212 sshd[5017]: Accepted publickey for core from 139.178.89.65 port 54492 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:24.801070 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:24.808147 systemd-logind[1876]: New session 21 of user core. Feb 13 15:39:24.814288 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:39:25.478714 sshd[5019]: Connection closed by 139.178.89.65 port 54492 Feb 13 15:39:25.480801 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:25.487516 systemd-logind[1876]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:39:25.488562 systemd[1]: sshd@20-172.31.28.54:22-139.178.89.65:54492.service: Deactivated successfully. Feb 13 15:39:25.493316 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:39:25.495715 systemd-logind[1876]: Removed session 21. Feb 13 15:39:25.517445 systemd[1]: Started sshd@21-172.31.28.54:22-139.178.89.65:54502.service - OpenSSH per-connection server daemon (139.178.89.65:54502). 
Feb 13 15:39:25.705841 sshd[5028]: Accepted publickey for core from 139.178.89.65 port 54502 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:25.707815 sshd-session[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:25.713317 systemd-logind[1876]: New session 22 of user core. Feb 13 15:39:25.718225 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:39:25.965057 sshd[5030]: Connection closed by 139.178.89.65 port 54502 Feb 13 15:39:25.968859 sshd-session[5028]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:25.973440 systemd[1]: sshd@21-172.31.28.54:22-139.178.89.65:54502.service: Deactivated successfully. Feb 13 15:39:25.976558 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:39:25.978164 systemd-logind[1876]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:39:25.979580 systemd-logind[1876]: Removed session 22. Feb 13 15:39:31.001621 systemd[1]: Started sshd@22-172.31.28.54:22-139.178.89.65:54510.service - OpenSSH per-connection server daemon (139.178.89.65:54510). Feb 13 15:39:31.193588 sshd[5041]: Accepted publickey for core from 139.178.89.65 port 54510 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:31.199368 sshd-session[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:31.214714 systemd-logind[1876]: New session 23 of user core. Feb 13 15:39:31.229256 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:39:31.413682 sshd[5043]: Connection closed by 139.178.89.65 port 54510 Feb 13 15:39:31.415307 sshd-session[5041]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:31.419692 systemd-logind[1876]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:39:31.420519 systemd[1]: sshd@22-172.31.28.54:22-139.178.89.65:54510.service: Deactivated successfully. Feb 13 15:39:31.423574 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:39:31.424649 systemd-logind[1876]: Removed session 23. Feb 13 15:39:36.452512 systemd[1]: Started sshd@23-172.31.28.54:22-139.178.89.65:47480.service - OpenSSH per-connection server daemon (139.178.89.65:47480). Feb 13 15:39:36.621523 sshd[5057]: Accepted publickey for core from 139.178.89.65 port 47480 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:36.623462 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:36.628001 systemd-logind[1876]: New session 24 of user core. Feb 13 15:39:36.632249 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:39:36.845817 sshd[5059]: Connection closed by 139.178.89.65 port 47480 Feb 13 15:39:36.847597 sshd-session[5057]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:36.851706 systemd-logind[1876]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:39:36.852456 systemd[1]: sshd@23-172.31.28.54:22-139.178.89.65:47480.service: Deactivated successfully. Feb 13 15:39:36.855744 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:39:36.857503 systemd-logind[1876]: Removed session 24. Feb 13 15:39:41.882427 systemd[1]: Started sshd@24-172.31.28.54:22-139.178.89.65:47484.service - OpenSSH per-connection server daemon (139.178.89.65:47484). 
Feb 13 15:39:42.061684 sshd[5070]: Accepted publickey for core from 139.178.89.65 port 47484 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:42.063350 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:42.068939 systemd-logind[1876]: New session 25 of user core. Feb 13 15:39:42.081754 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:39:42.316598 sshd[5072]: Connection closed by 139.178.89.65 port 47484 Feb 13 15:39:42.319516 sshd-session[5070]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:42.324230 systemd[1]: sshd@24-172.31.28.54:22-139.178.89.65:47484.service: Deactivated successfully. Feb 13 15:39:42.327178 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:39:42.328024 systemd-logind[1876]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:39:42.330534 systemd-logind[1876]: Removed session 25. Feb 13 15:39:47.359490 systemd[1]: Started sshd@25-172.31.28.54:22-139.178.89.65:41134.service - OpenSSH per-connection server daemon (139.178.89.65:41134). Feb 13 15:39:47.525773 sshd[5085]: Accepted publickey for core from 139.178.89.65 port 41134 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:47.527234 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:47.532872 systemd-logind[1876]: New session 26 of user core. Feb 13 15:39:47.540289 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:39:47.780355 sshd[5087]: Connection closed by 139.178.89.65 port 41134 Feb 13 15:39:47.781428 sshd-session[5085]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:47.793702 systemd[1]: sshd@25-172.31.28.54:22-139.178.89.65:41134.service: Deactivated successfully. Feb 13 15:39:47.799116 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:39:47.800286 systemd-logind[1876]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:39:47.801728 systemd-logind[1876]: Removed session 26. Feb 13 15:39:47.813519 systemd[1]: Started sshd@26-172.31.28.54:22-139.178.89.65:41136.service - OpenSSH per-connection server daemon (139.178.89.65:41136). Feb 13 15:39:48.011613 sshd[5099]: Accepted publickey for core from 139.178.89.65 port 41136 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:48.013328 sshd-session[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:48.019326 systemd-logind[1876]: New session 27 of user core. Feb 13 15:39:48.022289 systemd[1]: Started session-27.scope - Session 27 of User core. 
Feb 13 15:39:50.109126 containerd[1895]: time="2025-02-13T15:39:50.109049668Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:39:50.135113 containerd[1895]: time="2025-02-13T15:39:50.135073485Z" level=info msg="StopContainer for \"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542\" with timeout 30 (s)" Feb 13 15:39:50.135757 containerd[1895]: time="2025-02-13T15:39:50.135720973Z" level=info msg="StopContainer for \"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32\" with timeout 2 (s)" Feb 13 15:39:50.138158 containerd[1895]: time="2025-02-13T15:39:50.138125555Z" level=info msg="Stop container \"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32\" with signal terminated" Feb 13 15:39:50.138953 containerd[1895]: time="2025-02-13T15:39:50.138789138Z" level=info msg="Stop container \"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542\" with signal terminated" Feb 13 15:39:50.159630 systemd-networkd[1742]: lxc_health: Link DOWN Feb 13 15:39:50.159641 systemd-networkd[1742]: lxc_health: Lost carrier Feb 13 15:39:50.181165 systemd[1]: cri-containerd-6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542.scope: Deactivated successfully. Feb 13 15:39:50.203080 systemd[1]: cri-containerd-e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32.scope: Deactivated successfully. Feb 13 15:39:50.203357 systemd[1]: cri-containerd-e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32.scope: Consumed 8.931s CPU time. Feb 13 15:39:50.237001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542-rootfs.mount: Deactivated successfully. Feb 13 15:39:50.266953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32-rootfs.mount: Deactivated successfully. 
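The teardown starting at 15:39:50 is driven through the CRI: containerd is asked to stop the cilium-agent container with a 2 s grace period (SIGTERM, then SIGKILL once the timeout lapses) and the operator container with 30 s, after which systemd reports the cri-containerd scopes deactivated and the rootfs mounts cleaned up. A hedged Go sketch of issuing the same StopContainer call against containerd's CRI socket; the socket path is the common containerd default (an assumption for this host), and the container ID is copied from the log:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI socket (default path; assumed here).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Equivalent of the logged `StopContainer ... with timeout 2 (s)`:
	// the runtime delivers SIGTERM, then kills after the timeout.
	_, err = client.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32",
		Timeout:     2,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```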
Feb 13 15:39:50.270697 containerd[1895]: time="2025-02-13T15:39:50.270440305Z" level=info msg="shim disconnected" id=6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542 namespace=k8s.io Feb 13 15:39:50.271007 containerd[1895]: time="2025-02-13T15:39:50.270897097Z" level=warning msg="cleaning up after shim disconnected" id=6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542 namespace=k8s.io Feb 13 15:39:50.271007 containerd[1895]: time="2025-02-13T15:39:50.270921017Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:39:50.275292 containerd[1895]: time="2025-02-13T15:39:50.275114547Z" level=info msg="shim disconnected" id=e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32 namespace=k8s.io Feb 13 15:39:50.275292 containerd[1895]: time="2025-02-13T15:39:50.275230249Z" level=warning msg="cleaning up after shim disconnected" id=e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32 namespace=k8s.io Feb 13 15:39:50.275292 containerd[1895]: time="2025-02-13T15:39:50.275244073Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:39:50.305417 containerd[1895]: time="2025-02-13T15:39:50.305368082Z" level=info msg="StopContainer for \"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32\" returns successfully" Feb 13 15:39:50.306383 containerd[1895]: time="2025-02-13T15:39:50.306344865Z" level=info msg="StopContainer for \"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542\" returns successfully" Feb 13 15:39:50.311509 containerd[1895]: time="2025-02-13T15:39:50.311476118Z" level=info msg="StopPodSandbox for \"bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f\"" Feb 13 15:39:50.311730 containerd[1895]: time="2025-02-13T15:39:50.311693857Z" level=info msg="StopPodSandbox for \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\"" Feb 13 15:39:50.313702 containerd[1895]: time="2025-02-13T15:39:50.313082708Z" level=info msg="Container to stop \"5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:39:50.313702 containerd[1895]: time="2025-02-13T15:39:50.313298361Z" level=info msg="Container to stop \"e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:39:50.313702 containerd[1895]: time="2025-02-13T15:39:50.313317129Z" level=info msg="Container to stop \"662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:39:50.313702 containerd[1895]: time="2025-02-13T15:39:50.313326510Z" level=info msg="Container to stop \"2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:39:50.313702 containerd[1895]: time="2025-02-13T15:39:50.313335299Z" level=info msg="Container to stop \"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:39:50.313702 containerd[1895]: time="2025-02-13T15:39:50.313103900Z" level=info msg="Container to stop \"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:39:50.316468 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f-shm.mount: Deactivated successfully. Feb 13 15:39:50.316646 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5-shm.mount: Deactivated successfully. Feb 13 15:39:50.334033 systemd[1]: cri-containerd-a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5.scope: Deactivated successfully. Feb 13 15:39:50.338387 systemd[1]: cri-containerd-bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f.scope: Deactivated successfully. Feb 13 15:39:50.378360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5-rootfs.mount: Deactivated successfully. Feb 13 15:39:50.395549 containerd[1895]: time="2025-02-13T15:39:50.395256956Z" level=info msg="shim disconnected" id=a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5 namespace=k8s.io Feb 13 15:39:50.396298 containerd[1895]: time="2025-02-13T15:39:50.395335655Z" level=warning msg="cleaning up after shim disconnected" id=a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5 namespace=k8s.io Feb 13 15:39:50.396400 containerd[1895]: time="2025-02-13T15:39:50.396303310Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:39:50.397160 containerd[1895]: time="2025-02-13T15:39:50.395518770Z" level=info msg="shim disconnected" id=bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f namespace=k8s.io Feb 13 15:39:50.397397 containerd[1895]: time="2025-02-13T15:39:50.397255433Z" level=warning msg="cleaning up after shim disconnected" id=bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f namespace=k8s.io Feb 13 15:39:50.397397 containerd[1895]: time="2025-02-13T15:39:50.397271977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:39:50.433442 containerd[1895]: time="2025-02-13T15:39:50.433399059Z" level=info msg="TearDown network for sandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" successfully" Feb 13 15:39:50.433442 containerd[1895]: time="2025-02-13T15:39:50.433439517Z" level=info msg="StopPodSandbox for \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" returns successfully" Feb 13 15:39:50.436698 containerd[1895]: time="2025-02-13T15:39:50.436658452Z" level=info msg="TearDown network for sandbox \"bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f\" successfully" Feb 13 15:39:50.436698 containerd[1895]: time="2025-02-13T15:39:50.436691018Z" level=info msg="StopPodSandbox for \"bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f\" returns successfully" Feb 13 15:39:50.614139 kubelet[3490]: I0213 15:39:50.613375 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cilium-cgroup\") pod \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " Feb 13 15:39:50.614139 kubelet[3490]: I0213 15:39:50.613471 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-hubble-tls\") pod \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " Feb 13 15:39:50.614139 kubelet[3490]: I0213 15:39:50.613494 3490 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-hostproc\") pod \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " Feb 13 15:39:50.614139 kubelet[3490]: I0213 15:39:50.613517 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-lib-modules\") pod \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " Feb 13 15:39:50.614139 kubelet[3490]: I0213 15:39:50.613544 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b637a6ad-d5ef-4210-bf0d-674684d142f0-cilium-config-path\") pod \"b637a6ad-d5ef-4210-bf0d-674684d142f0\" (UID: \"b637a6ad-d5ef-4210-bf0d-674684d142f0\") " Feb 13 15:39:50.614139 kubelet[3490]: I0213 15:39:50.613570 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-clustermesh-secrets\") pod \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " Feb 13 15:39:50.616523 kubelet[3490]: I0213 15:39:50.613595 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cilium-config-path\") pod \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " Feb 13 15:39:50.616523 kubelet[3490]: I0213 15:39:50.613620 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frbzc\" (UniqueName: \"kubernetes.io/projected/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-kube-api-access-frbzc\") pod \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " Feb 13 15:39:50.616523 kubelet[3490]: I0213 15:39:50.613645 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-host-proc-sys-kernel\") pod \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " Feb 13 15:39:50.616523 kubelet[3490]: I0213 15:39:50.613667 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-bpf-maps\") pod \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " Feb 13 15:39:50.616523 kubelet[3490]: I0213 15:39:50.613689 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-etc-cni-netd\") pod \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " Feb 13 15:39:50.616523 kubelet[3490]: I0213 15:39:50.613712 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cilium-run\") pod \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " Feb 13 15:39:50.616814 kubelet[3490]: I0213 15:39:50.613733 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-xtables-lock\") pod \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " Feb 13 15:39:50.616814 kubelet[3490]: I0213 15:39:50.613871 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-host-proc-sys-net\") pod \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " Feb 13 15:39:50.616814 kubelet[3490]: I0213 15:39:50.613908 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4ll2\" (UniqueName: \"kubernetes.io/projected/b637a6ad-d5ef-4210-bf0d-674684d142f0-kube-api-access-h4ll2\") pod \"b637a6ad-d5ef-4210-bf0d-674684d142f0\" (UID: \"b637a6ad-d5ef-4210-bf0d-674684d142f0\") " Feb 13 15:39:50.616814 kubelet[3490]: I0213 15:39:50.613933 3490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cni-path\") pod \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\" (UID: \"9c371ca4-6e15-4e8a-92e1-58b1edc7bca8\") " Feb 13 15:39:50.619953 kubelet[3490]: I0213 15:39:50.614118 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cni-path" (OuterVolumeSpecName: "cni-path") pod "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" (UID: "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:39:50.619953 kubelet[3490]: I0213 15:39:50.612713 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" (UID: "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:39:50.619953 kubelet[3490]: I0213 15:39:50.619656 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" (UID: "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:39:50.619953 kubelet[3490]: I0213 15:39:50.619688 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" (UID: "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:39:50.619953 kubelet[3490]: I0213 15:39:50.619708 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" (UID: "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:39:50.620403 kubelet[3490]: I0213 15:39:50.619727 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" (UID: "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:39:50.620403 kubelet[3490]: I0213 15:39:50.619810 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" (UID: "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:39:50.620403 kubelet[3490]: I0213 15:39:50.619839 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" (UID: "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:39:50.647861 kubelet[3490]: I0213 15:39:50.647686 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-hostproc" (OuterVolumeSpecName: "hostproc") pod "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" (UID: "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:39:50.651966 kubelet[3490]: I0213 15:39:50.651728 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" (UID: "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:39:50.656488 kubelet[3490]: I0213 15:39:50.655650 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b637a6ad-d5ef-4210-bf0d-674684d142f0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b637a6ad-d5ef-4210-bf0d-674684d142f0" (UID: "b637a6ad-d5ef-4210-bf0d-674684d142f0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:39:50.659439 kubelet[3490]: I0213 15:39:50.659398 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" (UID: "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:39:50.662738 kubelet[3490]: I0213 15:39:50.662298 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" (UID: "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:39:50.662738 kubelet[3490]: I0213 15:39:50.662656 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b637a6ad-d5ef-4210-bf0d-674684d142f0-kube-api-access-h4ll2" (OuterVolumeSpecName: "kube-api-access-h4ll2") pod "b637a6ad-d5ef-4210-bf0d-674684d142f0" (UID: "b637a6ad-d5ef-4210-bf0d-674684d142f0"). InnerVolumeSpecName "kube-api-access-h4ll2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:39:50.662738 kubelet[3490]: I0213 15:39:50.662692 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" (UID: "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:39:50.662738 kubelet[3490]: I0213 15:39:50.662709 3490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-kube-api-access-frbzc" (OuterVolumeSpecName: "kube-api-access-frbzc") pod "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" (UID: "9c371ca4-6e15-4e8a-92e1-58b1edc7bca8"). InnerVolumeSpecName "kube-api-access-frbzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:39:50.682015 systemd[1]: Removed slice kubepods-burstable-pod9c371ca4_6e15_4e8a_92e1_58b1edc7bca8.slice - libcontainer container kubepods-burstable-pod9c371ca4_6e15_4e8a_92e1_58b1edc7bca8.slice. Feb 13 15:39:50.682428 systemd[1]: kubepods-burstable-pod9c371ca4_6e15_4e8a_92e1_58b1edc7bca8.slice: Consumed 9.030s CPU time. Feb 13 15:39:50.698069 kubelet[3490]: I0213 15:39:50.696850 3490 scope.go:117] "RemoveContainer" containerID="e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32" Feb 13 15:39:50.719611 containerd[1895]: time="2025-02-13T15:39:50.719539944Z" level=info msg="RemoveContainer for \"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32\"" Feb 13 15:39:50.720065 kubelet[3490]: I0213 15:39:50.719653 3490 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h4ll2\" (UniqueName: \"kubernetes.io/projected/b637a6ad-d5ef-4210-bf0d-674684d142f0-kube-api-access-h4ll2\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720065 kubelet[3490]: I0213 15:39:50.719683 3490 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cni-path\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720065 kubelet[3490]: I0213 15:39:50.719698 3490 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cilium-cgroup\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720065 kubelet[3490]: I0213 15:39:50.719710 3490 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-hubble-tls\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720065 kubelet[3490]: I0213 15:39:50.719721 3490 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-hostproc\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720065 kubelet[3490]: I0213 15:39:50.719732 3490 reconciler_common.go:289] 
"Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-lib-modules\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720065 kubelet[3490]: I0213 15:39:50.719742 3490 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b637a6ad-d5ef-4210-bf0d-674684d142f0-cilium-config-path\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720065 kubelet[3490]: I0213 15:39:50.719866 3490 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-clustermesh-secrets\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720478 kubelet[3490]: I0213 15:39:50.719892 3490 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cilium-config-path\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720478 kubelet[3490]: I0213 15:39:50.719906 3490 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-frbzc\" (UniqueName: \"kubernetes.io/projected/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-kube-api-access-frbzc\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720478 kubelet[3490]: I0213 15:39:50.719920 3490 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-host-proc-sys-kernel\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720478 kubelet[3490]: I0213 15:39:50.719934 3490 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-bpf-maps\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720478 kubelet[3490]: I0213 15:39:50.719945 3490 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-etc-cni-netd\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720478 kubelet[3490]: I0213 15:39:50.719957 3490 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-cilium-run\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720478 kubelet[3490]: I0213 15:39:50.719969 3490 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-xtables-lock\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720478 kubelet[3490]: I0213 15:39:50.719980 3490 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8-host-proc-sys-net\") on node \"ip-172-31-28-54\" DevicePath \"\"" Feb 13 15:39:50.720314 systemd[1]: Removed slice kubepods-besteffort-podb637a6ad_d5ef_4210_bf0d_674684d142f0.slice - libcontainer container kubepods-besteffort-podb637a6ad_d5ef_4210_bf0d_674684d142f0.slice. 
Feb 13 15:39:50.733378 containerd[1895]: time="2025-02-13T15:39:50.733294417Z" level=info msg="RemoveContainer for \"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32\" returns successfully" Feb 13 15:39:50.734341 kubelet[3490]: I0213 15:39:50.733610 3490 scope.go:117] "RemoveContainer" containerID="662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2" Feb 13 15:39:50.735083 containerd[1895]: time="2025-02-13T15:39:50.735056264Z" level=info msg="RemoveContainer for \"662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2\"" Feb 13 15:39:50.740334 containerd[1895]: time="2025-02-13T15:39:50.740306463Z" level=info msg="RemoveContainer for \"662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2\" returns successfully" Feb 13 15:39:50.740747 kubelet[3490]: I0213 15:39:50.740651 3490 scope.go:117] "RemoveContainer" containerID="e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9" Feb 13 15:39:50.743218 containerd[1895]: time="2025-02-13T15:39:50.743182865Z" level=info msg="RemoveContainer for \"e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9\"" Feb 13 15:39:50.752759 containerd[1895]: time="2025-02-13T15:39:50.752656993Z" level=info msg="RemoveContainer for \"e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9\" returns successfully" Feb 13 15:39:50.753599 kubelet[3490]: I0213 15:39:50.753030 3490 scope.go:117] "RemoveContainer" containerID="5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4" Feb 13 15:39:50.755690 containerd[1895]: time="2025-02-13T15:39:50.755587018Z" level=info msg="RemoveContainer for \"5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4\"" Feb 13 15:39:50.764180 containerd[1895]: time="2025-02-13T15:39:50.764127065Z" level=info msg="RemoveContainer for \"5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4\" returns successfully" Feb 13 15:39:50.764944 kubelet[3490]: I0213 15:39:50.764915 3490 scope.go:117] "RemoveContainer" containerID="2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152" Feb 13 15:39:50.768706 containerd[1895]: time="2025-02-13T15:39:50.768664278Z" level=info msg="RemoveContainer for \"2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152\"" Feb 13 15:39:50.775247 containerd[1895]: time="2025-02-13T15:39:50.775205470Z" level=info msg="RemoveContainer for \"2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152\" returns successfully" Feb 13 15:39:50.775595 kubelet[3490]: I0213 15:39:50.775503 3490 scope.go:117] "RemoveContainer" containerID="e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32" Feb 13 15:39:50.775844 containerd[1895]: time="2025-02-13T15:39:50.775730134Z" level=error msg="ContainerStatus for \"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32\": not found" Feb 13 15:39:50.779638 kubelet[3490]: E0213 15:39:50.779552 3490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32\": not found" containerID="e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32" Feb 13 15:39:50.788911 kubelet[3490]: I0213 15:39:50.785398 3490 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32"} err="failed to get container status \"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32\": rpc error: code = NotFound desc = an error occurred when try to find container \"e649a208488725e7e42826449ac3d4dc4bfbd2720319d01f23da60a0c0666b32\": not found" Feb 13 15:39:50.788911 kubelet[3490]: I0213 15:39:50.788903 3490 scope.go:117] "RemoveContainer" containerID="662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2" Feb 13 15:39:50.789533 containerd[1895]: time="2025-02-13T15:39:50.789430483Z" level=error msg="ContainerStatus for \"662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2\": not found" Feb 13 15:39:50.789750 kubelet[3490]: E0213 15:39:50.789684 3490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2\": not found" containerID="662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2" Feb 13 15:39:50.789825 kubelet[3490]: I0213 15:39:50.789737 3490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2"} err="failed to get container status \"662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2\": rpc error: code = NotFound desc = an error occurred when try to find container \"662408a4018479c48a26524618ba45d2cee0a3d38c41d83f9fac3eee9611bfa2\": not found" Feb 13 15:39:50.789825 kubelet[3490]: I0213 15:39:50.789768 3490 scope.go:117] "RemoveContainer" containerID="e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9" Feb 13 15:39:50.790138 containerd[1895]: time="2025-02-13T15:39:50.790064569Z" level=error msg="ContainerStatus for \"e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9\": not found" Feb 13 15:39:50.790230 kubelet[3490]: E0213 15:39:50.790181 3490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9\": not found" containerID="e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9" Feb 13 15:39:50.790230 kubelet[3490]: I0213 15:39:50.790210 3490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9"} err="failed to get container status \"e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"e036d8affa4c258d9c53f51381fdca204c7ac170e4fe1de28997a4e0b75122f9\": not found" Feb 13 15:39:50.790325 kubelet[3490]: I0213 15:39:50.790229 3490 scope.go:117] "RemoveContainer" containerID="5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4" Feb 13 15:39:50.790534 containerd[1895]: time="2025-02-13T15:39:50.790465926Z" level=error msg="ContainerStatus for 
\"5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4\": not found" Feb 13 15:39:50.790680 kubelet[3490]: E0213 15:39:50.790651 3490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4\": not found" containerID="5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4" Feb 13 15:39:50.791254 kubelet[3490]: I0213 15:39:50.790691 3490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4"} err="failed to get container status \"5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"5017e2b36ce9d75817a78491847d321e3625e7909d2c6ed4e32a64d6068128b4\": not found" Feb 13 15:39:50.791254 kubelet[3490]: I0213 15:39:50.790710 3490 scope.go:117] "RemoveContainer" containerID="2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152" Feb 13 15:39:50.791254 kubelet[3490]: E0213 15:39:50.790982 3490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152\": not found" containerID="2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152" Feb 13 15:39:50.791254 kubelet[3490]: I0213 15:39:50.790999 3490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152"} err="failed to get container status \"2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152\": rpc error: code = NotFound desc = an error occurred when try to find container \"2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152\": not found" Feb 13 15:39:50.791254 kubelet[3490]: I0213 15:39:50.791096 3490 scope.go:117] "RemoveContainer" containerID="6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542" Feb 13 15:39:50.791638 containerd[1895]: time="2025-02-13T15:39:50.790873983Z" level=error msg="ContainerStatus for \"2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2115e0ef050b11b0b7bceb9604b530782a197e4f517f31c420c0ce1dbd5c5152\": not found" Feb 13 15:39:50.792775 containerd[1895]: time="2025-02-13T15:39:50.792394716Z" level=info msg="RemoveContainer for \"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542\"" Feb 13 15:39:50.797980 containerd[1895]: time="2025-02-13T15:39:50.797938535Z" level=info msg="RemoveContainer for \"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542\" returns successfully" Feb 13 15:39:50.798411 kubelet[3490]: I0213 15:39:50.798241 3490 scope.go:117] "RemoveContainer" containerID="6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542" Feb 13 15:39:50.798690 containerd[1895]: time="2025-02-13T15:39:50.798650369Z" level=error msg="ContainerStatus for \"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542\": not found" Feb 13 15:39:50.798853 kubelet[3490]: E0213 15:39:50.798821 3490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542\": not found" containerID="6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542" Feb 13 15:39:50.799211 kubelet[3490]: I0213 15:39:50.798862 3490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542"} err="failed to get container status \"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ed3a87595549144832a9f59a3a94a562f0e5bd5508fab2d5e9fb704c5cc2542\": not found" Feb 13 15:39:51.075085 kubelet[3490]: I0213 15:39:51.073737 3490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" path="/var/lib/kubelet/pods/9c371ca4-6e15-4e8a-92e1-58b1edc7bca8/volumes" Feb 13 15:39:51.075085 kubelet[3490]: I0213 15:39:51.075075 3490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b637a6ad-d5ef-4210-bf0d-674684d142f0" path="/var/lib/kubelet/pods/b637a6ad-d5ef-4210-bf0d-674684d142f0/volumes" Feb 13 15:39:51.097529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f-rootfs.mount: Deactivated successfully. Feb 13 15:39:51.097916 systemd[1]: var-lib-kubelet-pods-b637a6ad\x2dd5ef\x2d4210\x2dbf0d\x2d674684d142f0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh4ll2.mount: Deactivated successfully. Feb 13 15:39:51.098304 systemd[1]: var-lib-kubelet-pods-9c371ca4\x2d6e15\x2d4e8a\x2d92e1\x2d58b1edc7bca8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfrbzc.mount: Deactivated successfully. Feb 13 15:39:51.098678 systemd[1]: var-lib-kubelet-pods-9c371ca4\x2d6e15\x2d4e8a\x2d92e1\x2d58b1edc7bca8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:39:51.104510 systemd[1]: var-lib-kubelet-pods-9c371ca4\x2d6e15\x2d4e8a\x2d92e1\x2d58b1edc7bca8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:39:51.955467 sshd[5102]: Connection closed by 139.178.89.65 port 41136 Feb 13 15:39:51.956397 sshd-session[5099]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:51.960246 systemd[1]: sshd@26-172.31.28.54:22-139.178.89.65:41136.service: Deactivated successfully. Feb 13 15:39:51.964802 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 15:39:51.966726 systemd-logind[1876]: Session 27 logged out. Waiting for processes to exit. Feb 13 15:39:51.967872 systemd-logind[1876]: Removed session 27. Feb 13 15:39:51.988414 systemd[1]: Started sshd@27-172.31.28.54:22-139.178.89.65:41140.service - OpenSSH per-connection server daemon (139.178.89.65:41140). Feb 13 15:39:52.185220 sshd[5267]: Accepted publickey for core from 139.178.89.65 port 41140 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:39:52.185910 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:52.191403 systemd-logind[1876]: New session 28 of user core. 
Feb 13 15:39:52.196219 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 15:39:52.939970 sshd[5269]: Connection closed by 139.178.89.65 port 41140 Feb 13 15:39:52.948257 sshd-session[5267]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:52.960242 systemd[1]: sshd@27-172.31.28.54:22-139.178.89.65:41140.service: Deactivated successfully. Feb 13 15:39:52.977146 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 15:39:52.983557 systemd-logind[1876]: Session 28 logged out. Waiting for processes to exit. Feb 13 15:39:53.008460 systemd[1]: Started sshd@28-172.31.28.54:22-139.178.89.65:41142.service - OpenSSH per-connection server daemon (139.178.89.65:41142). Feb 13 15:39:53.011739 kubelet[3490]: I0213 15:39:53.004372 3490 topology_manager.go:215] "Topology Admit Handler" podUID="bbb531ba-360e-427a-8476-6525813b8db3" podNamespace="kube-system" podName="cilium-mb8l7" Feb 13 15:39:53.012740 systemd-logind[1876]: Removed session 28. Feb 13 15:39:53.026661 kubelet[3490]: E0213 15:39:53.026617 3490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" containerName="mount-cgroup" Feb 13 15:39:53.026661 kubelet[3490]: E0213 15:39:53.026655 3490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" containerName="apply-sysctl-overwrites" Feb 13 15:39:53.026661 kubelet[3490]: E0213 15:39:53.026664 3490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" containerName="clean-cilium-state" Feb 13 15:39:53.026661 kubelet[3490]: E0213 15:39:53.026675 3490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" containerName="mount-bpf-fs" Feb 13 15:39:53.026929 kubelet[3490]: E0213 15:39:53.026683 3490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" containerName="cilium-agent" Feb 13 15:39:53.026929 kubelet[3490]: E0213 15:39:53.026691 3490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b637a6ad-d5ef-4210-bf0d-674684d142f0" containerName="cilium-operator" Feb 13 15:39:53.026929 kubelet[3490]: I0213 15:39:53.026724 3490 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c371ca4-6e15-4e8a-92e1-58b1edc7bca8" containerName="cilium-agent" Feb 13 15:39:53.026929 kubelet[3490]: I0213 15:39:53.026732 3490 memory_manager.go:354] "RemoveStaleState removing state" podUID="b637a6ad-d5ef-4210-bf0d-674684d142f0" containerName="cilium-operator" Feb 13 15:39:53.060402 ntpd[1869]: Deleting interface #12 lxc_health, fe80::45f:21ff:feba:213c%8#123, interface stats: received=0, sent=0, dropped=0, active_time=68 secs Feb 13 15:39:53.062158 ntpd[1869]: 13 Feb 15:39:53 ntpd[1869]: Deleting interface #12 lxc_health, fe80::45f:21ff:feba:213c%8#123, interface stats: received=0, sent=0, dropped=0, active_time=68 secs Feb 13 15:39:53.110308 systemd[1]: Created slice kubepods-burstable-podbbb531ba_360e_427a_8476_6525813b8db3.slice - libcontainer container kubepods-burstable-podbbb531ba_360e_427a_8476_6525813b8db3.slice. 
Feb 13 15:39:53.144651 kubelet[3490]: I0213 15:39:53.144431 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bbb531ba-360e-427a-8476-6525813b8db3-cilium-cgroup\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.144651 kubelet[3490]: I0213 15:39:53.144501 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bbb531ba-360e-427a-8476-6525813b8db3-host-proc-sys-kernel\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.144651 kubelet[3490]: I0213 15:39:53.144540 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bbb531ba-360e-427a-8476-6525813b8db3-host-proc-sys-net\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.144651 kubelet[3490]: I0213 15:39:53.144562 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bbb531ba-360e-427a-8476-6525813b8db3-cni-path\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.144651 kubelet[3490]: I0213 15:39:53.144577 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbb531ba-360e-427a-8476-6525813b8db3-xtables-lock\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.144651 kubelet[3490]: I0213 15:39:53.144593 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bbb531ba-360e-427a-8476-6525813b8db3-cilium-ipsec-secrets\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.145026 kubelet[3490]: I0213 15:39:53.144618 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bbb531ba-360e-427a-8476-6525813b8db3-bpf-maps\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.145026 kubelet[3490]: I0213 15:39:53.144656 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kt5z\" (UniqueName: \"kubernetes.io/projected/bbb531ba-360e-427a-8476-6525813b8db3-kube-api-access-7kt5z\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.145026 kubelet[3490]: I0213 15:39:53.144699 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bbb531ba-360e-427a-8476-6525813b8db3-cilium-run\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.145026 kubelet[3490]: I0213 15:39:53.144730 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbb531ba-360e-427a-8476-6525813b8db3-cilium-config-path\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.145026 kubelet[3490]: I0213 15:39:53.144753 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bbb531ba-360e-427a-8476-6525813b8db3-hostproc\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.145248 kubelet[3490]: I0213 15:39:53.145074 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bbb531ba-360e-427a-8476-6525813b8db3-clustermesh-secrets\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.145248 kubelet[3490]: I0213 15:39:53.145152 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bbb531ba-360e-427a-8476-6525813b8db3-hubble-tls\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.145248 kubelet[3490]: I0213 15:39:53.145184 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbb531ba-360e-427a-8476-6525813b8db3-etc-cni-netd\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.145248 kubelet[3490]: I0213 15:39:53.145222 3490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbb531ba-360e-427a-8476-6525813b8db3-lib-modules\") pod \"cilium-mb8l7\" (UID: \"bbb531ba-360e-427a-8476-6525813b8db3\") " pod="kube-system/cilium-mb8l7"
Feb 13 15:39:53.229461 sshd[5279]: Accepted publickey for core from 139.178.89.65 port 41142 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:39:53.230956 sshd-session[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:39:53.240536 systemd-logind[1876]: New session 29 of user core.
Feb 13 15:39:53.243265 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 15:39:53.367780 sshd[5283]: Connection closed by 139.178.89.65 port 41142
Feb 13 15:39:53.369295 sshd-session[5279]: pam_unix(sshd:session): session closed for user core
Feb 13 15:39:53.372641 systemd[1]: sshd@28-172.31.28.54:22-139.178.89.65:41142.service: Deactivated successfully.
Feb 13 15:39:53.375327 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 15:39:53.377650 systemd-logind[1876]: Session 29 logged out. Waiting for processes to exit.
Feb 13 15:39:53.379227 systemd-logind[1876]: Removed session 29.
Feb 13 15:39:53.418988 containerd[1895]: time="2025-02-13T15:39:53.418736225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mb8l7,Uid:bbb531ba-360e-427a-8476-6525813b8db3,Namespace:kube-system,Attempt:0,}"
Feb 13 15:39:53.423824 systemd[1]: Started sshd@29-172.31.28.54:22-139.178.89.65:41154.service - OpenSSH per-connection server daemon (139.178.89.65:41154).
Feb 13 15:39:53.472126 containerd[1895]: time="2025-02-13T15:39:53.472008289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:39:53.474110 containerd[1895]: time="2025-02-13T15:39:53.472822665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:39:53.474110 containerd[1895]: time="2025-02-13T15:39:53.472849794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:53.474110 containerd[1895]: time="2025-02-13T15:39:53.472956257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:53.499326 systemd[1]: Started cri-containerd-8edd35894b8be30507d7798def383fa1074824ca89d01e3c4707399ce5447b4d.scope - libcontainer container 8edd35894b8be30507d7798def383fa1074824ca89d01e3c4707399ce5447b4d.
Feb 13 15:39:53.539985 containerd[1895]: time="2025-02-13T15:39:53.539938340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mb8l7,Uid:bbb531ba-360e-427a-8476-6525813b8db3,Namespace:kube-system,Attempt:0,} returns sandbox id \"8edd35894b8be30507d7798def383fa1074824ca89d01e3c4707399ce5447b4d\""
Feb 13 15:39:53.545023 containerd[1895]: time="2025-02-13T15:39:53.544984035Z" level=info msg="CreateContainer within sandbox \"8edd35894b8be30507d7798def383fa1074824ca89d01e3c4707399ce5447b4d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:39:53.565057 containerd[1895]: time="2025-02-13T15:39:53.564997707Z" level=info msg="CreateContainer within sandbox \"8edd35894b8be30507d7798def383fa1074824ca89d01e3c4707399ce5447b4d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1f10734cdd48892dee1ce907344e834bc7d347e4e1129460f80ac82aa8380a9f\""
Feb 13 15:39:53.568885 containerd[1895]: time="2025-02-13T15:39:53.567638407Z" level=info msg="StartContainer for \"1f10734cdd48892dee1ce907344e834bc7d347e4e1129460f80ac82aa8380a9f\""
Feb 13 15:39:53.603288 systemd[1]: Started cri-containerd-1f10734cdd48892dee1ce907344e834bc7d347e4e1129460f80ac82aa8380a9f.scope - libcontainer container 1f10734cdd48892dee1ce907344e834bc7d347e4e1129460f80ac82aa8380a9f.
Feb 13 15:39:53.638184 sshd[5293]: Accepted publickey for core from 139.178.89.65 port 41154 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:39:53.639585 containerd[1895]: time="2025-02-13T15:39:53.639539597Z" level=info msg="StartContainer for \"1f10734cdd48892dee1ce907344e834bc7d347e4e1129460f80ac82aa8380a9f\" returns successfully"
Feb 13 15:39:53.641421 sshd-session[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:39:53.648546 systemd-logind[1876]: New session 30 of user core.
Feb 13 15:39:53.664003 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 15:39:53.670715 systemd[1]: cri-containerd-1f10734cdd48892dee1ce907344e834bc7d347e4e1129460f80ac82aa8380a9f.scope: Deactivated successfully.
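Each init container in this sequence follows the same arc: CreateContainer, StartContainer, then the scope deactivates as soon as the one-shot step exits. A minimal sketch of inspecting such a task with the containerd Go client; the socket path is the usual default (an assumption), while the k8s.io namespace and container ID come from the entries above:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// CRI-managed containers live in containerd's "k8s.io" namespace,
	// as the shim entries in this log show.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	id := "1f10734cdd48892dee1ce907344e834bc7d347e4e1129460f80ac82aa8380a9f"

	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		log.Fatal(err) // already gone once the init step has been cleaned up
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	status, err := task.Status(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %s (exit=%d)\n", id[:12], status.Status, status.ExitStatus)
}
```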
Feb 13 15:39:53.754958 containerd[1895]: time="2025-02-13T15:39:53.754802201Z" level=info msg="shim disconnected" id=1f10734cdd48892dee1ce907344e834bc7d347e4e1129460f80ac82aa8380a9f namespace=k8s.io
Feb 13 15:39:53.754958 containerd[1895]: time="2025-02-13T15:39:53.754859899Z" level=warning msg="cleaning up after shim disconnected" id=1f10734cdd48892dee1ce907344e834bc7d347e4e1129460f80ac82aa8380a9f namespace=k8s.io
Feb 13 15:39:53.754958 containerd[1895]: time="2025-02-13T15:39:53.754871433Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:39:54.229586 kubelet[3490]: E0213 15:39:54.229542 3490 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:39:54.739188 containerd[1895]: time="2025-02-13T15:39:54.739142089Z" level=info msg="CreateContainer within sandbox \"8edd35894b8be30507d7798def383fa1074824ca89d01e3c4707399ce5447b4d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:39:54.767681 containerd[1895]: time="2025-02-13T15:39:54.767632379Z" level=info msg="CreateContainer within sandbox \"8edd35894b8be30507d7798def383fa1074824ca89d01e3c4707399ce5447b4d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5b37268fbadbebec4cd8a99303f8824b7fcf8652da5fc278e729afad4262921c\""
Feb 13 15:39:54.768245 containerd[1895]: time="2025-02-13T15:39:54.768208961Z" level=info msg="StartContainer for \"5b37268fbadbebec4cd8a99303f8824b7fcf8652da5fc278e729afad4262921c\""
Feb 13 15:39:54.813272 systemd[1]: Started cri-containerd-5b37268fbadbebec4cd8a99303f8824b7fcf8652da5fc278e729afad4262921c.scope - libcontainer container 5b37268fbadbebec4cd8a99303f8824b7fcf8652da5fc278e729afad4262921c.
Feb 13 15:39:54.846298 containerd[1895]: time="2025-02-13T15:39:54.846244306Z" level=info msg="StartContainer for \"5b37268fbadbebec4cd8a99303f8824b7fcf8652da5fc278e729afad4262921c\" returns successfully"
Feb 13 15:39:54.858153 systemd[1]: cri-containerd-5b37268fbadbebec4cd8a99303f8824b7fcf8652da5fc278e729afad4262921c.scope: Deactivated successfully.
Feb 13 15:39:54.902490 containerd[1895]: time="2025-02-13T15:39:54.902213495Z" level=info msg="shim disconnected" id=5b37268fbadbebec4cd8a99303f8824b7fcf8652da5fc278e729afad4262921c namespace=k8s.io
Feb 13 15:39:54.902490 containerd[1895]: time="2025-02-13T15:39:54.902289131Z" level=warning msg="cleaning up after shim disconnected" id=5b37268fbadbebec4cd8a99303f8824b7fcf8652da5fc278e729afad4262921c namespace=k8s.io
Feb 13 15:39:54.902490 containerd[1895]: time="2025-02-13T15:39:54.902317849Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:39:55.264454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b37268fbadbebec4cd8a99303f8824b7fcf8652da5fc278e729afad4262921c-rootfs.mount: Deactivated successfully.
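The kubelet's NetworkReady=false error above clears only once the cilium-agent container installs its CNI configuration. One way to watch the same condition directly is to query the CRI runtime service over containerd's socket; a sketch, assuming the default socket path:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint (path assumed; it is the common default).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Status(context.Background(), &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// The kubelet error in the log is the NetworkReady condition being
	// false before a CNI config exists.
	for _, cond := range resp.Status.Conditions {
		fmt.Printf("%s=%v reason=%s\n", cond.Type, cond.Status, cond.Reason)
	}
}
```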
Feb 13 15:39:55.749906 containerd[1895]: time="2025-02-13T15:39:55.749859532Z" level=info msg="CreateContainer within sandbox \"8edd35894b8be30507d7798def383fa1074824ca89d01e3c4707399ce5447b4d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:39:55.790886 containerd[1895]: time="2025-02-13T15:39:55.790838687Z" level=info msg="CreateContainer within sandbox \"8edd35894b8be30507d7798def383fa1074824ca89d01e3c4707399ce5447b4d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"13cf51719305ebbcb6253c1e240d1d63de2cad283ab930b886b8efae9352f2e5\""
Feb 13 15:39:55.791592 containerd[1895]: time="2025-02-13T15:39:55.791562658Z" level=info msg="StartContainer for \"13cf51719305ebbcb6253c1e240d1d63de2cad283ab930b886b8efae9352f2e5\""
Feb 13 15:39:55.836267 systemd[1]: Started cri-containerd-13cf51719305ebbcb6253c1e240d1d63de2cad283ab930b886b8efae9352f2e5.scope - libcontainer container 13cf51719305ebbcb6253c1e240d1d63de2cad283ab930b886b8efae9352f2e5.
Feb 13 15:39:55.875914 containerd[1895]: time="2025-02-13T15:39:55.875864285Z" level=info msg="StartContainer for \"13cf51719305ebbcb6253c1e240d1d63de2cad283ab930b886b8efae9352f2e5\" returns successfully"
Feb 13 15:39:55.884312 systemd[1]: cri-containerd-13cf51719305ebbcb6253c1e240d1d63de2cad283ab930b886b8efae9352f2e5.scope: Deactivated successfully.
Feb 13 15:39:55.924077 containerd[1895]: time="2025-02-13T15:39:55.923984921Z" level=info msg="shim disconnected" id=13cf51719305ebbcb6253c1e240d1d63de2cad283ab930b886b8efae9352f2e5 namespace=k8s.io
Feb 13 15:39:55.924077 containerd[1895]: time="2025-02-13T15:39:55.924077563Z" level=warning msg="cleaning up after shim disconnected" id=13cf51719305ebbcb6253c1e240d1d63de2cad283ab930b886b8efae9352f2e5 namespace=k8s.io
Feb 13 15:39:55.924362 containerd[1895]: time="2025-02-13T15:39:55.924091150Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:39:56.264195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13cf51719305ebbcb6253c1e240d1d63de2cad283ab930b886b8efae9352f2e5-rootfs.mount: Deactivated successfully.
Feb 13 15:39:56.745667 containerd[1895]: time="2025-02-13T15:39:56.745623732Z" level=info msg="CreateContainer within sandbox \"8edd35894b8be30507d7798def383fa1074824ca89d01e3c4707399ce5447b4d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:39:56.773312 containerd[1895]: time="2025-02-13T15:39:56.773263920Z" level=info msg="CreateContainer within sandbox \"8edd35894b8be30507d7798def383fa1074824ca89d01e3c4707399ce5447b4d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e41469328d1620e497d0bc900b05191e01a0a0ecbc4f3824a1b9498a1397a744\""
Feb 13 15:39:56.775676 containerd[1895]: time="2025-02-13T15:39:56.775137685Z" level=info msg="StartContainer for \"e41469328d1620e497d0bc900b05191e01a0a0ecbc4f3824a1b9498a1397a744\""
Feb 13 15:39:56.832481 systemd[1]: Started cri-containerd-e41469328d1620e497d0bc900b05191e01a0a0ecbc4f3824a1b9498a1397a744.scope - libcontainer container e41469328d1620e497d0bc900b05191e01a0a0ecbc4f3824a1b9498a1397a744.
Feb 13 15:39:56.865206 systemd[1]: cri-containerd-e41469328d1620e497d0bc900b05191e01a0a0ecbc4f3824a1b9498a1397a744.scope: Deactivated successfully.
Feb 13 15:39:56.872518 containerd[1895]: time="2025-02-13T15:39:56.871920817Z" level=info msg="StartContainer for \"e41469328d1620e497d0bc900b05191e01a0a0ecbc4f3824a1b9498a1397a744\" returns successfully"
Feb 13 15:39:56.873937 containerd[1895]: time="2025-02-13T15:39:56.867309525Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbb531ba_360e_427a_8476_6525813b8db3.slice/cri-containerd-e41469328d1620e497d0bc900b05191e01a0a0ecbc4f3824a1b9498a1397a744.scope/memory.events\": no such file or directory"
Feb 13 15:39:56.928257 containerd[1895]: time="2025-02-13T15:39:56.928183576Z" level=info msg="shim disconnected" id=e41469328d1620e497d0bc900b05191e01a0a0ecbc4f3824a1b9498a1397a744 namespace=k8s.io
Feb 13 15:39:56.928257 containerd[1895]: time="2025-02-13T15:39:56.928240020Z" level=warning msg="cleaning up after shim disconnected" id=e41469328d1620e497d0bc900b05191e01a0a0ecbc4f3824a1b9498a1397a744 namespace=k8s.io
Feb 13 15:39:56.928257 containerd[1895]: time="2025-02-13T15:39:56.928251955Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:39:57.071091 kubelet[3490]: E0213 15:39:57.070847 3490 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-v6vct" podUID="bdbf5c34-c43b-4a1b-8fdb-12d4df9c8784"
Feb 13 15:39:57.264230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e41469328d1620e497d0bc900b05191e01a0a0ecbc4f3824a1b9498a1397a744-rootfs.mount: Deactivated successfully.
Feb 13 15:39:57.749293 containerd[1895]: time="2025-02-13T15:39:57.749247333Z" level=info msg="CreateContainer within sandbox \"8edd35894b8be30507d7798def383fa1074824ca89d01e3c4707399ce5447b4d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:39:57.777807 containerd[1895]: time="2025-02-13T15:39:57.777692817Z" level=info msg="CreateContainer within sandbox \"8edd35894b8be30507d7798def383fa1074824ca89d01e3c4707399ce5447b4d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5c9d32fd3a28b74cec900486a178d4dab3f293e7be15fe50a2ca310966ad1c3c\""
Feb 13 15:39:57.778703 containerd[1895]: time="2025-02-13T15:39:57.778675587Z" level=info msg="StartContainer for \"5c9d32fd3a28b74cec900486a178d4dab3f293e7be15fe50a2ca310966ad1c3c\""
Feb 13 15:39:57.826260 systemd[1]: Started cri-containerd-5c9d32fd3a28b74cec900486a178d4dab3f293e7be15fe50a2ca310966ad1c3c.scope - libcontainer container 5c9d32fd3a28b74cec900486a178d4dab3f293e7be15fe50a2ca310966ad1c3c.
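The inotify warning above fires because the short-lived clean-cilium-state cgroup is torn down before containerd can watch its memory.events file. For reference, a small sketch of reading a cgroup v2 memory.events file directly; the path mirrors the one in the warning and exists only while the scope is alive:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

func main() {
	// Pod-level path taken from the warning above; per-container files sit
	// one level deeper under the cri-containerd-<id>.scope directory.
	path := "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/" +
		"kubepods-burstable-podbbb531ba_360e_427a_8476_6525813b8db3.slice/memory.events"

	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err) // the same "no such file or directory" containerd hit
	}
	defer f.Close()

	// memory.events is flat "key value" lines: low, high, max, oom, oom_kill.
	events := map[string]uint64{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) != 2 {
			continue
		}
		if n, err := strconv.ParseUint(fields[1], 10, 64); err == nil {
			events[fields[0]] = n
		}
	}
	fmt.Printf("oom=%d oom_kill=%d\n", events["oom"], events["oom_kill"])
}
```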
Feb 13 15:39:57.875071 containerd[1895]: time="2025-02-13T15:39:57.873944247Z" level=info msg="StartContainer for \"5c9d32fd3a28b74cec900486a178d4dab3f293e7be15fe50a2ca310966ad1c3c\" returns successfully"
Feb 13 15:39:59.073066 kubelet[3490]: E0213 15:39:59.070782 3490 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-v6vct" podUID="bdbf5c34-c43b-4a1b-8fdb-12d4df9c8784"
Feb 13 15:39:59.126203 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 15:39:59.127307 containerd[1895]: time="2025-02-13T15:39:59.126903718Z" level=info msg="StopPodSandbox for \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\""
Feb 13 15:39:59.127307 containerd[1895]: time="2025-02-13T15:39:59.127032152Z" level=info msg="TearDown network for sandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" successfully"
Feb 13 15:39:59.127307 containerd[1895]: time="2025-02-13T15:39:59.127067720Z" level=info msg="StopPodSandbox for \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" returns successfully"
Feb 13 15:39:59.128723 containerd[1895]: time="2025-02-13T15:39:59.128638745Z" level=info msg="RemovePodSandbox for \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\""
Feb 13 15:39:59.132208 containerd[1895]: time="2025-02-13T15:39:59.131082964Z" level=info msg="Forcibly stopping sandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\""
Feb 13 15:39:59.132208 containerd[1895]: time="2025-02-13T15:39:59.131197158Z" level=info msg="TearDown network for sandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" successfully"
Feb 13 15:39:59.154928 containerd[1895]: time="2025-02-13T15:39:59.154256414Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:39:59.154928 containerd[1895]: time="2025-02-13T15:39:59.154351789Z" level=info msg="RemovePodSandbox \"a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5\" returns successfully"
Feb 13 15:39:59.155985 containerd[1895]: time="2025-02-13T15:39:59.155379638Z" level=info msg="StopPodSandbox for \"bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f\""
Feb 13 15:39:59.156632 containerd[1895]: time="2025-02-13T15:39:59.156467574Z" level=info msg="TearDown network for sandbox \"bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f\" successfully"
Feb 13 15:39:59.156632 containerd[1895]: time="2025-02-13T15:39:59.156493050Z" level=info msg="StopPodSandbox for \"bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f\" returns successfully"
Feb 13 15:39:59.160886 containerd[1895]: time="2025-02-13T15:39:59.159808704Z" level=info msg="RemovePodSandbox for \"bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f\""
Feb 13 15:39:59.160886 containerd[1895]: time="2025-02-13T15:39:59.159851934Z" level=info msg="Forcibly stopping sandbox \"bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f\""
Feb 13 15:39:59.160886 containerd[1895]: time="2025-02-13T15:39:59.159934239Z" level=info msg="TearDown network for sandbox \"bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f\" successfully"
Feb 13 15:39:59.172501 containerd[1895]: time="2025-02-13T15:39:59.172414224Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:39:59.172655 containerd[1895]: time="2025-02-13T15:39:59.172524908Z" level=info msg="RemovePodSandbox \"bebcecce08589cfad49216fffa5de6e6e1d09e3cf21e6fb87ee9e1e67b5a274f\" returns successfully"
Feb 13 15:40:00.711360 systemd[1]: run-containerd-runc-k8s.io-5c9d32fd3a28b74cec900486a178d4dab3f293e7be15fe50a2ca310966ad1c3c-runc.qKwGzo.mount: Deactivated successfully.
Feb 13 15:40:03.491590 (udev-worker)[6152]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:40:03.496072 systemd-networkd[1742]: lxc_health: Link UP
Feb 13 15:40:03.501950 (udev-worker)[6153]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:40:03.507813 systemd-networkd[1742]: lxc_health: Gained carrier
Feb 13 15:40:05.424457 systemd[1]: run-containerd-runc-k8s.io-5c9d32fd3a28b74cec900486a178d4dab3f293e7be15fe50a2ca310966ad1c3c-runc.4Y43Hd.mount: Deactivated successfully.
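The StopPodSandbox, TearDown, RemovePodSandbox sequence above is the kubelet garbage-collecting the old sandboxes through two CRI calls. A sketch of the same pair against the runtime service, reusing the dial pattern from the earlier Status example; the sandbox ID is the one from the log:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	client := runtimeapi.NewRuntimeServiceClient(conn)

	ctx := context.Background()
	id := "a5de139d6005f2d226cd0220adafec3faa5bf1602f9a6a3032aa62debb4136c5"

	// Stop tears down the sandbox's network namespace first
	// (the "TearDown network" lines above)...
	if _, err := client.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
	// ...and Remove then deletes the sandbox itself, producing the
	// "RemovePodSandbox ... returns successfully" entries.
	if _, err := client.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
}
```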
Feb 13 15:40:05.428849 systemd-networkd[1742]: lxc_health: Gained IPv6LL
Feb 13 15:40:05.482245 kubelet[3490]: I0213 15:40:05.482176 3490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mb8l7" podStartSLOduration=13.482154434 podStartE2EDuration="13.482154434s" podCreationTimestamp="2025-02-13 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:58.793242848 +0000 UTC m=+119.929883301" watchObservedRunningTime="2025-02-13 15:40:05.482154434 +0000 UTC m=+126.618794889"
Feb 13 15:40:08.058471 ntpd[1869]: Listen normally on 15 lxc_health [fe80::d857:78ff:fecc:d472%14]:123
Feb 13 15:40:08.058840 ntpd[1869]: 13 Feb 15:40:08 ntpd[1869]: Listen normally on 15 lxc_health [fe80::d857:78ff:fecc:d472%14]:123
Feb 13 15:40:10.192466 sshd[5373]: Connection closed by 139.178.89.65 port 41154
Feb 13 15:40:10.194114 sshd-session[5293]: pam_unix(sshd:session): session closed for user core
Feb 13 15:40:10.203453 systemd[1]: sshd@29-172.31.28.54:22-139.178.89.65:41154.service: Deactivated successfully.
Feb 13 15:40:10.213336 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 15:40:10.221594 systemd-logind[1876]: Session 30 logged out. Waiting for processes to exit.
Feb 13 15:40:10.228679 systemd-logind[1876]: Removed session 30.