Jan 30 13:51:45.066583 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:51:45.066623 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:51:45.066638 kernel: BIOS-provided physical RAM map:
Jan 30 13:51:45.066649 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 13:51:45.066658 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 13:51:45.066669 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 13:51:45.066686 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 30 13:51:45.066698 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 30 13:51:45.066710 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 30 13:51:45.066722 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 13:51:45.066733 kernel: NX (Execute Disable) protection: active
Jan 30 13:51:45.066745 kernel: APIC: Static calls initialized
Jan 30 13:51:45.066756 kernel: SMBIOS 2.7 present.
Jan 30 13:51:45.066769 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 30 13:51:45.066787 kernel: Hypervisor detected: KVM
Jan 30 13:51:45.066800 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:51:45.066813 kernel: kvm-clock: using sched offset of 6688348658 cycles
Jan 30 13:51:45.066827 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:51:45.066841 kernel: tsc: Detected 2499.994 MHz processor
Jan 30 13:51:45.066855 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:51:45.066868 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:51:45.066884 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 30 13:51:45.066898 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 13:51:45.066911 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:51:45.066924 kernel: Using GB pages for direct mapping
Jan 30 13:51:45.066938 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:51:45.066951 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 30 13:51:45.066964 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 30 13:51:45.066978 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 30 13:51:45.066991 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 30 13:51:45.067007 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 30 13:51:45.067021 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 30 13:51:45.067034 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 30 13:51:45.067047 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 30 13:51:45.067061 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 30 13:51:45.067074 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 30 13:51:45.067088 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 30 13:51:45.067101 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 30 13:51:45.067114 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 30 13:51:45.067130 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 30 13:51:45.067150 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 30 13:51:45.067164 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 30 13:51:45.067177 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 30 13:51:45.067192 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 30 13:51:45.067209 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 30 13:51:45.067223 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 30 13:51:45.067238 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 30 13:51:45.067251 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 30 13:51:45.067265 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 13:51:45.067280 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 13:51:45.067294 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 30 13:51:45.067308 kernel: NUMA: Initialized distance table, cnt=1
Jan 30 13:51:45.067323 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 30 13:51:45.067340 kernel: Zone ranges:
Jan 30 13:51:45.067354 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:51:45.067368 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 30 13:51:45.067382 kernel: Normal empty
Jan 30 13:51:45.067396 kernel: Movable zone start for each node
Jan 30 13:51:45.067409 kernel: Early memory node ranges
Jan 30 13:51:45.067423 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 13:51:45.067492 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 30 13:51:45.067583 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 30 13:51:45.067597 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:51:45.067618 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 13:51:45.067632 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 30 13:51:45.067645 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 30 13:51:45.067657 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:51:45.067671 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 30 13:51:45.067686 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:51:45.067698 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:51:45.067711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:51:45.067723 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:51:45.067751 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:51:45.067764 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:51:45.067777 kernel: TSC deadline timer available
Jan 30 13:51:45.067791 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:51:45.067805 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:51:45.067818 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 30 13:51:45.067832 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:51:45.067845 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:51:45.067859 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:51:45.067876 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:51:45.067890 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:51:45.067904 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:51:45.067918 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:51:45.067932 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:51:45.067948 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:51:45.067963 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:51:45.067977 kernel: random: crng init done
Jan 30 13:51:45.067994 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:51:45.068009 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:51:45.068022 kernel: Fallback order for Node 0: 0
Jan 30 13:51:45.068135 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 30 13:51:45.068149 kernel: Policy zone: DMA32
Jan 30 13:51:45.068164 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:51:45.068179 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Jan 30 13:51:45.068194 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:51:45.068208 kernel: Kernel/User page tables isolation: enabled
Jan 30 13:51:45.068226 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:51:45.068241 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:51:45.068255 kernel: Dynamic Preempt: voluntary
Jan 30 13:51:45.068270 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:51:45.068285 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:51:45.068299 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:51:45.068314 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:51:45.068329 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:51:45.068344 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:51:45.068363 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:51:45.068377 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:51:45.068392 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 13:51:45.068408 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:51:45.068528 kernel: Console: colour VGA+ 80x25
Jan 30 13:51:45.068545 kernel: printk: console [ttyS0] enabled
Jan 30 13:51:45.068561 kernel: ACPI: Core revision 20230628
Jan 30 13:51:45.068575 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 30 13:51:45.068590 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:51:45.068608 kernel: x2apic enabled
Jan 30 13:51:45.068623 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:51:45.068648 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Jan 30 13:51:45.068667 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Jan 30 13:51:45.068682 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 30 13:51:45.068697 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 30 13:51:45.068712 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:51:45.068727 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:51:45.068741 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:51:45.068756 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:51:45.068771 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 30 13:51:45.068785 kernel: RETBleed: Vulnerable
Jan 30 13:51:45.068803 kernel: Speculative Store Bypass: Vulnerable
Jan 30 13:51:45.068818 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:51:45.068833 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:51:45.068847 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 30 13:51:45.068862 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:51:45.068877 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:51:45.068892 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:51:45.068909 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 30 13:51:45.068924 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 30 13:51:45.068939 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 30 13:51:45.068953 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 30 13:51:45.068968 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 30 13:51:45.068983 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 30 13:51:45.068998 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:51:45.069013 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 30 13:51:45.069027 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 30 13:51:45.069042 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 30 13:51:45.069056 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 30 13:51:45.069073 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 30 13:51:45.069088 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 30 13:51:45.069103 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 30 13:51:45.069118 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:51:45.069132 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:51:45.069147 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:51:45.069162 kernel: landlock: Up and running.
Jan 30 13:51:45.069176 kernel: SELinux: Initializing.
Jan 30 13:51:45.069191 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:51:45.069206 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:51:45.069221 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 30 13:51:45.069238 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:51:45.069254 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:51:45.069269 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:51:45.069284 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 30 13:51:45.069299 kernel: signal: max sigframe size: 3632
Jan 30 13:51:45.069314 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:51:45.069329 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:51:45.069344 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 13:51:45.069359 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:51:45.069377 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:51:45.069391 kernel: .... node #0, CPUs: #1
Jan 30 13:51:45.069407 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 30 13:51:45.069423 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 13:51:45.069457 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:51:45.069472 kernel: smpboot: Max logical packages: 1
Jan 30 13:51:45.069487 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Jan 30 13:51:45.069502 kernel: devtmpfs: initialized
Jan 30 13:51:45.069521 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:51:45.069536 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:51:45.069551 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:51:45.069566 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:51:45.069581 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:51:45.069596 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:51:45.069611 kernel: audit: type=2000 audit(1738245104.167:1): state=initialized audit_enabled=0 res=1
Jan 30 13:51:45.069626 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:51:45.069641 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:51:45.069659 kernel: cpuidle: using governor menu
Jan 30 13:51:45.069674 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:51:45.069689 kernel: dca service started, version 1.12.1
Jan 30 13:51:45.069705 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:51:45.069720 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:51:45.069735 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:51:45.069750 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:51:45.069766 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:51:45.069781 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:51:45.069798 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:51:45.069814 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:51:45.069828 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:51:45.069843 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:51:45.069858 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 30 13:51:45.069873 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:51:45.069888 kernel: ACPI: Interpreter enabled
Jan 30 13:51:45.069903 kernel: ACPI: PM: (supports S0 S5)
Jan 30 13:51:45.069918 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:51:45.069933 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:51:45.069951 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:51:45.069967 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 30 13:51:45.069983 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:51:45.070236 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:51:45.070375 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 13:51:45.070529 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 13:51:45.070549 kernel: acpiphp: Slot [3] registered
Jan 30 13:51:45.070570 kernel: acpiphp: Slot [4] registered
Jan 30 13:51:45.070586 kernel: acpiphp: Slot [5] registered
Jan 30 13:51:45.070601 kernel: acpiphp: Slot [6] registered
Jan 30 13:51:45.070617 kernel: acpiphp: Slot [7] registered
Jan 30 13:51:45.070633 kernel: acpiphp: Slot [8] registered
Jan 30 13:51:45.070649 kernel: acpiphp: Slot [9] registered
Jan 30 13:51:45.070665 kernel: acpiphp: Slot [10] registered
Jan 30 13:51:45.070681 kernel: acpiphp: Slot [11] registered
Jan 30 13:51:45.070697 kernel: acpiphp: Slot [12] registered
Jan 30 13:51:45.070715 kernel: acpiphp: Slot [13] registered
Jan 30 13:51:45.070731 kernel: acpiphp: Slot [14] registered
Jan 30 13:51:45.070747 kernel: acpiphp: Slot [15] registered
Jan 30 13:51:45.070763 kernel: acpiphp: Slot [16] registered
Jan 30 13:51:45.070778 kernel: acpiphp: Slot [17] registered
Jan 30 13:51:45.070794 kernel: acpiphp: Slot [18] registered
Jan 30 13:51:45.070810 kernel: acpiphp: Slot [19] registered
Jan 30 13:51:45.070826 kernel: acpiphp: Slot [20] registered
Jan 30 13:51:45.070842 kernel: acpiphp: Slot [21] registered
Jan 30 13:51:45.070858 kernel: acpiphp: Slot [22] registered
Jan 30 13:51:45.070876 kernel: acpiphp: Slot [23] registered
Jan 30 13:51:45.070892 kernel: acpiphp: Slot [24] registered
Jan 30 13:51:45.070907 kernel: acpiphp: Slot [25] registered
Jan 30 13:51:45.070923 kernel: acpiphp: Slot [26] registered
Jan 30 13:51:45.070938 kernel: acpiphp: Slot [27] registered
Jan 30 13:51:45.070954 kernel: acpiphp: Slot [28] registered
Jan 30 13:51:45.070969 kernel: acpiphp: Slot [29] registered
Jan 30 13:51:45.070985 kernel: acpiphp: Slot [30] registered
Jan 30 13:51:45.071001 kernel: acpiphp: Slot [31] registered
Jan 30 13:51:45.071019 kernel: PCI host bridge to bus 0000:00
Jan 30 13:51:45.071151 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:51:45.071269 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:51:45.071383 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:51:45.071580 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 13:51:45.071697 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:51:45.071861 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 13:51:45.072007 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 13:51:45.072145 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 30 13:51:45.072309 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 30 13:51:45.072448 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 30 13:51:45.072594 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 30 13:51:45.072724 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 30 13:51:45.072863 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 30 13:51:45.073124 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 30 13:51:45.073261 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 30 13:51:45.073404 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 30 13:51:45.073577 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 30 13:51:45.073708 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 30 13:51:45.073836 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 13:51:45.073962 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:51:45.074102 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 30 13:51:45.074230 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 30 13:51:45.074365 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 30 13:51:45.074529 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 30 13:51:45.074549 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:51:45.074566 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:51:45.074587 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:51:45.074604 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:51:45.074618 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 13:51:45.074633 kernel: iommu: Default domain type: Translated
Jan 30 13:51:45.074647 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:51:45.074664 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:51:45.074680 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:51:45.074697 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 13:51:45.074713 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 30 13:51:45.074860 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 30 13:51:45.074999 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 30 13:51:45.075131 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:51:45.075150 kernel: vgaarb: loaded
Jan 30 13:51:45.075167 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 30 13:51:45.075182 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 30 13:51:45.075198 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:51:45.075214 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:51:45.075229 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:51:45.075248 kernel: pnp: PnP ACPI init
Jan 30 13:51:45.075264 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 13:51:45.075279 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:51:45.075295 kernel: NET: Registered PF_INET protocol family
Jan 30 13:51:45.075311 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:51:45.075327 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 13:51:45.075343 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:51:45.075358 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:51:45.075377 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 13:51:45.075392 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 13:51:45.075407 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:51:45.075423 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:51:45.075505 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:51:45.075532 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:51:45.075662 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:51:45.075789 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:51:45.075915 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:51:45.076062 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 13:51:45.076314 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 13:51:45.076337 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:51:45.076355 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 13:51:45.076372 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Jan 30 13:51:45.076389 kernel: clocksource: Switched to clocksource tsc
Jan 30 13:51:45.076404 kernel: Initialise system trusted keyrings
Jan 30 13:51:45.076421 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 13:51:45.076529 kernel: Key type asymmetric registered
Jan 30 13:51:45.076545 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:51:45.076562 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:51:45.076578 kernel: io scheduler mq-deadline registered
Jan 30 13:51:45.076596 kernel: io scheduler kyber registered
Jan 30 13:51:45.076612 kernel: io scheduler bfq registered
Jan 30 13:51:45.076629 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:51:45.076645 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:51:45.076662 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:51:45.076681 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:51:45.076697 kernel: i8042: Warning: Keylock active
Jan 30 13:51:45.076713 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:51:45.076730 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:51:45.076891 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 30 13:51:45.077019 kernel: rtc_cmos 00:00: registered as rtc0
Jan 30 13:51:45.077145 kernel: rtc_cmos 00:00: setting system clock to 2025-01-30T13:51:44 UTC (1738245104)
Jan 30 13:51:45.077271 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 30 13:51:45.077295 kernel: intel_pstate: CPU model not supported
Jan 30 13:51:45.077312 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:51:45.077328 kernel: Segment Routing with IPv6
Jan 30 13:51:45.077344 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:51:45.077361 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:51:45.077378 kernel: Key type dns_resolver registered
Jan 30 13:51:45.077394 kernel: IPI shorthand broadcast: enabled
Jan 30 13:51:45.077411 kernel: sched_clock: Marking stable (718001861, 312476205)->(1133006715, -102528649)
Jan 30 13:51:45.077428 kernel: registered taskstats version 1
Jan 30 13:51:45.077469 kernel: Loading compiled-in X.509 certificates
Jan 30 13:51:45.077486 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:51:45.077502 kernel: Key type .fscrypt registered
Jan 30 13:51:45.077519 kernel: Key type fscrypt-provisioning registered
Jan 30 13:51:45.077535 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:51:45.077551 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:51:45.077568 kernel: ima: No architecture policies found
Jan 30 13:51:45.077584 kernel: clk: Disabling unused clocks
Jan 30 13:51:45.077601 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:51:45.077621 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:51:45.077637 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:51:45.077653 kernel: Run /init as init process
Jan 30 13:51:45.077670 kernel: with arguments:
Jan 30 13:51:45.077686 kernel: /init
Jan 30 13:51:45.077702 kernel: with environment:
Jan 30 13:51:45.077717 kernel: HOME=/
Jan 30 13:51:45.077733 kernel: TERM=linux
Jan 30 13:51:45.077748 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:51:45.077775 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:51:45.077809 systemd[1]: Detected virtualization amazon.
Jan 30 13:51:45.077831 systemd[1]: Detected architecture x86-64.
Jan 30 13:51:45.077849 systemd[1]: Running in initrd.
Jan 30 13:51:45.077866 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:51:45.077886 systemd[1]: Hostname set to <localhost>.
Jan 30 13:51:45.077906 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:51:45.077924 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:51:45.077942 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:51:45.077961 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:51:45.077980 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:51:45.077998 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:51:45.078016 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:51:45.078038 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:51:45.078059 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:51:45.078077 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:51:45.078096 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:51:45.078113 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:51:45.078132 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:51:45.078153 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:51:45.078171 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:51:45.078189 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:51:45.078207 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:51:45.078225 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:51:45.078244 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:51:45.078262 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:51:45.078280 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:51:45.078298 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:51:45.078320 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:51:45.078339 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:51:45.078356 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:51:45.078374 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:51:45.078396 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:51:45.078417 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:51:45.078492 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:51:45.078517 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:51:45.078537 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:51:45.078584 systemd-journald[178]: Collecting audit messages is disabled.
Jan 30 13:51:45.078626 systemd-journald[178]: Journal started
Jan 30 13:51:45.078665 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2c717a4ac0da0886a554c81e347e56) is 4.8M, max 38.6M, 33.7M free.
Jan 30 13:51:45.072609 systemd-modules-load[179]: Inserted module 'overlay'
Jan 30 13:51:45.086481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:51:45.090463 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:51:45.096479 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:51:45.098712 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:51:45.100980 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:51:45.128507 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:51:45.128577 kernel: Bridge firewalling registered
Jan 30 13:51:45.126834 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:51:45.126883 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jan 30 13:51:45.136705 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:51:45.256653 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:51:45.258351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:51:45.284246 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:51:45.299713 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:51:45.302491 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:51:45.306273 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:51:45.320741 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:51:45.329857 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:51:45.339717 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:51:45.345114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:51:45.356539 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:51:45.389485 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:51:45.421455 dracut-cmdline[214]: dracut-dracut-053
Jan 30 13:51:45.424863 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:51:45.438399 systemd-resolved[207]: Positive Trust Anchors:
Jan 30 13:51:45.438422 systemd-resolved[207]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:51:45.438535 systemd-resolved[207]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:51:45.442825 systemd-resolved[207]: Defaulting to hostname 'linux'.
Jan 30 13:51:45.444316 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:51:45.445690 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:51:45.515469 kernel: SCSI subsystem initialized
Jan 30 13:51:45.524465 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:51:45.535463 kernel: iscsi: registered transport (tcp)
Jan 30 13:51:45.556466 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:51:45.556538 kernel: QLogic iSCSI HBA Driver
Jan 30 13:51:45.594686 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:51:45.599665 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:51:45.626785 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:51:45.626867 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:51:45.626899 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:51:45.667467 kernel: raid6: avx512x4 gen() 16333 MB/s
Jan 30 13:51:45.684462 kernel: raid6: avx512x2 gen() 18076 MB/s
Jan 30 13:51:45.701469 kernel: raid6: avx512x1 gen() 18007 MB/s
Jan 30 13:51:45.718466 kernel: raid6: avx2x4 gen() 17450 MB/s
Jan 30 13:51:45.735459 kernel: raid6: avx2x2 gen() 17879 MB/s
Jan 30 13:51:45.752464 kernel: raid6: avx2x1 gen() 12856 MB/s
Jan 30 13:51:45.752557 kernel: raid6: using algorithm avx512x2 gen() 18076 MB/s
Jan 30 13:51:45.769465 kernel: raid6: .... xor() 23133 MB/s, rmw enabled
Jan 30 13:51:45.769552 kernel: raid6: using avx512x2 recovery algorithm
Jan 30 13:51:45.790465 kernel: xor: automatically using best checksumming function avx
Jan 30 13:51:45.961469 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:51:45.972211 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:51:45.976708 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:51:46.025726 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jan 30 13:51:46.033123 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:51:46.047641 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:51:46.075667 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Jan 30 13:51:46.109576 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:51:46.118008 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:51:46.199945 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:51:46.211957 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:51:46.254793 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:51:46.260347 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:51:46.264061 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:51:46.266452 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:51:46.275252 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:51:46.324207 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:51:46.342735 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 30 13:51:46.401053 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 30 13:51:46.401273 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:51:46.401295 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 30 13:51:46.401487 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:51:46.401506 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:51:46.401525 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:f2:b6:48:a2:cf
Jan 30 13:51:46.408592 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 30 13:51:46.408846 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 13:51:46.407900 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:51:46.408177 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:51:46.412799 (udev-worker)[457]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:51:46.418352 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:51:46.419991 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:51:46.420280 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:51:46.422569 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:51:46.437051 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 30 13:51:46.444270 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:51:46.444333 kernel: GPT:9289727 != 16777215
Jan 30 13:51:46.444356 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:51:46.444375 kernel: GPT:9289727 != 16777215
Jan 30 13:51:46.444397 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:51:46.444422 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:51:46.444789 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:51:46.633480 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (452)
Jan 30 13:51:46.639492 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (451)
Jan 30 13:51:46.694201 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 30 13:51:46.700641 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:51:46.707698 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:51:46.755227 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 30 13:51:46.790840 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:51:46.803870 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 30 13:51:46.810840 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 30 13:51:46.811073 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 30 13:51:46.826095 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:51:46.837082 disk-uuid[631]: Primary Header is updated.
Jan 30 13:51:46.837082 disk-uuid[631]: Secondary Entries is updated.
Jan 30 13:51:46.837082 disk-uuid[631]: Secondary Header is updated.
Jan 30 13:51:46.855467 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:51:46.874493 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:51:47.879464 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:51:47.879859 disk-uuid[632]: The operation has completed successfully.
Jan 30 13:51:48.031131 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:51:48.031253 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:51:48.053722 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:51:48.070635 sh[890]: Success
Jan 30 13:51:48.095551 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 13:51:48.179577 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:51:48.195574 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:51:48.197732 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:51:48.231462 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:51:48.231535 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:51:48.231556 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:51:48.232836 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:51:48.232859 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:51:48.388461 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 13:51:48.404955 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:51:48.405872 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:51:48.413737 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:51:48.416052 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:51:48.436508 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:51:48.436580 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:51:48.436600 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:51:48.441459 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:51:48.455494 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:51:48.455347 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:51:48.477883 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:51:48.490840 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:51:48.552420 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:51:48.567777 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:51:48.592283 systemd-networkd[1082]: lo: Link UP
Jan 30 13:51:48.592296 systemd-networkd[1082]: lo: Gained carrier
Jan 30 13:51:48.596303 systemd-networkd[1082]: Enumeration completed
Jan 30 13:51:48.598266 systemd-networkd[1082]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:51:48.598279 systemd-networkd[1082]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:51:48.599201 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:51:48.607738 systemd[1]: Reached target network.target - Network.
Jan 30 13:51:48.610937 systemd-networkd[1082]: eth0: Link UP
Jan 30 13:51:48.610946 systemd-networkd[1082]: eth0: Gained carrier
Jan 30 13:51:48.610962 systemd-networkd[1082]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:51:48.624585 systemd-networkd[1082]: eth0: DHCPv4 address 172.31.18.59/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 30 13:51:48.938034 ignition[1005]: Ignition 2.19.0
Jan 30 13:51:48.938044 ignition[1005]: Stage: fetch-offline
Jan 30 13:51:48.938256 ignition[1005]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:51:48.938264 ignition[1005]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:51:48.938738 ignition[1005]: Ignition finished successfully
Jan 30 13:51:48.951057 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:51:48.966693 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:51:49.001125 ignition[1090]: Ignition 2.19.0
Jan 30 13:51:49.001142 ignition[1090]: Stage: fetch
Jan 30 13:51:49.001642 ignition[1090]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:51:49.001655 ignition[1090]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:51:49.001762 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:51:49.045968 ignition[1090]: PUT result: OK
Jan 30 13:51:49.048787 ignition[1090]: parsed url from cmdline: ""
Jan 30 13:51:49.048798 ignition[1090]: no config URL provided
Jan 30 13:51:49.048808 ignition[1090]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:51:49.048823 ignition[1090]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:51:49.048848 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:51:49.052005 ignition[1090]: PUT result: OK
Jan 30 13:51:49.055974 ignition[1090]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 30 13:51:49.061875 ignition[1090]: GET result: OK
Jan 30 13:51:49.062136 ignition[1090]: parsing config with SHA512: 4989d4bf349d91c8ff88eb9896607fafdf818a2a6d46856b8359acb196318e455a7088cb43b82b688c5ceb61be53f264e6c513f4b4d319b3361ab0f14cebc196
Jan 30 13:51:49.070556 unknown[1090]: fetched base config from "system"
Jan 30 13:51:49.071162 unknown[1090]: fetched base config from "system"
Jan 30 13:51:49.072560 ignition[1090]: fetch: fetch complete
Jan 30 13:51:49.071171 unknown[1090]: fetched user config from "aws"
Jan 30 13:51:49.072568 ignition[1090]: fetch: fetch passed
Jan 30 13:51:49.076836 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:51:49.072645 ignition[1090]: Ignition finished successfully
Jan 30 13:51:49.084691 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:51:49.107556 ignition[1096]: Ignition 2.19.0
Jan 30 13:51:49.107568 ignition[1096]: Stage: kargs
Jan 30 13:51:49.108052 ignition[1096]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:51:49.108061 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:51:49.108146 ignition[1096]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:51:49.111359 ignition[1096]: PUT result: OK
Jan 30 13:51:49.140584 ignition[1096]: kargs: kargs passed
Jan 30 13:51:49.140691 ignition[1096]: Ignition finished successfully
Jan 30 13:51:49.149296 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:51:49.160700 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:51:49.186949 ignition[1103]: Ignition 2.19.0
Jan 30 13:51:49.186963 ignition[1103]: Stage: disks
Jan 30 13:51:49.187564 ignition[1103]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:51:49.187578 ignition[1103]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:51:49.188231 ignition[1103]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:51:49.190382 ignition[1103]: PUT result: OK
Jan 30 13:51:49.196390 ignition[1103]: disks: disks passed
Jan 30 13:51:49.196612 ignition[1103]: Ignition finished successfully
Jan 30 13:51:49.200165 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:51:49.200608 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:51:49.204012 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:51:49.205613 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:51:49.208174 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:51:49.210835 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:51:49.219618 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:51:49.262387 systemd-fsck[1111]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:51:49.268585 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:51:49.278562 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:51:49.415467 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:51:49.415948 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:51:49.417012 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:51:49.434721 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:51:49.440576 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:51:49.447120 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:51:49.447200 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:51:49.447237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:51:49.459467 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1130)
Jan 30 13:51:49.460872 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:51:49.464468 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:51:49.465938 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:51:49.465984 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:51:49.470184 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:51:49.469688 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:51:49.475864 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:51:49.987730 initrd-setup-root[1154]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:51:50.008663 initrd-setup-root[1161]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:51:50.014563 initrd-setup-root[1168]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:51:50.037287 initrd-setup-root[1175]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:51:50.151617 systemd-networkd[1082]: eth0: Gained IPv6LL
Jan 30 13:51:50.484097 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:51:50.491645 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:51:50.505687 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:51:50.521579 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:51:50.526864 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:51:50.569562 ignition[1242]: INFO : Ignition 2.19.0
Jan 30 13:51:50.571142 ignition[1242]: INFO : Stage: mount
Jan 30 13:51:50.572227 ignition[1242]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:51:50.572227 ignition[1242]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:51:50.572227 ignition[1242]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:51:50.576495 ignition[1242]: INFO : PUT result: OK
Jan 30 13:51:50.580010 ignition[1242]: INFO : mount: mount passed
Jan 30 13:51:50.580010 ignition[1242]: INFO : Ignition finished successfully
Jan 30 13:51:50.584419 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:51:50.584737 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:51:50.594630 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:51:50.610707 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:51:50.634402 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1254)
Jan 30 13:51:50.634478 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:51:50.634499 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:51:50.635974 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:51:50.640739 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:51:50.643511 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:51:50.678700 ignition[1271]: INFO : Ignition 2.19.0 Jan 30 13:51:50.678700 ignition[1271]: INFO : Stage: files Jan 30 13:51:50.681284 ignition[1271]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:50.681284 ignition[1271]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:51:50.681284 ignition[1271]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:51:50.685768 ignition[1271]: INFO : PUT result: OK Jan 30 13:51:50.697395 ignition[1271]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:51:50.730151 ignition[1271]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:51:50.730151 ignition[1271]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:51:50.783808 ignition[1271]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:51:50.785692 ignition[1271]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:51:50.790378 ignition[1271]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:51:50.790325 unknown[1271]: wrote ssh authorized keys file for user: core Jan 30 13:51:50.810406 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:51:50.816051 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:51:50.942411 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:51:51.106104 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:51:51.106104 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:51:51.115486 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 13:51:51.524699 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:51:51.742521 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:51:51.748498 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:51:51.748498 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:51:51.758568 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:51:51.758568 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:51:51.758568 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:51:51.758568 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:51:51.758568 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:51:51.758568 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:51:51.758568 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:51:51.758568 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:51:51.758568 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:51:51.758568 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:51:51.758568 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:51:51.758568 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 30 13:51:52.050136 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:51:52.600669 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:51:52.600669 ignition[1271]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 13:51:52.606682 ignition[1271]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:51:52.610605 ignition[1271]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:51:52.610605 ignition[1271]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 13:51:52.610605 ignition[1271]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:51:52.610605 ignition[1271]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:51:52.610605 ignition[1271]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:51:52.610605 ignition[1271]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:51:52.610605 ignition[1271]: INFO : files: files passed Jan 30 13:51:52.610605 ignition[1271]: INFO : Ignition finished successfully Jan 30 13:51:52.615981 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:51:52.637810 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:51:52.645783 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:51:52.651882 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:51:52.652018 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
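[editor's note] The files stage above fetches each remote asset with numbered attempts ("attempt #1", "GET result: OK") and then writes it under the still-mounted /sysroot prefix. A rough Python sketch of that fetch-and-write pattern, assuming a simple exponential backoff; fetch_to_sysroot is an illustrative name, not Ignition's implementation:

    import time
    import urllib.request
    from pathlib import Path

    def fetch_to_sysroot(url, dest, sysroot="/sysroot", attempts=3):
        # Destination paths in the log are absolute inside the new root,
        # so they are re-rooted under the sysroot mount point here.
        target = Path(sysroot) / dest.lstrip("/")
        target.parent.mkdir(parents=True, exist_ok=True)
        for attempt in range(1, attempts + 1):
            print(f"GET {url}: attempt #{attempt}")
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    target.write_bytes(resp.read())
                print("GET result: OK")
                return
            except OSError:
                time.sleep(2 ** attempt)  # back off before the next attempt
        raise RuntimeError(f"all {attempts} attempts failed for {url}")

    # e.g. fetch_to_sysroot(
    #     "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz",
    #     "/opt/helm-v3.13.2-linux-amd64.tar.gz")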
Jan 30 13:51:52.703073 initrd-setup-root-after-ignition[1299]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:52.703073 initrd-setup-root-after-ignition[1299]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:52.710579 initrd-setup-root-after-ignition[1303]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:52.729542 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:51:52.729977 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:51:52.749424 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:51:52.788133 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:51:52.788235 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:51:52.790976 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:51:52.793775 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:51:52.795193 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:51:52.802813 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:51:52.823466 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:51:52.829711 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:51:52.845338 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:51:52.848300 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:51:52.850857 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:51:52.852081 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:51:52.852400 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:51:52.872798 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:51:52.875428 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:51:52.878052 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:51:52.880496 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:51:52.889077 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:51:52.897250 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:51:52.902988 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:51:52.916037 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:51:52.916294 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:51:52.923354 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:51:52.925372 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:51:52.925616 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:51:52.931014 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:51:52.932530 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:51:52.936720 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 30 13:51:52.938071 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:51:52.943237 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:51:52.943372 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:51:52.950171 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:51:52.950756 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:51:52.956418 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:51:52.956634 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:51:52.972017 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:51:52.973563 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:51:52.973902 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:51:52.986172 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:51:52.989520 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:51:52.989816 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:51:52.994003 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:51:52.994193 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:51:53.005376 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:51:53.006030 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:51:53.018499 ignition[1323]: INFO : Ignition 2.19.0 Jan 30 13:51:53.018499 ignition[1323]: INFO : Stage: umount Jan 30 13:51:53.020998 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:53.020998 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:51:53.020998 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:51:53.025800 ignition[1323]: INFO : PUT result: OK Jan 30 13:51:53.035564 ignition[1323]: INFO : umount: umount passed Jan 30 13:51:53.040763 ignition[1323]: INFO : Ignition finished successfully Jan 30 13:51:53.044884 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:51:53.045644 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:51:53.045773 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:51:53.050046 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:51:53.050177 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:51:53.052502 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:51:53.052576 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:51:53.055004 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:51:53.055097 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:51:53.057542 systemd[1]: Stopped target network.target - Network. Jan 30 13:51:53.058097 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:51:53.058288 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:51:53.059008 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:51:53.059528 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 30 13:51:53.065745 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:51:53.067864 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:51:53.081221 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:51:53.081483 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:51:53.081644 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:51:53.095314 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:51:53.096508 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:51:53.101330 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:51:53.102626 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:51:53.102824 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:51:53.102886 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:51:53.106358 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:51:53.108489 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:51:53.112489 systemd-networkd[1082]: eth0: DHCPv6 lease lost Jan 30 13:51:53.113927 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:51:53.114021 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:51:53.126535 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:51:53.126825 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:51:53.129404 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:51:53.129593 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:51:53.135383 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:51:53.135490 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:51:53.137987 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:51:53.138076 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:51:53.148693 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:51:53.150222 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:51:53.150455 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:51:53.153935 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:51:53.154024 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:51:53.166427 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:51:53.166532 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:51:53.168056 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:51:53.168141 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:51:53.173164 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:51:53.203522 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:51:53.203752 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:51:53.206269 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:51:53.206348 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 30 13:51:53.211970 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:51:53.212044 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:51:53.216469 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:51:53.216538 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:51:53.219225 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:51:53.219295 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:51:53.224070 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:51:53.224159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:51:53.238819 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:51:53.240175 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:51:53.240251 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:51:53.246545 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:51:53.246637 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:51:53.248761 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:51:53.248920 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:51:53.253259 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:51:53.253967 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:51:53.266919 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:51:53.273873 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:51:53.313311 systemd[1]: Switching root. Jan 30 13:51:53.375505 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jan 30 13:51:53.375741 systemd-journald[178]: Journal stopped Jan 30 13:51:56.442948 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:51:56.443079 kernel: SELinux: policy capability open_perms=1 Jan 30 13:51:56.443101 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:51:56.443127 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:51:56.443146 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:51:56.443301 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:51:56.443324 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:51:56.443354 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:51:56.443372 kernel: audit: type=1403 audit(1738245114.615:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:51:56.443400 systemd[1]: Successfully loaded SELinux policy in 79.030ms. Jan 30 13:51:56.443431 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.371ms. Jan 30 13:51:56.443481 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:51:56.443502 systemd[1]: Detected virtualization amazon. Jan 30 13:51:56.443522 systemd[1]: Detected architecture x86-64. Jan 30 13:51:56.443541 systemd[1]: Detected first boot. 
Jan 30 13:51:56.443562 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:51:56.443581 zram_generator::config[1365]: No configuration found. Jan 30 13:51:56.443607 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:51:56.443628 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:51:56.443656 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:51:56.443676 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:51:56.443698 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:51:56.443718 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:51:56.443738 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:51:56.443759 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:51:56.443783 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:51:56.443807 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:51:56.443829 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:51:56.443848 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:51:56.443868 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:51:56.443890 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:51:56.443911 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:51:56.443930 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:51:56.443952 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:51:56.443976 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:51:56.443996 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:51:56.444016 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:51:56.444037 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:51:56.444058 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:51:56.444078 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:51:56.447275 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:51:56.447324 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:51:56.447357 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:51:56.447378 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:51:56.448959 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:51:56.448997 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:51:56.449020 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:51:56.449041 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:51:56.449073 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:51:56.449091 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 30 13:51:56.449110 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:51:56.449135 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:51:56.449154 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:51:56.449175 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:51:56.449197 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:56.449216 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:51:56.449233 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:51:56.449252 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:51:56.449272 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:51:56.449294 systemd[1]: Reached target machines.target - Containers. Jan 30 13:51:56.449314 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:51:56.449334 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:51:56.449353 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:51:56.449373 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:51:56.449392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:51:56.449412 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:51:56.449432 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:51:56.462747 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:51:56.462783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:51:56.462808 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:51:56.462832 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:51:56.462856 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:51:56.462878 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:51:56.462900 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:51:56.462922 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:51:56.462945 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:51:56.462967 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:51:56.462993 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:51:56.463017 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:51:56.463040 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:51:56.463063 systemd[1]: Stopped verity-setup.service. Jan 30 13:51:56.463087 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:56.463110 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 30 13:51:56.463175 systemd-journald[1440]: Collecting audit messages is disabled. Jan 30 13:51:56.463224 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:51:56.463247 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:51:56.463270 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:51:56.463293 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:51:56.463322 kernel: fuse: init (API version 7.39) Jan 30 13:51:56.463346 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:51:56.463368 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:51:56.465527 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:51:56.465567 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:51:56.465588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:51:56.465609 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:51:56.465629 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:51:56.465652 kernel: loop: module loaded Jan 30 13:51:56.465674 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:51:56.465694 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:51:56.465714 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:51:56.465734 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:51:56.465755 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:51:56.465781 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:51:56.465802 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:51:56.465827 systemd-journald[1440]: Journal started Jan 30 13:51:56.465870 systemd-journald[1440]: Runtime Journal (/run/log/journal/ec2c717a4ac0da0886a554c81e347e56) is 4.8M, max 38.6M, 33.7M free. Jan 30 13:51:56.473511 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:51:56.473582 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:51:55.894256 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:51:55.927222 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 30 13:51:55.927683 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:51:56.520477 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:51:56.520562 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:51:56.522903 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:51:56.526403 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:51:56.528193 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:51:56.530339 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:51:56.543792 kernel: ACPI: bus type drm_connector registered Jan 30 13:51:56.545265 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:51:56.545751 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
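[editor's note] Entries like the interleaved journald/kernel stream above can be re-read after boot: `journalctl -b -o json` emits one JSON object per entry, which makes it easy to separate the per-PID streams. A small sketch using only the standard library; the filtering shown is just an example:

    import json
    import subprocess

    def boot_entries(unit=None):
        # -b: current boot; -o json: one JSON object per line.
        cmd = ["journalctl", "-b", "-o", "json", "--no-pager"]
        if unit:
            cmd += ["-u", unit]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        for line in out.stdout.splitlines():
            entry = json.loads(line)
            yield entry.get("SYSLOG_IDENTIFIER"), entry.get("MESSAGE")

    # e.g. [msg for who, msg in boot_entries("systemd-journald")]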
Jan 30 13:51:56.564990 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:51:56.589854 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:51:56.591221 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:51:56.591269 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:51:56.596092 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:51:56.603829 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:51:56.613739 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:51:56.615564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:51:56.626325 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:51:56.630657 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:51:56.632614 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:51:56.639486 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:51:56.644907 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:51:56.652890 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:51:56.655818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:51:56.658358 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:51:56.684548 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:51:56.695882 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:51:56.712117 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:51:56.713737 kernel: loop0: detected capacity change from 0 to 205544 Jan 30 13:51:56.715007 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:51:56.720835 systemd-journald[1440]: Time spent on flushing to /var/log/journal/ec2c717a4ac0da0886a554c81e347e56 is 42.139ms for 968 entries. Jan 30 13:51:56.720835 systemd-journald[1440]: System Journal (/var/log/journal/ec2c717a4ac0da0886a554c81e347e56) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:51:56.785058 systemd-journald[1440]: Received client request to flush runtime journal. Jan 30 13:51:56.728746 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:51:56.788999 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:51:56.810102 udevadm[1501]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:51:56.811677 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:51:56.823793 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:51:56.839970 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 30 13:51:56.871104 kernel: loop1: detected capacity change from 0 to 142488 Jan 30 13:51:56.875550 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:51:56.880111 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:51:56.913090 systemd-tmpfiles[1510]: ACLs are not supported, ignoring. Jan 30 13:51:56.913120 systemd-tmpfiles[1510]: ACLs are not supported, ignoring. Jan 30 13:51:56.923544 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:51:57.042468 kernel: loop2: detected capacity change from 0 to 61336 Jan 30 13:51:57.158783 kernel: loop3: detected capacity change from 0 to 140768 Jan 30 13:51:57.307503 kernel: loop4: detected capacity change from 0 to 205544 Jan 30 13:51:57.341472 kernel: loop5: detected capacity change from 0 to 142488 Jan 30 13:51:57.371495 kernel: loop6: detected capacity change from 0 to 61336 Jan 30 13:51:57.394483 kernel: loop7: detected capacity change from 0 to 140768 Jan 30 13:51:57.423553 (sd-merge)[1517]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 30 13:51:57.424366 (sd-merge)[1517]: Merged extensions into '/usr'. Jan 30 13:51:57.434215 systemd[1]: Reloading requested from client PID 1495 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:51:57.434392 systemd[1]: Reloading... Jan 30 13:51:57.621394 zram_generator::config[1546]: No configuration found. Jan 30 13:51:57.984535 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:51:58.105795 systemd[1]: Reloading finished in 670 ms. Jan 30 13:51:58.143932 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:51:58.154792 systemd[1]: Starting ensure-sysext.service... Jan 30 13:51:58.163267 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:51:58.200493 systemd[1]: Reloading requested from client PID 1591 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:51:58.200514 systemd[1]: Reloading... Jan 30 13:51:58.214520 systemd-tmpfiles[1592]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:51:58.215273 systemd-tmpfiles[1592]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:51:58.216720 systemd-tmpfiles[1592]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:51:58.217217 systemd-tmpfiles[1592]: ACLs are not supported, ignoring. Jan 30 13:51:58.217294 systemd-tmpfiles[1592]: ACLs are not supported, ignoring. Jan 30 13:51:58.228545 systemd-tmpfiles[1592]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:51:58.228562 systemd-tmpfiles[1592]: Skipping /boot Jan 30 13:51:58.254039 systemd-tmpfiles[1592]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:51:58.254055 systemd-tmpfiles[1592]: Skipping /boot Jan 30 13:51:58.383463 zram_generator::config[1626]: No configuration found. Jan 30 13:51:58.519940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
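[editor's note] The "(sd-merge)" lines above are systemd-sysext overlaying the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) onto /usr, after which systemd reloads its unit set. One way to inspect the result of such a merge afterwards is the stock systemd-sysext tool; a thin wrapper sketch, noting that the exact output format varies by systemd version:

    import subprocess

    def sysext_status():
        # Lists each hierarchy (e.g. /usr, /opt) and the extension
        # images currently merged into it.
        return subprocess.run(
            ["systemd-sysext", "status", "--no-pager"],
            capture_output=True, text=True, check=True,
        ).stdout

    print(sysext_status())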
Jan 30 13:51:58.590266 systemd[1]: Reloading finished in 389 ms. Jan 30 13:51:58.624545 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:51:58.632050 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:51:58.648828 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:51:58.652959 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:51:58.660131 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:51:58.678962 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:51:58.686645 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:51:58.714736 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:51:58.733129 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:58.733714 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:51:58.743426 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:51:58.748619 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:51:58.755578 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:51:58.756950 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:51:58.757227 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:58.770810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:51:58.771055 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:51:58.774974 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:58.777530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:51:58.778078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:51:58.778278 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:58.784287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:51:58.793618 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:51:58.811118 ldconfig[1491]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:51:58.812012 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:58.815007 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:51:58.833012 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:51:58.843802 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 30 13:51:58.855255 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:51:58.857248 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:51:58.857648 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:51:58.875644 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:51:58.876944 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:58.880583 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:51:58.883163 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:51:58.883885 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:51:58.903649 augenrules[1701]: No rules Jan 30 13:51:58.912499 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:51:58.915937 systemd[1]: Finished ensure-sysext.service. Jan 30 13:51:58.924079 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:51:58.925531 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:51:58.928987 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:51:58.931114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:51:58.931299 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:51:58.936852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:51:58.937162 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:51:58.946117 systemd-udevd[1677]: Using default interface naming scheme 'v255'. Jan 30 13:51:58.947361 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:51:58.947467 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:51:58.960548 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:51:58.972891 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:51:59.008429 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:51:59.012835 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:51:59.029255 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:51:59.033729 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:51:59.039789 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:51:59.053580 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:51:59.221696 systemd-networkd[1724]: lo: Link UP Jan 30 13:51:59.221708 systemd-networkd[1724]: lo: Gained carrier Jan 30 13:51:59.224270 systemd-networkd[1724]: Enumeration completed Jan 30 13:51:59.225637 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 30 13:51:59.242908 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:51:59.252818 systemd-resolved[1676]: Positive Trust Anchors: Jan 30 13:51:59.252845 systemd-resolved[1676]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:51:59.252896 systemd-resolved[1676]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:51:59.260110 (udev-worker)[1725]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:51:59.272905 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:51:59.278968 systemd-resolved[1676]: Defaulting to hostname 'linux'. Jan 30 13:51:59.284050 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:51:59.285887 systemd[1]: Reached target network.target - Network. Jan 30 13:51:59.287241 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:51:59.364804 systemd-networkd[1724]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:51:59.364993 systemd-networkd[1724]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:51:59.370970 systemd-networkd[1724]: eth0: Link UP Jan 30 13:51:59.371396 systemd-networkd[1724]: eth0: Gained carrier Jan 30 13:51:59.372924 systemd-networkd[1724]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:51:59.378463 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1725) Jan 30 13:51:59.383627 systemd-networkd[1724]: eth0: DHCPv4 address 172.31.18.59/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 30 13:51:59.401532 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:51:59.403460 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 30 13:51:59.418980 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:51:59.421497 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jan 30 13:51:59.430957 kernel: ACPI: button: Sleep Button [SLPF] Jan 30 13:51:59.585477 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 13:51:59.671778 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:51:59.683101 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:51:59.723330 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 30 13:51:59.728899 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:51:59.731653 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:51:59.745039 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
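[editor's note] The DHCPv4 lease above is internally consistent: a /20 prefix places 172.31.18.59 in network 172.31.16.0/20, which contains the advertised gateway 172.31.16.1. The Python standard library confirms the arithmetic:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.18.59/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)             # 172.31.16.0/20
    print(gateway in iface.network)  # True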
Jan 30 13:51:59.753222 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:51:59.770464 lvm[1840]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:51:59.804671 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:51:59.806652 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:51:59.818415 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:51:59.831593 lvm[1845]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:51:59.868791 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:51:59.979996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:51:59.982490 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:51:59.984171 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:51:59.985643 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:51:59.990188 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:51:59.991999 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:51:59.993526 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:51:59.994959 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:51:59.994999 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:51:59.996167 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:51:59.998553 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:52:00.003290 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:52:00.011316 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:52:00.015287 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:52:00.018391 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:52:00.021972 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:52:00.023885 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:52:00.023912 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:52:00.037686 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:52:00.056679 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:52:00.085731 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:52:00.096406 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:52:00.112431 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:52:00.113629 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:52:00.131076 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 30 13:52:00.134366 jq[1854]: false Jan 30 13:52:00.156844 systemd[1]: Started ntpd.service - Network Time Service. Jan 30 13:52:00.174606 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:52:00.185687 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 30 13:52:00.194735 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:52:00.209195 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:52:00.210408 extend-filesystems[1855]: Found loop4 Jan 30 13:52:00.210408 extend-filesystems[1855]: Found loop5 Jan 30 13:52:00.210408 extend-filesystems[1855]: Found loop6 Jan 30 13:52:00.210408 extend-filesystems[1855]: Found loop7 Jan 30 13:52:00.210408 extend-filesystems[1855]: Found nvme0n1 Jan 30 13:52:00.210408 extend-filesystems[1855]: Found nvme0n1p1 Jan 30 13:52:00.210408 extend-filesystems[1855]: Found nvme0n1p2 Jan 30 13:52:00.210408 extend-filesystems[1855]: Found nvme0n1p3 Jan 30 13:52:00.210408 extend-filesystems[1855]: Found usr Jan 30 13:52:00.210408 extend-filesystems[1855]: Found nvme0n1p4 Jan 30 13:52:00.238704 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:52:00.254069 extend-filesystems[1855]: Found nvme0n1p6 Jan 30 13:52:00.254069 extend-filesystems[1855]: Found nvme0n1p7 Jan 30 13:52:00.254069 extend-filesystems[1855]: Found nvme0n1p9 Jan 30 13:52:00.254069 extend-filesystems[1855]: Checking size of /dev/nvme0n1p9 Jan 30 13:52:00.239322 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:52:00.241858 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:52:00.242882 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:52:00.255569 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:52:00.269035 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:52:00.271365 dbus-daemon[1853]: [system] SELinux support is enabled Jan 30 13:52:00.271588 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:52:00.282728 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:52:00.310797 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:52:00.311965 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:52:00.360563 jq[1875]: true Jan 30 13:52:00.392224 extend-filesystems[1855]: Resized partition /dev/nvme0n1p9 Jan 30 13:52:00.361848 dbus-daemon[1853]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1724 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 13:52:00.398744 extend-filesystems[1884]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:52:00.404346 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:52:00.407790 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 30 13:52:00.418949 (ntainerd)[1880]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:52:00.431944 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 30 13:52:00.447887 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:52:00.449204 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:52:00.452455 tar[1877]: linux-amd64/helm Jan 30 13:52:00.449250 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:52:00.452671 dbus-daemon[1853]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 13:52:00.463599 update_engine[1874]: I20250130 13:52:00.455796 1874 main.cc:92] Flatcar Update Engine starting Jan 30 13:52:00.463599 update_engine[1874]: I20250130 13:52:00.458313 1874 update_check_scheduler.cc:74] Next update check in 8m25s Jan 30 13:52:00.450710 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:52:00.450739 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:52:00.470370 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:52:00.518712 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 13:52:00.528767 systemd-networkd[1724]: eth0: Gained IPv6LL Jan 30 13:52:00.562968 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:52:00.572709 ntpd[1857]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 30 13:52:00.587097 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 30 13:52:00.587097 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 13:52:00.587097 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: ---------------------------------------------------- Jan 30 13:52:00.587097 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: ntp-4 is maintained by Network Time Foundation, Jan 30 13:52:00.587097 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 13:52:00.587097 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: corporation. Support and training for ntp-4 are Jan 30 13:52:00.587097 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: available at https://www.nwtime.org/support Jan 30 13:52:00.587097 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: ---------------------------------------------------- Jan 30 13:52:00.576280 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:52:00.572751 ntpd[1857]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 13:52:00.609591 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: proto: precision = 0.074 usec (-24) Jan 30 13:52:00.609591 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: basedate set to 2025-01-17 Jan 30 13:52:00.609591 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: gps base set to 2025-01-19 (week 2350) Jan 30 13:52:00.580988 systemd[1]: Reached target network-online.target - Network is Online. 
Jan 30 13:52:00.572763 ntpd[1857]: ---------------------------------------------------- Jan 30 13:52:00.595706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:00.572775 ntpd[1857]: ntp-4 is maintained by Network Time Foundation, Jan 30 13:52:00.600094 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:52:00.572787 ntpd[1857]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 13:52:00.572798 ntpd[1857]: corporation. Support and training for ntp-4 are Jan 30 13:52:00.572809 ntpd[1857]: available at https://www.nwtime.org/support Jan 30 13:52:00.572820 ntpd[1857]: ---------------------------------------------------- Jan 30 13:52:00.594002 ntpd[1857]: proto: precision = 0.074 usec (-24) Jan 30 13:52:00.594346 ntpd[1857]: basedate set to 2025-01-17 Jan 30 13:52:00.594360 ntpd[1857]: gps base set to 2025-01-19 (week 2350) Jan 30 13:52:00.633459 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 30 13:52:00.633666 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 13:52:00.633666 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 13:52:00.633666 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 13:52:00.633666 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: Listen normally on 3 eth0 172.31.18.59:123 Jan 30 13:52:00.633666 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: Listen normally on 4 lo [::1]:123 Jan 30 13:52:00.633666 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: Listen normally on 5 eth0 [fe80::4f2:b6ff:fe48:a2cf%2]:123 Jan 30 13:52:00.633666 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: Listening on routing socket on fd #22 for interface updates Jan 30 13:52:00.631569 ntpd[1857]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 13:52:00.677394 jq[1887]: true Jan 30 13:52:00.677541 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:52:00.677541 ntpd[1857]: 30 Jan 13:52:00 ntpd[1857]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:52:00.631645 ntpd[1857]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 13:52:00.631851 ntpd[1857]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 13:52:00.631892 ntpd[1857]: Listen normally on 3 eth0 172.31.18.59:123 Jan 30 13:52:00.687976 extend-filesystems[1884]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 30 13:52:00.687976 extend-filesystems[1884]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:52:00.687976 extend-filesystems[1884]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 30 13:52:00.631936 ntpd[1857]: Listen normally on 4 lo [::1]:123 Jan 30 13:52:00.754508 extend-filesystems[1855]: Resized filesystem in /dev/nvme0n1p9 Jan 30 13:52:00.691088 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:52:00.631985 ntpd[1857]: Listen normally on 5 eth0 [fe80::4f2:b6ff:fe48:a2cf%2]:123 Jan 30 13:52:00.692911 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:52:00.632025 ntpd[1857]: Listening on routing socket on fd #22 for interface updates Jan 30 13:52:00.666480 ntpd[1857]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:52:00.666520 ntpd[1857]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:52:00.808949 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 30 13:52:00.818745 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
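
The extend-filesystems/resize2fs exchange above is an ordinary online ext4 grow of the root partition; a minimal sketch of the same operation done by hand, assuming /dev/nvme0n1p9 is the mounted root as logged:

    resize2fs /dev/nvme0n1p9    # ext4 grows online while mounted on /
    df -h /                     # confirm: 1489915 blocks of 4k, roughly 5.7 GiB
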
Jan 30 13:52:00.825002 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:52:00.839290 coreos-metadata[1852]: Jan 30 13:52:00.838 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 13:52:00.839290 coreos-metadata[1852]: Jan 30 13:52:00.839 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 30 13:52:00.839290 coreos-metadata[1852]: Jan 30 13:52:00.839 INFO Fetch successful Jan 30 13:52:00.839290 coreos-metadata[1852]: Jan 30 13:52:00.839 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 30 13:52:00.839290 coreos-metadata[1852]: Jan 30 13:52:00.839 INFO Fetch successful Jan 30 13:52:00.839290 coreos-metadata[1852]: Jan 30 13:52:00.839 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 30 13:52:00.839290 coreos-metadata[1852]: Jan 30 13:52:00.839 INFO Fetch successful Jan 30 13:52:00.839290 coreos-metadata[1852]: Jan 30 13:52:00.839 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 30 13:52:00.839290 coreos-metadata[1852]: Jan 30 13:52:00.839 INFO Fetch successful Jan 30 13:52:00.839290 coreos-metadata[1852]: Jan 30 13:52:00.839 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 30 13:52:00.839290 coreos-metadata[1852]: Jan 30 13:52:00.839 INFO Fetch failed with 404: resource not found Jan 30 13:52:00.839290 coreos-metadata[1852]: Jan 30 13:52:00.839 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 30 13:52:00.843768 coreos-metadata[1852]: Jan 30 13:52:00.840 INFO Fetch successful Jan 30 13:52:00.843768 coreos-metadata[1852]: Jan 30 13:52:00.840 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 30 13:52:00.843768 coreos-metadata[1852]: Jan 30 13:52:00.840 INFO Fetch successful Jan 30 13:52:00.843768 coreos-metadata[1852]: Jan 30 13:52:00.840 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 30 13:52:00.843768 coreos-metadata[1852]: Jan 30 13:52:00.841 INFO Fetch successful Jan 30 13:52:00.843768 coreos-metadata[1852]: Jan 30 13:52:00.841 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 30 13:52:00.858145 coreos-metadata[1852]: Jan 30 13:52:00.845 INFO Fetch successful Jan 30 13:52:00.858145 coreos-metadata[1852]: Jan 30 13:52:00.845 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 30 13:52:00.858145 coreos-metadata[1852]: Jan 30 13:52:00.845 INFO Fetch successful Jan 30 13:52:00.876081 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1725) Jan 30 13:52:00.977089 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:52:00.979008 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:52:00.989942 systemd-logind[1873]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:52:00.989975 systemd-logind[1873]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 30 13:52:00.989999 systemd-logind[1873]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:52:00.990244 systemd-logind[1873]: New seat seat0. Jan 30 13:52:00.992477 systemd[1]: Started systemd-logind.service - User Login Management. 
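
coreos-metadata's fetches above follow the EC2 IMDSv2 token flow against the 2021-01-03 API version; the 404 on meta-data/ipv6 simply means no IPv6 address is assigned to the instance. A sketch of the same two requests with curl (the token TTL is an arbitrary choice):

    TOKEN=$(curl -sX PUT http://169.254.169.254/latest/api/token \
          -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
          http://169.254.169.254/2021-01-03/meta-data/instance-id
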
Jan 30 13:52:01.030882 bash[1952]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:52:01.035637 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:52:01.048585 systemd[1]: Starting sshkeys.service... Jan 30 13:52:01.110285 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:52:01.121411 dbus-daemon[1853]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 13:52:01.123896 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:52:01.128505 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 30 13:52:01.151810 sshd_keygen[1903]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:52:01.158709 dbus-daemon[1853]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1901 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 13:52:01.206948 systemd[1]: Starting polkit.service - Authorization Manager... Jan 30 13:52:01.346346 amazon-ssm-agent[1931]: Initializing new seelog logger Jan 30 13:52:01.371624 amazon-ssm-agent[1931]: New Seelog Logger Creation Complete Jan 30 13:52:01.371624 amazon-ssm-agent[1931]: 2025/01/30 13:52:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:01.371624 amazon-ssm-agent[1931]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:01.395954 amazon-ssm-agent[1931]: 2025/01/30 13:52:01 processing appconfig overrides Jan 30 13:52:01.399149 polkitd[1995]: Started polkitd version 121 Jan 30 13:52:01.407475 amazon-ssm-agent[1931]: 2025/01/30 13:52:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:01.407475 amazon-ssm-agent[1931]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:01.407475 amazon-ssm-agent[1931]: 2025/01/30 13:52:01 processing appconfig overrides Jan 30 13:52:01.407475 amazon-ssm-agent[1931]: 2025/01/30 13:52:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:01.407475 amazon-ssm-agent[1931]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:01.407475 amazon-ssm-agent[1931]: 2025/01/30 13:52:01 processing appconfig overrides Jan 30 13:52:01.433697 amazon-ssm-agent[1931]: 2025-01-30 13:52:01 INFO Proxy environment variables: Jan 30 13:52:01.480244 coreos-metadata[1989]: Jan 30 13:52:01.479 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 13:52:01.520561 coreos-metadata[1989]: Jan 30 13:52:01.486 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 30 13:52:01.520561 coreos-metadata[1989]: Jan 30 13:52:01.502 INFO Fetch successful Jan 30 13:52:01.520561 coreos-metadata[1989]: Jan 30 13:52:01.502 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 13:52:01.520561 coreos-metadata[1989]: Jan 30 13:52:01.507 INFO Fetch successful Jan 30 13:52:01.518041 polkitd[1995]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 13:52:01.532858 amazon-ssm-agent[1931]: 2025/01/30 13:52:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:01.532858 amazon-ssm-agent[1931]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 30 13:52:01.532858 amazon-ssm-agent[1931]: 2025/01/30 13:52:01 processing appconfig overrides Jan 30 13:52:01.532858 amazon-ssm-agent[1931]: 2025-01-30 13:52:01 INFO no_proxy: Jan 30 13:52:01.513558 unknown[1989]: wrote ssh authorized keys file for user: core Jan 30 13:52:01.518125 polkitd[1995]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 13:52:01.518886 polkitd[1995]: Finished loading, compiling and executing 2 rules Jan 30 13:52:01.535687 dbus-daemon[1853]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 13:52:01.535927 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 13:52:01.548279 polkitd[1995]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 13:52:01.609116 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:52:01.654975 amazon-ssm-agent[1931]: 2025-01-30 13:52:01 INFO https_proxy: Jan 30 13:52:01.660923 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:52:01.683954 systemd[1]: Started sshd@0-172.31.18.59:22-139.178.68.195:33748.service - OpenSSH per-connection server daemon (139.178.68.195:33748). Jan 30 13:52:01.776643 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:52:01.776890 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:52:01.791098 amazon-ssm-agent[1931]: 2025-01-30 13:52:01 INFO http_proxy: Jan 30 13:52:01.800047 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:52:01.866124 update-ssh-keys[2028]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:52:01.865143 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:52:01.867399 locksmithd[1902]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:52:01.892744 systemd[1]: Finished sshkeys.service. Jan 30 13:52:01.939628 amazon-ssm-agent[1931]: 2025-01-30 13:52:01 INFO Checking if agent identity type OnPrem can be assumed Jan 30 13:52:01.967976 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:52:01.997268 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:52:02.019653 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:52:02.020293 systemd-hostnamed[1901]: Hostname set to (transient) Jan 30 13:52:02.020725 systemd-resolved[1676]: System hostname changed to 'ip-172-31-18-59'. Jan 30 13:52:02.025868 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:52:02.048606 amazon-ssm-agent[1931]: 2025-01-30 13:52:01 INFO Checking if agent identity type EC2 can be assumed Jan 30 13:52:02.341283 amazon-ssm-agent[1931]: 2025-01-30 13:52:02 INFO Agent will take identity from EC2 Jan 30 13:52:02.355514 sshd[2055]: Accepted publickey for core from 139.178.68.195 port 33748 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:52:02.357670 sshd[2055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:02.391613 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:52:02.400021 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:52:02.415461 systemd-logind[1873]: New session 1 of user core. Jan 30 13:52:02.446459 amazon-ssm-agent[1931]: 2025-01-30 13:52:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 13:52:02.444933 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
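
sshd-keygen.service above ("generating new host keys: RSA ECDSA ED25519") does what OpenSSH's own helper flag does; a sketch, with the second line only there to compare fingerprints against the "Accepted publickey ... SHA256:..." entry:

    ssh-keygen -A                                      # create any missing RSA/ECDSA/ED25519 host keys
    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub   # print that key's SHA256 fingerprint
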
Jan 30 13:52:02.454981 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:52:02.474251 (systemd)[2093]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:52:02.488577 containerd[1880]: time="2025-01-30T13:52:02.483853249Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:52:02.542532 amazon-ssm-agent[1931]: 2025-01-30 13:52:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 13:52:02.608197 containerd[1880]: time="2025-01-30T13:52:02.608044746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:02.613471 containerd[1880]: time="2025-01-30T13:52:02.610542665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:02.613471 containerd[1880]: time="2025-01-30T13:52:02.610840223Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:52:02.613471 containerd[1880]: time="2025-01-30T13:52:02.610871680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:52:02.613471 containerd[1880]: time="2025-01-30T13:52:02.611081287Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:52:02.613471 containerd[1880]: time="2025-01-30T13:52:02.611107139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:02.613471 containerd[1880]: time="2025-01-30T13:52:02.611178507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:02.613471 containerd[1880]: time="2025-01-30T13:52:02.611199621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:02.613471 containerd[1880]: time="2025-01-30T13:52:02.611549814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:02.613471 containerd[1880]: time="2025-01-30T13:52:02.611576682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:02.613471 containerd[1880]: time="2025-01-30T13:52:02.611610859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:02.613471 containerd[1880]: time="2025-01-30T13:52:02.611630203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:02.615686 containerd[1880]: time="2025-01-30T13:52:02.611746563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:02.615686 containerd[1880]: time="2025-01-30T13:52:02.612007001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:52:02.615686 containerd[1880]: time="2025-01-30T13:52:02.612179254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:02.615686 containerd[1880]: time="2025-01-30T13:52:02.612204262Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:52:02.615686 containerd[1880]: time="2025-01-30T13:52:02.612404317Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:52:02.615686 containerd[1880]: time="2025-01-30T13:52:02.612482353Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:52:02.626493 containerd[1880]: time="2025-01-30T13:52:02.623067313Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:52:02.626493 containerd[1880]: time="2025-01-30T13:52:02.623148227Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:52:02.626493 containerd[1880]: time="2025-01-30T13:52:02.623174522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:52:02.626493 containerd[1880]: time="2025-01-30T13:52:02.623200446Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:52:02.626493 containerd[1880]: time="2025-01-30T13:52:02.623228084Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:52:02.626493 containerd[1880]: time="2025-01-30T13:52:02.624698425Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:52:02.626493 containerd[1880]: time="2025-01-30T13:52:02.625055869Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:52:02.626493 containerd[1880]: time="2025-01-30T13:52:02.625272987Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:52:02.626493 containerd[1880]: time="2025-01-30T13:52:02.625299956Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:52:02.626493 containerd[1880]: time="2025-01-30T13:52:02.625320817Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:52:02.626493 containerd[1880]: time="2025-01-30T13:52:02.625344057Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:52:02.626493 containerd[1880]: time="2025-01-30T13:52:02.625363708Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:52:02.626493 containerd[1880]: time="2025-01-30T13:52:02.625383273Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:52:02.626493 containerd[1880]: time="2025-01-30T13:52:02.625405727Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 30 13:52:02.627139 containerd[1880]: time="2025-01-30T13:52:02.625432433Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:52:02.627139 containerd[1880]: time="2025-01-30T13:52:02.625469300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:52:02.627139 containerd[1880]: time="2025-01-30T13:52:02.625487728Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:52:02.627139 containerd[1880]: time="2025-01-30T13:52:02.625511331Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:52:02.627139 containerd[1880]: time="2025-01-30T13:52:02.625542594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627139 containerd[1880]: time="2025-01-30T13:52:02.625565245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627139 containerd[1880]: time="2025-01-30T13:52:02.625586269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627139 containerd[1880]: time="2025-01-30T13:52:02.625608901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627139 containerd[1880]: time="2025-01-30T13:52:02.625629410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627139 containerd[1880]: time="2025-01-30T13:52:02.625651721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627139 containerd[1880]: time="2025-01-30T13:52:02.625670732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627139 containerd[1880]: time="2025-01-30T13:52:02.625691434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627139 containerd[1880]: time="2025-01-30T13:52:02.625716680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627139 containerd[1880]: time="2025-01-30T13:52:02.625740483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627774 containerd[1880]: time="2025-01-30T13:52:02.625760241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627774 containerd[1880]: time="2025-01-30T13:52:02.625781089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627774 containerd[1880]: time="2025-01-30T13:52:02.625803234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627774 containerd[1880]: time="2025-01-30T13:52:02.625829250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:52:02.627774 containerd[1880]: time="2025-01-30T13:52:02.625863285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 30 13:52:02.627774 containerd[1880]: time="2025-01-30T13:52:02.625883391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627774 containerd[1880]: time="2025-01-30T13:52:02.625901107Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:52:02.627774 containerd[1880]: time="2025-01-30T13:52:02.625964618Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:52:02.627774 containerd[1880]: time="2025-01-30T13:52:02.625990169Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:52:02.627774 containerd[1880]: time="2025-01-30T13:52:02.626006998Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:52:02.627774 containerd[1880]: time="2025-01-30T13:52:02.626026941Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:52:02.627774 containerd[1880]: time="2025-01-30T13:52:02.626045361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:52:02.627774 containerd[1880]: time="2025-01-30T13:52:02.626067045Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:52:02.627774 containerd[1880]: time="2025-01-30T13:52:02.626085735Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:52:02.628337 containerd[1880]: time="2025-01-30T13:52:02.626104942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:52:02.637022 containerd[1880]: time="2025-01-30T13:52:02.633155305Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:52:02.637022 containerd[1880]: time="2025-01-30T13:52:02.633650049Z" level=info msg="Connect containerd service" Jan 30 13:52:02.637022 containerd[1880]: time="2025-01-30T13:52:02.633752931Z" level=info msg="using legacy CRI server" Jan 30 13:52:02.637022 containerd[1880]: time="2025-01-30T13:52:02.633767160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:52:02.637022 containerd[1880]: time="2025-01-30T13:52:02.633978207Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:52:02.641889 containerd[1880]: time="2025-01-30T13:52:02.639413456Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:52:02.641889 
containerd[1880]: time="2025-01-30T13:52:02.640247724Z" level=info msg="Start subscribing containerd event" Jan 30 13:52:02.641889 containerd[1880]: time="2025-01-30T13:52:02.640319573Z" level=info msg="Start recovering state" Jan 30 13:52:02.641889 containerd[1880]: time="2025-01-30T13:52:02.640406407Z" level=info msg="Start event monitor" Jan 30 13:52:02.641889 containerd[1880]: time="2025-01-30T13:52:02.640430946Z" level=info msg="Start snapshots syncer" Jan 30 13:52:02.641889 containerd[1880]: time="2025-01-30T13:52:02.640465984Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:52:02.641889 containerd[1880]: time="2025-01-30T13:52:02.640510567Z" level=info msg="Start streaming server" Jan 30 13:52:02.643800 amazon-ssm-agent[1931]: 2025-01-30 13:52:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 13:52:02.658553 containerd[1880]: time="2025-01-30T13:52:02.655534370Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:52:02.658553 containerd[1880]: time="2025-01-30T13:52:02.655649309Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:52:02.658553 containerd[1880]: time="2025-01-30T13:52:02.655726796Z" level=info msg="containerd successfully booted in 0.176102s" Jan 30 13:52:02.655844 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:52:02.744532 amazon-ssm-agent[1931]: 2025-01-30 13:52:02 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 30 13:52:02.844307 amazon-ssm-agent[1931]: 2025-01-30 13:52:02 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 30 13:52:02.890510 systemd[2093]: Queued start job for default target default.target. Jan 30 13:52:02.896293 systemd[2093]: Created slice app.slice - User Application Slice. Jan 30 13:52:02.896337 systemd[2093]: Reached target paths.target - Paths. Jan 30 13:52:02.896359 systemd[2093]: Reached target timers.target - Timers. Jan 30 13:52:02.923710 systemd[2093]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:52:02.955574 amazon-ssm-agent[1931]: 2025-01-30 13:52:02 INFO [amazon-ssm-agent] Starting Core Agent Jan 30 13:52:02.980104 systemd[2093]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:52:02.980192 systemd[2093]: Reached target sockets.target - Sockets. Jan 30 13:52:02.980214 systemd[2093]: Reached target basic.target - Basic System. Jan 30 13:52:02.980274 systemd[2093]: Reached target default.target - Main User Target. Jan 30 13:52:02.980315 systemd[2093]: Startup finished in 487ms. Jan 30 13:52:02.980421 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:52:03.008568 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:52:03.055667 amazon-ssm-agent[1931]: 2025-01-30 13:52:02 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 30 13:52:03.158859 amazon-ssm-agent[1931]: 2025-01-30 13:52:02 INFO [Registrar] Starting registrar module Jan 30 13:52:03.205914 systemd[1]: Started sshd@1-172.31.18.59:22-139.178.68.195:33750.service - OpenSSH per-connection server daemon (139.178.68.195:33750). Jan 30 13:52:03.258317 amazon-ssm-agent[1931]: 2025-01-30 13:52:02 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 30 13:52:03.288230 amazon-ssm-agent[1931]: 2025-01-30 13:52:03 INFO [EC2Identity] EC2 registration was successful. 
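
The EC2 registration above can be checked from inside the instance; a sketch, assuming the ssm-cli diagnostic helper bundled with the agent (the get-instance-information subcommand is my recollection of that tool, so treat it as an assumption):

    ssm-cli get-instance-information    # assumed subcommand: instance id, region, agent version
    systemctl status amazon-ssm-agent   # unit-level view of the agent started above
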
Jan 30 13:52:03.288230 amazon-ssm-agent[1931]: 2025-01-30 13:52:03 INFO [CredentialRefresher] credentialRefresher has started Jan 30 13:52:03.288230 amazon-ssm-agent[1931]: 2025-01-30 13:52:03 INFO [CredentialRefresher] Starting credentials refresher loop Jan 30 13:52:03.288230 amazon-ssm-agent[1931]: 2025-01-30 13:52:03 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 30 13:52:03.358918 amazon-ssm-agent[1931]: 2025-01-30 13:52:03 INFO [CredentialRefresher] Next credential rotation will be in 31.441656391016668 minutes Jan 30 13:52:03.422544 sshd[2109]: Accepted publickey for core from 139.178.68.195 port 33750 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:52:03.424254 sshd[2109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:03.440330 systemd-logind[1873]: New session 2 of user core. Jan 30 13:52:03.448802 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:52:03.582245 tar[1877]: linux-amd64/LICENSE Jan 30 13:52:03.582245 tar[1877]: linux-amd64/README.md Jan 30 13:52:03.593842 sshd[2109]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:03.607198 systemd[1]: sshd@1-172.31.18.59:22-139.178.68.195:33750.service: Deactivated successfully. Jan 30 13:52:03.618672 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:52:03.624987 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:52:03.625352 systemd-logind[1873]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:52:03.669547 systemd[1]: Started sshd@2-172.31.18.59:22-139.178.68.195:33756.service - OpenSSH per-connection server daemon (139.178.68.195:33756). Jan 30 13:52:03.675034 systemd-logind[1873]: Removed session 2. Jan 30 13:52:03.849116 sshd[2119]: Accepted publickey for core from 139.178.68.195 port 33756 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:52:03.851629 sshd[2119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:03.865607 systemd-logind[1873]: New session 3 of user core. Jan 30 13:52:03.868771 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:52:04.007613 sshd[2119]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:04.016931 systemd[1]: sshd@2-172.31.18.59:22-139.178.68.195:33756.service: Deactivated successfully. Jan 30 13:52:04.024612 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:52:04.031502 systemd-logind[1873]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:52:04.033748 systemd-logind[1873]: Removed session 3. Jan 30 13:52:04.152304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:04.154806 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:52:04.158089 systemd[1]: Startup finished in 877ms (kernel) + 9.851s (initrd) + 9.613s (userspace) = 20.342s. 
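
The "Startup finished" summary above (0.877s + 9.851s + 9.613s ≈ 20.342s) is the same split systemd-analyze reads back; a minimal sketch:

    systemd-analyze                                   # kernel + initrd + userspace totals
    systemd-analyze blame | head                      # costliest units first
    systemd-analyze critical-chain multi-user.target  # the path to the target reached above
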
Jan 30 13:52:04.325003 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:52:04.353114 amazon-ssm-agent[1931]: 2025-01-30 13:52:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 30 13:52:04.454097 amazon-ssm-agent[1931]: 2025-01-30 13:52:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2133) started Jan 30 13:52:04.556645 amazon-ssm-agent[1931]: 2025-01-30 13:52:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 30 13:52:04.936076 kubelet[2130]: E0130 13:52:04.936022 2130 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:52:04.938575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:52:04.938776 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:52:04.939123 systemd[1]: kubelet.service: Consumed 1.026s CPU time. Jan 30 13:52:08.282012 systemd-resolved[1676]: Clock change detected. Flushing caches. Jan 30 13:52:14.750476 systemd[1]: Started sshd@3-172.31.18.59:22-139.178.68.195:44574.service - OpenSSH per-connection server daemon (139.178.68.195:44574). Jan 30 13:52:14.931690 sshd[2154]: Accepted publickey for core from 139.178.68.195 port 44574 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:52:14.933387 sshd[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:14.938819 systemd-logind[1873]: New session 4 of user core. Jan 30 13:52:14.948656 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:52:15.072849 sshd[2154]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:15.081407 systemd[1]: sshd@3-172.31.18.59:22-139.178.68.195:44574.service: Deactivated successfully. Jan 30 13:52:15.085805 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:52:15.092718 systemd-logind[1873]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:52:15.111217 systemd-logind[1873]: Removed session 4. Jan 30 13:52:15.118529 systemd[1]: Started sshd@4-172.31.18.59:22-139.178.68.195:44580.service - OpenSSH per-connection server daemon (139.178.68.195:44580). Jan 30 13:52:15.277688 sshd[2161]: Accepted publickey for core from 139.178.68.195 port 44580 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:52:15.279339 sshd[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:15.295393 systemd-logind[1873]: New session 5 of user core. Jan 30 13:52:15.306328 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:52:15.422278 sshd[2161]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:15.426266 systemd[1]: sshd@4-172.31.18.59:22-139.178.68.195:44580.service: Deactivated successfully. Jan 30 13:52:15.429681 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:52:15.431277 systemd-logind[1873]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:52:15.432705 systemd-logind[1873]: Removed session 5. 
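
The kubelet exit above (missing /var/lib/kubelet/config.yaml) is the expected state of a node that has not yet joined a cluster: that file is written during cluster bootstrap, so the unit keeps crash-looping until then. A sketch of the usual join, assuming kubeadm is the bootstrapper (the address, token, and hash are placeholders, not values from this log):

    # on an existing control plane, print a ready-made join command:
    kubeadm token create --print-join-command
    # on this node (placeholders, not from this log):
    kubeadm join <apiserver>:6443 --token <token> \
          --discovery-token-ca-cert-hash sha256:<hash>
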
Jan 30 13:52:15.473102 systemd[1]: Started sshd@5-172.31.18.59:22-139.178.68.195:44588.service - OpenSSH per-connection server daemon (139.178.68.195:44588). Jan 30 13:52:15.640670 sshd[2168]: Accepted publickey for core from 139.178.68.195 port 44588 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:52:15.643564 sshd[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:15.649742 systemd-logind[1873]: New session 6 of user core. Jan 30 13:52:15.650245 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:52:15.660521 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:52:15.663351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:15.785758 sshd[2168]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:15.791038 systemd-logind[1873]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:52:15.791821 systemd[1]: sshd@5-172.31.18.59:22-139.178.68.195:44588.service: Deactivated successfully. Jan 30 13:52:15.795314 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:52:15.796927 systemd-logind[1873]: Removed session 6. Jan 30 13:52:15.831025 systemd[1]: Started sshd@6-172.31.18.59:22-139.178.68.195:44590.service - OpenSSH per-connection server daemon (139.178.68.195:44590). Jan 30 13:52:15.998567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:16.007888 sshd[2178]: Accepted publickey for core from 139.178.68.195 port 44590 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:52:16.010972 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:16.014819 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:52:16.024760 systemd-logind[1873]: New session 7 of user core. Jan 30 13:52:16.029935 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:52:16.078681 kubelet[2185]: E0130 13:52:16.078627 2185 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:52:16.082689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:52:16.082884 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:52:16.145985 sudo[2192]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:52:16.146404 sudo[2192]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:52:16.163182 sudo[2192]: pam_unix(sudo:session): session closed for user root Jan 30 13:52:16.186487 sshd[2178]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:16.191064 systemd[1]: sshd@6-172.31.18.59:22-139.178.68.195:44590.service: Deactivated successfully. Jan 30 13:52:16.193163 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:52:16.194698 systemd-logind[1873]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:52:16.197063 systemd-logind[1873]: Removed session 7. Jan 30 13:52:16.228472 systemd[1]: Started sshd@7-172.31.18.59:22-139.178.68.195:44596.service - OpenSSH per-connection server daemon (139.178.68.195:44596). 
Jan 30 13:52:16.410919 sshd[2197]: Accepted publickey for core from 139.178.68.195 port 44596 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:52:16.413575 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:16.424837 systemd-logind[1873]: New session 8 of user core. Jan 30 13:52:16.431329 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:52:16.534228 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:52:16.534632 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:52:16.542427 sudo[2201]: pam_unix(sudo:session): session closed for user root Jan 30 13:52:16.548423 sudo[2200]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:52:16.548899 sudo[2200]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:52:16.570400 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:52:16.572557 auditctl[2204]: No rules Jan 30 13:52:16.573415 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:52:16.573698 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:52:16.580677 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:52:16.647793 augenrules[2222]: No rules Jan 30 13:52:16.649526 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:52:16.653163 sudo[2200]: pam_unix(sudo:session): session closed for user root Jan 30 13:52:16.678532 sshd[2197]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:16.685062 systemd[1]: sshd@7-172.31.18.59:22-139.178.68.195:44596.service: Deactivated successfully. Jan 30 13:52:16.687634 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:52:16.689797 systemd-logind[1873]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:52:16.693694 systemd-logind[1873]: Removed session 8. Jan 30 13:52:16.719653 systemd[1]: Started sshd@8-172.31.18.59:22-139.178.68.195:44606.service - OpenSSH per-connection server daemon (139.178.68.195:44606). Jan 30 13:52:16.898647 sshd[2230]: Accepted publickey for core from 139.178.68.195 port 44606 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:52:16.900292 sshd[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:16.906296 systemd-logind[1873]: New session 9 of user core. Jan 30 13:52:16.913412 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:52:17.012256 sudo[2233]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:52:17.012658 sudo[2233]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:52:17.787476 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:52:17.787662 (dockerd)[2248]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:52:18.487000 dockerd[2248]: time="2025-01-30T13:52:18.486926096Z" level=info msg="Starting up" Jan 30 13:52:18.714462 dockerd[2248]: time="2025-01-30T13:52:18.714337543Z" level=info msg="Loading containers: start." 
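
The audit-rules restart above (auditctl reporting "No rules", then augenrules agreeing) is reproducible by hand; a sketch:

    auditctl -l                      # list loaded kernel audit rules
    augenrules --load                # rebuild from /etc/audit/rules.d/*.rules and load
    systemctl restart audit-rules    # the exact command run via sudo above
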
Jan 30 13:52:18.915140 kernel: Initializing XFRM netlink socket Jan 30 13:52:18.945729 (udev-worker)[2270]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:52:19.028304 systemd-networkd[1724]: docker0: Link UP Jan 30 13:52:19.062015 dockerd[2248]: time="2025-01-30T13:52:19.061960174Z" level=info msg="Loading containers: done." Jan 30 13:52:19.119100 dockerd[2248]: time="2025-01-30T13:52:19.119024273Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:52:19.119305 dockerd[2248]: time="2025-01-30T13:52:19.119171062Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:52:19.119359 dockerd[2248]: time="2025-01-30T13:52:19.119310004Z" level=info msg="Daemon has completed initialization" Jan 30 13:52:19.181959 dockerd[2248]: time="2025-01-30T13:52:19.181709527Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:52:19.182237 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:52:20.358905 containerd[1880]: time="2025-01-30T13:52:20.358861137Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 13:52:21.082484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1393066350.mount: Deactivated successfully. Jan 30 13:52:23.501978 containerd[1880]: time="2025-01-30T13:52:23.501913579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:23.503480 containerd[1880]: time="2025-01-30T13:52:23.503425144Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 30 13:52:23.505351 containerd[1880]: time="2025-01-30T13:52:23.504797114Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:23.510275 containerd[1880]: time="2025-01-30T13:52:23.510228314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:23.511876 containerd[1880]: time="2025-01-30T13:52:23.511783233Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 3.152875979s" Jan 30 13:52:23.512252 containerd[1880]: time="2025-01-30T13:52:23.512223282Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 30 13:52:23.518333 containerd[1880]: time="2025-01-30T13:52:23.518294742Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 13:52:25.851163 containerd[1880]: time="2025-01-30T13:52:25.851107958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:25.852913 containerd[1880]: 
time="2025-01-30T13:52:25.852866912Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 30 13:52:25.854514 containerd[1880]: time="2025-01-30T13:52:25.854476552Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:25.866576 containerd[1880]: time="2025-01-30T13:52:25.866520713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:25.869094 containerd[1880]: time="2025-01-30T13:52:25.868959410Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 2.350381953s" Jan 30 13:52:25.869094 containerd[1880]: time="2025-01-30T13:52:25.869064561Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 30 13:52:25.870489 containerd[1880]: time="2025-01-30T13:52:25.870445220Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 13:52:26.297406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:52:26.303807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:26.887480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:26.901632 (kubelet)[2453]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:52:26.981972 kubelet[2453]: E0130 13:52:26.981870 2453 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:52:26.985990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:52:26.986310 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:52:27.859806 containerd[1880]: time="2025-01-30T13:52:27.859696252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:27.861659 containerd[1880]: time="2025-01-30T13:52:27.861600799Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 30 13:52:27.863993 containerd[1880]: time="2025-01-30T13:52:27.863912228Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:27.880360 containerd[1880]: time="2025-01-30T13:52:27.879658955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:27.881383 containerd[1880]: time="2025-01-30T13:52:27.881334512Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 2.010844248s" Jan 30 13:52:27.881639 containerd[1880]: time="2025-01-30T13:52:27.881388252Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 30 13:52:27.882374 containerd[1880]: time="2025-01-30T13:52:27.882331904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 13:52:29.221057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291806595.mount: Deactivated successfully. 
Jan 30 13:52:29.901466 containerd[1880]: time="2025-01-30T13:52:29.901401503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:29.903173 containerd[1880]: time="2025-01-30T13:52:29.903100761Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 30 13:52:29.905313 containerd[1880]: time="2025-01-30T13:52:29.905244092Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:29.912189 containerd[1880]: time="2025-01-30T13:52:29.912129179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:29.913737 containerd[1880]: time="2025-01-30T13:52:29.913560946Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.031185722s" Jan 30 13:52:29.913737 containerd[1880]: time="2025-01-30T13:52:29.913613729Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 30 13:52:29.914580 containerd[1880]: time="2025-01-30T13:52:29.914417709Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:52:30.504447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415696469.mount: Deactivated successfully. 
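
The PullImage/ImageCreate entries above come from containerd's CRI plugin; equivalent pulls can be driven manually with ctr in the k8s.io namespace (image refs exactly as logged):

    ctr --namespace k8s.io images pull registry.k8s.io/pause:3.10
    ctr --namespace k8s.io images ls -q | grep registry.k8s.io
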
Jan 30 13:52:31.824207 containerd[1880]: time="2025-01-30T13:52:31.824152822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:31.826207 containerd[1880]: time="2025-01-30T13:52:31.826150391Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:52:31.828273 containerd[1880]: time="2025-01-30T13:52:31.828161741Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:31.832853 containerd[1880]: time="2025-01-30T13:52:31.832496127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:31.833670 containerd[1880]: time="2025-01-30T13:52:31.833630289Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.919170009s" Jan 30 13:52:31.833759 containerd[1880]: time="2025-01-30T13:52:31.833676479Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:52:31.835013 containerd[1880]: time="2025-01-30T13:52:31.834989813Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:52:32.369708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854878439.mount: Deactivated successfully. 
Jan 30 13:52:32.380615 containerd[1880]: time="2025-01-30T13:52:32.380561381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:32.382107 containerd[1880]: time="2025-01-30T13:52:32.381975058Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 13:52:32.383778 containerd[1880]: time="2025-01-30T13:52:32.383718041Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:32.390344 containerd[1880]: time="2025-01-30T13:52:32.390291456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:32.392823 containerd[1880]: time="2025-01-30T13:52:32.392642791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 557.490472ms" Jan 30 13:52:32.392823 containerd[1880]: time="2025-01-30T13:52:32.392695389Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:52:32.393588 containerd[1880]: time="2025-01-30T13:52:32.393487898Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 13:52:32.762911 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 30 13:52:33.002971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1645862974.mount: Deactivated successfully. 
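The var-lib-containerd-tmpmounts-containerd\x2dmount*.mount units being cleaned up after each pull are systemd-escaped mount paths: "/" becomes "-" in a unit name, and a literal "-" inside a path component is encoded as \x2d. A short decoder covering only the escape rules visible in this log (systemd has further rules, e.g. for leading dots, that this helper ignores):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapeUnit reverses systemd's path escaping for mount unit names:
    // "-" separates path components and "\xNN" encodes a literal byte.
    func unescapeUnit(name string) string {
        name = strings.TrimSuffix(name, ".mount")
        var b strings.Builder
        for i := 0; i < len(name); i++ {
            switch {
            case strings.HasPrefix(name[i:], `\x`) && i+4 <= len(name):
                v, err := strconv.ParseUint(name[i+2:i+4], 16, 8)
                if err == nil {
                    b.WriteByte(byte(v))
                    i += 3 // skip the rest of the \xNN sequence
                    continue
                }
                b.WriteByte(name[i])
            case name[i] == '-':
                b.WriteByte('/')
            default:
                b.WriteByte(name[i])
            }
        }
        return "/" + b.String()
    }

    func main() {
        fmt.Println(unescapeUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount1645862974.mount`))
        // prints: /var/lib/containerd/tmpmounts/containerd-mount1645862974
    }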
Jan 30 13:52:35.960351 containerd[1880]: time="2025-01-30T13:52:35.960293276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:35.962101 containerd[1880]: time="2025-01-30T13:52:35.961908034Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 30 13:52:35.964172 containerd[1880]: time="2025-01-30T13:52:35.964130107Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:35.968958 containerd[1880]: time="2025-01-30T13:52:35.968887266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:35.970510 containerd[1880]: time="2025-01-30T13:52:35.970462165Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.576934524s" Jan 30 13:52:35.970732 containerd[1880]: time="2025-01-30T13:52:35.970515342Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 30 13:52:37.045974 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 13:52:37.054211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:37.639292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:37.650670 (kubelet)[2604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:52:37.739124 kubelet[2604]: E0130 13:52:37.738634 2604 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:52:37.745646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:52:37.746025 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:52:39.277911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:39.285493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:39.364388 systemd[1]: Reloading requested from client PID 2618 ('systemctl') (unit session-9.scope)... Jan 30 13:52:39.364406 systemd[1]: Reloading... Jan 30 13:52:39.511604 zram_generator::config[2654]: No configuration found. Jan 30 13:52:39.700286 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:52:39.811272 systemd[1]: Reloading finished in 446 ms. Jan 30 13:52:39.869484 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:52:39.869624 systemd[1]: kubelet.service: Failed with result 'signal'. 
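The kubelet failure and restart bookkeeping above are the expected state on a node where kubeadm has not yet written /var/lib/kubelet/config.yaml: the kubelet exits as soon as the file passed via --config cannot be read, systemd records the non-zero exit, and the restart counter climbs until the file appears. A sketch of the same fail-fast check, illustrative only rather than the kubelet's actual loader:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func loadKubeletConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            // Wrap the I/O error the way the run.go:72 entry above surfaces it.
            return nil, fmt.Errorf("failed to load Kubelet config file %s, error: %w", path, err)
        }
        return data, nil
    }

    func main() {
        _, err := loadKubeletConfig("/var/lib/kubelet/config.yaml")
        if errors.Is(err, fs.ErrNotExist) {
            // Before `kubeadm init`/`kubeadm join` runs on the node this is the
            // normal state; exiting non-zero here is what drives the systemd
            // "Scheduled restart job" counter.
            fmt.Println(err)
            os.Exit(1)
        }
    }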
Jan 30 13:52:39.869967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:39.875553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:40.437367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:40.457644 (kubelet)[2715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:52:40.542980 kubelet[2715]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:52:40.542980 kubelet[2715]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:52:40.542980 kubelet[2715]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:52:40.544829 kubelet[2715]: I0130 13:52:40.544767 2715 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:52:41.352684 kubelet[2715]: I0130 13:52:41.352057 2715 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:52:41.352684 kubelet[2715]: I0130 13:52:41.352679 2715 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:52:41.353429 kubelet[2715]: I0130 13:52:41.353240 2715 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:52:41.418996 kubelet[2715]: I0130 13:52:41.418753 2715 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:52:41.425025 kubelet[2715]: E0130 13:52:41.424969 2715 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.59:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:52:41.440190 kubelet[2715]: E0130 13:52:41.440142 2715 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:52:41.440190 kubelet[2715]: I0130 13:52:41.440184 2715 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:52:41.445917 kubelet[2715]: I0130 13:52:41.445871 2715 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:52:41.448297 kubelet[2715]: I0130 13:52:41.448264 2715 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:52:41.448548 kubelet[2715]: I0130 13:52:41.448512 2715 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:52:41.448758 kubelet[2715]: I0130 13:52:41.448547 2715 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-59","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:52:41.451345 kubelet[2715]: I0130 13:52:41.448766 2715 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:52:41.451345 kubelet[2715]: I0130 13:52:41.448780 2715 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:52:41.451345 kubelet[2715]: I0130 13:52:41.448929 2715 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:52:41.454056 kubelet[2715]: I0130 13:52:41.454021 2715 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:52:41.460509 kubelet[2715]: I0130 13:52:41.454067 2715 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:52:41.460509 kubelet[2715]: I0130 13:52:41.454289 2715 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:52:41.460509 kubelet[2715]: I0130 13:52:41.454314 2715 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:52:41.465157 kubelet[2715]: W0130 13:52:41.464746 2715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-59&limit=500&resourceVersion=0": dial tcp 172.31.18.59:6443: connect: connection refused Jan 30 13:52:41.465157 kubelet[2715]: E0130 13:52:41.464830 2715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.18.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-59&limit=500&resourceVersion=0\": dial tcp 172.31.18.59:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:52:41.467322 kubelet[2715]: W0130 13:52:41.467047 2715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.59:6443: connect: connection refused Jan 30 13:52:41.467322 kubelet[2715]: E0130 13:52:41.467131 2715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.59:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:52:41.467322 kubelet[2715]: I0130 13:52:41.467228 2715 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:52:41.469762 kubelet[2715]: I0130 13:52:41.469710 2715 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:52:41.471801 kubelet[2715]: W0130 13:52:41.471769 2715 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:52:41.472767 kubelet[2715]: I0130 13:52:41.472741 2715 server.go:1269] "Started kubelet" Jan 30 13:52:41.476637 kubelet[2715]: I0130 13:52:41.476227 2715 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:52:41.477592 kubelet[2715]: I0130 13:52:41.477559 2715 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:52:41.480863 kubelet[2715]: I0130 13:52:41.480832 2715 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:52:41.482801 kubelet[2715]: I0130 13:52:41.482499 2715 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:52:41.482801 kubelet[2715]: I0130 13:52:41.482683 2715 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:52:41.489480 kubelet[2715]: E0130 13:52:41.485708 2715 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.59:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.59:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-59.181f7cc3c61f792d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-59,UID:ip-172-31-18-59,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-59,},FirstTimestamp:2025-01-30 13:52:41.472719149 +0000 UTC m=+1.009945749,LastTimestamp:2025-01-30 13:52:41.472719149 +0000 UTC m=+1.009945749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-59,}" Jan 30 13:52:41.490545 kubelet[2715]: I0130 13:52:41.490512 2715 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:52:41.493974 kubelet[2715]: I0130 13:52:41.493549 2715 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 
30 13:52:41.493974 kubelet[2715]: E0130 13:52:41.493873 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:41.501478 kubelet[2715]: E0130 13:52:41.495820 2715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-59?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused" interval="200ms" Jan 30 13:52:41.501478 kubelet[2715]: I0130 13:52:41.498279 2715 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:52:41.501478 kubelet[2715]: I0130 13:52:41.498879 2715 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:52:41.501478 kubelet[2715]: I0130 13:52:41.499826 2715 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:52:41.501478 kubelet[2715]: I0130 13:52:41.499904 2715 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:52:41.501478 kubelet[2715]: W0130 13:52:41.499505 2715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.59:6443: connect: connection refused Jan 30 13:52:41.501478 kubelet[2715]: E0130 13:52:41.501145 2715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.59:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:52:41.511278 kubelet[2715]: E0130 13:52:41.503999 2715 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:52:41.511278 kubelet[2715]: I0130 13:52:41.504973 2715 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:52:41.533660 kubelet[2715]: I0130 13:52:41.533483 2715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:52:41.536211 kubelet[2715]: I0130 13:52:41.535836 2715 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:52:41.536211 kubelet[2715]: I0130 13:52:41.535870 2715 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:52:41.536211 kubelet[2715]: I0130 13:52:41.535905 2715 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:52:41.536211 kubelet[2715]: E0130 13:52:41.535961 2715 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:52:41.544712 kubelet[2715]: W0130 13:52:41.544622 2715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.59:6443: connect: connection refused Jan 30 13:52:41.544712 kubelet[2715]: E0130 13:52:41.544712 2715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.59:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:52:41.548142 kubelet[2715]: I0130 13:52:41.547980 2715 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:52:41.548142 kubelet[2715]: I0130 13:52:41.548001 2715 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:52:41.548142 kubelet[2715]: I0130 13:52:41.548021 2715 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:52:41.571325 kubelet[2715]: I0130 13:52:41.571195 2715 policy_none.go:49] "None policy: Start" Jan 30 13:52:41.572536 kubelet[2715]: I0130 13:52:41.572337 2715 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:52:41.572536 kubelet[2715]: I0130 13:52:41.572363 2715 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:52:41.591350 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:52:41.598898 kubelet[2715]: E0130 13:52:41.594528 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:41.603670 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:52:41.615233 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:52:41.630309 kubelet[2715]: I0130 13:52:41.630279 2715 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:52:41.631348 kubelet[2715]: I0130 13:52:41.631241 2715 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:52:41.631348 kubelet[2715]: I0130 13:52:41.631260 2715 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:52:41.632869 kubelet[2715]: I0130 13:52:41.631618 2715 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:52:41.635294 kubelet[2715]: E0130 13:52:41.635219 2715 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-59\" not found" Jan 30 13:52:41.649944 systemd[1]: Created slice kubepods-burstable-pod6ea1fcfca273a25fa4d9d5fd8417c5e7.slice - libcontainer container kubepods-burstable-pod6ea1fcfca273a25fa4d9d5fd8417c5e7.slice. 
Jan 30 13:52:41.665018 systemd[1]: Created slice kubepods-burstable-podcbf15f46b986ab2f131132e42305ce18.slice - libcontainer container kubepods-burstable-podcbf15f46b986ab2f131132e42305ce18.slice. Jan 30 13:52:41.671591 systemd[1]: Created slice kubepods-burstable-podd189b8b508def84bc53f76c8318a5928.slice - libcontainer container kubepods-burstable-podd189b8b508def84bc53f76c8318a5928.slice. Jan 30 13:52:41.696462 kubelet[2715]: E0130 13:52:41.696410 2715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-59?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused" interval="400ms" Jan 30 13:52:41.733247 kubelet[2715]: I0130 13:52:41.733213 2715 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-59" Jan 30 13:52:41.733948 kubelet[2715]: E0130 13:52:41.733914 2715 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.59:6443/api/v1/nodes\": dial tcp 172.31.18.59:6443: connect: connection refused" node="ip-172-31-18-59" Jan 30 13:52:41.800713 kubelet[2715]: I0130 13:52:41.800656 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ea1fcfca273a25fa4d9d5fd8417c5e7-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-59\" (UID: \"6ea1fcfca273a25fa4d9d5fd8417c5e7\") " pod="kube-system/kube-controller-manager-ip-172-31-18-59" Jan 30 13:52:41.800713 kubelet[2715]: I0130 13:52:41.800714 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ea1fcfca273a25fa4d9d5fd8417c5e7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-59\" (UID: \"6ea1fcfca273a25fa4d9d5fd8417c5e7\") " pod="kube-system/kube-controller-manager-ip-172-31-18-59" Jan 30 13:52:41.800913 kubelet[2715]: I0130 13:52:41.800739 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d189b8b508def84bc53f76c8318a5928-ca-certs\") pod \"kube-apiserver-ip-172-31-18-59\" (UID: \"d189b8b508def84bc53f76c8318a5928\") " pod="kube-system/kube-apiserver-ip-172-31-18-59" Jan 30 13:52:41.800913 kubelet[2715]: I0130 13:52:41.800760 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d189b8b508def84bc53f76c8318a5928-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-59\" (UID: \"d189b8b508def84bc53f76c8318a5928\") " pod="kube-system/kube-apiserver-ip-172-31-18-59" Jan 30 13:52:41.800913 kubelet[2715]: I0130 13:52:41.800782 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ea1fcfca273a25fa4d9d5fd8417c5e7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-59\" (UID: \"6ea1fcfca273a25fa4d9d5fd8417c5e7\") " pod="kube-system/kube-controller-manager-ip-172-31-18-59" Jan 30 13:52:41.800913 kubelet[2715]: I0130 13:52:41.800806 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ea1fcfca273a25fa4d9d5fd8417c5e7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-59\" (UID: \"6ea1fcfca273a25fa4d9d5fd8417c5e7\") " 
pod="kube-system/kube-controller-manager-ip-172-31-18-59" Jan 30 13:52:41.800913 kubelet[2715]: I0130 13:52:41.800832 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ea1fcfca273a25fa4d9d5fd8417c5e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-59\" (UID: \"6ea1fcfca273a25fa4d9d5fd8417c5e7\") " pod="kube-system/kube-controller-manager-ip-172-31-18-59" Jan 30 13:52:41.801106 kubelet[2715]: I0130 13:52:41.800855 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbf15f46b986ab2f131132e42305ce18-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-59\" (UID: \"cbf15f46b986ab2f131132e42305ce18\") " pod="kube-system/kube-scheduler-ip-172-31-18-59" Jan 30 13:52:41.801106 kubelet[2715]: I0130 13:52:41.800880 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d189b8b508def84bc53f76c8318a5928-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-59\" (UID: \"d189b8b508def84bc53f76c8318a5928\") " pod="kube-system/kube-apiserver-ip-172-31-18-59" Jan 30 13:52:41.937476 kubelet[2715]: I0130 13:52:41.937329 2715 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-59" Jan 30 13:52:41.937803 kubelet[2715]: E0130 13:52:41.937772 2715 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.59:6443/api/v1/nodes\": dial tcp 172.31.18.59:6443: connect: connection refused" node="ip-172-31-18-59" Jan 30 13:52:41.966571 containerd[1880]: time="2025-01-30T13:52:41.966516075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-59,Uid:6ea1fcfca273a25fa4d9d5fd8417c5e7,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:41.973093 containerd[1880]: time="2025-01-30T13:52:41.971869403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-59,Uid:cbf15f46b986ab2f131132e42305ce18,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:41.977395 containerd[1880]: time="2025-01-30T13:52:41.977357962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-59,Uid:d189b8b508def84bc53f76c8318a5928,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:42.097585 kubelet[2715]: E0130 13:52:42.097533 2715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-59?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused" interval="800ms" Jan 30 13:52:42.325775 kubelet[2715]: W0130 13:52:42.325712 2715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-59&limit=500&resourceVersion=0": dial tcp 172.31.18.59:6443: connect: connection refused Jan 30 13:52:42.325923 kubelet[2715]: E0130 13:52:42.325787 2715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-59&limit=500&resourceVersion=0\": dial tcp 172.31.18.59:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:52:42.339999 
kubelet[2715]: I0130 13:52:42.339968 2715 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-59" Jan 30 13:52:42.340354 kubelet[2715]: E0130 13:52:42.340323 2715 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.59:6443/api/v1/nodes\": dial tcp 172.31.18.59:6443: connect: connection refused" node="ip-172-31-18-59" Jan 30 13:52:42.458465 kubelet[2715]: W0130 13:52:42.458393 2715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.59:6443: connect: connection refused Jan 30 13:52:42.458611 kubelet[2715]: E0130 13:52:42.458477 2715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.59:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:52:42.550532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746038331.mount: Deactivated successfully. Jan 30 13:52:42.561854 kubelet[2715]: W0130 13:52:42.561748 2715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.59:6443: connect: connection refused Jan 30 13:52:42.561854 kubelet[2715]: E0130 13:52:42.561801 2715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.59:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:52:42.563483 containerd[1880]: time="2025-01-30T13:52:42.563437667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:52:42.568320 containerd[1880]: time="2025-01-30T13:52:42.568258719Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:52:42.569925 containerd[1880]: time="2025-01-30T13:52:42.569885350Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:52:42.571622 containerd[1880]: time="2025-01-30T13:52:42.571580617Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:52:42.572840 containerd[1880]: time="2025-01-30T13:52:42.572803444Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:52:42.574657 containerd[1880]: time="2025-01-30T13:52:42.574608817Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:52:42.577550 containerd[1880]: time="2025-01-30T13:52:42.575774852Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:52:42.579036 containerd[1880]: time="2025-01-30T13:52:42.578997759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:52:42.581715 containerd[1880]: time="2025-01-30T13:52:42.581674869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.64404ms" Jan 30 13:52:42.583044 containerd[1880]: time="2025-01-30T13:52:42.583001256Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 605.405037ms" Jan 30 13:52:42.583157 containerd[1880]: time="2025-01-30T13:52:42.583096698Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.481273ms" Jan 30 13:52:42.739513 kubelet[2715]: W0130 13:52:42.739442 2715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.59:6443: connect: connection refused Jan 30 13:52:42.739513 kubelet[2715]: E0130 13:52:42.739516 2715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.59:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:52:42.898222 kubelet[2715]: E0130 13:52:42.897953 2715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-59?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused" interval="1.6s" Jan 30 13:52:42.946520 containerd[1880]: time="2025-01-30T13:52:42.946396777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:42.947201 containerd[1880]: time="2025-01-30T13:52:42.947103319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:42.947399 containerd[1880]: time="2025-01-30T13:52:42.947356907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:42.947676 containerd[1880]: time="2025-01-30T13:52:42.947347009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:42.947858 containerd[1880]: time="2025-01-30T13:52:42.947813900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:42.947858 containerd[1880]: time="2025-01-30T13:52:42.947798261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:42.948146 containerd[1880]: time="2025-01-30T13:52:42.948091110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:42.948407 containerd[1880]: time="2025-01-30T13:52:42.948357982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:42.956872 containerd[1880]: time="2025-01-30T13:52:42.956701747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:42.957140 containerd[1880]: time="2025-01-30T13:52:42.956939767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:42.957140 containerd[1880]: time="2025-01-30T13:52:42.957007367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:42.958175 containerd[1880]: time="2025-01-30T13:52:42.957231547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:43.014762 systemd[1]: Started cri-containerd-90ae189f2dfe9bbdcacac10613e3b00c9ad0a6dadf2341bcd0fd6c1ee7171da3.scope - libcontainer container 90ae189f2dfe9bbdcacac10613e3b00c9ad0a6dadf2341bcd0fd6c1ee7171da3. Jan 30 13:52:43.026232 systemd[1]: Started cri-containerd-06bbff7cedd2a0c04553124f40bfbce71117ea7dba3e58cf973a998627bf9e49.scope - libcontainer container 06bbff7cedd2a0c04553124f40bfbce71117ea7dba3e58cf973a998627bf9e49. Jan 30 13:52:43.029609 systemd[1]: Started cri-containerd-faf8840955d044d4b8e8d34ab2ce785bf13f92909c0aba4ff42e74826de470b2.scope - libcontainer container faf8840955d044d4b8e8d34ab2ce785bf13f92909c0aba4ff42e74826de470b2. 
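Every reflector, lease, certificate, and event failure in this stretch reduces to the same condition: nothing is listening on 172.31.18.59:6443 yet, because the kube-apiserver whose sandbox is being started in these cri-containerd scopes is not up. The address comes from the log; the probe and its crude backoff below are a standalone sketch, not client-go's internal retry logic:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "172.31.18.59:6443" // API server endpoint from the entries above

        for attempt := 1; ; attempt++ {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Printf("attempt %d: %s is accepting connections\n", attempt, addr)
                return
            }
            // Until the kube-apiserver container comes up, this prints the same
            // "connect: connection refused" seen in the reflector entries.
            fmt.Printf("attempt %d: %v\n", attempt, err)
            time.Sleep(time.Duration(attempt) * time.Second) // linear backoff
        }
    }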
Jan 30 13:52:43.127469 containerd[1880]: time="2025-01-30T13:52:43.127209799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-59,Uid:d189b8b508def84bc53f76c8318a5928,Namespace:kube-system,Attempt:0,} returns sandbox id \"90ae189f2dfe9bbdcacac10613e3b00c9ad0a6dadf2341bcd0fd6c1ee7171da3\"" Jan 30 13:52:43.143523 containerd[1880]: time="2025-01-30T13:52:43.141649499Z" level=info msg="CreateContainer within sandbox \"90ae189f2dfe9bbdcacac10613e3b00c9ad0a6dadf2341bcd0fd6c1ee7171da3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:52:43.145429 kubelet[2715]: I0130 13:52:43.145340 2715 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-59" Jan 30 13:52:43.145732 kubelet[2715]: E0130 13:52:43.145683 2715 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.59:6443/api/v1/nodes\": dial tcp 172.31.18.59:6443: connect: connection refused" node="ip-172-31-18-59" Jan 30 13:52:43.181779 containerd[1880]: time="2025-01-30T13:52:43.181626396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-59,Uid:cbf15f46b986ab2f131132e42305ce18,Namespace:kube-system,Attempt:0,} returns sandbox id \"06bbff7cedd2a0c04553124f40bfbce71117ea7dba3e58cf973a998627bf9e49\"" Jan 30 13:52:43.191348 containerd[1880]: time="2025-01-30T13:52:43.191290776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-59,Uid:6ea1fcfca273a25fa4d9d5fd8417c5e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"faf8840955d044d4b8e8d34ab2ce785bf13f92909c0aba4ff42e74826de470b2\"" Jan 30 13:52:43.192208 containerd[1880]: time="2025-01-30T13:52:43.192174258Z" level=info msg="CreateContainer within sandbox \"06bbff7cedd2a0c04553124f40bfbce71117ea7dba3e58cf973a998627bf9e49\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:52:43.196022 containerd[1880]: time="2025-01-30T13:52:43.195894923Z" level=info msg="CreateContainer within sandbox \"faf8840955d044d4b8e8d34ab2ce785bf13f92909c0aba4ff42e74826de470b2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:52:43.215582 containerd[1880]: time="2025-01-30T13:52:43.215536720Z" level=info msg="CreateContainer within sandbox \"90ae189f2dfe9bbdcacac10613e3b00c9ad0a6dadf2341bcd0fd6c1ee7171da3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c47036f423742152dd506af221b7b1d270063216e4c738b5403dc3198fa79c30\"" Jan 30 13:52:43.217048 containerd[1880]: time="2025-01-30T13:52:43.216697355Z" level=info msg="StartContainer for \"c47036f423742152dd506af221b7b1d270063216e4c738b5403dc3198fa79c30\"" Jan 30 13:52:43.228822 containerd[1880]: time="2025-01-30T13:52:43.228758152Z" level=info msg="CreateContainer within sandbox \"06bbff7cedd2a0c04553124f40bfbce71117ea7dba3e58cf973a998627bf9e49\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f4ad814c9e70ade4e20169c90ab5ed1ec24232f02ae3db5318ecd23140c1428a\"" Jan 30 13:52:43.230578 containerd[1880]: time="2025-01-30T13:52:43.229344203Z" level=info msg="StartContainer for \"f4ad814c9e70ade4e20169c90ab5ed1ec24232f02ae3db5318ecd23140c1428a\"" Jan 30 13:52:43.235022 containerd[1880]: time="2025-01-30T13:52:43.234962422Z" level=info msg="CreateContainer within sandbox \"faf8840955d044d4b8e8d34ab2ce785bf13f92909c0aba4ff42e74826de470b2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"f16e61da3addfe127e63679a84ed7a35cde262e3443de7e2bb4473e04a0cf723\"" Jan 30 13:52:43.237561 containerd[1880]: time="2025-01-30T13:52:43.237338725Z" level=info msg="StartContainer for \"f16e61da3addfe127e63679a84ed7a35cde262e3443de7e2bb4473e04a0cf723\"" Jan 30 13:52:43.277746 systemd[1]: Started cri-containerd-f4ad814c9e70ade4e20169c90ab5ed1ec24232f02ae3db5318ecd23140c1428a.scope - libcontainer container f4ad814c9e70ade4e20169c90ab5ed1ec24232f02ae3db5318ecd23140c1428a. Jan 30 13:52:43.295563 systemd[1]: Started cri-containerd-c47036f423742152dd506af221b7b1d270063216e4c738b5403dc3198fa79c30.scope - libcontainer container c47036f423742152dd506af221b7b1d270063216e4c738b5403dc3198fa79c30. Jan 30 13:52:43.318401 systemd[1]: Started cri-containerd-f16e61da3addfe127e63679a84ed7a35cde262e3443de7e2bb4473e04a0cf723.scope - libcontainer container f16e61da3addfe127e63679a84ed7a35cde262e3443de7e2bb4473e04a0cf723. Jan 30 13:52:43.429865 containerd[1880]: time="2025-01-30T13:52:43.429810730Z" level=info msg="StartContainer for \"c47036f423742152dd506af221b7b1d270063216e4c738b5403dc3198fa79c30\" returns successfully" Jan 30 13:52:43.479471 containerd[1880]: time="2025-01-30T13:52:43.479132090Z" level=info msg="StartContainer for \"f4ad814c9e70ade4e20169c90ab5ed1ec24232f02ae3db5318ecd23140c1428a\" returns successfully" Jan 30 13:52:43.495984 containerd[1880]: time="2025-01-30T13:52:43.495823959Z" level=info msg="StartContainer for \"f16e61da3addfe127e63679a84ed7a35cde262e3443de7e2bb4473e04a0cf723\" returns successfully" Jan 30 13:52:43.575280 kubelet[2715]: E0130 13:52:43.574976 2715 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.59:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:52:43.656462 kubelet[2715]: E0130 13:52:43.656301 2715 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.59:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.59:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-59.181f7cc3c61f792d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-59,UID:ip-172-31-18-59,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-59,},FirstTimestamp:2025-01-30 13:52:41.472719149 +0000 UTC m=+1.009945749,LastTimestamp:2025-01-30 13:52:41.472719149 +0000 UTC m=+1.009945749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-59,}" Jan 30 13:52:44.099216 kubelet[2715]: W0130 13:52:44.099125 2715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-59&limit=500&resourceVersion=0": dial tcp 172.31.18.59:6443: connect: connection refused Jan 30 13:52:44.099216 kubelet[2715]: E0130 13:52:44.099184 2715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-59&limit=500&resourceVersion=0\": dial tcp 
172.31.18.59:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:52:44.747757 kubelet[2715]: I0130 13:52:44.747724 2715 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-59" Jan 30 13:52:46.527202 update_engine[1874]: I20250130 13:52:46.527122 1874 update_attempter.cc:509] Updating boot flags... Jan 30 13:52:46.634102 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3001) Jan 30 13:52:46.658875 kubelet[2715]: E0130 13:52:46.658834 2715 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-59\" not found" node="ip-172-31-18-59" Jan 30 13:52:46.706538 kubelet[2715]: I0130 13:52:46.706167 2715 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-59" Jan 30 13:52:46.706538 kubelet[2715]: E0130 13:52:46.706212 2715 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-59\": node \"ip-172-31-18-59\" not found" Jan 30 13:52:46.775499 kubelet[2715]: E0130 13:52:46.775457 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:46.879816 kubelet[2715]: E0130 13:52:46.876145 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:46.955566 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3002) Jan 30 13:52:46.980277 kubelet[2715]: E0130 13:52:46.980204 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:47.081039 kubelet[2715]: E0130 13:52:47.080994 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:47.181711 kubelet[2715]: E0130 13:52:47.181237 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:47.233463 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3002) Jan 30 13:52:47.282049 kubelet[2715]: E0130 13:52:47.281604 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:47.382191 kubelet[2715]: E0130 13:52:47.382160 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:47.482900 kubelet[2715]: E0130 13:52:47.482785 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:47.584035 kubelet[2715]: E0130 13:52:47.583975 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:47.685059 kubelet[2715]: E0130 13:52:47.685009 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:47.789000 kubelet[2715]: E0130 13:52:47.785158 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:47.888114 kubelet[2715]: E0130 13:52:47.885309 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:47.988386 kubelet[2715]: E0130 13:52:47.988346 2715 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:48.089069 kubelet[2715]: E0130 13:52:48.089030 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:48.190182 kubelet[2715]: E0130 13:52:48.190142 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:48.291227 kubelet[2715]: E0130 13:52:48.291048 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:48.392067 kubelet[2715]: E0130 13:52:48.391921 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:48.493051 kubelet[2715]: E0130 13:52:48.492980 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:48.593636 kubelet[2715]: E0130 13:52:48.593474 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:48.694483 kubelet[2715]: E0130 13:52:48.694366 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:48.795172 kubelet[2715]: E0130 13:52:48.795003 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:48.895954 kubelet[2715]: E0130 13:52:48.895903 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:48.993048 systemd[1]: Reloading requested from client PID 3255 ('systemctl') (unit session-9.scope)... Jan 30 13:52:48.993095 systemd[1]: Reloading... Jan 30 13:52:48.996523 kubelet[2715]: E0130 13:52:48.996473 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:49.096990 kubelet[2715]: E0130 13:52:49.096906 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:49.162456 zram_generator::config[3295]: No configuration found. Jan 30 13:52:49.198253 kubelet[2715]: E0130 13:52:49.198211 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:49.298899 kubelet[2715]: E0130 13:52:49.298850 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:49.383449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:52:49.400098 kubelet[2715]: E0130 13:52:49.399474 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:49.500630 kubelet[2715]: E0130 13:52:49.500585 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:49.510026 systemd[1]: Reloading finished in 516 ms. Jan 30 13:52:49.562139 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:49.573685 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 30 13:52:49.573933 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:49.573989 systemd[1]: kubelet.service: Consumed 1.085s CPU time, 113.1M memory peak, 0B memory swap peak. Jan 30 13:52:49.582853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:49.974428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:49.977432 (kubelet)[3351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:52:50.082357 kubelet[3351]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:52:50.082357 kubelet[3351]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:52:50.082357 kubelet[3351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:52:50.088030 kubelet[3351]: I0130 13:52:50.085214 3351 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:52:50.103494 kubelet[3351]: I0130 13:52:50.103138 3351 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:52:50.103770 kubelet[3351]: I0130 13:52:50.103753 3351 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:52:50.104432 kubelet[3351]: I0130 13:52:50.104411 3351 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:52:50.106803 kubelet[3351]: I0130 13:52:50.106783 3351 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:52:50.110402 kubelet[3351]: I0130 13:52:50.110207 3351 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:52:50.120231 kubelet[3351]: E0130 13:52:50.119331 3351 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:52:50.120231 kubelet[3351]: I0130 13:52:50.119372 3351 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:52:50.123117 kubelet[3351]: I0130 13:52:50.123061 3351 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:52:50.123264 kubelet[3351]: I0130 13:52:50.123214 3351 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:52:50.123416 kubelet[3351]: I0130 13:52:50.123377 3351 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:52:50.123609 kubelet[3351]: I0130 13:52:50.123411 3351 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-59","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:52:50.131656 kubelet[3351]: I0130 13:52:50.131604 3351 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:52:50.131656 kubelet[3351]: I0130 13:52:50.131655 3351 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:52:50.131880 kubelet[3351]: I0130 13:52:50.131712 3351 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:52:50.138396 kubelet[3351]: I0130 13:52:50.138359 3351 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:52:50.138396 kubelet[3351]: I0130 13:52:50.138402 3351 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:52:50.138597 kubelet[3351]: I0130 13:52:50.138439 3351 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:52:50.138597 kubelet[3351]: I0130 13:52:50.138457 3351 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:52:50.140452 kubelet[3351]: I0130 13:52:50.139755 3351 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:52:50.140452 kubelet[3351]: I0130 13:52:50.140352 3351 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:52:50.140867 kubelet[3351]: I0130 13:52:50.140845 3351 server.go:1269] "Started kubelet" Jan 30 13:52:50.181044 kubelet[3351]: I0130 13:52:50.180314 3351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:52:50.188885 kubelet[3351]: I0130 
13:52:50.188707 3351 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:52:50.191716 kubelet[3351]: I0130 13:52:50.191458 3351 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:52:50.194795 kubelet[3351]: I0130 13:52:50.193329 3351 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:52:50.194795 kubelet[3351]: I0130 13:52:50.193641 3351 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:52:50.202025 kubelet[3351]: I0130 13:52:50.199209 3351 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:52:50.203772 kubelet[3351]: I0130 13:52:50.203255 3351 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:52:50.203772 kubelet[3351]: E0130 13:52:50.203520 3351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-59\" not found" Jan 30 13:52:50.212951 kubelet[3351]: I0130 13:52:50.212058 3351 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:52:50.213665 kubelet[3351]: I0130 13:52:50.213631 3351 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:52:50.217547 sudo[3366]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 13:52:50.218536 sudo[3366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:52:50.220002 kubelet[3351]: I0130 13:52:50.218879 3351 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:52:50.220002 kubelet[3351]: I0130 13:52:50.219034 3351 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:52:50.227420 kubelet[3351]: I0130 13:52:50.226853 3351 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:52:50.236513 kubelet[3351]: I0130 13:52:50.236459 3351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:52:50.240819 kubelet[3351]: I0130 13:52:50.240382 3351 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:52:50.240819 kubelet[3351]: I0130 13:52:50.240420 3351 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:52:50.240819 kubelet[3351]: I0130 13:52:50.240442 3351 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:52:50.240819 kubelet[3351]: E0130 13:52:50.240545 3351 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:52:50.329182 kubelet[3351]: I0130 13:52:50.329153 3351 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:52:50.329762 kubelet[3351]: I0130 13:52:50.329347 3351 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:52:50.329762 kubelet[3351]: I0130 13:52:50.329378 3351 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:52:50.329762 kubelet[3351]: I0130 13:52:50.329679 3351 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:52:50.329762 kubelet[3351]: I0130 13:52:50.329695 3351 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:52:50.329762 kubelet[3351]: I0130 13:52:50.329719 3351 policy_none.go:49] "None policy: Start" Jan 30 13:52:50.333866 kubelet[3351]: I0130 13:52:50.330990 3351 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:52:50.333866 kubelet[3351]: I0130 13:52:50.331017 3351 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:52:50.333866 kubelet[3351]: I0130 13:52:50.331316 3351 state_mem.go:75] "Updated machine memory state" Jan 30 13:52:50.339652 kubelet[3351]: I0130 13:52:50.339624 3351 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:52:50.340451 kubelet[3351]: I0130 13:52:50.340432 3351 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:52:50.341212 kubelet[3351]: E0130 13:52:50.341190 3351 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:52:50.344499 kubelet[3351]: I0130 13:52:50.342427 3351 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:52:50.345122 kubelet[3351]: I0130 13:52:50.344990 3351 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:52:50.459895 kubelet[3351]: I0130 13:52:50.459746 3351 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-59" Jan 30 13:52:50.471796 kubelet[3351]: I0130 13:52:50.471720 3351 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-18-59" Jan 30 13:52:50.472059 kubelet[3351]: I0130 13:52:50.472041 3351 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-59" Jan 30 13:52:50.622149 kubelet[3351]: I0130 13:52:50.622109 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ea1fcfca273a25fa4d9d5fd8417c5e7-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-59\" (UID: \"6ea1fcfca273a25fa4d9d5fd8417c5e7\") " pod="kube-system/kube-controller-manager-ip-172-31-18-59" Jan 30 13:52:50.622309 kubelet[3351]: I0130 13:52:50.622155 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ea1fcfca273a25fa4d9d5fd8417c5e7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-59\" (UID: 
\"6ea1fcfca273a25fa4d9d5fd8417c5e7\") " pod="kube-system/kube-controller-manager-ip-172-31-18-59" Jan 30 13:52:50.622309 kubelet[3351]: I0130 13:52:50.622196 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ea1fcfca273a25fa4d9d5fd8417c5e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-59\" (UID: \"6ea1fcfca273a25fa4d9d5fd8417c5e7\") " pod="kube-system/kube-controller-manager-ip-172-31-18-59" Jan 30 13:52:50.622309 kubelet[3351]: I0130 13:52:50.622223 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d189b8b508def84bc53f76c8318a5928-ca-certs\") pod \"kube-apiserver-ip-172-31-18-59\" (UID: \"d189b8b508def84bc53f76c8318a5928\") " pod="kube-system/kube-apiserver-ip-172-31-18-59" Jan 30 13:52:50.622309 kubelet[3351]: I0130 13:52:50.622247 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ea1fcfca273a25fa4d9d5fd8417c5e7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-59\" (UID: \"6ea1fcfca273a25fa4d9d5fd8417c5e7\") " pod="kube-system/kube-controller-manager-ip-172-31-18-59" Jan 30 13:52:50.622309 kubelet[3351]: I0130 13:52:50.622269 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ea1fcfca273a25fa4d9d5fd8417c5e7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-59\" (UID: \"6ea1fcfca273a25fa4d9d5fd8417c5e7\") " pod="kube-system/kube-controller-manager-ip-172-31-18-59" Jan 30 13:52:50.622531 kubelet[3351]: I0130 13:52:50.622289 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbf15f46b986ab2f131132e42305ce18-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-59\" (UID: \"cbf15f46b986ab2f131132e42305ce18\") " pod="kube-system/kube-scheduler-ip-172-31-18-59" Jan 30 13:52:50.622531 kubelet[3351]: I0130 13:52:50.622314 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d189b8b508def84bc53f76c8318a5928-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-59\" (UID: \"d189b8b508def84bc53f76c8318a5928\") " pod="kube-system/kube-apiserver-ip-172-31-18-59" Jan 30 13:52:50.622531 kubelet[3351]: I0130 13:52:50.622338 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d189b8b508def84bc53f76c8318a5928-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-59\" (UID: \"d189b8b508def84bc53f76c8318a5928\") " pod="kube-system/kube-apiserver-ip-172-31-18-59" Jan 30 13:52:51.114975 sudo[3366]: pam_unix(sudo:session): session closed for user root Jan 30 13:52:51.155350 kubelet[3351]: I0130 13:52:51.155032 3351 apiserver.go:52] "Watching apiserver" Jan 30 13:52:51.220190 kubelet[3351]: I0130 13:52:51.219797 3351 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:52:51.338759 kubelet[3351]: I0130 13:52:51.338690 3351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-59" podStartSLOduration=1.338669294 
podStartE2EDuration="1.338669294s" podCreationTimestamp="2025-01-30 13:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:51.318705351 +0000 UTC m=+1.326404462" watchObservedRunningTime="2025-01-30 13:52:51.338669294 +0000 UTC m=+1.346368398" Jan 30 13:52:51.368166 kubelet[3351]: I0130 13:52:51.367921 3351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-59" podStartSLOduration=1.367899417 podStartE2EDuration="1.367899417s" podCreationTimestamp="2025-01-30 13:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:51.33943859 +0000 UTC m=+1.347137697" watchObservedRunningTime="2025-01-30 13:52:51.367899417 +0000 UTC m=+1.375598524" Jan 30 13:52:51.398094 kubelet[3351]: I0130 13:52:51.395607 3351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-59" podStartSLOduration=1.395585846 podStartE2EDuration="1.395585846s" podCreationTimestamp="2025-01-30 13:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:51.374156709 +0000 UTC m=+1.381855816" watchObservedRunningTime="2025-01-30 13:52:51.395585846 +0000 UTC m=+1.403284952" Jan 30 13:52:53.495476 sudo[2233]: pam_unix(sudo:session): session closed for user root Jan 30 13:52:53.521963 sshd[2230]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:53.527052 systemd[1]: sshd@8-172.31.18.59:22-139.178.68.195:44606.service: Deactivated successfully. Jan 30 13:52:53.535374 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:52:53.536230 systemd[1]: session-9.scope: Consumed 4.962s CPU time, 141.6M memory peak, 0B memory swap peak. Jan 30 13:52:53.537227 systemd-logind[1873]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:52:53.538552 systemd-logind[1873]: Removed session 9. Jan 30 13:52:54.430870 kubelet[3351]: I0130 13:52:54.430785 3351 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:52:54.433017 containerd[1880]: time="2025-01-30T13:52:54.432898925Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:52:54.433992 kubelet[3351]: I0130 13:52:54.433968 3351 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:52:55.139097 systemd[1]: Created slice kubepods-besteffort-pod65f6ead4_dfb8_433a_b983_c9dfbe2213bd.slice - libcontainer container kubepods-besteffort-pod65f6ead4_dfb8_433a_b983_c9dfbe2213bd.slice. Jan 30 13:52:55.160587 systemd[1]: Created slice kubepods-burstable-pod0b194f1a_858f_4872_bba7_43886e78b639.slice - libcontainer container kubepods-burstable-pod0b194f1a_858f_4872_bba7_43886e78b639.slice. 
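The kubelet has just pushed this node's pod CIDR (192.168.0.0/24) down to containerd over CRI. As a rough illustration of what that allocation holds, a minimal Python sketch (illustrative only; the membership check uses the cilium_host address that appears further down in this log):

import ipaddress

# Pod CIDR reported by the kubelet above; a /24 per node is the mask
# commonly used when a cluster CIDR is carved up per-node.
pod_cidr = ipaddress.ip_network("192.168.0.0/24")
print(pod_cidr.num_addresses)  # 256 raw addresses available to this node's pods
print(ipaddress.ip_address("192.168.0.113") in pod_cidr)  # True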
Jan 30 13:52:55.171562 kubelet[3351]: I0130 13:52:55.171524 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65f6ead4-dfb8-433a-b983-c9dfbe2213bd-xtables-lock\") pod \"kube-proxy-s8md5\" (UID: \"65f6ead4-dfb8-433a-b983-c9dfbe2213bd\") " pod="kube-system/kube-proxy-s8md5" Jan 30 13:52:55.172268 kubelet[3351]: I0130 13:52:55.172238 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-cilium-run\") pod \"cilium-jxsrz\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " pod="kube-system/cilium-jxsrz" Jan 30 13:52:55.172376 kubelet[3351]: I0130 13:52:55.172362 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-bpf-maps\") pod \"cilium-jxsrz\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " pod="kube-system/cilium-jxsrz" Jan 30 13:52:55.174166 kubelet[3351]: I0130 13:52:55.174134 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-cni-path\") pod \"cilium-jxsrz\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " pod="kube-system/cilium-jxsrz" Jan 30 13:52:55.174908 kubelet[3351]: I0130 13:52:55.174339 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-xtables-lock\") pod \"cilium-jxsrz\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " pod="kube-system/cilium-jxsrz" Jan 30 13:52:55.174908 kubelet[3351]: I0130 13:52:55.174372 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-host-proc-sys-kernel\") pod \"cilium-jxsrz\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " pod="kube-system/cilium-jxsrz" Jan 30 13:52:55.174908 kubelet[3351]: I0130 13:52:55.174409 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-cilium-cgroup\") pod \"cilium-jxsrz\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " pod="kube-system/cilium-jxsrz" Jan 30 13:52:55.174908 kubelet[3351]: I0130 13:52:55.174434 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-host-proc-sys-net\") pod \"cilium-jxsrz\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " pod="kube-system/cilium-jxsrz" Jan 30 13:52:55.174908 kubelet[3351]: I0130 13:52:55.174456 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b194f1a-858f-4872-bba7-43886e78b639-hubble-tls\") pod \"cilium-jxsrz\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " pod="kube-system/cilium-jxsrz" Jan 30 13:52:55.174908 kubelet[3351]: I0130 13:52:55.174479 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/65f6ead4-dfb8-433a-b983-c9dfbe2213bd-kube-proxy\") pod \"kube-proxy-s8md5\" (UID: \"65f6ead4-dfb8-433a-b983-c9dfbe2213bd\") " pod="kube-system/kube-proxy-s8md5" Jan 30 13:52:55.175428 kubelet[3351]: I0130 13:52:55.174507 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65f6ead4-dfb8-433a-b983-c9dfbe2213bd-lib-modules\") pod \"kube-proxy-s8md5\" (UID: \"65f6ead4-dfb8-433a-b983-c9dfbe2213bd\") " pod="kube-system/kube-proxy-s8md5" Jan 30 13:52:55.175428 kubelet[3351]: I0130 13:52:55.174528 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-lib-modules\") pod \"cilium-jxsrz\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " pod="kube-system/cilium-jxsrz" Jan 30 13:52:55.175428 kubelet[3351]: I0130 13:52:55.174551 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2xtm\" (UniqueName: \"kubernetes.io/projected/0b194f1a-858f-4872-bba7-43886e78b639-kube-api-access-d2xtm\") pod \"cilium-jxsrz\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " pod="kube-system/cilium-jxsrz" Jan 30 13:52:55.175428 kubelet[3351]: I0130 13:52:55.174576 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b194f1a-858f-4872-bba7-43886e78b639-clustermesh-secrets\") pod \"cilium-jxsrz\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " pod="kube-system/cilium-jxsrz" Jan 30 13:52:55.175428 kubelet[3351]: I0130 13:52:55.174598 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b194f1a-858f-4872-bba7-43886e78b639-cilium-config-path\") pod \"cilium-jxsrz\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " pod="kube-system/cilium-jxsrz" Jan 30 13:52:55.175428 kubelet[3351]: I0130 13:52:55.174621 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-hostproc\") pod \"cilium-jxsrz\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " pod="kube-system/cilium-jxsrz" Jan 30 13:52:55.175666 kubelet[3351]: I0130 13:52:55.174645 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85l7t\" (UniqueName: \"kubernetes.io/projected/65f6ead4-dfb8-433a-b983-c9dfbe2213bd-kube-api-access-85l7t\") pod \"kube-proxy-s8md5\" (UID: \"65f6ead4-dfb8-433a-b983-c9dfbe2213bd\") " pod="kube-system/kube-proxy-s8md5" Jan 30 13:52:55.175666 kubelet[3351]: I0130 13:52:55.174671 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-etc-cni-netd\") pod \"cilium-jxsrz\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " pod="kube-system/cilium-jxsrz" Jan 30 13:52:55.458632 containerd[1880]: time="2025-01-30T13:52:55.456805407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s8md5,Uid:65f6ead4-dfb8-433a-b983-c9dfbe2213bd,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:55.467220 containerd[1880]: time="2025-01-30T13:52:55.466222276Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jxsrz,Uid:0b194f1a-858f-4872-bba7-43886e78b639,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:55.547773 systemd[1]: Created slice kubepods-besteffort-pod6566b7ce_41ea_4caf_a5d0_dc71ba7fe9c0.slice - libcontainer container kubepods-besteffort-pod6566b7ce_41ea_4caf_a5d0_dc71ba7fe9c0.slice. Jan 30 13:52:55.571758 containerd[1880]: time="2025-01-30T13:52:55.570645544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:55.571758 containerd[1880]: time="2025-01-30T13:52:55.570885194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:55.571758 containerd[1880]: time="2025-01-30T13:52:55.570910848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:55.573130 containerd[1880]: time="2025-01-30T13:52:55.571881000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:55.578405 kubelet[3351]: I0130 13:52:55.578164 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmdfw\" (UniqueName: \"kubernetes.io/projected/6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0-kube-api-access-rmdfw\") pod \"cilium-operator-5d85765b45-n876h\" (UID: \"6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0\") " pod="kube-system/cilium-operator-5d85765b45-n876h" Jan 30 13:52:55.583215 kubelet[3351]: I0130 13:52:55.582535 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0-cilium-config-path\") pod \"cilium-operator-5d85765b45-n876h\" (UID: \"6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0\") " pod="kube-system/cilium-operator-5d85765b45-n876h" Jan 30 13:52:55.598611 containerd[1880]: time="2025-01-30T13:52:55.598465721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:55.598611 containerd[1880]: time="2025-01-30T13:52:55.598561461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:55.598611 containerd[1880]: time="2025-01-30T13:52:55.598583257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:55.599572 containerd[1880]: time="2025-01-30T13:52:55.599430665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:55.628681 systemd[1]: Started cri-containerd-0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70.scope - libcontainer container 0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70. Jan 30 13:52:55.651010 systemd[1]: Started cri-containerd-8dc58d6409810e848234b2e6060a03d42f46aad896ee5bc0a9b699c3ed7783f0.scope - libcontainer container 8dc58d6409810e848234b2e6060a03d42f46aad896ee5bc0a9b699c3ed7783f0. 
Jan 30 13:52:55.687431 containerd[1880]: time="2025-01-30T13:52:55.687300154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jxsrz,Uid:0b194f1a-858f-4872-bba7-43886e78b639,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\"" Jan 30 13:52:55.698814 containerd[1880]: time="2025-01-30T13:52:55.698509421Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:52:55.742291 containerd[1880]: time="2025-01-30T13:52:55.739961780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s8md5,Uid:65f6ead4-dfb8-433a-b983-c9dfbe2213bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dc58d6409810e848234b2e6060a03d42f46aad896ee5bc0a9b699c3ed7783f0\"" Jan 30 13:52:55.748942 containerd[1880]: time="2025-01-30T13:52:55.748900788Z" level=info msg="CreateContainer within sandbox \"8dc58d6409810e848234b2e6060a03d42f46aad896ee5bc0a9b699c3ed7783f0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:52:55.784216 containerd[1880]: time="2025-01-30T13:52:55.783785806Z" level=info msg="CreateContainer within sandbox \"8dc58d6409810e848234b2e6060a03d42f46aad896ee5bc0a9b699c3ed7783f0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0ee26fe61adf2837fb64974b3e27ee39934a0b7a8b9b6683b8b12251ea20f2fc\"" Jan 30 13:52:55.785935 containerd[1880]: time="2025-01-30T13:52:55.785129842Z" level=info msg="StartContainer for \"0ee26fe61adf2837fb64974b3e27ee39934a0b7a8b9b6683b8b12251ea20f2fc\"" Jan 30 13:52:55.823595 systemd[1]: Started cri-containerd-0ee26fe61adf2837fb64974b3e27ee39934a0b7a8b9b6683b8b12251ea20f2fc.scope - libcontainer container 0ee26fe61adf2837fb64974b3e27ee39934a0b7a8b9b6683b8b12251ea20f2fc. Jan 30 13:52:55.864544 containerd[1880]: time="2025-01-30T13:52:55.864491614Z" level=info msg="StartContainer for \"0ee26fe61adf2837fb64974b3e27ee39934a0b7a8b9b6683b8b12251ea20f2fc\" returns successfully" Jan 30 13:52:55.871467 containerd[1880]: time="2025-01-30T13:52:55.871420207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-n876h,Uid:6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:55.905789 containerd[1880]: time="2025-01-30T13:52:55.904980838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:55.905789 containerd[1880]: time="2025-01-30T13:52:55.905128092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:55.905789 containerd[1880]: time="2025-01-30T13:52:55.905153357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:55.905789 containerd[1880]: time="2025-01-30T13:52:55.905497161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:55.929328 systemd[1]: Started cri-containerd-156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6.scope - libcontainer container 156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6. 
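In the same vein, every container or sandbox started through containerd lands in a transient cri-containerd-<id>.scope unit, which is how the systemd "Started ...scope" lines pair up with containerd's "returns container id" messages. A one-line sketch (hypothetical helper):

def scope_unit(container_id: str) -> str:
    # Transient scope unit used for a containerd-managed container or sandbox.
    return f"cri-containerd-{container_id}.scope"

# Matches the unit started above for the kube-proxy container:
assert scope_unit("0ee26fe61adf2837fb64974b3e27ee39934a0b7a8b9b6683b8b12251ea20f2fc") == \
    "cri-containerd-0ee26fe61adf2837fb64974b3e27ee39934a0b7a8b9b6683b8b12251ea20f2fc.scope"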
Jan 30 13:52:55.994365 containerd[1880]: time="2025-01-30T13:52:55.994168578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-n876h,Uid:6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6\"" Jan 30 13:52:56.371405 kubelet[3351]: I0130 13:52:56.370904 3351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s8md5" podStartSLOduration=1.370879165 podStartE2EDuration="1.370879165s" podCreationTimestamp="2025-01-30 13:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:56.353485083 +0000 UTC m=+6.361184190" watchObservedRunningTime="2025-01-30 13:52:56.370879165 +0000 UTC m=+6.378578270" Jan 30 13:53:03.838996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1312799334.mount: Deactivated successfully. Jan 30 13:53:07.858107 containerd[1880]: time="2025-01-30T13:53:07.858040299Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:07.877642 containerd[1880]: time="2025-01-30T13:53:07.877511634Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 13:53:07.881168 containerd[1880]: time="2025-01-30T13:53:07.878815758Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:07.893184 containerd[1880]: time="2025-01-30T13:53:07.893133095Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.194570207s" Jan 30 13:53:07.893184 containerd[1880]: time="2025-01-30T13:53:07.893188280Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 13:53:07.896358 containerd[1880]: time="2025-01-30T13:53:07.896052579Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:53:07.900297 containerd[1880]: time="2025-01-30T13:53:07.900258210Z" level=info msg="CreateContainer within sandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:53:08.579713 containerd[1880]: time="2025-01-30T13:53:08.578056949Z" level=info msg="CreateContainer within sandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade\"" Jan 30 13:53:08.585501 containerd[1880]: time="2025-01-30T13:53:08.585455685Z" level=info msg="StartContainer for 
\"b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade\"" Jan 30 13:53:08.790584 systemd[1]: Started cri-containerd-b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade.scope - libcontainer container b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade. Jan 30 13:53:08.847181 containerd[1880]: time="2025-01-30T13:53:08.843627259Z" level=info msg="StartContainer for \"b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade\" returns successfully" Jan 30 13:53:08.864576 systemd[1]: cri-containerd-b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade.scope: Deactivated successfully. Jan 30 13:53:08.941244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade-rootfs.mount: Deactivated successfully. Jan 30 13:53:09.609995 containerd[1880]: time="2025-01-30T13:53:09.516023955Z" level=info msg="shim disconnected" id=b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade namespace=k8s.io Jan 30 13:53:09.610611 containerd[1880]: time="2025-01-30T13:53:09.610004032Z" level=warning msg="cleaning up after shim disconnected" id=b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade namespace=k8s.io Jan 30 13:53:09.610611 containerd[1880]: time="2025-01-30T13:53:09.610025268Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:53:09.626868 containerd[1880]: time="2025-01-30T13:53:09.626806541Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:53:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:53:10.112802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1544394619.mount: Deactivated successfully. Jan 30 13:53:10.514189 containerd[1880]: time="2025-01-30T13:53:10.513300910Z" level=info msg="CreateContainer within sandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:53:10.675613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1374243431.mount: Deactivated successfully. Jan 30 13:53:10.682822 containerd[1880]: time="2025-01-30T13:53:10.682767033Z" level=info msg="CreateContainer within sandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a\"" Jan 30 13:53:10.687844 containerd[1880]: time="2025-01-30T13:53:10.687758477Z" level=info msg="StartContainer for \"b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a\"" Jan 30 13:53:10.764332 systemd[1]: Started cri-containerd-b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a.scope - libcontainer container b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a. Jan 30 13:53:10.830089 containerd[1880]: time="2025-01-30T13:53:10.829795817Z" level=info msg="StartContainer for \"b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a\" returns successfully" Jan 30 13:53:10.853472 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:53:10.854333 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:53:10.854434 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:53:10.862202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 30 13:53:10.862711 systemd[1]: cri-containerd-b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a.scope: Deactivated successfully. Jan 30 13:53:10.924324 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:53:11.055285 containerd[1880]: time="2025-01-30T13:53:11.055004831Z" level=info msg="shim disconnected" id=b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a namespace=k8s.io Jan 30 13:53:11.055285 containerd[1880]: time="2025-01-30T13:53:11.055098052Z" level=warning msg="cleaning up after shim disconnected" id=b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a namespace=k8s.io Jan 30 13:53:11.055285 containerd[1880]: time="2025-01-30T13:53:11.055116227Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:53:11.099702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a-rootfs.mount: Deactivated successfully. Jan 30 13:53:11.459777 containerd[1880]: time="2025-01-30T13:53:11.459445679Z" level=info msg="CreateContainer within sandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:53:11.561736 containerd[1880]: time="2025-01-30T13:53:11.560689971Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:11.565260 containerd[1880]: time="2025-01-30T13:53:11.565183232Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 13:53:11.572242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2955458976.mount: Deactivated successfully. 
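The oddly named tmpmount units above come from systemd's path escaping: '/' becomes '-' in a mount unit name, and a literal '-' inside a path component is encoded as \x2d. A small decoder, written as an illustrative sketch:

import re

def mount_unit_to_path(unit: str) -> str:
    # Reverse systemd's mount-unit escaping: '-' separates path components,
    # while '\x2d' encodes a literal dash within a component.
    body = unit.removesuffix(".mount")
    path = "/" + body.replace("-", "/")
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

assert mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount2955458976.mount") == \
    "/var/lib/containerd/tmpmounts/containerd-mount2955458976"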
Jan 30 13:53:11.575370 containerd[1880]: time="2025-01-30T13:53:11.575324668Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:11.576824 containerd[1880]: time="2025-01-30T13:53:11.576779213Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.680667278s" Jan 30 13:53:11.577208 containerd[1880]: time="2025-01-30T13:53:11.576822315Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 13:53:11.580303 containerd[1880]: time="2025-01-30T13:53:11.580184450Z" level=info msg="CreateContainer within sandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215\"" Jan 30 13:53:11.580837 containerd[1880]: time="2025-01-30T13:53:11.580803964Z" level=info msg="StartContainer for \"72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215\"" Jan 30 13:53:11.582322 containerd[1880]: time="2025-01-30T13:53:11.582292258Z" level=info msg="CreateContainer within sandbox \"156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:53:11.629311 systemd[1]: Started cri-containerd-72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215.scope - libcontainer container 72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215. Jan 30 13:53:11.636019 containerd[1880]: time="2025-01-30T13:53:11.635962267Z" level=info msg="CreateContainer within sandbox \"156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac\"" Jan 30 13:53:11.637633 containerd[1880]: time="2025-01-30T13:53:11.637580958Z" level=info msg="StartContainer for \"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac\"" Jan 30 13:53:11.702306 systemd[1]: Started cri-containerd-f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac.scope - libcontainer container f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac. Jan 30 13:53:11.720820 systemd[1]: cri-containerd-72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215.scope: Deactivated successfully. 
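containerd stamps its messages with RFC 3339 timestamps at nanosecond precision, while the journal prefix carries microseconds, so correlating the two streams means truncating. A sketch of a parser (assuming Python's microsecond-resolution datetime is precise enough for this purpose):

from datetime import datetime, timezone

def parse_containerd_ts(ts: str) -> datetime:
    # containerd emits 9 fractional digits; datetime only carries
    # microseconds, so drop the trailing nanosecond digits.
    base, frac = ts.rstrip("Z").split(".")
    dt = datetime.strptime(base, "%Y-%m-%dT%H:%M:%S")
    return dt.replace(microsecond=int(frac[:6].ljust(6, "0")), tzinfo=timezone.utc)

print(parse_containerd_ts("2025-01-30T13:53:11.575324668Z"))
# 2025-01-30 13:53:11.575324+00:00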
Jan 30 13:53:11.831217 containerd[1880]: time="2025-01-30T13:53:11.831173433Z" level=info msg="StartContainer for \"72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215\" returns successfully" Jan 30 13:53:11.832179 containerd[1880]: time="2025-01-30T13:53:11.831820326Z" level=info msg="StartContainer for \"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac\" returns successfully" Jan 30 13:53:12.101402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215-rootfs.mount: Deactivated successfully. Jan 30 13:53:12.350096 containerd[1880]: time="2025-01-30T13:53:12.349776574Z" level=info msg="shim disconnected" id=72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215 namespace=k8s.io Jan 30 13:53:12.350096 containerd[1880]: time="2025-01-30T13:53:12.349847571Z" level=warning msg="cleaning up after shim disconnected" id=72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215 namespace=k8s.io Jan 30 13:53:12.350096 containerd[1880]: time="2025-01-30T13:53:12.349859996Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:53:12.479438 containerd[1880]: time="2025-01-30T13:53:12.479182476Z" level=info msg="CreateContainer within sandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:53:12.613815 containerd[1880]: time="2025-01-30T13:53:12.613761487Z" level=info msg="CreateContainer within sandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3\"" Jan 30 13:53:12.618733 containerd[1880]: time="2025-01-30T13:53:12.618315260Z" level=info msg="StartContainer for \"b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3\"" Jan 30 13:53:12.709577 systemd[1]: Started cri-containerd-b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3.scope - libcontainer container b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3. Jan 30 13:53:12.784179 kubelet[3351]: I0130 13:53:12.776734 3351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-n876h" podStartSLOduration=2.195101042 podStartE2EDuration="17.776707092s" podCreationTimestamp="2025-01-30 13:52:55 +0000 UTC" firstStartedPulling="2025-01-30 13:52:55.996234319 +0000 UTC m=+6.003933409" lastFinishedPulling="2025-01-30 13:53:11.577840375 +0000 UTC m=+21.585539459" observedRunningTime="2025-01-30 13:53:12.752234804 +0000 UTC m=+22.759933909" watchObservedRunningTime="2025-01-30 13:53:12.776707092 +0000 UTC m=+22.784406199" Jan 30 13:53:12.829412 systemd[1]: cri-containerd-b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3.scope: Deactivated successfully. 
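The startup-latency line above for the cilium-operator pod shows how its two durations relate: podStartSLOduration is the end-to-end startup time minus the image-pull window. Reproducing the arithmetic with the timestamps from the log (truncated to microseconds):

from datetime import datetime, timezone

first_pull = datetime(2025, 1, 30, 13, 52, 55, 996234, tzinfo=timezone.utc)
last_pull  = datetime(2025, 1, 30, 13, 53, 11, 577840, tzinfo=timezone.utc)
pull_window = (last_pull - first_pull).total_seconds()  # ≈ 15.581606 s

e2e = 17.776707092  # podStartE2EDuration from the log line above
print(e2e - pull_window)  # ≈ 2.195101, matching podStartSLOduration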
Jan 30 13:53:12.832746 containerd[1880]: time="2025-01-30T13:53:12.831883309Z" level=info msg="StartContainer for \"b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3\" returns successfully" Jan 30 13:53:12.921479 containerd[1880]: time="2025-01-30T13:53:12.921392650Z" level=info msg="shim disconnected" id=b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3 namespace=k8s.io Jan 30 13:53:12.921479 containerd[1880]: time="2025-01-30T13:53:12.921477090Z" level=warning msg="cleaning up after shim disconnected" id=b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3 namespace=k8s.io Jan 30 13:53:12.921827 containerd[1880]: time="2025-01-30T13:53:12.921495608Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:53:13.102182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3-rootfs.mount: Deactivated successfully. Jan 30 13:53:13.496592 containerd[1880]: time="2025-01-30T13:53:13.495687955Z" level=info msg="CreateContainer within sandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:53:13.544097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount394565058.mount: Deactivated successfully. Jan 30 13:53:13.548574 containerd[1880]: time="2025-01-30T13:53:13.548524598Z" level=info msg="CreateContainer within sandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775\"" Jan 30 13:53:13.550784 containerd[1880]: time="2025-01-30T13:53:13.550747888Z" level=info msg="StartContainer for \"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775\"" Jan 30 13:53:13.668333 systemd[1]: Started cri-containerd-a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775.scope - libcontainer container a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775. Jan 30 13:53:13.747953 containerd[1880]: time="2025-01-30T13:53:13.747357400Z" level=info msg="StartContainer for \"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775\" returns successfully" Jan 30 13:53:13.972840 kubelet[3351]: I0130 13:53:13.972805 3351 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 13:53:14.065205 systemd[1]: Created slice kubepods-burstable-pod2acee82f_0190_4fcf_a5cb_2a8d5af952dc.slice - libcontainer container kubepods-burstable-pod2acee82f_0190_4fcf_a5cb_2a8d5af952dc.slice. 
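Taken together, the CreateContainer messages above show the cilium pod bringing up its containers strictly in sequence inside the same sandbox: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally the long-running cilium-agent, with each init step's scope deactivating before the next container is created. A throwaway extraction of the container name from one of those messages (regex illustrative):

import re

msg = ('CreateContainer within sandbox '
       '"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70" '
       'for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}')
print(re.search(r"Name:([A-Za-z0-9-]+)", msg).group(1))  # cilium-agent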
Jan 30 13:53:14.103595 kubelet[3351]: I0130 13:53:14.103247 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2acee82f-0190-4fcf-a5cb-2a8d5af952dc-config-volume\") pod \"coredns-6f6b679f8f-4fpb2\" (UID: \"2acee82f-0190-4fcf-a5cb-2a8d5af952dc\") " pod="kube-system/coredns-6f6b679f8f-4fpb2" Jan 30 13:53:14.103595 kubelet[3351]: I0130 13:53:14.103316 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/116473cf-f9ae-4679-9e80-3040f8e355aa-config-volume\") pod \"coredns-6f6b679f8f-vldfl\" (UID: \"116473cf-f9ae-4679-9e80-3040f8e355aa\") " pod="kube-system/coredns-6f6b679f8f-vldfl" Jan 30 13:53:14.108946 kubelet[3351]: I0130 13:53:14.105774 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mk9x\" (UniqueName: \"kubernetes.io/projected/2acee82f-0190-4fcf-a5cb-2a8d5af952dc-kube-api-access-5mk9x\") pod \"coredns-6f6b679f8f-4fpb2\" (UID: \"2acee82f-0190-4fcf-a5cb-2a8d5af952dc\") " pod="kube-system/coredns-6f6b679f8f-4fpb2" Jan 30 13:53:14.108946 kubelet[3351]: I0130 13:53:14.105912 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kct6m\" (UniqueName: \"kubernetes.io/projected/116473cf-f9ae-4679-9e80-3040f8e355aa-kube-api-access-kct6m\") pod \"coredns-6f6b679f8f-vldfl\" (UID: \"116473cf-f9ae-4679-9e80-3040f8e355aa\") " pod="kube-system/coredns-6f6b679f8f-vldfl" Jan 30 13:53:14.105055 systemd[1]: Created slice kubepods-burstable-pod116473cf_f9ae_4679_9e80_3040f8e355aa.slice - libcontainer container kubepods-burstable-pod116473cf_f9ae_4679_9e80_3040f8e355aa.slice. Jan 30 13:53:14.391759 containerd[1880]: time="2025-01-30T13:53:14.391233017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4fpb2,Uid:2acee82f-0190-4fcf-a5cb-2a8d5af952dc,Namespace:kube-system,Attempt:0,}" Jan 30 13:53:14.427706 containerd[1880]: time="2025-01-30T13:53:14.427326313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vldfl,Uid:116473cf-f9ae-4679-9e80-3040f8e355aa,Namespace:kube-system,Attempt:0,}" Jan 30 13:53:14.593436 kubelet[3351]: I0130 13:53:14.593200 3351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jxsrz" podStartSLOduration=7.394408984 podStartE2EDuration="19.593171175s" podCreationTimestamp="2025-01-30 13:52:55 +0000 UTC" firstStartedPulling="2025-01-30 13:52:55.696105978 +0000 UTC m=+5.703805074" lastFinishedPulling="2025-01-30 13:53:07.894868168 +0000 UTC m=+17.902567265" observedRunningTime="2025-01-30 13:53:14.590712616 +0000 UTC m=+24.598411721" watchObservedRunningTime="2025-01-30 13:53:14.593171175 +0000 UTC m=+24.600870275" Jan 30 13:53:17.034694 systemd-networkd[1724]: cilium_host: Link UP Jan 30 13:53:17.036041 systemd-networkd[1724]: cilium_net: Link UP Jan 30 13:53:17.037010 systemd-networkd[1724]: cilium_net: Gained carrier Jan 30 13:53:17.037576 systemd-networkd[1724]: cilium_host: Gained carrier Jan 30 13:53:17.038712 (udev-worker)[4148]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:17.041541 (udev-worker)[4182]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:17.193425 (udev-worker)[4195]: Network interface NamePolicy= disabled on kernel command line. 
Jan 30 13:53:17.204555 systemd-networkd[1724]: cilium_vxlan: Link UP Jan 30 13:53:17.204564 systemd-networkd[1724]: cilium_vxlan: Gained carrier Jan 30 13:53:17.644229 systemd-networkd[1724]: cilium_host: Gained IPv6LL Jan 30 13:53:17.836552 systemd-networkd[1724]: cilium_net: Gained IPv6LL Jan 30 13:53:18.015188 kernel: NET: Registered PF_ALG protocol family Jan 30 13:53:18.351036 systemd-networkd[1724]: cilium_vxlan: Gained IPv6LL Jan 30 13:53:19.259331 systemd-networkd[1724]: lxc_health: Link UP Jan 30 13:53:19.269830 systemd-networkd[1724]: lxc_health: Gained carrier Jan 30 13:53:19.595860 (udev-worker)[4511]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:19.601583 systemd-networkd[1724]: lxcd11793bd5c92: Link UP Jan 30 13:53:19.619266 kernel: eth0: renamed from tmp7c953 Jan 30 13:53:19.620895 systemd-networkd[1724]: lxcd11793bd5c92: Gained carrier Jan 30 13:53:19.681790 systemd-networkd[1724]: lxcb7e1921c002f: Link UP Jan 30 13:53:19.694103 kernel: eth0: renamed from tmp97caa Jan 30 13:53:19.704242 systemd-networkd[1724]: lxcb7e1921c002f: Gained carrier Jan 30 13:53:20.848194 systemd-networkd[1724]: lxc_health: Gained IPv6LL Jan 30 13:53:20.972223 systemd-networkd[1724]: lxcb7e1921c002f: Gained IPv6LL Jan 30 13:53:20.972574 systemd-networkd[1724]: lxcd11793bd5c92: Gained IPv6LL Jan 30 13:53:23.281985 ntpd[1857]: Listen normally on 6 cilium_host 192.168.0.113:123 Jan 30 13:53:23.282120 ntpd[1857]: Listen normally on 7 cilium_net [fe80::1cc5:2bff:fe0d:308a%4]:123 Jan 30 13:53:23.282181 ntpd[1857]: Listen normally on 8 cilium_host [fe80::ec22:24ff:fea5:5400%5]:123 Jan 30 13:53:23.282220 ntpd[1857]: Listen normally on 9 cilium_vxlan [fe80::5826:5fff:fe51:4472%6]:123 Jan 30 13:53:23.282256 ntpd[1857]: Listen normally on 10 lxc_health [fe80::dcd0:96ff:fedc:7a62%8]:123 Jan 30 13:53:23.282300 ntpd[1857]: Listen normally on 11 lxcd11793bd5c92 [fe80::c0ea:dcff:fe9c:d149%10]:123 Jan 30 13:53:23.282335 ntpd[1857]: Listen normally on 12 lxcb7e1921c002f [fe80::488d:eff:fea1:52fc%12]:123 Jan 30 13:53:26.280168 containerd[1880]: time="2025-01-30T13:53:26.278316395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:26.283004 containerd[1880]: time="2025-01-30T13:53:26.280321510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:26.283615 containerd[1880]: time="2025-01-30T13:53:26.281808349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:26.283753 containerd[1880]: time="2025-01-30T13:53:26.283545334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:26.342570 systemd[1]: run-containerd-runc-k8s.io-97caa24cd40e50748554762bfbcc30acdff22668b377adc94450bc84ff818696-runc.r9q6IV.mount: Deactivated successfully. Jan 30 13:53:26.398337 systemd[1]: Started cri-containerd-97caa24cd40e50748554762bfbcc30acdff22668b377adc94450bc84ff818696.scope - libcontainer container 97caa24cd40e50748554762bfbcc30acdff22668b377adc94450bc84ff818696. Jan 30 13:53:26.466226 containerd[1880]: time="2025-01-30T13:53:26.465987738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:26.467102 containerd[1880]: time="2025-01-30T13:53:26.466304561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:26.467102 containerd[1880]: time="2025-01-30T13:53:26.466373866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:26.467251 containerd[1880]: time="2025-01-30T13:53:26.467158511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:26.553383 systemd[1]: Started cri-containerd-7c9534ba41f5833d266ebdf5321497e87fde344889cf7e2c22af45722154583c.scope - libcontainer container 7c9534ba41f5833d266ebdf5321497e87fde344889cf7e2c22af45722154583c. Jan 30 13:53:26.611585 containerd[1880]: time="2025-01-30T13:53:26.611527249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vldfl,Uid:116473cf-f9ae-4679-9e80-3040f8e355aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"97caa24cd40e50748554762bfbcc30acdff22668b377adc94450bc84ff818696\"" Jan 30 13:53:26.698132 containerd[1880]: time="2025-01-30T13:53:26.696013284Z" level=info msg="CreateContainer within sandbox \"97caa24cd40e50748554762bfbcc30acdff22668b377adc94450bc84ff818696\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:53:26.728380 containerd[1880]: time="2025-01-30T13:53:26.728331159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4fpb2,Uid:2acee82f-0190-4fcf-a5cb-2a8d5af952dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c9534ba41f5833d266ebdf5321497e87fde344889cf7e2c22af45722154583c\"" Jan 30 13:53:26.733762 containerd[1880]: time="2025-01-30T13:53:26.733491783Z" level=info msg="CreateContainer within sandbox \"7c9534ba41f5833d266ebdf5321497e87fde344889cf7e2c22af45722154583c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:53:26.788840 containerd[1880]: time="2025-01-30T13:53:26.788787944Z" level=info msg="CreateContainer within sandbox \"97caa24cd40e50748554762bfbcc30acdff22668b377adc94450bc84ff818696\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dfefe2787496bde9e49c4b824f1f586e40585475871fa61f7021a1156baba40b\"" Jan 30 13:53:26.789665 containerd[1880]: time="2025-01-30T13:53:26.789630779Z" level=info msg="StartContainer for \"dfefe2787496bde9e49c4b824f1f586e40585475871fa61f7021a1156baba40b\"" Jan 30 13:53:26.798539 containerd[1880]: time="2025-01-30T13:53:26.797840972Z" level=info msg="CreateContainer within sandbox 
\"7c9534ba41f5833d266ebdf5321497e87fde344889cf7e2c22af45722154583c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"960c179402de71b522c8019ebdc98e7257a6d5b92930d97e72bca7378f4cb90e\"" Jan 30 13:53:26.802400 containerd[1880]: time="2025-01-30T13:53:26.799645502Z" level=info msg="StartContainer for \"960c179402de71b522c8019ebdc98e7257a6d5b92930d97e72bca7378f4cb90e\"" Jan 30 13:53:26.842453 systemd[1]: Started cri-containerd-dfefe2787496bde9e49c4b824f1f586e40585475871fa61f7021a1156baba40b.scope - libcontainer container dfefe2787496bde9e49c4b824f1f586e40585475871fa61f7021a1156baba40b. Jan 30 13:53:26.863250 systemd[1]: Started cri-containerd-960c179402de71b522c8019ebdc98e7257a6d5b92930d97e72bca7378f4cb90e.scope - libcontainer container 960c179402de71b522c8019ebdc98e7257a6d5b92930d97e72bca7378f4cb90e. Jan 30 13:53:26.913750 containerd[1880]: time="2025-01-30T13:53:26.912665362Z" level=info msg="StartContainer for \"960c179402de71b522c8019ebdc98e7257a6d5b92930d97e72bca7378f4cb90e\" returns successfully" Jan 30 13:53:26.914741 containerd[1880]: time="2025-01-30T13:53:26.914552428Z" level=info msg="StartContainer for \"dfefe2787496bde9e49c4b824f1f586e40585475871fa61f7021a1156baba40b\" returns successfully" Jan 30 13:53:27.691107 kubelet[3351]: I0130 13:53:27.690980 3351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4fpb2" podStartSLOduration=32.690951843 podStartE2EDuration="32.690951843s" podCreationTimestamp="2025-01-30 13:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:53:27.68894666 +0000 UTC m=+37.696645791" watchObservedRunningTime="2025-01-30 13:53:27.690951843 +0000 UTC m=+37.698650951" Jan 30 13:53:27.692995 kubelet[3351]: I0130 13:53:27.692762 3351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vldfl" podStartSLOduration=32.692400907 podStartE2EDuration="32.692400907s" podCreationTimestamp="2025-01-30 13:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:53:27.640570583 +0000 UTC m=+37.648269678" watchObservedRunningTime="2025-01-30 13:53:27.692400907 +0000 UTC m=+37.700100015" Jan 30 13:53:32.809489 systemd[1]: Started sshd@9-172.31.18.59:22-139.178.68.195:45140.service - OpenSSH per-connection server daemon (139.178.68.195:45140). Jan 30 13:53:33.008123 sshd[4720]: Accepted publickey for core from 139.178.68.195 port 45140 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:33.010411 sshd[4720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:33.022579 systemd-logind[1873]: New session 10 of user core. Jan 30 13:53:33.034643 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:53:34.253619 sshd[4720]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:34.264934 systemd-logind[1873]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:53:34.267408 systemd[1]: sshd@9-172.31.18.59:22-139.178.68.195:45140.service: Deactivated successfully. Jan 30 13:53:34.270688 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:53:34.272284 systemd-logind[1873]: Removed session 10. 
Jan 30 13:53:39.288863 systemd[1]: Started sshd@10-172.31.18.59:22-139.178.68.195:35674.service - OpenSSH per-connection server daemon (139.178.68.195:35674). Jan 30 13:53:39.460531 sshd[4746]: Accepted publickey for core from 139.178.68.195 port 35674 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:39.462204 sshd[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:39.468301 systemd-logind[1873]: New session 11 of user core. Jan 30 13:53:39.477325 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:53:39.705838 sshd[4746]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:39.710105 systemd[1]: sshd@10-172.31.18.59:22-139.178.68.195:35674.service: Deactivated successfully. Jan 30 13:53:39.713684 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:53:39.715904 systemd-logind[1873]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:53:39.717911 systemd-logind[1873]: Removed session 11. Jan 30 13:53:44.746609 systemd[1]: Started sshd@11-172.31.18.59:22-139.178.68.195:53706.service - OpenSSH per-connection server daemon (139.178.68.195:53706). Jan 30 13:53:44.901499 sshd[4760]: Accepted publickey for core from 139.178.68.195 port 53706 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:44.903279 sshd[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:44.921693 systemd-logind[1873]: New session 12 of user core. Jan 30 13:53:44.936398 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:53:45.174851 sshd[4760]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:45.183764 systemd[1]: sshd@11-172.31.18.59:22-139.178.68.195:53706.service: Deactivated successfully. Jan 30 13:53:45.199008 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:53:45.204336 systemd-logind[1873]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:53:45.207011 systemd-logind[1873]: Removed session 12. Jan 30 13:53:50.228592 systemd[1]: Started sshd@12-172.31.18.59:22-139.178.68.195:53714.service - OpenSSH per-connection server daemon (139.178.68.195:53714). Jan 30 13:53:50.404289 sshd[4774]: Accepted publickey for core from 139.178.68.195 port 53714 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:50.406346 sshd[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:50.412186 systemd-logind[1873]: New session 13 of user core. Jan 30 13:53:50.416325 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:53:50.639847 sshd[4774]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:50.643513 systemd[1]: sshd@12-172.31.18.59:22-139.178.68.195:53714.service: Deactivated successfully. Jan 30 13:53:50.646535 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:53:50.648672 systemd-logind[1873]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:53:50.651298 systemd-logind[1873]: Removed session 13. Jan 30 13:53:50.679546 systemd[1]: Started sshd@13-172.31.18.59:22-139.178.68.195:53722.service - OpenSSH per-connection server daemon (139.178.68.195:53722). 
Jan 30 13:53:50.845906 sshd[4789]: Accepted publickey for core from 139.178.68.195 port 53722 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:50.847949 sshd[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:50.859695 systemd-logind[1873]: New session 14 of user core. Jan 30 13:53:50.868905 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:53:51.153875 sshd[4789]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:51.159772 systemd-logind[1873]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:53:51.161745 systemd[1]: sshd@13-172.31.18.59:22-139.178.68.195:53722.service: Deactivated successfully. Jan 30 13:53:51.168943 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:53:51.172650 systemd-logind[1873]: Removed session 14. Jan 30 13:53:51.192531 systemd[1]: Started sshd@14-172.31.18.59:22-139.178.68.195:53732.service - OpenSSH per-connection server daemon (139.178.68.195:53732). Jan 30 13:53:51.354717 sshd[4800]: Accepted publickey for core from 139.178.68.195 port 53732 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:51.355645 sshd[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:51.367752 systemd-logind[1873]: New session 15 of user core. Jan 30 13:53:51.376325 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:53:51.630936 sshd[4800]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:51.634970 systemd[1]: sshd@14-172.31.18.59:22-139.178.68.195:53732.service: Deactivated successfully. Jan 30 13:53:51.641720 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:53:51.647461 systemd-logind[1873]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:53:51.649112 systemd-logind[1873]: Removed session 15. Jan 30 13:53:56.670511 systemd[1]: Started sshd@15-172.31.18.59:22-139.178.68.195:48388.service - OpenSSH per-connection server daemon (139.178.68.195:48388). Jan 30 13:53:56.853255 sshd[4813]: Accepted publickey for core from 139.178.68.195 port 48388 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:56.854961 sshd[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:56.863739 systemd-logind[1873]: New session 16 of user core. Jan 30 13:53:56.872394 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:53:57.085200 sshd[4813]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:57.090552 systemd[1]: sshd@15-172.31.18.59:22-139.178.68.195:48388.service: Deactivated successfully. Jan 30 13:53:57.094851 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:53:57.095948 systemd-logind[1873]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:53:57.097482 systemd-logind[1873]: Removed session 16. Jan 30 13:54:02.139658 systemd[1]: Started sshd@16-172.31.18.59:22-139.178.68.195:48392.service - OpenSSH per-connection server daemon (139.178.68.195:48392). Jan 30 13:54:02.388155 sshd[4828]: Accepted publickey for core from 139.178.68.195 port 48392 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:02.389057 sshd[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:02.405118 systemd-logind[1873]: New session 17 of user core. Jan 30 13:54:02.413377 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 30 13:54:02.699394 sshd[4828]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:02.705045 systemd-logind[1873]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:54:02.706606 systemd[1]: sshd@16-172.31.18.59:22-139.178.68.195:48392.service: Deactivated successfully. Jan 30 13:54:02.713733 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:54:02.715701 systemd-logind[1873]: Removed session 17. Jan 30 13:54:02.740580 systemd[1]: Started sshd@17-172.31.18.59:22-139.178.68.195:48400.service - OpenSSH per-connection server daemon (139.178.68.195:48400). Jan 30 13:54:02.938557 sshd[4841]: Accepted publickey for core from 139.178.68.195 port 48400 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:02.942447 sshd[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:02.971627 systemd-logind[1873]: New session 18 of user core. Jan 30 13:54:02.988397 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:54:03.710498 sshd[4841]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:03.717692 systemd[1]: sshd@17-172.31.18.59:22-139.178.68.195:48400.service: Deactivated successfully. Jan 30 13:54:03.727164 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:54:03.733543 systemd-logind[1873]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:54:03.754601 systemd[1]: Started sshd@18-172.31.18.59:22-139.178.68.195:48404.service - OpenSSH per-connection server daemon (139.178.68.195:48404). Jan 30 13:54:03.756627 systemd-logind[1873]: Removed session 18. Jan 30 13:54:03.943704 sshd[4852]: Accepted publickey for core from 139.178.68.195 port 48404 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:03.967054 sshd[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:04.004980 systemd-logind[1873]: New session 19 of user core. Jan 30 13:54:04.010314 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:54:06.767333 sshd[4852]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:06.781252 systemd[1]: sshd@18-172.31.18.59:22-139.178.68.195:48404.service: Deactivated successfully. Jan 30 13:54:06.787840 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:54:06.792727 systemd-logind[1873]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:54:06.819985 systemd[1]: Started sshd@19-172.31.18.59:22-139.178.68.195:50916.service - OpenSSH per-connection server daemon (139.178.68.195:50916). Jan 30 13:54:06.828191 systemd-logind[1873]: Removed session 19. Jan 30 13:54:07.010345 sshd[4871]: Accepted publickey for core from 139.178.68.195 port 50916 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:07.012983 sshd[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:07.022892 systemd-logind[1873]: New session 20 of user core. Jan 30 13:54:07.029704 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:54:07.435327 sshd[4871]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:07.440468 systemd[1]: sshd@19-172.31.18.59:22-139.178.68.195:50916.service: Deactivated successfully. Jan 30 13:54:07.443132 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:54:07.443943 systemd-logind[1873]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:54:07.445405 systemd-logind[1873]: Removed session 20. 
Jan 30 13:54:07.468463 systemd[1]: Started sshd@20-172.31.18.59:22-139.178.68.195:50920.service - OpenSSH per-connection server daemon (139.178.68.195:50920). Jan 30 13:54:07.634776 sshd[4883]: Accepted publickey for core from 139.178.68.195 port 50920 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:07.636440 sshd[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:07.641470 systemd-logind[1873]: New session 21 of user core. Jan 30 13:54:07.646290 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:54:07.842495 sshd[4883]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:07.845763 systemd[1]: sshd@20-172.31.18.59:22-139.178.68.195:50920.service: Deactivated successfully. Jan 30 13:54:07.848444 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:54:07.850442 systemd-logind[1873]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:54:07.851826 systemd-logind[1873]: Removed session 21. Jan 30 13:54:12.875762 systemd[1]: Started sshd@21-172.31.18.59:22-139.178.68.195:50922.service - OpenSSH per-connection server daemon (139.178.68.195:50922). Jan 30 13:54:13.038748 sshd[4897]: Accepted publickey for core from 139.178.68.195 port 50922 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:13.040801 sshd[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:13.050626 systemd-logind[1873]: New session 22 of user core. Jan 30 13:54:13.058962 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:54:13.270939 sshd[4897]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:13.276036 systemd[1]: sshd@21-172.31.18.59:22-139.178.68.195:50922.service: Deactivated successfully. Jan 30 13:54:13.279706 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:54:13.281235 systemd-logind[1873]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:54:13.282806 systemd-logind[1873]: Removed session 22. Jan 30 13:54:18.313259 systemd[1]: Started sshd@22-172.31.18.59:22-139.178.68.195:55220.service - OpenSSH per-connection server daemon (139.178.68.195:55220). Jan 30 13:54:18.474198 sshd[4913]: Accepted publickey for core from 139.178.68.195 port 55220 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:18.476063 sshd[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:18.483296 systemd-logind[1873]: New session 23 of user core. Jan 30 13:54:18.493344 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:54:18.687094 sshd[4913]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:18.692156 systemd-logind[1873]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:54:18.692839 systemd[1]: sshd@22-172.31.18.59:22-139.178.68.195:55220.service: Deactivated successfully. Jan 30 13:54:18.695536 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:54:18.696804 systemd-logind[1873]: Removed session 23. Jan 30 13:54:23.720571 systemd[1]: Started sshd@23-172.31.18.59:22-139.178.68.195:55224.service - OpenSSH per-connection server daemon (139.178.68.195:55224). 
Jan 30 13:54:23.898678 sshd[4925]: Accepted publickey for core from 139.178.68.195 port 55224 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:23.901064 sshd[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:23.911610 systemd-logind[1873]: New session 24 of user core. Jan 30 13:54:23.915297 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 13:54:24.122244 sshd[4925]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:24.126958 systemd[1]: sshd@23-172.31.18.59:22-139.178.68.195:55224.service: Deactivated successfully. Jan 30 13:54:24.129807 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:54:24.130978 systemd-logind[1873]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:54:24.132986 systemd-logind[1873]: Removed session 24. Jan 30 13:54:29.169568 systemd[1]: Started sshd@24-172.31.18.59:22-139.178.68.195:49960.service - OpenSSH per-connection server daemon (139.178.68.195:49960). Jan 30 13:54:29.396737 sshd[4941]: Accepted publickey for core from 139.178.68.195 port 49960 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:29.398667 sshd[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:29.408014 systemd-logind[1873]: New session 25 of user core. Jan 30 13:54:29.411786 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 13:54:29.659359 sshd[4941]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:29.666834 systemd-logind[1873]: Session 25 logged out. Waiting for processes to exit. Jan 30 13:54:29.667803 systemd[1]: sshd@24-172.31.18.59:22-139.178.68.195:49960.service: Deactivated successfully. Jan 30 13:54:29.677965 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:54:29.679626 systemd-logind[1873]: Removed session 25. Jan 30 13:54:29.703971 systemd[1]: Started sshd@25-172.31.18.59:22-139.178.68.195:49968.service - OpenSSH per-connection server daemon (139.178.68.195:49968). Jan 30 13:54:29.892031 sshd[4954]: Accepted publickey for core from 139.178.68.195 port 49968 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:29.895215 sshd[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:29.905577 systemd-logind[1873]: New session 26 of user core. Jan 30 13:54:29.914358 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 13:54:31.940777 containerd[1880]: time="2025-01-30T13:54:31.940719578Z" level=info msg="StopContainer for \"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac\" with timeout 30 (s)" Jan 30 13:54:31.945879 containerd[1880]: time="2025-01-30T13:54:31.945836910Z" level=info msg="Stop container \"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac\" with signal terminated" Jan 30 13:54:31.997858 systemd[1]: run-containerd-runc-k8s.io-a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775-runc.fQvtRP.mount: Deactivated successfully. Jan 30 13:54:32.009325 systemd[1]: cri-containerd-f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac.scope: Deactivated successfully. 
Jan 30 13:54:32.039657 containerd[1880]: time="2025-01-30T13:54:32.039505490Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:54:32.052971 containerd[1880]: time="2025-01-30T13:54:32.052823926Z" level=info msg="StopContainer for \"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775\" with timeout 2 (s)" Jan 30 13:54:32.054975 containerd[1880]: time="2025-01-30T13:54:32.054936927Z" level=info msg="Stop container \"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775\" with signal terminated" Jan 30 13:54:32.061217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac-rootfs.mount: Deactivated successfully. Jan 30 13:54:32.067713 systemd-networkd[1724]: lxc_health: Link DOWN Jan 30 13:54:32.067722 systemd-networkd[1724]: lxc_health: Lost carrier Jan 30 13:54:32.084535 containerd[1880]: time="2025-01-30T13:54:32.084016698Z" level=info msg="shim disconnected" id=f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac namespace=k8s.io Jan 30 13:54:32.084535 containerd[1880]: time="2025-01-30T13:54:32.084098973Z" level=warning msg="cleaning up after shim disconnected" id=f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac namespace=k8s.io Jan 30 13:54:32.084535 containerd[1880]: time="2025-01-30T13:54:32.084116859Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:32.097599 systemd[1]: cri-containerd-a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775.scope: Deactivated successfully. Jan 30 13:54:32.097887 systemd[1]: cri-containerd-a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775.scope: Consumed 8.991s CPU time. Jan 30 13:54:32.137947 containerd[1880]: time="2025-01-30T13:54:32.137896599Z" level=info msg="StopContainer for \"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac\" returns successfully" Jan 30 13:54:32.139835 containerd[1880]: time="2025-01-30T13:54:32.139790104Z" level=info msg="StopPodSandbox for \"156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6\"" Jan 30 13:54:32.139967 containerd[1880]: time="2025-01-30T13:54:32.139861533Z" level=info msg="Container to stop \"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:54:32.144036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6-shm.mount: Deactivated successfully. Jan 30 13:54:32.162209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775-rootfs.mount: Deactivated successfully. Jan 30 13:54:32.166033 systemd[1]: cri-containerd-156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6.scope: Deactivated successfully. 
Jan 30 13:54:32.229896 containerd[1880]: time="2025-01-30T13:54:32.228932528Z" level=info msg="shim disconnected" id=a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775 namespace=k8s.io Jan 30 13:54:32.229896 containerd[1880]: time="2025-01-30T13:54:32.228997020Z" level=warning msg="cleaning up after shim disconnected" id=a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775 namespace=k8s.io Jan 30 13:54:32.229896 containerd[1880]: time="2025-01-30T13:54:32.229009962Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:32.232368 containerd[1880]: time="2025-01-30T13:54:32.231934716Z" level=info msg="shim disconnected" id=156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6 namespace=k8s.io Jan 30 13:54:32.232368 containerd[1880]: time="2025-01-30T13:54:32.232039053Z" level=warning msg="cleaning up after shim disconnected" id=156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6 namespace=k8s.io Jan 30 13:54:32.232368 containerd[1880]: time="2025-01-30T13:54:32.232118477Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:32.261766 containerd[1880]: time="2025-01-30T13:54:32.261588711Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:54:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:54:32.265760 containerd[1880]: time="2025-01-30T13:54:32.265439185Z" level=info msg="StopContainer for \"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775\" returns successfully" Jan 30 13:54:32.267292 containerd[1880]: time="2025-01-30T13:54:32.266380285Z" level=info msg="StopPodSandbox for \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\"" Jan 30 13:54:32.267292 containerd[1880]: time="2025-01-30T13:54:32.266442404Z" level=info msg="Container to stop \"b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:54:32.267292 containerd[1880]: time="2025-01-30T13:54:32.266459528Z" level=info msg="Container to stop \"b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:54:32.267292 containerd[1880]: time="2025-01-30T13:54:32.266493238Z" level=info msg="Container to stop \"72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:54:32.267292 containerd[1880]: time="2025-01-30T13:54:32.266510968Z" level=info msg="Container to stop \"b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:54:32.267292 containerd[1880]: time="2025-01-30T13:54:32.266525404Z" level=info msg="Container to stop \"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:54:32.277514 systemd[1]: cri-containerd-0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70.scope: Deactivated successfully. 
Jan 30 13:54:32.279208 containerd[1880]: time="2025-01-30T13:54:32.279166670Z" level=info msg="TearDown network for sandbox \"156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6\" successfully" Jan 30 13:54:32.279367 containerd[1880]: time="2025-01-30T13:54:32.279349458Z" level=info msg="StopPodSandbox for \"156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6\" returns successfully" Jan 30 13:54:32.320958 containerd[1880]: time="2025-01-30T13:54:32.320859779Z" level=info msg="shim disconnected" id=0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70 namespace=k8s.io Jan 30 13:54:32.321335 containerd[1880]: time="2025-01-30T13:54:32.321291365Z" level=warning msg="cleaning up after shim disconnected" id=0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70 namespace=k8s.io Jan 30 13:54:32.321446 containerd[1880]: time="2025-01-30T13:54:32.321430536Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:32.337997 containerd[1880]: time="2025-01-30T13:54:32.337939967Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:54:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:54:32.339571 containerd[1880]: time="2025-01-30T13:54:32.339427674Z" level=info msg="TearDown network for sandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" successfully" Jan 30 13:54:32.339571 containerd[1880]: time="2025-01-30T13:54:32.339464123Z" level=info msg="StopPodSandbox for \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" returns successfully" Jan 30 13:54:32.378057 kubelet[3351]: I0130 13:54:32.378014 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmdfw\" (UniqueName: \"kubernetes.io/projected/6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0-kube-api-access-rmdfw\") pod \"6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0\" (UID: \"6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0\") " Jan 30 13:54:32.378494 kubelet[3351]: I0130 13:54:32.378099 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0-cilium-config-path\") pod \"6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0\" (UID: \"6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0\") " Jan 30 13:54:32.399773 kubelet[3351]: I0130 13:54:32.395377 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0-kube-api-access-rmdfw" (OuterVolumeSpecName: "kube-api-access-rmdfw") pod "6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0" (UID: "6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0"). InnerVolumeSpecName "kube-api-access-rmdfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:54:32.399773 kubelet[3351]: I0130 13:54:32.395383 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0" (UID: "6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:54:32.478707 kubelet[3351]: I0130 13:54:32.478652 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-bpf-maps\") pod \"0b194f1a-858f-4872-bba7-43886e78b639\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " Jan 30 13:54:32.478707 kubelet[3351]: I0130 13:54:32.478713 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2xtm\" (UniqueName: \"kubernetes.io/projected/0b194f1a-858f-4872-bba7-43886e78b639-kube-api-access-d2xtm\") pod \"0b194f1a-858f-4872-bba7-43886e78b639\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " Jan 30 13:54:32.478924 kubelet[3351]: I0130 13:54:32.478739 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b194f1a-858f-4872-bba7-43886e78b639-clustermesh-secrets\") pod \"0b194f1a-858f-4872-bba7-43886e78b639\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " Jan 30 13:54:32.478924 kubelet[3351]: I0130 13:54:32.478764 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b194f1a-858f-4872-bba7-43886e78b639-hubble-tls\") pod \"0b194f1a-858f-4872-bba7-43886e78b639\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " Jan 30 13:54:32.478924 kubelet[3351]: I0130 13:54:32.478788 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b194f1a-858f-4872-bba7-43886e78b639-cilium-config-path\") pod \"0b194f1a-858f-4872-bba7-43886e78b639\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " Jan 30 13:54:32.478924 kubelet[3351]: I0130 13:54:32.478807 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-cilium-cgroup\") pod \"0b194f1a-858f-4872-bba7-43886e78b639\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " Jan 30 13:54:32.478924 kubelet[3351]: I0130 13:54:32.478826 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-etc-cni-netd\") pod \"0b194f1a-858f-4872-bba7-43886e78b639\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " Jan 30 13:54:32.478924 kubelet[3351]: I0130 13:54:32.478846 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-xtables-lock\") pod \"0b194f1a-858f-4872-bba7-43886e78b639\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " Jan 30 13:54:32.479233 kubelet[3351]: I0130 13:54:32.478865 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-lib-modules\") pod \"0b194f1a-858f-4872-bba7-43886e78b639\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " Jan 30 13:54:32.479233 kubelet[3351]: I0130 13:54:32.478888 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-host-proc-sys-kernel\") pod \"0b194f1a-858f-4872-bba7-43886e78b639\" (UID: 
\"0b194f1a-858f-4872-bba7-43886e78b639\") " Jan 30 13:54:32.479233 kubelet[3351]: I0130 13:54:32.478911 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-host-proc-sys-net\") pod \"0b194f1a-858f-4872-bba7-43886e78b639\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " Jan 30 13:54:32.479233 kubelet[3351]: I0130 13:54:32.478937 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-cilium-run\") pod \"0b194f1a-858f-4872-bba7-43886e78b639\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " Jan 30 13:54:32.479233 kubelet[3351]: I0130 13:54:32.478962 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-hostproc\") pod \"0b194f1a-858f-4872-bba7-43886e78b639\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " Jan 30 13:54:32.479233 kubelet[3351]: I0130 13:54:32.478986 3351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-cni-path\") pod \"0b194f1a-858f-4872-bba7-43886e78b639\" (UID: \"0b194f1a-858f-4872-bba7-43886e78b639\") " Jan 30 13:54:32.479972 kubelet[3351]: I0130 13:54:32.479863 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0b194f1a-858f-4872-bba7-43886e78b639" (UID: "0b194f1a-858f-4872-bba7-43886e78b639"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.479972 kubelet[3351]: I0130 13:54:32.479933 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0b194f1a-858f-4872-bba7-43886e78b639" (UID: "0b194f1a-858f-4872-bba7-43886e78b639"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.481819 kubelet[3351]: I0130 13:54:32.481560 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-cni-path" (OuterVolumeSpecName: "cni-path") pod "0b194f1a-858f-4872-bba7-43886e78b639" (UID: "0b194f1a-858f-4872-bba7-43886e78b639"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.481819 kubelet[3351]: I0130 13:54:32.481610 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0b194f1a-858f-4872-bba7-43886e78b639" (UID: "0b194f1a-858f-4872-bba7-43886e78b639"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.481819 kubelet[3351]: I0130 13:54:32.481632 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0b194f1a-858f-4872-bba7-43886e78b639" (UID: "0b194f1a-858f-4872-bba7-43886e78b639"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.481819 kubelet[3351]: I0130 13:54:32.481655 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0b194f1a-858f-4872-bba7-43886e78b639" (UID: "0b194f1a-858f-4872-bba7-43886e78b639"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.481819 kubelet[3351]: I0130 13:54:32.481679 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0b194f1a-858f-4872-bba7-43886e78b639" (UID: "0b194f1a-858f-4872-bba7-43886e78b639"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.482347 kubelet[3351]: I0130 13:54:32.481701 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0b194f1a-858f-4872-bba7-43886e78b639" (UID: "0b194f1a-858f-4872-bba7-43886e78b639"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.482347 kubelet[3351]: I0130 13:54:32.481722 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-hostproc" (OuterVolumeSpecName: "hostproc") pod "0b194f1a-858f-4872-bba7-43886e78b639" (UID: "0b194f1a-858f-4872-bba7-43886e78b639"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.482347 kubelet[3351]: I0130 13:54:32.481746 3351 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rmdfw\" (UniqueName: \"kubernetes.io/projected/6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0-kube-api-access-rmdfw\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.482347 kubelet[3351]: I0130 13:54:32.481768 3351 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0-cilium-config-path\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.494025 kubelet[3351]: I0130 13:54:32.493969 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b194f1a-858f-4872-bba7-43886e78b639-kube-api-access-d2xtm" (OuterVolumeSpecName: "kube-api-access-d2xtm") pod "0b194f1a-858f-4872-bba7-43886e78b639" (UID: "0b194f1a-858f-4872-bba7-43886e78b639"). InnerVolumeSpecName "kube-api-access-d2xtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:54:32.494628 kubelet[3351]: I0130 13:54:32.494538 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b194f1a-858f-4872-bba7-43886e78b639-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0b194f1a-858f-4872-bba7-43886e78b639" (UID: "0b194f1a-858f-4872-bba7-43886e78b639"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:54:32.500695 kubelet[3351]: I0130 13:54:32.500583 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b194f1a-858f-4872-bba7-43886e78b639-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0b194f1a-858f-4872-bba7-43886e78b639" (UID: "0b194f1a-858f-4872-bba7-43886e78b639"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:54:32.500695 kubelet[3351]: I0130 13:54:32.500655 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0b194f1a-858f-4872-bba7-43886e78b639" (UID: "0b194f1a-858f-4872-bba7-43886e78b639"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.503413 kubelet[3351]: I0130 13:54:32.503364 3351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b194f1a-858f-4872-bba7-43886e78b639-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0b194f1a-858f-4872-bba7-43886e78b639" (UID: "0b194f1a-858f-4872-bba7-43886e78b639"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:54:32.582841 kubelet[3351]: I0130 13:54:32.582629 3351 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-host-proc-sys-net\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.582841 kubelet[3351]: I0130 13:54:32.582668 3351 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-host-proc-sys-kernel\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.582841 kubelet[3351]: I0130 13:54:32.582683 3351 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-hostproc\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.582841 kubelet[3351]: I0130 13:54:32.582697 3351 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-cilium-run\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.582841 kubelet[3351]: I0130 13:54:32.582710 3351 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-cni-path\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.582841 kubelet[3351]: I0130 13:54:32.582722 3351 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-d2xtm\" (UniqueName: \"kubernetes.io/projected/0b194f1a-858f-4872-bba7-43886e78b639-kube-api-access-d2xtm\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.582841 kubelet[3351]: I0130 13:54:32.582735 3351 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b194f1a-858f-4872-bba7-43886e78b639-clustermesh-secrets\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.582841 kubelet[3351]: I0130 13:54:32.582748 3351 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-bpf-maps\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.583335 kubelet[3351]: I0130 13:54:32.582761 3351 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-cilium-cgroup\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.583335 kubelet[3351]: I0130 13:54:32.582772 3351 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b194f1a-858f-4872-bba7-43886e78b639-hubble-tls\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.583335 kubelet[3351]: I0130 13:54:32.582783 3351 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b194f1a-858f-4872-bba7-43886e78b639-cilium-config-path\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.583335 kubelet[3351]: I0130 13:54:32.582793 3351 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-xtables-lock\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.583335 kubelet[3351]: I0130 13:54:32.582803 3351 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-lib-modules\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.583335 kubelet[3351]: I0130 13:54:32.582813 3351 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b194f1a-858f-4872-bba7-43886e78b639-etc-cni-netd\") on node \"ip-172-31-18-59\" DevicePath \"\"" Jan 30 13:54:32.799016 systemd[1]: Removed slice kubepods-besteffort-pod6566b7ce_41ea_4caf_a5d0_dc71ba7fe9c0.slice - libcontainer container kubepods-besteffort-pod6566b7ce_41ea_4caf_a5d0_dc71ba7fe9c0.slice. Jan 30 13:54:32.812526 kubelet[3351]: I0130 13:54:32.812479 3351 scope.go:117] "RemoveContainer" containerID="f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac" Jan 30 13:54:32.832887 containerd[1880]: time="2025-01-30T13:54:32.832818443Z" level=info msg="RemoveContainer for \"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac\"" Jan 30 13:54:32.839374 containerd[1880]: time="2025-01-30T13:54:32.838517842Z" level=info msg="RemoveContainer for \"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac\" returns successfully" Jan 30 13:54:32.840808 kubelet[3351]: I0130 13:54:32.840683 3351 scope.go:117] "RemoveContainer" containerID="f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac" Jan 30 13:54:32.864635 systemd[1]: Removed slice kubepods-burstable-pod0b194f1a_858f_4872_bba7_43886e78b639.slice - libcontainer container kubepods-burstable-pod0b194f1a_858f_4872_bba7_43886e78b639.slice. Jan 30 13:54:32.864774 systemd[1]: kubepods-burstable-pod0b194f1a_858f_4872_bba7_43886e78b639.slice: Consumed 9.098s CPU time. 
Jan 30 13:54:32.869156 containerd[1880]: time="2025-01-30T13:54:32.852866528Z" level=error msg="ContainerStatus for \"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac\": not found" Jan 30 13:54:32.892035 kubelet[3351]: E0130 13:54:32.891972 3351 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac\": not found" containerID="f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac" Jan 30 13:54:32.892235 kubelet[3351]: I0130 13:54:32.892053 3351 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac"} err="failed to get container status \"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"f422512777c0dbbff23ee94d15c31ad66721b4bc4e486500b03da447ddd687ac\": not found" Jan 30 13:54:32.892286 kubelet[3351]: I0130 13:54:32.892238 3351 scope.go:117] "RemoveContainer" containerID="a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775" Jan 30 13:54:32.898613 containerd[1880]: time="2025-01-30T13:54:32.898349843Z" level=info msg="RemoveContainer for \"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775\"" Jan 30 13:54:32.905469 containerd[1880]: time="2025-01-30T13:54:32.905413775Z" level=info msg="RemoveContainer for \"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775\" returns successfully" Jan 30 13:54:32.905914 kubelet[3351]: I0130 13:54:32.905895 3351 scope.go:117] "RemoveContainer" containerID="b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3" Jan 30 13:54:32.907315 containerd[1880]: time="2025-01-30T13:54:32.907270391Z" level=info msg="RemoveContainer for \"b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3\"" Jan 30 13:54:32.914322 containerd[1880]: time="2025-01-30T13:54:32.914274003Z" level=info msg="RemoveContainer for \"b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3\" returns successfully" Jan 30 13:54:32.914740 kubelet[3351]: I0130 13:54:32.914611 3351 scope.go:117] "RemoveContainer" containerID="72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215" Jan 30 13:54:32.916120 containerd[1880]: time="2025-01-30T13:54:32.916047213Z" level=info msg="RemoveContainer for \"72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215\"" Jan 30 13:54:32.922410 containerd[1880]: time="2025-01-30T13:54:32.922351295Z" level=info msg="RemoveContainer for \"72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215\" returns successfully" Jan 30 13:54:32.922961 kubelet[3351]: I0130 13:54:32.922870 3351 scope.go:117] "RemoveContainer" containerID="b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a" Jan 30 13:54:32.924206 containerd[1880]: time="2025-01-30T13:54:32.924182949Z" level=info msg="RemoveContainer for \"b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a\"" Jan 30 13:54:32.928975 containerd[1880]: time="2025-01-30T13:54:32.928854986Z" level=info msg="RemoveContainer for \"b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a\" returns successfully" Jan 30 13:54:32.929562 kubelet[3351]: I0130 13:54:32.929475 
3351 scope.go:117] "RemoveContainer" containerID="b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade" Jan 30 13:54:32.931096 containerd[1880]: time="2025-01-30T13:54:32.930804515Z" level=info msg="RemoveContainer for \"b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade\"" Jan 30 13:54:32.935660 containerd[1880]: time="2025-01-30T13:54:32.935622299Z" level=info msg="RemoveContainer for \"b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade\" returns successfully" Jan 30 13:54:32.935949 kubelet[3351]: I0130 13:54:32.935922 3351 scope.go:117] "RemoveContainer" containerID="a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775" Jan 30 13:54:32.936276 containerd[1880]: time="2025-01-30T13:54:32.936203380Z" level=error msg="ContainerStatus for \"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775\": not found" Jan 30 13:54:32.937014 kubelet[3351]: E0130 13:54:32.936642 3351 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775\": not found" containerID="a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775" Jan 30 13:54:32.937014 kubelet[3351]: I0130 13:54:32.936681 3351 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775"} err="failed to get container status \"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775\": rpc error: code = NotFound desc = an error occurred when try to find container \"a22d11a6448e0bfbbc93ac0d78e959921aead9de3f7ff0a0e98f7007328fa775\": not found" Jan 30 13:54:32.937014 kubelet[3351]: I0130 13:54:32.936706 3351 scope.go:117] "RemoveContainer" containerID="b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3" Jan 30 13:54:32.937176 containerd[1880]: time="2025-01-30T13:54:32.936956642Z" level=error msg="ContainerStatus for \"b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3\": not found" Jan 30 13:54:32.937341 kubelet[3351]: E0130 13:54:32.937315 3351 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3\": not found" containerID="b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3" Jan 30 13:54:32.937401 kubelet[3351]: I0130 13:54:32.937345 3351 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3"} err="failed to get container status \"b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0378650de549879a65d797ca758b71b0c22dd236926c651127acd6edaac18d3\": not found" Jan 30 13:54:32.937401 kubelet[3351]: I0130 13:54:32.937369 3351 scope.go:117] "RemoveContainer" containerID="72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215" Jan 30 13:54:32.937715 containerd[1880]: 
time="2025-01-30T13:54:32.937645232Z" level=error msg="ContainerStatus for \"72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215\": not found" Jan 30 13:54:32.938145 kubelet[3351]: E0130 13:54:32.938122 3351 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215\": not found" containerID="72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215" Jan 30 13:54:32.938537 kubelet[3351]: I0130 13:54:32.938156 3351 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215"} err="failed to get container status \"72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215\": rpc error: code = NotFound desc = an error occurred when try to find container \"72431f699e0574f6f7d28538cfb5ade3fe369efd3881910eb9395357b71c2215\": not found" Jan 30 13:54:32.938537 kubelet[3351]: I0130 13:54:32.938176 3351 scope.go:117] "RemoveContainer" containerID="b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a" Jan 30 13:54:32.938745 containerd[1880]: time="2025-01-30T13:54:32.938710544Z" level=error msg="ContainerStatus for \"b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a\": not found" Jan 30 13:54:32.938905 kubelet[3351]: E0130 13:54:32.938851 3351 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a\": not found" containerID="b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a" Jan 30 13:54:32.938905 kubelet[3351]: I0130 13:54:32.938892 3351 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a"} err="failed to get container status \"b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b294460e3e928dd05d816b685a64723261475fde78b1381c32a1a76fc3c0e93a\": not found" Jan 30 13:54:32.939374 kubelet[3351]: I0130 13:54:32.938913 3351 scope.go:117] "RemoveContainer" containerID="b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade" Jan 30 13:54:32.939426 containerd[1880]: time="2025-01-30T13:54:32.939270821Z" level=error msg="ContainerStatus for \"b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade\": not found" Jan 30 13:54:32.939591 kubelet[3351]: E0130 13:54:32.939519 3351 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade\": not found" containerID="b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade" Jan 30 13:54:32.939591 kubelet[3351]: I0130 13:54:32.939546 3351 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade"} err="failed to get container status \"b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9cb1688434ab6d2434b4884d922abca005effa5a1a17afcd7ca9b1c202c3ade\": not found" Jan 30 13:54:32.985302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6-rootfs.mount: Deactivated successfully. Jan 30 13:54:32.985429 systemd[1]: var-lib-kubelet-pods-6566b7ce\x2d41ea\x2d4caf\x2da5d0\x2ddc71ba7fe9c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drmdfw.mount: Deactivated successfully. Jan 30 13:54:32.985523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70-rootfs.mount: Deactivated successfully. Jan 30 13:54:32.985607 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70-shm.mount: Deactivated successfully. Jan 30 13:54:32.985692 systemd[1]: var-lib-kubelet-pods-0b194f1a\x2d858f\x2d4872\x2dbba7\x2d43886e78b639-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd2xtm.mount: Deactivated successfully. Jan 30 13:54:32.985789 systemd[1]: var-lib-kubelet-pods-0b194f1a\x2d858f\x2d4872\x2dbba7\x2d43886e78b639-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 13:54:32.985870 systemd[1]: var-lib-kubelet-pods-0b194f1a\x2d858f\x2d4872\x2dbba7\x2d43886e78b639-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 13:54:33.860261 sshd[4954]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:33.868869 systemd-logind[1873]: Session 26 logged out. Waiting for processes to exit. Jan 30 13:54:33.869948 systemd[1]: sshd@25-172.31.18.59:22-139.178.68.195:49968.service: Deactivated successfully. Jan 30 13:54:33.873048 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 13:54:33.874775 systemd-logind[1873]: Removed session 26. Jan 30 13:54:33.898796 systemd[1]: Started sshd@26-172.31.18.59:22-139.178.68.195:49974.service - OpenSSH per-connection server daemon (139.178.68.195:49974). Jan 30 13:54:34.069109 sshd[5113]: Accepted publickey for core from 139.178.68.195 port 49974 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:34.071817 sshd[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:34.084876 systemd-logind[1873]: New session 27 of user core. Jan 30 13:54:34.093734 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 30 13:54:34.246701 kubelet[3351]: I0130 13:54:34.245750 3351 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b194f1a-858f-4872-bba7-43886e78b639" path="/var/lib/kubelet/pods/0b194f1a-858f-4872-bba7-43886e78b639/volumes" Jan 30 13:54:34.249116 kubelet[3351]: I0130 13:54:34.248824 3351 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0" path="/var/lib/kubelet/pods/6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0/volumes" Jan 30 13:54:34.281953 ntpd[1857]: Deleting interface #10 lxc_health, fe80::dcd0:96ff:fedc:7a62%8#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs Jan 30 13:54:34.282716 ntpd[1857]: 30 Jan 13:54:34 ntpd[1857]: Deleting interface #10 lxc_health, fe80::dcd0:96ff:fedc:7a62%8#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs Jan 30 13:54:35.182243 sshd[5113]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:35.188148 systemd[1]: sshd@26-172.31.18.59:22-139.178.68.195:49974.service: Deactivated successfully. Jan 30 13:54:35.195424 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 13:54:35.201408 systemd-logind[1873]: Session 27 logged out. Waiting for processes to exit. Jan 30 13:54:35.224873 kubelet[3351]: E0130 13:54:35.224825 3351 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b194f1a-858f-4872-bba7-43886e78b639" containerName="clean-cilium-state" Jan 30 13:54:35.224873 kubelet[3351]: E0130 13:54:35.224869 3351 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b194f1a-858f-4872-bba7-43886e78b639" containerName="cilium-agent" Jan 30 13:54:35.224873 kubelet[3351]: E0130 13:54:35.224880 3351 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b194f1a-858f-4872-bba7-43886e78b639" containerName="mount-cgroup" Jan 30 13:54:35.225160 kubelet[3351]: E0130 13:54:35.224888 3351 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b194f1a-858f-4872-bba7-43886e78b639" containerName="mount-bpf-fs" Jan 30 13:54:35.225160 kubelet[3351]: E0130 13:54:35.224896 3351 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b194f1a-858f-4872-bba7-43886e78b639" containerName="apply-sysctl-overwrites" Jan 30 13:54:35.225160 kubelet[3351]: E0130 13:54:35.224904 3351 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0" containerName="cilium-operator" Jan 30 13:54:35.226085 systemd[1]: Started sshd@27-172.31.18.59:22-139.178.68.195:54770.service - OpenSSH per-connection server daemon (139.178.68.195:54770). Jan 30 13:54:35.231115 systemd-logind[1873]: Removed session 27. Jan 30 13:54:35.247471 kubelet[3351]: I0130 13:54:35.247327 3351 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b194f1a-858f-4872-bba7-43886e78b639" containerName="cilium-agent" Jan 30 13:54:35.247891 kubelet[3351]: I0130 13:54:35.247480 3351 memory_manager.go:354] "RemoveStaleState removing state" podUID="6566b7ce-41ea-4caf-a5d0-dc71ba7fe9c0" containerName="cilium-operator" Jan 30 13:54:35.293591 systemd[1]: Created slice kubepods-burstable-pod870acc34_cfeb_4829_81c2_be36dc66faa9.slice - libcontainer container kubepods-burstable-pod870acc34_cfeb_4829_81c2_be36dc66faa9.slice. 
Jan 30 13:54:35.322006 kubelet[3351]: I0130 13:54:35.319944 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/870acc34-cfeb-4829-81c2-be36dc66faa9-cilium-run\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.322006 kubelet[3351]: I0130 13:54:35.319999 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/870acc34-cfeb-4829-81c2-be36dc66faa9-hostproc\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.322006 kubelet[3351]: I0130 13:54:35.320027 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/870acc34-cfeb-4829-81c2-be36dc66faa9-hubble-tls\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.322006 kubelet[3351]: I0130 13:54:35.320057 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/870acc34-cfeb-4829-81c2-be36dc66faa9-bpf-maps\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.322006 kubelet[3351]: I0130 13:54:35.320095 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/870acc34-cfeb-4829-81c2-be36dc66faa9-etc-cni-netd\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.322006 kubelet[3351]: I0130 13:54:35.320120 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/870acc34-cfeb-4829-81c2-be36dc66faa9-lib-modules\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.322473 kubelet[3351]: I0130 13:54:35.320147 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/870acc34-cfeb-4829-81c2-be36dc66faa9-cilium-ipsec-secrets\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.322473 kubelet[3351]: I0130 13:54:35.320174 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/870acc34-cfeb-4829-81c2-be36dc66faa9-cilium-cgroup\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.322473 kubelet[3351]: I0130 13:54:35.320199 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/870acc34-cfeb-4829-81c2-be36dc66faa9-host-proc-sys-net\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.322473 kubelet[3351]: I0130 13:54:35.320224 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/870acc34-cfeb-4829-81c2-be36dc66faa9-host-proc-sys-kernel\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.322473 kubelet[3351]: I0130 13:54:35.320249 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/870acc34-cfeb-4829-81c2-be36dc66faa9-xtables-lock\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.322701 kubelet[3351]: I0130 13:54:35.320276 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/870acc34-cfeb-4829-81c2-be36dc66faa9-clustermesh-secrets\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.322701 kubelet[3351]: I0130 13:54:35.320302 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/870acc34-cfeb-4829-81c2-be36dc66faa9-cni-path\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.322701 kubelet[3351]: I0130 13:54:35.320329 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/870acc34-cfeb-4829-81c2-be36dc66faa9-cilium-config-path\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.322701 kubelet[3351]: I0130 13:54:35.320441 3351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x9fn\" (UniqueName: \"kubernetes.io/projected/870acc34-cfeb-4829-81c2-be36dc66faa9-kube-api-access-8x9fn\") pod \"cilium-ln729\" (UID: \"870acc34-cfeb-4829-81c2-be36dc66faa9\") " pod="kube-system/cilium-ln729"
Jan 30 13:54:35.402615 kubelet[3351]: E0130 13:54:35.402560 3351 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 13:54:35.436655 sshd[5125]: Accepted publickey for core from 139.178.68.195 port 54770 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:54:35.466630 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:54:35.512687 systemd-logind[1873]: New session 28 of user core.
Jan 30 13:54:35.522384 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 13:54:35.613574 containerd[1880]: time="2025-01-30T13:54:35.613244256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ln729,Uid:870acc34-cfeb-4829-81c2-be36dc66faa9,Namespace:kube-system,Attempt:0,}"
Jan 30 13:54:35.645405 sshd[5125]: pam_unix(sshd:session): session closed for user core
Jan 30 13:54:35.651546 systemd[1]: sshd@27-172.31.18.59:22-139.178.68.195:54770.service: Deactivated successfully.
Jan 30 13:54:35.653554 containerd[1880]: time="2025-01-30T13:54:35.649595655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:54:35.653694 containerd[1880]: time="2025-01-30T13:54:35.653625479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:54:35.653746 containerd[1880]: time="2025-01-30T13:54:35.653694360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:54:35.654213 containerd[1880]: time="2025-01-30T13:54:35.654163314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:54:35.656364 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 13:54:35.659396 systemd-logind[1873]: Session 28 logged out. Waiting for processes to exit.
Jan 30 13:54:35.662495 systemd-logind[1873]: Removed session 28.
Jan 30 13:54:35.684404 systemd[1]: Started cri-containerd-dca55762996515d65a8d20c18b3af67686719cef39b8592db61f8d6d16aa7484.scope - libcontainer container dca55762996515d65a8d20c18b3af67686719cef39b8592db61f8d6d16aa7484.
Jan 30 13:54:35.691206 systemd[1]: Started sshd@28-172.31.18.59:22-139.178.68.195:54784.service - OpenSSH per-connection server daemon (139.178.68.195:54784).
Jan 30 13:54:35.727927 containerd[1880]: time="2025-01-30T13:54:35.727876576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ln729,Uid:870acc34-cfeb-4829-81c2-be36dc66faa9,Namespace:kube-system,Attempt:0,} returns sandbox id \"dca55762996515d65a8d20c18b3af67686719cef39b8592db61f8d6d16aa7484\""
Jan 30 13:54:35.733475 containerd[1880]: time="2025-01-30T13:54:35.733441542Z" level=info msg="CreateContainer within sandbox \"dca55762996515d65a8d20c18b3af67686719cef39b8592db61f8d6d16aa7484\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 13:54:35.814489 containerd[1880]: time="2025-01-30T13:54:35.814436884Z" level=info msg="CreateContainer within sandbox \"dca55762996515d65a8d20c18b3af67686719cef39b8592db61f8d6d16aa7484\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c66943cae77b5310ce34223099cdf74b19f02ddd54542d829c4ec08d39431e91\""
Jan 30 13:54:35.815673 containerd[1880]: time="2025-01-30T13:54:35.815447372Z" level=info msg="StartContainer for \"c66943cae77b5310ce34223099cdf74b19f02ddd54542d829c4ec08d39431e91\""
Jan 30 13:54:35.852473 systemd[1]: Started cri-containerd-c66943cae77b5310ce34223099cdf74b19f02ddd54542d829c4ec08d39431e91.scope - libcontainer container c66943cae77b5310ce34223099cdf74b19f02ddd54542d829c4ec08d39431e91.
Jan 30 13:54:35.872463 sshd[5169]: Accepted publickey for core from 139.178.68.195 port 54784 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:54:35.882465 sshd[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:54:35.893273 systemd-logind[1873]: New session 29 of user core.
Jan 30 13:54:35.912289 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 30 13:54:35.940411 containerd[1880]: time="2025-01-30T13:54:35.940263203Z" level=info msg="StartContainer for \"c66943cae77b5310ce34223099cdf74b19f02ddd54542d829c4ec08d39431e91\" returns successfully"
Jan 30 13:54:35.962617 systemd[1]: cri-containerd-c66943cae77b5310ce34223099cdf74b19f02ddd54542d829c4ec08d39431e91.scope: Deactivated successfully.
Jan 30 13:54:36.061364 containerd[1880]: time="2025-01-30T13:54:36.061118891Z" level=info msg="shim disconnected" id=c66943cae77b5310ce34223099cdf74b19f02ddd54542d829c4ec08d39431e91 namespace=k8s.io
Jan 30 13:54:36.061364 containerd[1880]: time="2025-01-30T13:54:36.061365941Z" level=warning msg="cleaning up after shim disconnected" id=c66943cae77b5310ce34223099cdf74b19f02ddd54542d829c4ec08d39431e91 namespace=k8s.io
Jan 30 13:54:36.061661 containerd[1880]: time="2025-01-30T13:54:36.061382399Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:54:36.876217 containerd[1880]: time="2025-01-30T13:54:36.873790304Z" level=info msg="CreateContainer within sandbox \"dca55762996515d65a8d20c18b3af67686719cef39b8592db61f8d6d16aa7484\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 13:54:36.962481 containerd[1880]: time="2025-01-30T13:54:36.962427283Z" level=info msg="CreateContainer within sandbox \"dca55762996515d65a8d20c18b3af67686719cef39b8592db61f8d6d16aa7484\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cc765defdcc99dbbd2c617dc32889edf427bdc46671dd2d8d77f276e7703eae9\""
Jan 30 13:54:36.963671 containerd[1880]: time="2025-01-30T13:54:36.963635574Z" level=info msg="StartContainer for \"cc765defdcc99dbbd2c617dc32889edf427bdc46671dd2d8d77f276e7703eae9\""
Jan 30 13:54:37.007320 systemd[1]: Started cri-containerd-cc765defdcc99dbbd2c617dc32889edf427bdc46671dd2d8d77f276e7703eae9.scope - libcontainer container cc765defdcc99dbbd2c617dc32889edf427bdc46671dd2d8d77f276e7703eae9.
Jan 30 13:54:37.045791 containerd[1880]: time="2025-01-30T13:54:37.045748190Z" level=info msg="StartContainer for \"cc765defdcc99dbbd2c617dc32889edf427bdc46671dd2d8d77f276e7703eae9\" returns successfully"
Jan 30 13:54:37.063362 systemd[1]: cri-containerd-cc765defdcc99dbbd2c617dc32889edf427bdc46671dd2d8d77f276e7703eae9.scope: Deactivated successfully.
Jan 30 13:54:37.099388 containerd[1880]: time="2025-01-30T13:54:37.099313635Z" level=info msg="shim disconnected" id=cc765defdcc99dbbd2c617dc32889edf427bdc46671dd2d8d77f276e7703eae9 namespace=k8s.io
Jan 30 13:54:37.099388 containerd[1880]: time="2025-01-30T13:54:37.099378638Z" level=warning msg="cleaning up after shim disconnected" id=cc765defdcc99dbbd2c617dc32889edf427bdc46671dd2d8d77f276e7703eae9 namespace=k8s.io
Jan 30 13:54:37.099828 containerd[1880]: time="2025-01-30T13:54:37.099391040Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:54:37.433704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc765defdcc99dbbd2c617dc32889edf427bdc46671dd2d8d77f276e7703eae9-rootfs.mount: Deactivated successfully.
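Each Cilium init container above follows the same arc: CreateContainer, StartContainer, the transient cri-containerd-*.scope unit deactivating when the process exits, then the shim disconnect and rootfs unmount. A minimal sketch of inspecting those same tasks with the containerd Go client follows; the socket path and the k8s.io namespace appear in this log, everything else is an assumption for illustration.

// Sketch: list CRI-managed containers and their task state via the
// containerd native client, mirroring the lifecycle logged above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace seen in
	// the shim messages above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		task, err := c.Task(ctx, nil)
		if err != nil {
			// No live task: the shim has already exited, as after the
			// "shim disconnected" entries above.
			fmt.Printf("%s: no task (%v)\n", c.ID(), err)
			continue
		}
		st, err := task.Status(ctx)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %s\n", c.ID(), st.Status)
	}
}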
Jan 30 13:54:37.898864 containerd[1880]: time="2025-01-30T13:54:37.898821628Z" level=info msg="CreateContainer within sandbox \"dca55762996515d65a8d20c18b3af67686719cef39b8592db61f8d6d16aa7484\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 13:54:37.958622 containerd[1880]: time="2025-01-30T13:54:37.958564619Z" level=info msg="CreateContainer within sandbox \"dca55762996515d65a8d20c18b3af67686719cef39b8592db61f8d6d16aa7484\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0b5c7044e3a567455b127bfb2afe503b5333a2189ad611abd26710e96a375e86\""
Jan 30 13:54:37.959402 containerd[1880]: time="2025-01-30T13:54:37.959346207Z" level=info msg="StartContainer for \"0b5c7044e3a567455b127bfb2afe503b5333a2189ad611abd26710e96a375e86\""
Jan 30 13:54:38.008342 systemd[1]: Started cri-containerd-0b5c7044e3a567455b127bfb2afe503b5333a2189ad611abd26710e96a375e86.scope - libcontainer container 0b5c7044e3a567455b127bfb2afe503b5333a2189ad611abd26710e96a375e86.
Jan 30 13:54:38.188836 containerd[1880]: time="2025-01-30T13:54:38.188332248Z" level=info msg="StartContainer for \"0b5c7044e3a567455b127bfb2afe503b5333a2189ad611abd26710e96a375e86\" returns successfully"
Jan 30 13:54:38.216231 systemd[1]: cri-containerd-0b5c7044e3a567455b127bfb2afe503b5333a2189ad611abd26710e96a375e86.scope: Deactivated successfully.
Jan 30 13:54:38.302580 containerd[1880]: time="2025-01-30T13:54:38.302507916Z" level=info msg="shim disconnected" id=0b5c7044e3a567455b127bfb2afe503b5333a2189ad611abd26710e96a375e86 namespace=k8s.io
Jan 30 13:54:38.302580 containerd[1880]: time="2025-01-30T13:54:38.302572945Z" level=warning msg="cleaning up after shim disconnected" id=0b5c7044e3a567455b127bfb2afe503b5333a2189ad611abd26710e96a375e86 namespace=k8s.io
Jan 30 13:54:38.302580 containerd[1880]: time="2025-01-30T13:54:38.302584028Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:54:38.330053 containerd[1880]: time="2025-01-30T13:54:38.329995965Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:54:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 13:54:38.433688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b5c7044e3a567455b127bfb2afe503b5333a2189ad611abd26710e96a375e86-rootfs.mount: Deactivated successfully.
Jan 30 13:54:38.894934 containerd[1880]: time="2025-01-30T13:54:38.894890469Z" level=info msg="CreateContainer within sandbox \"dca55762996515d65a8d20c18b3af67686719cef39b8592db61f8d6d16aa7484\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 13:54:38.922437 containerd[1880]: time="2025-01-30T13:54:38.922388502Z" level=info msg="CreateContainer within sandbox \"dca55762996515d65a8d20c18b3af67686719cef39b8592db61f8d6d16aa7484\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0c6449ebb0918739bec9d7ae9d9c18d9db5e55e020751dc22a485c2397734567\""
Jan 30 13:54:38.933565 containerd[1880]: time="2025-01-30T13:54:38.930368296Z" level=info msg="StartContainer for \"0c6449ebb0918739bec9d7ae9d9c18d9db5e55e020751dc22a485c2397734567\""
Jan 30 13:54:38.978341 systemd[1]: Started cri-containerd-0c6449ebb0918739bec9d7ae9d9c18d9db5e55e020751dc22a485c2397734567.scope - libcontainer container 0c6449ebb0918739bec9d7ae9d9c18d9db5e55e020751dc22a485c2397734567.
Jan 30 13:54:39.009392 systemd[1]: cri-containerd-0c6449ebb0918739bec9d7ae9d9c18d9db5e55e020751dc22a485c2397734567.scope: Deactivated successfully.
Jan 30 13:54:39.011574 containerd[1880]: time="2025-01-30T13:54:39.011241923Z" level=info msg="StartContainer for \"0c6449ebb0918739bec9d7ae9d9c18d9db5e55e020751dc22a485c2397734567\" returns successfully"
Jan 30 13:54:39.057466 containerd[1880]: time="2025-01-30T13:54:39.057394572Z" level=info msg="shim disconnected" id=0c6449ebb0918739bec9d7ae9d9c18d9db5e55e020751dc22a485c2397734567 namespace=k8s.io
Jan 30 13:54:39.057466 containerd[1880]: time="2025-01-30T13:54:39.057458588Z" level=warning msg="cleaning up after shim disconnected" id=0c6449ebb0918739bec9d7ae9d9c18d9db5e55e020751dc22a485c2397734567 namespace=k8s.io
Jan 30 13:54:39.057466 containerd[1880]: time="2025-01-30T13:54:39.057469881Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:54:39.433996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c6449ebb0918739bec9d7ae9d9c18d9db5e55e020751dc22a485c2397734567-rootfs.mount: Deactivated successfully.
Jan 30 13:54:39.906921 containerd[1880]: time="2025-01-30T13:54:39.906862134Z" level=info msg="CreateContainer within sandbox \"dca55762996515d65a8d20c18b3af67686719cef39b8592db61f8d6d16aa7484\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 13:54:39.940631 containerd[1880]: time="2025-01-30T13:54:39.940583394Z" level=info msg="CreateContainer within sandbox \"dca55762996515d65a8d20c18b3af67686719cef39b8592db61f8d6d16aa7484\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"41ad03bb5ec656c3be76a99c213214470e5b3bd5c9159a3388332bab98e4fb96\""
Jan 30 13:54:39.945267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4138182365.mount: Deactivated successfully.
Jan 30 13:54:39.947368 containerd[1880]: time="2025-01-30T13:54:39.945367059Z" level=info msg="StartContainer for \"41ad03bb5ec656c3be76a99c213214470e5b3bd5c9159a3388332bab98e4fb96\""
Jan 30 13:54:39.997293 systemd[1]: Started cri-containerd-41ad03bb5ec656c3be76a99c213214470e5b3bd5c9159a3388332bab98e4fb96.scope - libcontainer container 41ad03bb5ec656c3be76a99c213214470e5b3bd5c9159a3388332bab98e4fb96.
Jan 30 13:54:40.110097 containerd[1880]: time="2025-01-30T13:54:40.109932281Z" level=info msg="StartContainer for \"41ad03bb5ec656c3be76a99c213214470e5b3bd5c9159a3388332bab98e4fb96\" returns successfully"
Jan 30 13:54:40.434345 systemd[1]: run-containerd-runc-k8s.io-41ad03bb5ec656c3be76a99c213214470e5b3bd5c9159a3388332bab98e4fb96-runc.VGUJLq.mount: Deactivated successfully.
Jan 30 13:54:40.881181 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 30 13:54:44.587827 (udev-worker)[5987]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:54:44.589017 (udev-worker)[5986]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:54:44.595691 systemd-networkd[1724]: lxc_health: Link UP
Jan 30 13:54:44.611778 systemd-networkd[1724]: lxc_health: Gained carrier
Jan 30 13:54:45.069758 systemd[1]: run-containerd-runc-k8s.io-41ad03bb5ec656c3be76a99c213214470e5b3bd5c9159a3388332bab98e4fb96-runc.GKP5px.mount: Deactivated successfully.
Jan 30 13:54:45.663300 kubelet[3351]: I0130 13:54:45.657598 3351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ln729" podStartSLOduration=10.657575119 podStartE2EDuration="10.657575119s" podCreationTimestamp="2025-01-30 13:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:54:41.00279952 +0000 UTC m=+111.010498626" watchObservedRunningTime="2025-01-30 13:54:45.657575119 +0000 UTC m=+115.665274223"
Jan 30 13:54:46.031640 systemd-networkd[1724]: lxc_health: Gained IPv6LL
Jan 30 13:54:47.380846 systemd[1]: run-containerd-runc-k8s.io-41ad03bb5ec656c3be76a99c213214470e5b3bd5c9159a3388332bab98e4fb96-runc.oIUZmV.mount: Deactivated successfully.
Jan 30 13:54:48.282163 ntpd[1857]: Listen normally on 13 lxc_health [fe80::a084:8fff:fe4d:7850%14]:123
Jan 30 13:54:48.282706 ntpd[1857]: 30 Jan 13:54:48 ntpd[1857]: Listen normally on 13 lxc_health [fe80::a084:8fff:fe4d:7850%14]:123
Jan 30 13:54:49.762266 sshd[5169]: pam_unix(sshd:session): session closed for user core
Jan 30 13:54:49.770544 systemd[1]: sshd@28-172.31.18.59:22-139.178.68.195:54784.service: Deactivated successfully.
Jan 30 13:54:49.775968 systemd[1]: session-29.scope: Deactivated successfully.
Jan 30 13:54:49.783729 systemd-logind[1873]: Session 29 logged out. Waiting for processes to exit.
Jan 30 13:54:49.798068 systemd-logind[1873]: Removed session 29.
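The "Observed pod startup duration" entry above derives podStartSLOduration as watchObservedRunningTime minus podCreationTimestamp (the zero-valued firstStartedPulling/lastFinishedPulling timestamps indicate no image pull was recorded). A small Go check of that arithmetic, with both timestamps copied from the entry:

// Check: podStartSLOduration = watchObservedRunningTime - podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the default time.Time string form used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-01-30 13:54:35 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-01-30 13:54:45.657575119 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 10.657575119s, matching podStartE2EDuration in the entry.
	fmt.Println(observed.Sub(created))
}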
Jan 30 13:54:50.277612 containerd[1880]: time="2025-01-30T13:54:50.277567543Z" level=info msg="StopPodSandbox for \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\""
Jan 30 13:54:50.278324 containerd[1880]: time="2025-01-30T13:54:50.277685876Z" level=info msg="TearDown network for sandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" successfully"
Jan 30 13:54:50.278324 containerd[1880]: time="2025-01-30T13:54:50.277702770Z" level=info msg="StopPodSandbox for \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" returns successfully"
Jan 30 13:54:50.278548 containerd[1880]: time="2025-01-30T13:54:50.278397962Z" level=info msg="RemovePodSandbox for \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\""
Jan 30 13:54:50.278548 containerd[1880]: time="2025-01-30T13:54:50.278487611Z" level=info msg="Forcibly stopping sandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\""
Jan 30 13:54:50.278633 containerd[1880]: time="2025-01-30T13:54:50.278588143Z" level=info msg="TearDown network for sandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" successfully"
Jan 30 13:54:50.286482 containerd[1880]: time="2025-01-30T13:54:50.286161011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 13:54:50.286641 containerd[1880]: time="2025-01-30T13:54:50.286529817Z" level=info msg="RemovePodSandbox \"0cf8d1dab09de915d1ab886d182afe17b3a9d8bc050757900e05ca7c9a4edd70\" returns successfully"
Jan 30 13:54:50.287232 containerd[1880]: time="2025-01-30T13:54:50.287198658Z" level=info msg="StopPodSandbox for \"156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6\""
Jan 30 13:54:50.287378 containerd[1880]: time="2025-01-30T13:54:50.287302598Z" level=info msg="TearDown network for sandbox \"156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6\" successfully"
Jan 30 13:54:50.287378 containerd[1880]: time="2025-01-30T13:54:50.287319927Z" level=info msg="StopPodSandbox for \"156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6\" returns successfully"
Jan 30 13:54:50.287784 containerd[1880]: time="2025-01-30T13:54:50.287753830Z" level=info msg="RemovePodSandbox for \"156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6\""
Jan 30 13:54:50.287860 containerd[1880]: time="2025-01-30T13:54:50.287789463Z" level=info msg="Forcibly stopping sandbox \"156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6\""
Jan 30 13:54:50.287860 containerd[1880]: time="2025-01-30T13:54:50.287850137Z" level=info msg="TearDown network for sandbox \"156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6\" successfully"
Jan 30 13:54:50.295508 containerd[1880]: time="2025-01-30T13:54:50.293901033Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 13:54:50.295508 containerd[1880]: time="2025-01-30T13:54:50.294224621Z" level=info msg="RemovePodSandbox \"156865a345152e09446debc30dd9e7a3486d16062e95d95b73eeb40bf64286f6\" returns successfully"
Jan 30 13:55:04.297917 systemd[1]: cri-containerd-f16e61da3addfe127e63679a84ed7a35cde262e3443de7e2bb4473e04a0cf723.scope: Deactivated successfully.
Jan 30 13:55:04.298886 systemd[1]: cri-containerd-f16e61da3addfe127e63679a84ed7a35cde262e3443de7e2bb4473e04a0cf723.scope: Consumed 3.811s CPU time, 25.6M memory peak, 0B memory swap peak.
Jan 30 13:55:04.325997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f16e61da3addfe127e63679a84ed7a35cde262e3443de7e2bb4473e04a0cf723-rootfs.mount: Deactivated successfully.
Jan 30 13:55:04.350879 containerd[1880]: time="2025-01-30T13:55:04.350760055Z" level=info msg="shim disconnected" id=f16e61da3addfe127e63679a84ed7a35cde262e3443de7e2bb4473e04a0cf723 namespace=k8s.io
Jan 30 13:55:04.350879 containerd[1880]: time="2025-01-30T13:55:04.350872425Z" level=warning msg="cleaning up after shim disconnected" id=f16e61da3addfe127e63679a84ed7a35cde262e3443de7e2bb4473e04a0cf723 namespace=k8s.io
Jan 30 13:55:04.350879 containerd[1880]: time="2025-01-30T13:55:04.350884578Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:55:04.368319 containerd[1880]: time="2025-01-30T13:55:04.368222496Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:55:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 13:55:05.035429 kubelet[3351]: I0130 13:55:05.035375 3351 scope.go:117] "RemoveContainer" containerID="f16e61da3addfe127e63679a84ed7a35cde262e3443de7e2bb4473e04a0cf723"
Jan 30 13:55:05.039617 containerd[1880]: time="2025-01-30T13:55:05.039576000Z" level=info msg="CreateContainer within sandbox \"faf8840955d044d4b8e8d34ab2ce785bf13f92909c0aba4ff42e74826de470b2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 30 13:55:05.075345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2132028296.mount: Deactivated successfully.
Jan 30 13:55:05.078483 containerd[1880]: time="2025-01-30T13:55:05.078432680Z" level=info msg="CreateContainer within sandbox \"faf8840955d044d4b8e8d34ab2ce785bf13f92909c0aba4ff42e74826de470b2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b6fd339bf88108aeba5a37e6d0f1a229f5ee8c862f2bbe0fe32ed6887f59232d\""
Jan 30 13:55:05.079178 containerd[1880]: time="2025-01-30T13:55:05.079013371Z" level=info msg="StartContainer for \"b6fd339bf88108aeba5a37e6d0f1a229f5ee8c862f2bbe0fe32ed6887f59232d\""
Jan 30 13:55:05.131463 systemd[1]: Started cri-containerd-b6fd339bf88108aeba5a37e6d0f1a229f5ee8c862f2bbe0fe32ed6887f59232d.scope - libcontainer container b6fd339bf88108aeba5a37e6d0f1a229f5ee8c862f2bbe0fe32ed6887f59232d.
Jan 30 13:55:05.200964 containerd[1880]: time="2025-01-30T13:55:05.200920116Z" level=info msg="StartContainer for \"b6fd339bf88108aeba5a37e6d0f1a229f5ee8c862f2bbe0fe32ed6887f59232d\" returns successfully"
Jan 30 13:55:10.457453 systemd[1]: cri-containerd-f4ad814c9e70ade4e20169c90ab5ed1ec24232f02ae3db5318ecd23140c1428a.scope: Deactivated successfully.
Jan 30 13:55:10.458429 systemd[1]: cri-containerd-f4ad814c9e70ade4e20169c90ab5ed1ec24232f02ae3db5318ecd23140c1428a.scope: Consumed 1.398s CPU time, 19.0M memory peak, 0B memory swap peak.
Jan 30 13:55:10.520283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4ad814c9e70ade4e20169c90ab5ed1ec24232f02ae3db5318ecd23140c1428a-rootfs.mount: Deactivated successfully.
Jan 30 13:55:10.540411 containerd[1880]: time="2025-01-30T13:55:10.540031661Z" level=info msg="shim disconnected" id=f4ad814c9e70ade4e20169c90ab5ed1ec24232f02ae3db5318ecd23140c1428a namespace=k8s.io
Jan 30 13:55:10.540411 containerd[1880]: time="2025-01-30T13:55:10.540122910Z" level=warning msg="cleaning up after shim disconnected" id=f4ad814c9e70ade4e20169c90ab5ed1ec24232f02ae3db5318ecd23140c1428a namespace=k8s.io
Jan 30 13:55:10.540411 containerd[1880]: time="2025-01-30T13:55:10.540137412Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:55:11.055153 kubelet[3351]: I0130 13:55:11.055120 3351 scope.go:117] "RemoveContainer" containerID="f4ad814c9e70ade4e20169c90ab5ed1ec24232f02ae3db5318ecd23140c1428a"
Jan 30 13:55:11.057675 containerd[1880]: time="2025-01-30T13:55:11.057600414Z" level=info msg="CreateContainer within sandbox \"06bbff7cedd2a0c04553124f40bfbce71117ea7dba3e58cf973a998627bf9e49\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 30 13:55:11.079572 containerd[1880]: time="2025-01-30T13:55:11.079526854Z" level=info msg="CreateContainer within sandbox \"06bbff7cedd2a0c04553124f40bfbce71117ea7dba3e58cf973a998627bf9e49\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"67309ec0159064c05a6dca3c01410ef38a240bb88bf924b7b3a27e1336ee472f\""
Jan 30 13:55:11.080352 containerd[1880]: time="2025-01-30T13:55:11.080316225Z" level=info msg="StartContainer for \"67309ec0159064c05a6dca3c01410ef38a240bb88bf924b7b3a27e1336ee472f\""
Jan 30 13:55:11.125328 systemd[1]: Started cri-containerd-67309ec0159064c05a6dca3c01410ef38a240bb88bf924b7b3a27e1336ee472f.scope - libcontainer container 67309ec0159064c05a6dca3c01410ef38a240bb88bf924b7b3a27e1336ee472f.
Jan 30 13:55:11.181557 containerd[1880]: time="2025-01-30T13:55:11.181508633Z" level=info msg="StartContainer for \"67309ec0159064c05a6dca3c01410ef38a240bb88bf924b7b3a27e1336ee472f\" returns successfully"
Jan 30 13:55:12.749805 kubelet[3351]: E0130 13:55:12.749739 3351 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-59?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
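The closing entry shows the kubelet timing out on the PUT that renews its node Lease, the coordination.k8s.io object named in the URL (namespace kube-node-lease, name ip-172-31-18-59). A hedged client-go sketch of reading that Lease from another machine follows; the kubeconfig path is illustrative, not taken from the log.

// Sketch: read the node Lease whose renewal fails in the final entry.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; substitute whatever the cluster uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same object the kubelet's failing PUT targets:
	// /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-59
	lease, err := clientset.CoordinationV1().Leases("kube-node-lease").
		Get(context.Background(), "ip-172-31-18-59", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// A stale renewTime here is what eventually marks the node NotReady.
	fmt.Println("renewTime:", lease.Spec.RenewTime)
}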