Jan 13 20:55:48.950423 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 20:55:48.950465 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:55:48.950483 kernel: BIOS-provided physical RAM map:
Jan 13 20:55:48.950496 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:55:48.950507 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:55:48.950519 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:55:48.950536 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 13 20:55:48.950550 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 13 20:55:48.950562 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 13 20:55:48.950575 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:55:48.950588 kernel: NX (Execute Disable) protection: active
Jan 13 20:55:48.950601 kernel: APIC: Static calls initialized
Jan 13 20:55:48.950614 kernel: SMBIOS 2.7 present.
Jan 13 20:55:48.950628 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 13 20:55:48.950647 kernel: Hypervisor detected: KVM
Jan 13 20:55:48.950661 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:55:48.950674 kernel: kvm-clock: using sched offset of 7576559850 cycles
Jan 13 20:55:48.950687 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:55:48.950701 kernel: tsc: Detected 2499.996 MHz processor
Jan 13 20:55:48.950714 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:55:48.950727 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:55:48.950743 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 13 20:55:48.950757 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:55:48.950812 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:55:48.950824 kernel: Using GB pages for direct mapping
Jan 13 20:55:48.950834 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:55:48.950848 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 13 20:55:48.950861 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 13 20:55:48.950874 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 20:55:48.950887 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 13 20:55:48.950904 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 13 20:55:48.950915 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 20:55:48.950929 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 20:55:48.950942 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
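The command line logged above is re-echoed by dracut later in this log, and several of its keys (flatcar.first_boot, flatcar.oem.id, verity.usrhash) drive the Ignition and verity-setup activity that follows. As an illustrative aid only, a minimal key=value parser over a shortened copy of the logged string; the helper name is hypothetical:

# Sketch only: split a kernel command line into key=value pairs the way
# user-space tools inspect /proc/cmdline. Bare flags map to True.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

params = parse_cmdline(
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr root=LABEL=ROOT "
    "flatcar.first_boot=detected flatcar.oem.id=ec2 "
    "verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07"
)
print(params["flatcar.oem.id"])   # ec2
print(params["verity.usrhash"])   # pins the dm-verity root hash for /usr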
Jan 13 20:55:48.950954 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 20:55:48.950969 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 13 20:55:48.950984 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 13 20:55:48.950998 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 20:55:48.951013 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 13 20:55:48.951031 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 13 20:55:48.951052 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 13 20:55:48.951069 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 13 20:55:48.951084 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 13 20:55:48.951139 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 13 20:55:48.951155 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 13 20:55:48.951168 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 13 20:55:48.951180 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 13 20:55:48.951194 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 13 20:55:48.951211 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 20:55:48.951224 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 20:55:48.951238 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 13 20:55:48.951254 kernel: NUMA: Initialized distance table, cnt=1
Jan 13 20:55:48.951269 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 13 20:55:48.951288 kernel: Zone ranges:
Jan 13 20:55:48.951305 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:55:48.951322 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 13 20:55:48.951338 kernel: Normal empty
Jan 13 20:55:48.951354 kernel: Movable zone start for each node
Jan 13 20:55:48.951371 kernel: Early memory node ranges
Jan 13 20:55:48.951387 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:55:48.951402 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 13 20:55:48.951418 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 13 20:55:48.951503 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:55:48.951516 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:55:48.951530 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 13 20:55:48.951545 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 20:55:48.951561 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:55:48.951574 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 13 20:55:48.951652 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:55:48.951664 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:55:48.951677 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:55:48.951692 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:55:48.951710 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:55:48.951724 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:55:48.951737 kernel: TSC deadline timer available
Jan 13 20:55:48.951751 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 20:55:48.951782 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:55:48.951795 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 13 20:55:48.951808 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:55:48.951821 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:55:48.951834 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 20:55:48.951851 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 20:55:48.951864 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 20:55:48.951876 kernel: pcpu-alloc: [0] 0 1
Jan 13 20:55:48.951889 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:55:48.951902 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:55:48.951916 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:55:48.951930 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:55:48.951942 kernel: random: crng init done
Jan 13 20:55:48.952067 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:55:48.952085 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 20:55:48.952099 kernel: Fallback order for Node 0: 0
Jan 13 20:55:48.952112 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 13 20:55:48.952125 kernel: Policy zone: DMA32
Jan 13 20:55:48.952138 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:55:48.952151 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved)
Jan 13 20:55:48.952164 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:55:48.952178 kernel: Kernel/User page tables isolation: enabled
Jan 13 20:55:48.952195 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 20:55:48.952208 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:55:48.952269 kernel: Dynamic Preempt: voluntary
Jan 13 20:55:48.952285 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:55:48.952300 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:55:48.952313 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:55:48.952367 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:55:48.952384 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:55:48.952398 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:55:48.952451 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:55:48.952466 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:55:48.952479 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 20:55:48.952492 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:55:48.953474 kernel: Console: colour VGA+ 80x25
Jan 13 20:55:48.953489 kernel: printk: console [ttyS0] enabled
Jan 13 20:55:48.953502 kernel: ACPI: Core revision 20230628
Jan 13 20:55:48.953515 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 13 20:55:48.953528 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:55:48.953546 kernel: x2apic enabled
Jan 13 20:55:48.953559 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:55:48.953582 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 13 20:55:48.953598 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 13 20:55:48.953611 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 20:55:48.953624 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 20:55:48.953638 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:55:48.953651 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:55:48.953664 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:55:48.953677 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:55:48.953708 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 13 20:55:48.953722 kernel: RETBleed: Vulnerable
Jan 13 20:55:48.953736 kernel: Speculative Store Bypass: Vulnerable
Jan 13 20:55:48.953754 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:55:48.953801 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:55:48.953814 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 13 20:55:48.953827 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:55:48.953840 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:55:48.953854 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:55:48.953870 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 20:55:48.953883 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 20:55:48.953897 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 13 20:55:48.953909 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 13 20:55:48.953924 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 13 20:55:48.953938 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 13 20:55:48.953953 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:55:48.955932 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 13 20:55:48.956025 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 13 20:55:48.956045 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 13 20:55:48.956060 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 13 20:55:48.956108 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 13 20:55:48.956123 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 13 20:55:48.956195 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
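The "(skipped)" calibration above means the BogoMIPS figure was derived from the already-measured TSC frequency rather than a timed loop. A quick arithmetic check, assuming CONFIG_HZ=1000 (an assumption, but consistent with lpj here equaling tsc_khz):

HZ = 1000                       # assumed; lpj == tsc_khz in this log implies HZ=1000
lpj = 2499996                   # loops_per_jiffy from the log line above
bogomips = lpj / (500000 / HZ)  # the kernel reports lpj / (500000 / HZ)
print(f"{bogomips:.2f}")        # 4999.99; two CPUs give the 9999.98 total logged below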
Jan 13 20:55:48.956211 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:55:48.956470 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:55:48.956489 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:55:48.956528 kernel: landlock: Up and running.
Jan 13 20:55:48.956544 kernel: SELinux: Initializing.
Jan 13 20:55:48.956558 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:55:48.956575 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:55:48.956616 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 13 20:55:48.956636 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:55:48.956793 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:55:48.956814 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:55:48.956828 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 13 20:55:48.956868 kernel: signal: max sigframe size: 3632
Jan 13 20:55:48.956893 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:55:48.956907 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:55:48.956921 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 20:55:48.956959 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:55:48.956978 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:55:48.956990 kernel: .... node #0, CPUs: #1
Jan 13 20:55:48.957004 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 20:55:48.957045 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 20:55:48.957060 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:55:48.957074 kernel: smpboot: Max logical packages: 1
Jan 13 20:55:48.957131 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 13 20:55:48.957148 kernel: devtmpfs: initialized
Jan 13 20:55:48.957161 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:55:48.957181 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:55:48.957221 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:55:48.957236 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:55:48.957318 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:55:48.957338 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:55:48.957353 kernel: audit: type=2000 audit(1736801748.104:1): state=initialized audit_enabled=0 res=1
Jan 13 20:55:48.957569 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:55:48.957585 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:55:48.957605 kernel: cpuidle: using governor menu
Jan 13 20:55:48.957620 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:55:48.957635 kernel: dca service started, version 1.12.1
Jan 13 20:55:48.957648 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:55:48.957663 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:55:48.957677 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:55:48.957691 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:55:48.957705 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:55:48.957719 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:55:48.957736 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:55:48.957751 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:55:48.957779 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:55:48.957793 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:55:48.957807 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 20:55:48.957821 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:55:48.957835 kernel: ACPI: Interpreter enabled
Jan 13 20:55:48.957849 kernel: ACPI: PM: (supports S0 S5)
Jan 13 20:55:48.957862 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:55:48.957876 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:55:48.957894 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:55:48.957907 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 20:55:48.957921 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:55:48.958128 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:55:48.958416 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 20:55:48.958551 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 20:55:48.958569 kernel: acpiphp: Slot [3] registered
Jan 13 20:55:48.958687 kernel: acpiphp: Slot [4] registered
Jan 13 20:55:48.958701 kernel: acpiphp: Slot [5] registered
Jan 13 20:55:48.958714 kernel: acpiphp: Slot [6] registered
Jan 13 20:55:48.958727 kernel: acpiphp: Slot [7] registered
Jan 13 20:55:48.958740 kernel: acpiphp: Slot [8] registered
Jan 13 20:55:48.958754 kernel: acpiphp: Slot [9] registered
Jan 13 20:55:48.958786 kernel: acpiphp: Slot [10] registered
Jan 13 20:55:48.958800 kernel: acpiphp: Slot [11] registered
Jan 13 20:55:48.958813 kernel: acpiphp: Slot [12] registered
Jan 13 20:55:48.960948 kernel: acpiphp: Slot [13] registered
Jan 13 20:55:48.960968 kernel: acpiphp: Slot [14] registered
Jan 13 20:55:48.960984 kernel: acpiphp: Slot [15] registered
Jan 13 20:55:48.961000 kernel: acpiphp: Slot [16] registered
Jan 13 20:55:48.961016 kernel: acpiphp: Slot [17] registered
Jan 13 20:55:48.961032 kernel: acpiphp: Slot [18] registered
Jan 13 20:55:48.961048 kernel: acpiphp: Slot [19] registered
Jan 13 20:55:48.961064 kernel: acpiphp: Slot [20] registered
Jan 13 20:55:48.961079 kernel: acpiphp: Slot [21] registered
Jan 13 20:55:48.961094 kernel: acpiphp: Slot [22] registered
Jan 13 20:55:48.961114 kernel: acpiphp: Slot [23] registered
Jan 13 20:55:48.961130 kernel: acpiphp: Slot [24] registered
Jan 13 20:55:48.961145 kernel: acpiphp: Slot [25] registered
Jan 13 20:55:48.961161 kernel: acpiphp: Slot [26] registered
Jan 13 20:55:48.961177 kernel: acpiphp: Slot [27] registered
Jan 13 20:55:48.961193 kernel: acpiphp: Slot [28] registered
Jan 13 20:55:48.961209 kernel: acpiphp: Slot [29] registered
Jan 13 20:55:48.961225 kernel: acpiphp: Slot [30] registered
Jan 13 20:55:48.961240 kernel: acpiphp: Slot [31] registered
Jan 13 20:55:48.961306 kernel: PCI host bridge to bus 0000:00
Jan 13 20:55:48.961485 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:55:48.961628 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:55:48.961752 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:55:48.961895 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 13 20:55:48.962019 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:55:48.962292 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 20:55:48.962450 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 20:55:48.962636 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 13 20:55:48.962788 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 20:55:48.962917 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 13 20:55:48.964392 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 13 20:55:48.964556 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 13 20:55:48.965022 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 13 20:55:48.965179 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 13 20:55:48.965400 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 13 20:55:48.965542 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 13 20:55:48.965688 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 13 20:55:48.965934 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 13 20:55:48.966076 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 13 20:55:48.966371 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:55:48.966532 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 20:55:48.966675 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 13 20:55:48.966923 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 20:55:48.967182 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 13 20:55:48.967330 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:55:48.967347 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:55:48.967366 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:55:48.967381 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:55:48.967397 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 20:55:48.967414 kernel: iommu: Default domain type: Translated
Jan 13 20:55:48.967429 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:55:48.967446 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:55:48.967470 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:55:48.967486 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:55:48.967502 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 13 20:55:48.967781 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 13 20:55:48.967945 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 13 20:55:48.968326 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:55:48.968391 kernel: vgaarb: loaded
Jan 13 20:55:48.968409 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 13 20:55:48.968427 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 13 20:55:48.968443 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:55:48.968460 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:55:48.968477 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:55:48.968498 kernel: pnp: PnP ACPI init
Jan 13 20:55:48.968514 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 20:55:48.968530 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:55:48.968547 kernel: NET: Registered PF_INET protocol family
Jan 13 20:55:48.968563 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:55:48.968581 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 20:55:48.968598 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:55:48.968615 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 20:55:48.968790 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 20:55:48.968817 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 20:55:48.968832 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:55:48.968846 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:55:48.968861 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:55:48.968877 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:55:48.969079 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:55:48.969218 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:55:48.969464 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:55:48.969620 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 13 20:55:48.969790 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 20:55:48.969811 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:55:48.969826 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 20:55:48.969841 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 13 20:55:48.969859 kernel: clocksource: Switched to clocksource tsc
Jan 13 20:55:48.969874 kernel: Initialise system trusted keyrings
Jan 13 20:55:48.969891 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 13 20:55:48.969913 kernel: Key type asymmetric registered
Jan 13 20:55:48.969929 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:55:48.969945 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:55:48.969962 kernel: io scheduler mq-deadline registered
Jan 13 20:55:48.969979 kernel: io scheduler kyber registered
Jan 13 20:55:48.969995 kernel: io scheduler bfq registered
Jan 13 20:55:48.970015 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:55:48.970031 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:55:48.970048 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:55:48.970067 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:55:48.970082 kernel: i8042: Warning: Keylock active
Jan 13 20:55:48.970099 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:55:48.970114 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:55:48.970431 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 20:55:48.970739 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 20:55:48.970918 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T20:55:48 UTC (1736801748)
Jan 13 20:55:48.971118 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 20:55:48.971149 kernel: intel_pstate: CPU model not supported
Jan 13 20:55:48.971166 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:55:48.971183 kernel: Segment Routing with IPv6
Jan 13 20:55:48.971199 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:55:48.971216 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:55:48.971233 kernel: Key type dns_resolver registered
Jan 13 20:55:48.971249 kernel: IPI shorthand broadcast: enabled
Jan 13 20:55:48.971266 kernel: sched_clock: Marking stable (493001505, 219476161)->(780490463, -68012797)
Jan 13 20:55:48.971282 kernel: registered taskstats version 1
Jan 13 20:55:48.971302 kernel: Loading compiled-in X.509 certificates
Jan 13 20:55:48.971319 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 20:55:48.971336 kernel: Key type .fscrypt registered
Jan 13 20:55:48.971353 kernel: Key type fscrypt-provisioning registered
Jan 13 20:55:48.971371 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:55:48.971387 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:55:48.971403 kernel: ima: No architecture policies found
Jan 13 20:55:48.971420 kernel: clk: Disabling unused clocks
Jan 13 20:55:48.971436 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 13 20:55:48.971464 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 20:55:48.971482 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 13 20:55:48.971499 kernel: Run /init as init process
Jan 13 20:55:48.971515 kernel: with arguments:
Jan 13 20:55:48.971532 kernel: /init
Jan 13 20:55:48.971548 kernel: with environment:
Jan 13 20:55:48.971564 kernel: HOME=/
Jan 13 20:55:48.971580 kernel: TERM=linux
Jan 13 20:55:48.971596 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:55:48.971619 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:55:48.971658 systemd[1]: Detected virtualization amazon.
Jan 13 20:55:48.971680 systemd[1]: Detected architecture x86-64.
Jan 13 20:55:48.971697 systemd[1]: Running in initrd.
Jan 13 20:55:48.971714 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:55:48.971735 systemd[1]: Hostname set to <localhost>.
Jan 13 20:55:48.971754 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:55:48.971817 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:55:48.971834 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:55:48.971851 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:55:48.971871 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:55:48.971890 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:55:48.971908 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:55:48.971932 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:55:48.971953 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:55:48.972040 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:55:48.972059 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:55:48.972078 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:55:48.972097 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:55:48.972115 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:55:48.972137 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:55:48.972157 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:55:48.972176 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:55:48.972194 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:55:48.972213 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:55:48.972231 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:55:48.972249 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:55:48.972268 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:55:48.972291 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:55:48.972308 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:55:48.972326 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:55:48.972443 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:55:48.972464 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:55:48.972481 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:55:48.972499 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:55:48.972521 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:55:48.972567 systemd-journald[179]: Collecting audit messages is disabled.
Jan 13 20:55:48.972604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:55:48.972625 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:55:48.972642 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:55:48.972659 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:55:48.972677 systemd-journald[179]: Journal started
Jan 13 20:55:48.972715 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2efa5ba02aad55b91414804bf41e3f) is 4.8M, max 38.6M, 33.7M free.
Jan 13 20:55:48.975903 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:55:48.981877 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:55:48.989138 systemd-modules-load[180]: Inserted module 'overlay'
Jan 13 20:55:49.007802 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:55:49.011134 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
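The rtc_cmos line above pairs a UTC wall-clock time with its Unix epoch (1736801748); the two are consistent, which a quick check confirms:

from datetime import datetime, timezone
# Reproduces "setting system clock to 2025-01-13T20:55:48 UTC (1736801748)".
print(datetime.fromtimestamp(1736801748, tz=timezone.utc).isoformat())
# 2025-01-13T20:55:48+00:00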
Jan 13 20:55:49.015535 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:55:49.024862 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:55:49.025089 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:55:49.163327 kernel: Bridge firewalling registered
Jan 13 20:55:49.026738 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 13 20:55:49.160127 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:55:49.163853 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:55:49.170556 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:55:49.180510 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:55:49.190558 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:55:49.193520 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:55:49.202481 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:55:49.207986 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:55:49.210761 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:55:49.213972 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:55:49.230148 dracut-cmdline[214]: dracut-dracut-053
Jan 13 20:55:49.234502 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:55:49.261021 systemd-resolved[213]: Positive Trust Anchors:
Jan 13 20:55:49.261039 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:55:49.261101 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:55:49.275504 systemd-resolved[213]: Defaulting to hostname 'linux'.
Jan 13 20:55:49.279001 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:55:49.281726 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:55:49.337797 kernel: SCSI subsystem initialized
Jan 13 20:55:49.350803 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:55:49.363796 kernel: iscsi: registered transport (tcp)
Jan 13 20:55:49.386797 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:55:49.386878 kernel: QLogic iSCSI HBA Driver
Jan 13 20:55:49.443733 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:55:49.450931 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:55:49.482303 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:55:49.482378 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:55:49.482392 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:55:49.531814 kernel: raid6: avx512x4 gen() 9812 MB/s
Jan 13 20:55:49.548796 kernel: raid6: avx512x2 gen() 11147 MB/s
Jan 13 20:55:49.565815 kernel: raid6: avx512x1 gen() 13233 MB/s
Jan 13 20:55:49.582798 kernel: raid6: avx2x4 gen() 12846 MB/s
Jan 13 20:55:49.599796 kernel: raid6: avx2x2 gen() 13557 MB/s
Jan 13 20:55:49.616824 kernel: raid6: avx2x1 gen() 10480 MB/s
Jan 13 20:55:49.616898 kernel: raid6: using algorithm avx2x2 gen() 13557 MB/s
Jan 13 20:55:49.635800 kernel: raid6: .... xor() 14043 MB/s, rmw enabled
Jan 13 20:55:49.636493 kernel: raid6: using avx512x2 recovery algorithm
Jan 13 20:55:49.659832 kernel: xor: automatically using best checksumming function avx
Jan 13 20:55:49.887795 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:55:49.907814 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:55:49.922036 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:55:49.968542 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jan 13 20:55:49.977270 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:55:49.988735 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:55:50.019756 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Jan 13 20:55:50.069752 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:55:50.082496 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:55:50.155747 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:55:50.173704 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:55:50.229886 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:55:50.237587 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:55:50.237721 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:55:50.244081 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:55:50.249983 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:55:50.284983 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:55:50.305801 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:55:50.327342 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 20:55:50.327416 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:55:50.334041 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 20:55:50.352034 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 20:55:50.352236 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
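The raid6 lines above show the kernel benchmarking each SIMD implementation and keeping the fastest generator; recovery is benchmarked separately, which is why gen() lands on avx2x2 while recovery uses avx512x2. A toy re-selection over the logged figures, illustrative only:

gen_mbps = {"avx512x4": 9812, "avx512x2": 11147, "avx512x1": 13233,
            "avx2x4": 12846, "avx2x2": 13557, "avx2x1": 10480}
best = max(gen_mbps, key=gen_mbps.get)
print(best, gen_mbps[best])  # avx2x2 13557, matching "using algorithm avx2x2"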
Jan 13 20:55:50.352404 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:62:23:cd:fa:0b Jan 13 20:55:50.336072 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:55:50.336245 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:55:50.338179 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:55:50.340030 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:55:50.340239 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:55:50.341759 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:55:50.349736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:55:50.358497 (udev-worker)[456]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:55:50.382787 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 13 20:55:50.384365 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 13 20:55:50.393798 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 13 20:55:50.400479 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:55:50.400529 kernel: GPT:9289727 != 16777215 Jan 13 20:55:50.400548 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:55:50.400566 kernel: GPT:9289727 != 16777215 Jan 13 20:55:50.400582 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:55:50.400599 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 20:55:50.476790 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (460) Jan 13 20:55:50.494791 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (459) Jan 13 20:55:50.517983 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:55:50.528977 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:55:50.571894 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 20:55:50.576121 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:55:50.612205 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 13 20:55:50.617904 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 13 20:55:50.618042 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 13 20:55:50.627828 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 13 20:55:50.636954 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:55:50.645694 disk-uuid[628]: Primary Header is updated. Jan 13 20:55:50.645694 disk-uuid[628]: Secondary Entries is updated. Jan 13 20:55:50.645694 disk-uuid[628]: Secondary Header is updated. Jan 13 20:55:50.651791 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 20:55:51.661861 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 20:55:51.664171 disk-uuid[629]: The operation has completed successfully. Jan 13 20:55:51.801243 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 13 20:55:51.801365 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:55:51.828962 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:55:51.835341 sh[889]: Success
Jan 13 20:55:51.864358 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 20:55:51.980532 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:55:51.994935 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:55:52.002302 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:55:52.047275 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 13 20:55:52.047341 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:55:52.047361 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:55:52.048542 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:55:52.049907 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:55:52.152796 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:55:52.172910 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:55:52.175661 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:55:52.188156 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:55:52.194503 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:55:52.211969 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:55:52.212077 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:55:52.212091 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:55:52.216789 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:55:52.231516 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:55:52.231279 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:55:52.246520 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:55:52.258052 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:55:52.333693 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:55:52.346992 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:55:52.390115 systemd-networkd[1082]: lo: Link UP
Jan 13 20:55:52.390125 systemd-networkd[1082]: lo: Gained carrier
Jan 13 20:55:52.395895 systemd-networkd[1082]: Enumeration completed
Jan 13 20:55:52.397468 systemd-networkd[1082]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:55:52.397477 systemd-networkd[1082]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:55:52.402424 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:55:52.405539 systemd[1]: Reached target network.target - Network.
Jan 13 20:55:52.408020 systemd-networkd[1082]: eth0: Link UP
Jan 13 20:55:52.408030 systemd-networkd[1082]: eth0: Gained carrier
Jan 13 20:55:52.408043 systemd-networkd[1082]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:55:52.422884 systemd-networkd[1082]: eth0: DHCPv4 address 172.31.31.120/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:55:52.645874 ignition[1012]: Ignition 2.20.0
Jan 13 20:55:52.645930 ignition[1012]: Stage: fetch-offline
Jan 13 20:55:52.646467 ignition[1012]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:55:52.646480 ignition[1012]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:55:52.649338 ignition[1012]: Ignition finished successfully
Jan 13 20:55:52.653515 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:55:52.662087 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:55:52.680883 ignition[1091]: Ignition 2.20.0
Jan 13 20:55:52.680897 ignition[1091]: Stage: fetch
Jan 13 20:55:52.681373 ignition[1091]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:55:52.681394 ignition[1091]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:55:52.681505 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:55:52.731201 ignition[1091]: PUT result: OK
Jan 13 20:55:52.735420 ignition[1091]: parsed url from cmdline: ""
Jan 13 20:55:52.735431 ignition[1091]: no config URL provided
Jan 13 20:55:52.735440 ignition[1091]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:55:52.735475 ignition[1091]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:55:52.735498 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:55:52.737847 ignition[1091]: PUT result: OK
Jan 13 20:55:52.737901 ignition[1091]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 20:55:52.740102 ignition[1091]: GET result: OK
Jan 13 20:55:52.740805 ignition[1091]: parsing config with SHA512: 63b67e04a4f871a5d4546ba48c55ac7a2c2982016e5796eb00f34c450df85b574bc7da6ba5dc5508f535eb773d3780c54b7e1452b7e43ca877cb134b22a0ab9b
Jan 13 20:55:52.749677 unknown[1091]: fetched base config from "system"
Jan 13 20:55:52.750811 unknown[1091]: fetched base config from "system"
Jan 13 20:55:52.750821 unknown[1091]: fetched user config from "aws"
Jan 13 20:55:52.751580 ignition[1091]: fetch: fetch complete
Jan 13 20:55:52.751587 ignition[1091]: fetch: fetch passed
Jan 13 20:55:52.751643 ignition[1091]: Ignition finished successfully
Jan 13 20:55:52.759332 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:55:52.767034 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:55:52.794368 ignition[1098]: Ignition 2.20.0
Jan 13 20:55:52.794384 ignition[1098]: Stage: kargs
Jan 13 20:55:52.795417 ignition[1098]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:55:52.795432 ignition[1098]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:55:52.796199 ignition[1098]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:55:52.800111 ignition[1098]: PUT result: OK
Jan 13 20:55:52.804229 ignition[1098]: kargs: kargs passed
Jan 13 20:55:52.804294 ignition[1098]: Ignition finished successfully
Jan 13 20:55:52.807073 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:55:52.816128 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
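Each Ignition stage above repeats the same IMDSv2 handshake: a PUT mints a short-lived session token, then authenticated GETs fetch user-data. A minimal sketch of that exchange (only meaningful from inside an EC2 instance; the helper name is hypothetical):

import urllib.request

def imds(path, method="GET", token=None):
    # IMDSv2: PUT /latest/api/token with a TTL header, then send the token
    # back on subsequent requests.
    headers = {}
    if method == "PUT":
        headers["X-aws-ec2-metadata-token-ttl-seconds"] = "21600"
    if token:
        headers["X-aws-ec2-metadata-token"] = token
    req = urllib.request.Request("http://169.254.169.254" + path,
                                 headers=headers, method=method)
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read()

token = imds("/latest/api/token", method="PUT").decode()
user_data = imds("/2019-10-01/user-data", token=token)  # the GET logged above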
Jan 13 20:55:52.833749 ignition[1104]: Ignition 2.20.0 Jan 13 20:55:52.833953 ignition[1104]: Stage: disks Jan 13 20:55:52.834409 ignition[1104]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:55:52.834422 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:55:52.834542 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:55:52.835650 ignition[1104]: PUT result: OK Jan 13 20:55:52.841741 ignition[1104]: disks: disks passed Jan 13 20:55:52.841823 ignition[1104]: Ignition finished successfully Jan 13 20:55:52.844609 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:55:52.844954 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:55:52.848206 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:55:52.848718 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:55:52.848756 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:55:52.848951 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:55:52.865225 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:55:52.935510 systemd-fsck[1112]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:55:52.946515 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:55:52.959933 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:55:53.080843 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 13 20:55:53.081371 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:55:53.082052 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:55:53.089907 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:55:53.092334 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:55:53.106049 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:55:53.106257 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:55:53.106290 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:55:53.123993 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:55:53.126134 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:55:53.137792 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1131) Jan 13 20:55:53.137849 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:55:53.139329 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:55:53.139364 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 20:55:53.146787 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 20:55:53.147372 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:55:53.343795 initrd-setup-root[1155]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:55:53.363340 initrd-setup-root[1162]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:55:53.368599 initrd-setup-root[1169]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:55:53.373337 initrd-setup-root[1176]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:55:53.550121 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:55:53.556354 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:55:53.561951 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:55:53.571032 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:55:53.572144 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:55:53.605332 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:55:53.608479 ignition[1248]: INFO : Ignition 2.20.0 Jan 13 20:55:53.608479 ignition[1248]: INFO : Stage: mount Jan 13 20:55:53.608479 ignition[1248]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:55:53.608479 ignition[1248]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:55:53.612889 ignition[1248]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:55:53.612889 ignition[1248]: INFO : PUT result: OK Jan 13 20:55:53.618848 ignition[1248]: INFO : mount: mount passed Jan 13 20:55:53.620085 ignition[1248]: INFO : Ignition finished successfully Jan 13 20:55:53.624108 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:55:53.636287 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:55:54.037946 systemd-networkd[1082]: eth0: Gained IPv6LL Jan 13 20:55:54.091081 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:55:54.110126 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1260) Jan 13 20:55:54.111907 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:55:54.112018 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:55:54.112043 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 20:55:54.116790 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 20:55:54.118578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:55:54.144558 ignition[1277]: INFO : Ignition 2.20.0 Jan 13 20:55:54.144558 ignition[1277]: INFO : Stage: files Jan 13 20:55:54.147512 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:55:54.147512 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:55:54.147512 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:55:54.151786 ignition[1277]: INFO : PUT result: OK Jan 13 20:55:54.155223 ignition[1277]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:55:54.156823 ignition[1277]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:55:54.156823 ignition[1277]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:55:54.181017 ignition[1277]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:55:54.182930 ignition[1277]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:55:54.186290 unknown[1277]: wrote ssh authorized keys file for user: core Jan 13 20:55:54.188058 ignition[1277]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:55:54.190786 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 20:55:54.190786 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 20:55:54.190786 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:55:54.190786 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 20:55:54.347456 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 20:55:54.548050 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:55:54.554379 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:55:54.554379 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 20:55:54.986042 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 13 20:55:55.341025 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:55:55.341025 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:55:55.345925 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:55:55.345925 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:55:55.345925 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:55:55.345925 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:55:55.345925 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:55:55.345925 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:55:55.345925 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:55:55.345925 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:55:55.345925 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:55:55.345925 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:55:55.345925 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:55:55.345925 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:55:55.345925 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 20:55:55.753101 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 13 20:55:56.045398 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:55:56.045398 ignition[1277]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 13 20:55:56.049397 ignition[1277]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 20:55:56.049397 ignition[1277]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 20:55:56.049397 ignition[1277]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 13 20:55:56.049397 ignition[1277]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 13 20:55:56.049397 ignition[1277]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:55:56.049397 ignition[1277]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:55:56.049397 ignition[1277]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 13 20:55:56.049397 ignition[1277]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:55:56.049397 ignition[1277]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:55:56.049397 ignition[1277]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:55:56.049397 ignition[1277]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:55:56.049397 ignition[1277]: INFO : files: files passed Jan 13 20:55:56.049397 ignition[1277]: INFO : Ignition finished successfully Jan 13 20:55:56.064504 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:55:56.074974 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:55:56.079754 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:55:56.086688 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:55:56.088514 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:55:56.097139 initrd-setup-root-after-ignition[1306]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:55:56.097139 initrd-setup-root-after-ignition[1306]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:55:56.101337 initrd-setup-root-after-ignition[1310]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:55:56.104828 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:55:56.105473 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:55:56.113927 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:55:56.143632 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:55:56.143735 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:55:56.155215 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:55:56.158583 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:55:56.160296 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:55:56.166960 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:55:56.190756 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:55:56.202049 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:55:56.214623 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:55:56.215249 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:55:56.218659 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:55:56.221860 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:55:56.222000 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:55:56.226380 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:55:56.233332 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:55:56.234535 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:55:56.237616 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:55:56.240075 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:55:56.243280 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:55:56.246024 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:55:56.247799 systemd[1]: Stopped target sysinit.target - System Initialization.
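All of the files-stage operations logged above (user setup, files, the kubernetes.raw link, units, and drop-ins) are driven by the instance's Ignition config, which the log itself does not include. The following is a hypothetical sketch of a spec-v3 config that would produce operations of this shape, rendered from Python to keep the examples in one language; the paths are copied from the log, but the spec version, file contents, and unit bodies are placeholders, not this node's real config:

    import json

    # Paths mirror the log; every "contents" value below is a placeholder.
    config = {
        "ignition": {"version": "3.3.0"},  # assumed spec version
        "storage": {
            "files": [
                {"path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,placeholder%0A"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "containerd.service",
                 "dropins": [{"name": "10-use-cgroupfs.conf",
                              "contents": "# drop-in body elided\n"}]},
                {"name": "prepare-helm.service",
                 "enabled": True,
                 "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},
            ],
        },
    }

    print(json.dumps(config, indent=2))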
Jan 13 20:55:56.249039 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:55:56.251636 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:55:56.253641 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:55:56.253919 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:55:56.254409 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:55:56.254965 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:55:56.255384 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:55:56.258316 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:55:56.259907 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:55:56.260146 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:55:56.264578 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:55:56.264822 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:55:56.267747 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:55:56.270792 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:55:56.289963 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:55:56.292177 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:55:56.295637 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:55:56.296876 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:55:56.298376 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:55:56.298480 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:55:56.304388 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:55:56.304483 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:55:56.324358 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:55:56.330388 ignition[1330]: INFO : Ignition 2.20.0 Jan 13 20:55:56.330388 ignition[1330]: INFO : Stage: umount Jan 13 20:55:56.332386 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:55:56.332386 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:55:56.332386 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:55:56.336445 ignition[1330]: INFO : PUT result: OK Jan 13 20:55:56.340725 ignition[1330]: INFO : umount: umount passed Jan 13 20:55:56.340725 ignition[1330]: INFO : Ignition finished successfully Jan 13 20:55:56.343097 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:55:56.343198 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:55:56.353497 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:55:56.353611 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:55:56.354894 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:55:56.355863 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:55:56.357903 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:55:56.357952 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:55:56.359004 systemd[1]: Stopped target network.target - Network. 
Jan 13 20:55:56.361042 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:55:56.363842 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:55:56.366484 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:55:56.367580 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:55:56.372188 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:55:56.374560 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:55:56.378950 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:55:56.381010 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:55:56.381065 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:55:56.382103 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:55:56.382148 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:55:56.386318 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:55:56.386442 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:55:56.388466 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:55:56.388533 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:55:56.390836 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:55:56.394155 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:55:56.400976 systemd-networkd[1082]: eth0: DHCPv6 lease lost Jan 13 20:55:56.403583 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:55:56.403783 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:55:56.405381 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:55:56.405527 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:55:56.414046 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:55:56.415140 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:55:56.415198 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:55:56.416726 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:55:56.418623 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:55:56.418718 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:55:56.424537 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:55:56.424917 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:55:56.429882 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:55:56.429936 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:55:56.436742 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:55:56.436824 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:55:56.438638 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:55:56.438802 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:55:56.447906 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:55:56.447977 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 13 20:55:56.449427 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:55:56.449462 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:55:56.451241 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:55:56.451291 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:55:56.459156 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:55:56.459224 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:55:56.460722 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:55:56.462875 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:55:56.475552 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:55:56.478182 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:55:56.479508 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:55:56.484263 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 20:55:56.484319 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:55:56.488939 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:55:56.489001 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:55:56.491567 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:55:56.491637 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:55:56.497381 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:55:56.497495 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:55:56.503165 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:55:56.503311 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:55:56.506727 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:55:56.506828 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:55:56.511562 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:55:56.515026 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:55:56.516806 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:55:56.522979 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:55:56.548785 systemd[1]: Switching root. Jan 13 20:55:56.579317 systemd-journald[179]: Journal stopped Jan 13 20:55:59.319584 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). 
Jan 13 20:55:59.319727 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:55:59.319760 kernel: SELinux: policy capability open_perms=1 Jan 13 20:55:59.330011 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:55:59.330036 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:55:59.330056 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:55:59.330076 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:55:59.330096 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:55:59.330117 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:55:59.330145 kernel: audit: type=1403 audit(1736801758.020:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:55:59.330168 systemd[1]: Successfully loaded SELinux policy in 59.407ms. Jan 13 20:55:59.330208 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.410ms. Jan 13 20:55:59.330238 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:55:59.330258 systemd[1]: Detected virtualization amazon. Jan 13 20:55:59.330371 systemd[1]: Detected architecture x86-64. Jan 13 20:55:59.330394 systemd[1]: Detected first boot. Jan 13 20:55:59.330421 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:55:59.330443 zram_generator::config[1389]: No configuration found. Jan 13 20:55:59.330470 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:55:59.330492 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:55:59.330514 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 20:55:59.330538 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:55:59.330560 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:55:59.330583 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:55:59.330606 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:55:59.330628 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:55:59.330653 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:55:59.330681 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:55:59.330702 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:55:59.330725 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:55:59.330748 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:55:59.330783 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:55:59.330806 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:55:59.330828 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:55:59.330849 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
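The kernel lines above enumerate the policy capabilities compiled into the SELinux policy that systemd just loaded, and the audit(1403) record marks the policy load itself. A small sketch that reads the same state back at runtime from selinuxfs; the /sys/fs/selinux paths are the standard kernel interface, assuming it is mounted:

    from pathlib import Path

    def selinux_report(root="/sys/fs/selinux"):
        sel = Path(root)
        if not sel.is_dir():
            print("selinuxfs not mounted")
            return
        # "1" means enforcing, "0" means permissive.
        mode = (sel / "enforce").read_text().strip()
        print("mode:", "enforcing" if mode == "1" else "permissive")
        caps = sel / "policy_capabilities"
        if caps.is_dir():
            # One file per capability, matching the kernel log lines above.
            for cap in sorted(caps.iterdir()):
                print(f"policy capability {cap.name}={cap.read_text().strip()}")

    selinux_report()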
Jan 13 20:55:59.330874 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:55:59.330896 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:55:59.330917 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:55:59.330939 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:55:59.330960 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:55:59.330982 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:55:59.331004 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:55:59.331029 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:55:59.331053 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:55:59.331075 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:55:59.331096 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:55:59.331118 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:55:59.331140 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:55:59.331162 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:55:59.331183 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:55:59.331205 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:55:59.331226 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:55:59.331250 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:55:59.331272 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:55:59.331293 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:55:59.331315 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:55:59.331337 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:55:59.331359 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:55:59.331380 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:55:59.331402 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:55:59.331447 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:55:59.331473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:55:59.331494 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:55:59.331516 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:55:59.331539 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:55:59.331560 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:55:59.331582 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:55:59.331604 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Jan 13 20:55:59.331626 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 13 20:55:59.331650 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:55:59.331672 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:55:59.331693 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:55:59.331715 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:55:59.331737 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:55:59.331760 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:55:59.336532 systemd-journald[1486]: Collecting audit messages is disabled. Jan 13 20:55:59.336583 kernel: fuse: init (API version 7.39) Jan 13 20:55:59.336612 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:55:59.336633 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:55:59.336655 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:55:59.336675 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:55:59.336695 kernel: loop: module loaded Jan 13 20:55:59.336715 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:55:59.336736 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:55:59.336758 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:55:59.336802 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:55:59.336825 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:55:59.336846 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:55:59.336866 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:55:59.336887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:55:59.336908 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:55:59.336934 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:55:59.336954 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:55:59.336976 systemd-journald[1486]: Journal started Jan 13 20:55:59.337016 systemd-journald[1486]: Runtime Journal (/run/log/journal/ec2efa5ba02aad55b91414804bf41e3f) is 4.8M, max 38.6M, 33.7M free. Jan 13 20:55:59.343098 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:55:59.342741 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:55:59.343016 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:55:59.344952 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:55:59.347995 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:55:59.356640 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:55:59.375559 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:55:59.406474 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jan 13 20:55:59.425790 kernel: ACPI: bus type drm_connector registered Jan 13 20:55:59.420928 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:55:59.439076 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:55:59.450950 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:55:59.473147 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:55:59.475470 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:55:59.481210 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:55:59.485110 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:55:59.494121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:55:59.516553 systemd-journald[1486]: Time spent on flushing to /var/log/journal/ec2efa5ba02aad55b91414804bf41e3f is 76.008ms for 945 entries. Jan 13 20:55:59.516553 systemd-journald[1486]: System Journal (/var/log/journal/ec2efa5ba02aad55b91414804bf41e3f) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:55:59.602721 systemd-journald[1486]: Received client request to flush runtime journal. Jan 13 20:55:59.501989 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:55:59.510347 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:55:59.512624 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:55:59.512925 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:55:59.515032 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:55:59.519028 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:55:59.539620 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:55:59.542044 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:55:59.592651 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:55:59.614994 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:55:59.625780 systemd-tmpfiles[1535]: ACLs are not supported, ignoring. Jan 13 20:55:59.625816 systemd-tmpfiles[1535]: ACLs are not supported, ignoring. Jan 13 20:55:59.627664 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:55:59.630145 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:55:59.653697 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:55:59.671027 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:55:59.678635 udevadm[1546]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:55:59.831620 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:55:59.849004 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 13 20:55:59.874850 systemd-tmpfiles[1560]: ACLs are not supported, ignoring. Jan 13 20:55:59.875282 systemd-tmpfiles[1560]: ACLs are not supported, ignoring. Jan 13 20:55:59.883731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:56:00.633380 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:56:00.643994 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:56:00.674780 systemd-udevd[1566]: Using default interface naming scheme 'v255'. Jan 13 20:56:00.752299 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:56:00.765070 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:56:00.806268 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:56:00.943598 (udev-worker)[1575]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:56:00.948490 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 13 20:56:00.992947 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:56:01.091788 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 13 20:56:01.102795 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 20:56:01.125946 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:56:01.126025 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jan 13 20:56:01.126832 kernel: ACPI: button: Sleep Button [SLPF] Jan 13 20:56:01.155792 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jan 13 20:56:01.176827 systemd-networkd[1569]: lo: Link UP Jan 13 20:56:01.176838 systemd-networkd[1569]: lo: Gained carrier Jan 13 20:56:01.181842 systemd-networkd[1569]: Enumeration completed Jan 13 20:56:01.182298 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:56:01.186017 systemd-networkd[1569]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:56:01.186129 systemd-networkd[1569]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:56:01.194006 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:56:01.198852 systemd-networkd[1569]: eth0: Link UP Jan 13 20:56:01.199488 systemd-networkd[1569]: eth0: Gained carrier Jan 13 20:56:01.199516 systemd-networkd[1569]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:56:01.210006 systemd-networkd[1569]: eth0: DHCPv4 address 172.31.31.120/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 20:56:01.240795 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:56:01.264802 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1573) Jan 13 20:56:01.282353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:56:01.560604 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 20:56:01.833892 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:56:01.839777 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
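As the networkd lines above show, eth0 is matched by the catch-all /usr/lib/systemd/network/zz-default.network unit and then configured over DHCPv4. The log does not include that unit's body, so the following is only a stand-in with the same observable effect (match any interface name, enable DHCP), printed rather than installed, since writing under /run/systemd/network and running networkctl reload would need root:

    # A stand-in catch-all unit; not Flatcar's actual zz-default.network body.
    UNIT_PATH = "/run/systemd/network/zz-default.network"
    UNIT_BODY = """\
    [Match]
    Name=*

    [Network]
    DHCP=yes
    """

    # Print instead of writing; installing it would also require
    # `networkctl reload` for systemd-networkd to pick it up.
    print(f"would install {UNIT_PATH}:")
    print(UNIT_BODY)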
Jan 13 20:56:01.904296 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:56:01.945784 lvm[1690]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:56:01.999202 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:56:02.001549 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:56:02.024049 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:56:02.067443 lvm[1693]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:56:02.149227 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:56:02.158945 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:56:02.161483 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:56:02.161529 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:56:02.172120 systemd[1]: Reached target machines.target - Containers. Jan 13 20:56:02.178194 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:56:02.199048 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:56:02.217111 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:56:02.218694 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:56:02.221039 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:56:02.230524 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:56:02.243017 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:56:02.259148 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:56:02.307710 kernel: loop0: detected capacity change from 0 to 62848 Jan 13 20:56:02.307587 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:56:02.356976 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:56:02.358092 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:56:02.416468 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:56:02.482057 kernel: loop1: detected capacity change from 0 to 140992 Jan 13 20:56:02.593796 kernel: loop2: detected capacity change from 0 to 211296 Jan 13 20:56:02.680058 kernel: loop3: detected capacity change from 0 to 138184 Jan 13 20:56:02.884791 kernel: loop4: detected capacity change from 0 to 62848 Jan 13 20:56:02.928815 kernel: loop5: detected capacity change from 0 to 140992 Jan 13 20:56:02.971121 kernel: loop6: detected capacity change from 0 to 211296 Jan 13 20:56:03.003316 kernel: loop7: detected capacity change from 0 to 138184 Jan 13 20:56:03.062533 systemd-networkd[1569]: eth0: Gained IPv6LL Jan 13 20:56:03.073128 (sd-merge)[1714]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 20:56:03.076745 (sd-merge)[1714]: Merged extensions into '/usr'. 
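The loop-device and squashfs messages above are the extension images being attached, and the (sd-merge) lines record systemd-sysext combining them into an overlay on /usr (containerd, docker, the kubernetes image written by Ignition, and the OEM extension). A rough sketch of just the discovery step, assuming the standard sysext search directories; the real merge goes on to build an overlayfs mount, which this does not attempt:

    from pathlib import Path

    # Standard sysext search directories.
    SEARCH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions():
        found = []
        for d in SEARCH:
            p = Path(d)
            if not p.is_dir():
                continue
            # Extensions may be raw disk images or plain directory trees.
            found += sorted(p.glob("*.raw"))
            found += sorted(c for c in p.iterdir() if c.is_dir())
        return found

    for ext in discover_extensions():
        print(ext)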
Jan 13 20:56:03.081997 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:56:03.096196 systemd[1]: Reloading requested from client PID 1701 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:56:03.096210 systemd[1]: Reloading... Jan 13 20:56:03.168821 zram_generator::config[1739]: No configuration found. Jan 13 20:56:03.474306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:56:03.574416 systemd[1]: Reloading finished in 477 ms. Jan 13 20:56:03.597003 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:56:03.609015 systemd[1]: Starting ensure-sysext.service... Jan 13 20:56:03.613425 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:56:03.641009 systemd[1]: Reloading requested from client PID 1798 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:56:03.641090 systemd[1]: Reloading... Jan 13 20:56:03.679799 systemd-tmpfiles[1799]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:56:03.681906 systemd-tmpfiles[1799]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:56:03.686090 systemd-tmpfiles[1799]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:56:03.687646 systemd-tmpfiles[1799]: ACLs are not supported, ignoring. Jan 13 20:56:03.687912 systemd-tmpfiles[1799]: ACLs are not supported, ignoring. Jan 13 20:56:03.697095 systemd-tmpfiles[1799]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:56:03.697897 systemd-tmpfiles[1799]: Skipping /boot Jan 13 20:56:03.727069 systemd-tmpfiles[1799]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:56:03.727088 systemd-tmpfiles[1799]: Skipping /boot Jan 13 20:56:03.754796 ldconfig[1697]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:56:03.832843 zram_generator::config[1833]: No configuration found. Jan 13 20:56:04.018582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:56:04.141047 systemd[1]: Reloading finished in 485 ms. Jan 13 20:56:04.168009 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:56:04.178983 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:56:04.204069 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:56:04.214043 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:56:04.218973 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:56:04.231989 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:56:04.248512 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:56:04.276130 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 13 20:56:04.276453 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:56:04.286977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:56:04.295428 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:56:04.301646 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:56:04.303141 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:56:04.303341 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:56:04.309991 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:56:04.310260 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:56:04.325510 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:56:04.325748 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:56:04.335467 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:56:04.336565 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:56:04.351270 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:56:04.352682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:56:04.361276 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:56:04.381213 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:56:04.390301 augenrules[1926]: No rules Jan 13 20:56:04.400466 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:56:04.430040 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:56:04.437304 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:56:04.437602 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:56:04.447042 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:56:04.459143 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:56:04.459533 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:56:04.467544 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:56:04.470696 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:56:04.476217 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:56:04.476653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:56:04.478975 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:56:04.479639 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:56:04.487883 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:56:04.488520 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:56:04.495385 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 13 20:56:04.500760 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:56:04.522801 systemd[1]: Finished ensure-sysext.service. Jan 13 20:56:04.526192 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:56:04.537229 systemd-resolved[1891]: Positive Trust Anchors: Jan 13 20:56:04.537252 systemd-resolved[1891]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:56:04.537383 systemd-resolved[1891]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:56:04.541901 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:56:04.542000 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:56:04.545310 systemd-resolved[1891]: Defaulting to hostname 'linux'. Jan 13 20:56:04.550091 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:56:04.552310 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:56:04.552563 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:56:04.562002 systemd[1]: Reached target network.target - Network. Jan 13 20:56:04.564521 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:56:04.566183 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:56:04.578202 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:56:04.580392 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:56:04.581889 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:56:04.583656 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:56:04.585504 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:56:04.587095 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:56:04.588560 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:56:04.590173 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:56:04.590212 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:56:04.592305 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:56:04.594193 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:56:04.597252 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:56:04.600038 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 13 20:56:04.602190 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:56:04.603370 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:56:04.604420 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:56:04.605657 systemd[1]: System is tainted: cgroupsv1 Jan 13 20:56:04.605696 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:56:04.605719 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:56:04.613892 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:56:04.627069 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:56:04.630040 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:56:04.640903 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:56:04.645923 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:56:04.647272 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:56:04.655298 jq[1959]: false Jan 13 20:56:04.658158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:56:04.661920 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:56:04.672082 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 20:56:04.706133 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:56:04.725645 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:56:04.754954 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 20:56:04.761225 extend-filesystems[1960]: Found loop4 Jan 13 20:56:04.761225 extend-filesystems[1960]: Found loop5 Jan 13 20:56:04.761225 extend-filesystems[1960]: Found loop6 Jan 13 20:56:04.761225 extend-filesystems[1960]: Found loop7 Jan 13 20:56:04.761225 extend-filesystems[1960]: Found nvme0n1 Jan 13 20:56:04.761225 extend-filesystems[1960]: Found nvme0n1p1 Jan 13 20:56:04.761225 extend-filesystems[1960]: Found nvme0n1p2 Jan 13 20:56:04.761225 extend-filesystems[1960]: Found nvme0n1p3 Jan 13 20:56:04.761225 extend-filesystems[1960]: Found usr Jan 13 20:56:04.761225 extend-filesystems[1960]: Found nvme0n1p4 Jan 13 20:56:04.761225 extend-filesystems[1960]: Found nvme0n1p6 Jan 13 20:56:04.761225 extend-filesystems[1960]: Found nvme0n1p7 Jan 13 20:56:04.761225 extend-filesystems[1960]: Found nvme0n1p9 Jan 13 20:56:04.761225 extend-filesystems[1960]: Checking size of /dev/nvme0n1p9 Jan 13 20:56:04.768987 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:56:04.801025 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:56:04.810379 dbus-daemon[1958]: [system] SELinux support is enabled Jan 13 20:56:04.823353 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:56:04.825439 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:56:04.846094 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 13 20:56:04.848583 dbus-daemon[1958]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1569 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 20:56:04.858822 extend-filesystems[1960]: Resized partition /dev/nvme0n1p9 Jan 13 20:56:04.860129 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:56:04.872877 ntpd[1966]: 13 Jan 20:56:04 ntpd[1966]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:22 UTC 2025 (1): Starting Jan 13 20:56:04.872877 ntpd[1966]: 13 Jan 20:56:04 ntpd[1966]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:56:04.872877 ntpd[1966]: 13 Jan 20:56:04 ntpd[1966]: ---------------------------------------------------- Jan 13 20:56:04.872877 ntpd[1966]: 13 Jan 20:56:04 ntpd[1966]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:56:04.872877 ntpd[1966]: 13 Jan 20:56:04 ntpd[1966]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:56:04.872877 ntpd[1966]: 13 Jan 20:56:04 ntpd[1966]: corporation. Support and training for ntp-4 are Jan 13 20:56:04.872877 ntpd[1966]: 13 Jan 20:56:04 ntpd[1966]: available at https://www.nwtime.org/support Jan 13 20:56:04.872877 ntpd[1966]: 13 Jan 20:56:04 ntpd[1966]: ---------------------------------------------------- Jan 13 20:56:04.869518 ntpd[1966]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:22 UTC 2025 (1): Starting Jan 13 20:56:04.865037 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:56:04.899415 extend-filesystems[1995]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:56:04.920959 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 20:56:04.869545 ntpd[1966]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:56:04.894651 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:56:04.921199 ntpd[1966]: 13 Jan 20:56:04 ntpd[1966]: proto: precision = 0.073 usec (-24) Jan 13 20:56:04.921199 ntpd[1966]: 13 Jan 20:56:04 ntpd[1966]: basedate set to 2025-01-01 Jan 13 20:56:04.921199 ntpd[1966]: 13 Jan 20:56:04 ntpd[1966]: gps base set to 2025-01-05 (week 2348) Jan 13 20:56:04.869558 ntpd[1966]: ---------------------------------------------------- Jan 13 20:56:04.894997 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:56:04.921648 jq[1993]: true Jan 13 20:56:04.869568 ntpd[1966]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:56:04.898231 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:56:04.869578 ntpd[1966]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:56:04.902914 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:56:04.869587 ntpd[1966]: corporation. Support and training for ntp-4 are Jan 13 20:56:04.869597 ntpd[1966]: available at https://www.nwtime.org/support Jan 13 20:56:04.869607 ntpd[1966]: ---------------------------------------------------- Jan 13 20:56:04.900456 ntpd[1966]: proto: precision = 0.073 usec (-24) Jan 13 20:56:04.908746 ntpd[1966]: basedate set to 2025-01-01 Jan 13 20:56:04.909253 ntpd[1966]: gps base set to 2025-01-05 (week 2348) Jan 13 20:56:04.946800 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:56:04.948956 ntpd[1966]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:56:04.950668 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 13 20:56:04.949031 ntpd[1966]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:56:04.952099 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:56:04.961390 ntpd[1966]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:56:04.961551 ntpd[1966]: Listen normally on 3 eth0 172.31.31.120:123 Jan 13 20:56:04.961613 ntpd[1966]: Listen normally on 4 lo [::1]:123 Jan 13 20:56:04.961673 ntpd[1966]: Listen normally on 5 eth0 [fe80::462:23ff:fecd:fa0b%2]:123 Jan 13 20:56:04.961725 ntpd[1966]: Listening on routing socket on fd #22 for interface updates Jan 13 20:56:04.977445 ntpd[1966]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:56:04.979690 ntpd[1966]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:56:04.995855 update_engine[1991]: I20250113 20:56:04.985575 1991 main.cc:92] Flatcar Update Engine starting Jan 13 20:56:04.995855 update_engine[1991]: I20250113 20:56:04.989332 1991 update_check_scheduler.cc:74] Next update check in 10m27s Jan 13 20:56:05.044123 (ntainerd)[2014]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:56:05.055412 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:56:05.047902 dbus-daemon[1958]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 20:56:05.066425 jq[2009]: true Jan 13 20:56:05.069175 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:56:05.070730 tar[2001]: linux-amd64/helm Jan 13 20:56:05.069222 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:56:05.090314 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 20:56:05.092223 systemd[1]: Starting systemd-hostnamed.service - Hostname Service. Jan 13 20:56:05.096002 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:56:05.096036 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:56:05.100875 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
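At this point ntpd has its sockets open but the kernel clock is still flagged unsynchronized (TIME_ERROR 0x41); that flag normally clears once a peer is selected. One way to watch it converge, assuming the stock ntp query tool is on the image, is:

    # Show peer status; a '*' in the first column marks the selected source
    ntpq -p
    # Dump system variables; leap=3 corresponds to the unsynchronized state above
    ntpq -c 'rv 0'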
Jan 13 20:56:05.177077 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2040) Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.106 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.107 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.109 INFO Fetch successful Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.113 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.124 INFO Fetch successful Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.125 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.127 INFO Fetch successful Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.128 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.137 INFO Fetch successful Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.137 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.138 INFO Fetch failed with 404: resource not found Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.138 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.147 INFO Fetch successful Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.147 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.152 INFO Fetch successful Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.152 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.155 INFO Fetch successful Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.155 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.156 INFO Fetch successful Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.156 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 20:56:05.177126 coreos-metadata[1956]: Jan 13 20:56:05.158 INFO Fetch successful Jan 13 20:56:05.178069 extend-filesystems[1995]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 20:56:05.178069 extend-filesystems[1995]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:56:05.178069 extend-filesystems[1995]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 20:56:05.109076 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:56:05.180687 extend-filesystems[1960]: Resized filesystem in /dev/nvme0n1p9 Jan 13 20:56:05.174548 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:56:05.179994 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:56:05.260382 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 20:56:05.277064 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
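The resize sequence above is a standard online ext4 grow: the partition entry is extended first, then resize2fs enlarges the mounted filesystem in place (553472 to 1489915 4k blocks). A minimal manual sketch of the same steps, assuming the same device names and a growpart binary from cloud-utils (which is not part of the Flatcar service shown here):

    # Grow partition 9 to fill the free space on the disk
    growpart /dev/nvme0n1 9
    # Enlarge the mounted ext4 filesystem to the new partition size, online
    resize2fs /dev/nvme0n1p9
    # Confirm the new block count reported by the kernel above
    dumpe2fs -h /dev/nvme0n1p9 | grep 'Block count'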
Jan 13 20:56:05.335529 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:56:05.337573 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:56:05.448701 systemd-logind[1987]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 20:56:05.449116 systemd-logind[1987]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 13 20:56:05.449142 systemd-logind[1987]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:56:05.451054 systemd-logind[1987]: New seat seat0. Jan 13 20:56:05.452529 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:56:05.495801 bash[2083]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:56:05.480300 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:56:05.496217 systemd[1]: Starting sshkeys.service... Jan 13 20:56:05.559366 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:56:05.573027 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:56:05.575567 dbus-daemon[1958]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:56:05.575990 dbus-daemon[1958]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2036 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:56:05.576705 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 20:56:05.594371 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 20:56:05.687369 polkitd[2130]: Started polkitd version 121 Jan 13 20:56:05.695798 amazon-ssm-agent[2063]: Initializing new seelog logger Jan 13 20:56:05.695798 amazon-ssm-agent[2063]: New Seelog Logger Creation Complete Jan 13 20:56:05.695798 amazon-ssm-agent[2063]: 2025/01/13 20:56:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:56:05.695798 amazon-ssm-agent[2063]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:56:05.695798 amazon-ssm-agent[2063]: 2025/01/13 20:56:05 processing appconfig overrides Jan 13 20:56:05.696504 amazon-ssm-agent[2063]: 2025/01/13 20:56:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:56:05.696564 amazon-ssm-agent[2063]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:56:05.696707 amazon-ssm-agent[2063]: 2025/01/13 20:56:05 processing appconfig overrides Jan 13 20:56:05.697122 amazon-ssm-agent[2063]: 2025/01/13 20:56:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:56:05.697203 amazon-ssm-agent[2063]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:56:05.697333 amazon-ssm-agent[2063]: 2025/01/13 20:56:05 processing appconfig overrides Jan 13 20:56:05.697682 amazon-ssm-agent[2063]: 2025-01-13 20:56:05 INFO Proxy environment variables: Jan 13 20:56:05.700819 amazon-ssm-agent[2063]: 2025/01/13 20:56:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:56:05.700819 amazon-ssm-agent[2063]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 13 20:56:05.700819 amazon-ssm-agent[2063]: 2025/01/13 20:56:05 processing appconfig overrides Jan 13 20:56:05.740366 polkitd[2130]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:56:05.740518 polkitd[2130]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:56:05.744881 polkitd[2130]: Finished loading, compiling and executing 2 rules Jan 13 20:56:05.746981 dbus-daemon[1958]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:56:05.747194 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 20:56:05.754842 polkitd[2130]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:56:05.813863 amazon-ssm-agent[2063]: 2025-01-13 20:56:05 INFO no_proxy: Jan 13 20:56:05.886429 coreos-metadata[2120]: Jan 13 20:56:05.886 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:56:05.888807 systemd-resolved[1891]: System hostname changed to 'ip-172-31-31-120'. Jan 13 20:56:05.888988 systemd-hostnamed[2036]: Hostname set to (transient) Jan 13 20:56:05.897864 coreos-metadata[2120]: Jan 13 20:56:05.893 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 20:56:05.897864 coreos-metadata[2120]: Jan 13 20:56:05.897 INFO Fetch successful Jan 13 20:56:05.897864 coreos-metadata[2120]: Jan 13 20:56:05.897 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:56:05.901058 coreos-metadata[2120]: Jan 13 20:56:05.898 INFO Fetch successful Jan 13 20:56:05.904383 unknown[2120]: wrote ssh authorized keys file for user: core Jan 13 20:56:05.912815 amazon-ssm-agent[2063]: 2025-01-13 20:56:05 INFO https_proxy: Jan 13 20:56:05.936512 locksmithd[2041]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:56:05.968572 update-ssh-keys[2203]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:56:05.965351 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:56:05.975348 systemd[1]: Finished sshkeys.service. Jan 13 20:56:06.010786 amazon-ssm-agent[2063]: 2025-01-13 20:56:05 INFO http_proxy: Jan 13 20:56:06.111593 amazon-ssm-agent[2063]: 2025-01-13 20:56:05 INFO Checking if agent identity type OnPrem can be assumed Jan 13 20:56:06.213157 amazon-ssm-agent[2063]: 2025-01-13 20:56:05 INFO Checking if agent identity type EC2 can be assumed Jan 13 20:56:06.260490 sshd_keygen[2004]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:56:06.289790 containerd[2014]: time="2025-01-13T20:56:06.288486003Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:56:06.313951 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:56:06.314544 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO Agent will take identity from EC2 Jan 13 20:56:06.323113 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:56:06.363544 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:56:06.369070 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:56:06.379524 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:56:06.417792 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:56:06.418357 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
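The metadata agent's "Putting .../api/token" followed by GETs is the IMDSv2 handshake: a session token is PUT first, then presented as a header on every fetch. The same exchange can be reproduced by hand from inside the instance, using the endpoints from the log:

    # Request a short-lived IMDSv2 session token
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
    # Fetch the SSH public key the agent installed for user 'core'
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key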
Jan 13 20:56:06.433797 containerd[2014]: time="2025-01-13T20:56:06.430462227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:56:06.433797 containerd[2014]: time="2025-01-13T20:56:06.432435871Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:56:06.433797 containerd[2014]: time="2025-01-13T20:56:06.432473794Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:56:06.433797 containerd[2014]: time="2025-01-13T20:56:06.432497518Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:56:06.433797 containerd[2014]: time="2025-01-13T20:56:06.432680051Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:56:06.433797 containerd[2014]: time="2025-01-13T20:56:06.432701265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:56:06.433190 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:56:06.438130 containerd[2014]: time="2025-01-13T20:56:06.436743118Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:56:06.438130 containerd[2014]: time="2025-01-13T20:56:06.436805753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:56:06.438130 containerd[2014]: time="2025-01-13T20:56:06.437151123Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:56:06.438130 containerd[2014]: time="2025-01-13T20:56:06.437175068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:56:06.438130 containerd[2014]: time="2025-01-13T20:56:06.437200475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:56:06.438130 containerd[2014]: time="2025-01-13T20:56:06.437215811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:56:06.438130 containerd[2014]: time="2025-01-13T20:56:06.437325311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:56:06.438130 containerd[2014]: time="2025-01-13T20:56:06.437576706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:56:06.442794 containerd[2014]: time="2025-01-13T20:56:06.440360336Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:56:06.442794 containerd[2014]: time="2025-01-13T20:56:06.440400257Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:56:06.442794 containerd[2014]: time="2025-01-13T20:56:06.440552519Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:56:06.442794 containerd[2014]: time="2025-01-13T20:56:06.440614327Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:56:06.444682 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:56:06.447366 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:56:06.455582 containerd[2014]: time="2025-01-13T20:56:06.455537333Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:56:06.455889 containerd[2014]: time="2025-01-13T20:56:06.455849689Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:56:06.456016 containerd[2014]: time="2025-01-13T20:56:06.456001765Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:56:06.456166 containerd[2014]: time="2025-01-13T20:56:06.456149443Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:56:06.456264 containerd[2014]: time="2025-01-13T20:56:06.456251193Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:56:06.456695 containerd[2014]: time="2025-01-13T20:56:06.456671824Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:56:06.457257 containerd[2014]: time="2025-01-13T20:56:06.457237105Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:56:06.457512 containerd[2014]: time="2025-01-13T20:56:06.457495970Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:56:06.457612 containerd[2014]: time="2025-01-13T20:56:06.457599065Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:56:06.457711 containerd[2014]: time="2025-01-13T20:56:06.457698324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:56:06.457813 containerd[2014]: time="2025-01-13T20:56:06.457797391Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:56:06.457900 containerd[2014]: time="2025-01-13T20:56:06.457886560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:56:06.458006 containerd[2014]: time="2025-01-13T20:56:06.457991794Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:56:06.458103 containerd[2014]: time="2025-01-13T20:56:06.458088545Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 13 20:56:06.458193 containerd[2014]: time="2025-01-13T20:56:06.458179266Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:56:06.458279 containerd[2014]: time="2025-01-13T20:56:06.458266111Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:56:06.458365 containerd[2014]: time="2025-01-13T20:56:06.458352337Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:56:06.458451 containerd[2014]: time="2025-01-13T20:56:06.458438196Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:56:06.458547 containerd[2014]: time="2025-01-13T20:56:06.458534203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.458693 containerd[2014]: time="2025-01-13T20:56:06.458677670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.458801 containerd[2014]: time="2025-01-13T20:56:06.458786374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.458908 containerd[2014]: time="2025-01-13T20:56:06.458893726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.458997 containerd[2014]: time="2025-01-13T20:56:06.458983500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.459094 containerd[2014]: time="2025-01-13T20:56:06.459079083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.459184 containerd[2014]: time="2025-01-13T20:56:06.459170993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.459309 containerd[2014]: time="2025-01-13T20:56:06.459261493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.459309 containerd[2014]: time="2025-01-13T20:56:06.459286632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.459483 containerd[2014]: time="2025-01-13T20:56:06.459429093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.459483 containerd[2014]: time="2025-01-13T20:56:06.459454008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.459662 containerd[2014]: time="2025-01-13T20:56:06.459473315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.459662 containerd[2014]: time="2025-01-13T20:56:06.459615160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.459831 containerd[2014]: time="2025-01-13T20:56:06.459758132Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:56:06.459907 containerd[2014]: time="2025-01-13T20:56:06.459810934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 13 20:56:06.460061 containerd[2014]: time="2025-01-13T20:56:06.459963957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.460061 containerd[2014]: time="2025-01-13T20:56:06.460000694Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:56:06.460759 containerd[2014]: time="2025-01-13T20:56:06.460716881Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:56:06.461008 containerd[2014]: time="2025-01-13T20:56:06.460905296Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:56:06.461552 containerd[2014]: time="2025-01-13T20:56:06.460930464Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:56:06.461719 containerd[2014]: time="2025-01-13T20:56:06.461643200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:56:06.461719 containerd[2014]: time="2025-01-13T20:56:06.461666138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:56:06.461990 containerd[2014]: time="2025-01-13T20:56:06.461857096Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:56:06.461990 containerd[2014]: time="2025-01-13T20:56:06.461879058Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:56:06.461990 containerd[2014]: time="2025-01-13T20:56:06.461896847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:56:06.464554 containerd[2014]: time="2025-01-13T20:56:06.464403582Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:56:06.465105 containerd[2014]: time="2025-01-13T20:56:06.464843282Z" level=info msg="Connect containerd service" Jan 13 20:56:06.465105 containerd[2014]: time="2025-01-13T20:56:06.464931871Z" level=info msg="using legacy CRI server" Jan 13 20:56:06.465105 containerd[2014]: time="2025-01-13T20:56:06.464944176Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:56:06.465611 containerd[2014]: time="2025-01-13T20:56:06.465432999Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:56:06.466810 containerd[2014]: time="2025-01-13T20:56:06.466777060Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 
20:56:06.467493 containerd[2014]: time="2025-01-13T20:56:06.466995532Z" level=info msg="Start subscribing containerd event" Jan 13 20:56:06.467493 containerd[2014]: time="2025-01-13T20:56:06.467054224Z" level=info msg="Start recovering state" Jan 13 20:56:06.467493 containerd[2014]: time="2025-01-13T20:56:06.467137105Z" level=info msg="Start event monitor" Jan 13 20:56:06.467493 containerd[2014]: time="2025-01-13T20:56:06.467160947Z" level=info msg="Start snapshots syncer" Jan 13 20:56:06.467493 containerd[2014]: time="2025-01-13T20:56:06.467173393Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:56:06.467493 containerd[2014]: time="2025-01-13T20:56:06.467185262Z" level=info msg="Start streaming server" Jan 13 20:56:06.468050 containerd[2014]: time="2025-01-13T20:56:06.468030861Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:56:06.468171 containerd[2014]: time="2025-01-13T20:56:06.468157845Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:56:06.468304 containerd[2014]: time="2025-01-13T20:56:06.468291801Z" level=info msg="containerd successfully booted in 0.182189s" Jan 13 20:56:06.468441 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:56:06.517218 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:56:06.618782 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:56:06.717821 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 20:56:06.820393 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 13 20:56:06.919741 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 20:56:06.919741 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 20:56:06.919741 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO [Registrar] Starting registrar module Jan 13 20:56:06.919741 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 20:56:06.920024 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO [EC2Identity] EC2 registration was successful. Jan 13 20:56:06.920024 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO [CredentialRefresher] credentialRefresher has started Jan 13 20:56:06.920024 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 20:56:06.920024 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 20:56:06.920024 amazon-ssm-agent[2063]: 2025-01-13 20:56:06 INFO [CredentialRefresher] Next credential rotation will be in 30.433323694333332 minutes Jan 13 20:56:06.945511 tar[2001]: linux-amd64/LICENSE Jan 13 20:56:06.945948 tar[2001]: linux-amd64/README.md Jan 13 20:56:06.974521 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:56:07.226966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:56:07.230472 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:56:07.232725 systemd[1]: Startup finished in 9.938s (kernel) + 9.268s (userspace) = 19.206s. 
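The "Startup finished in 9.938s (kernel) + 9.268s (userspace) = 19.206s" summary can be broken down after boot with systemd's own tooling:

    # Reprint the kernel/userspace split logged above
    systemd-analyze time
    # Rank the units that dominated the 9.268s of userspace bring-up
    systemd-analyze blame | head -20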
Jan 13 20:56:07.234018 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:56:07.936512 amazon-ssm-agent[2063]: 2025-01-13 20:56:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 20:56:08.046351 kubelet[2249]: E0113 20:56:08.040845 2249 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:56:08.047115 amazon-ssm-agent[2063]: 2025-01-13 20:56:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2261) started Jan 13 20:56:08.054487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:56:08.054817 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:56:08.142510 amazon-ssm-agent[2063]: 2025-01-13 20:56:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 20:56:11.574407 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:56:11.582276 systemd[1]: Started sshd@0-172.31.31.120:22-139.178.89.65:52774.service - OpenSSH per-connection server daemon (139.178.89.65:52774). Jan 13 20:56:11.816886 sshd[2274]: Accepted publickey for core from 139.178.89.65 port 52774 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:56:11.822528 sshd-session[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:56:11.844653 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:56:11.854618 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:56:11.863270 systemd-logind[1987]: New session 1 of user core. Jan 13 20:56:12.186911 systemd-resolved[1891]: Clock change detected. Flushing caches. Jan 13 20:56:12.203262 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:56:12.211843 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:56:12.217016 (systemd)[2280]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:56:12.337201 systemd[2280]: Queued start job for default target default.target. Jan 13 20:56:12.337703 systemd[2280]: Created slice app.slice - User Application Slice. Jan 13 20:56:12.337738 systemd[2280]: Reached target paths.target - Paths. Jan 13 20:56:12.337758 systemd[2280]: Reached target timers.target - Timers. Jan 13 20:56:12.343230 systemd[2280]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:56:12.351235 systemd[2280]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:56:12.351323 systemd[2280]: Reached target sockets.target - Sockets. Jan 13 20:56:12.351343 systemd[2280]: Reached target basic.target - Basic System. Jan 13 20:56:12.351398 systemd[2280]: Reached target default.target - Main User Target. Jan 13 20:56:12.351437 systemd[2280]: Startup finished in 127ms. Jan 13 20:56:12.351582 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:56:12.357232 systemd[1]: Started session-1.scope - Session 1 of User core. 
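The kubelet failure earlier in this boot (run.go:74, missing /var/lib/kubelet/config.yaml) is the expected state on a node that has not yet been joined to a cluster: that file is normally written by kubeadm, not shipped in the image. A hedged sketch of the join that would create it, with placeholders rather than values from this log:

    # Joining a cluster writes /var/lib/kubelet/config.yaml and restarts the kubelet
    kubeadm join <api-server:6443> --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>
    # Until then, the crash loop can be confirmed with:
    systemctl status kubelet; ls -l /var/lib/kubelet/config.yaml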
Jan 13 20:56:12.504122 systemd[1]: Started sshd@1-172.31.31.120:22-139.178.89.65:52786.service - OpenSSH per-connection server daemon (139.178.89.65:52786). Jan 13 20:56:12.659144 sshd[2292]: Accepted publickey for core from 139.178.89.65 port 52786 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:56:12.660542 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:56:12.665293 systemd-logind[1987]: New session 2 of user core. Jan 13 20:56:12.675433 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:56:12.793135 sshd[2295]: Connection closed by 139.178.89.65 port 52786 Jan 13 20:56:12.794867 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Jan 13 20:56:12.799403 systemd[1]: sshd@1-172.31.31.120:22-139.178.89.65:52786.service: Deactivated successfully. Jan 13 20:56:12.803787 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:56:12.804615 systemd-logind[1987]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:56:12.806098 systemd-logind[1987]: Removed session 2. Jan 13 20:56:12.821794 systemd[1]: Started sshd@2-172.31.31.120:22-139.178.89.65:52788.service - OpenSSH per-connection server daemon (139.178.89.65:52788). Jan 13 20:56:12.978395 sshd[2300]: Accepted publickey for core from 139.178.89.65 port 52788 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:56:12.980042 sshd-session[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:56:12.985356 systemd-logind[1987]: New session 3 of user core. Jan 13 20:56:12.994452 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:56:13.108161 sshd[2303]: Connection closed by 139.178.89.65 port 52788 Jan 13 20:56:13.109411 sshd-session[2300]: pam_unix(sshd:session): session closed for user core Jan 13 20:56:13.112253 systemd[1]: sshd@2-172.31.31.120:22-139.178.89.65:52788.service: Deactivated successfully. Jan 13 20:56:13.116857 systemd-logind[1987]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:56:13.117698 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:56:13.120031 systemd-logind[1987]: Removed session 3. Jan 13 20:56:13.143482 systemd[1]: Started sshd@3-172.31.31.120:22-139.178.89.65:52792.service - OpenSSH per-connection server daemon (139.178.89.65:52792). Jan 13 20:56:13.298826 sshd[2308]: Accepted publickey for core from 139.178.89.65 port 52792 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:56:13.300254 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:56:13.305610 systemd-logind[1987]: New session 4 of user core. Jan 13 20:56:13.315480 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:56:13.433138 sshd[2311]: Connection closed by 139.178.89.65 port 52792 Jan 13 20:56:13.433808 sshd-session[2308]: pam_unix(sshd:session): session closed for user core Jan 13 20:56:13.437905 systemd[1]: sshd@3-172.31.31.120:22-139.178.89.65:52792.service: Deactivated successfully. Jan 13 20:56:13.441830 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:56:13.443045 systemd-logind[1987]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:56:13.444106 systemd-logind[1987]: Removed session 4. Jan 13 20:56:13.461414 systemd[1]: Started sshd@4-172.31.31.120:22-139.178.89.65:52798.service - OpenSSH per-connection server daemon (139.178.89.65:52798). 
Jan 13 20:56:13.621677 sshd[2316]: Accepted publickey for core from 139.178.89.65 port 52798 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:56:13.623509 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:56:13.629550 systemd-logind[1987]: New session 5 of user core. Jan 13 20:56:13.637659 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:56:13.778916 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:56:13.779380 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:56:13.801238 sudo[2320]: pam_unix(sudo:session): session closed for user root Jan 13 20:56:13.824389 sshd[2319]: Connection closed by 139.178.89.65 port 52798 Jan 13 20:56:13.825526 sshd-session[2316]: pam_unix(sshd:session): session closed for user core Jan 13 20:56:13.835407 systemd[1]: sshd@4-172.31.31.120:22-139.178.89.65:52798.service: Deactivated successfully. Jan 13 20:56:13.841101 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:56:13.841993 systemd-logind[1987]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:56:13.843401 systemd-logind[1987]: Removed session 5. Jan 13 20:56:13.852458 systemd[1]: Started sshd@5-172.31.31.120:22-139.178.89.65:52802.service - OpenSSH per-connection server daemon (139.178.89.65:52802). Jan 13 20:56:14.012362 sshd[2325]: Accepted publickey for core from 139.178.89.65 port 52802 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:56:14.014419 sshd-session[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:56:14.020334 systemd-logind[1987]: New session 6 of user core. Jan 13 20:56:14.034532 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:56:14.148327 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:56:14.148717 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:56:14.154988 sudo[2330]: pam_unix(sudo:session): session closed for user root Jan 13 20:56:14.161148 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:56:14.162050 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:56:14.192714 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:56:14.255261 augenrules[2352]: No rules Jan 13 20:56:14.257120 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:56:14.257501 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:56:14.261673 sudo[2329]: pam_unix(sudo:session): session closed for user root Jan 13 20:56:14.283858 sshd[2328]: Connection closed by 139.178.89.65 port 52802 Jan 13 20:56:14.285545 sshd-session[2325]: pam_unix(sshd:session): session closed for user core Jan 13 20:56:14.289392 systemd[1]: sshd@5-172.31.31.120:22-139.178.89.65:52802.service: Deactivated successfully. Jan 13 20:56:14.293417 systemd-logind[1987]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:56:14.297466 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:56:14.300948 systemd-logind[1987]: Removed session 6. Jan 13 20:56:14.312573 systemd[1]: Started sshd@6-172.31.31.120:22-139.178.89.65:52808.service - OpenSSH per-connection server daemon (139.178.89.65:52808). 
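The sudo sequence above deletes the SELinux audit rule fragments and restarts audit-rules, after which augenrules correctly reports "No rules". The reload step it triggers corresponds to:

    # Rebuild /etc/audit/audit.rules from the (now empty) rules.d fragments and load it
    augenrules --load
    # Verify the kernel-side ruleset is empty, matching 'No rules' above
    auditctl -l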
Jan 13 20:56:14.473899 sshd[2361]: Accepted publickey for core from 139.178.89.65 port 52808 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:56:14.475466 sshd-session[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:56:14.481240 systemd-logind[1987]: New session 7 of user core. Jan 13 20:56:14.487657 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:56:14.590748 sudo[2365]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:56:14.591154 sudo[2365]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:56:15.289432 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:56:15.291279 (dockerd)[2382]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:56:15.851854 dockerd[2382]: time="2025-01-13T20:56:15.851789692Z" level=info msg="Starting up" Jan 13 20:56:16.892356 dockerd[2382]: time="2025-01-13T20:56:16.892308157Z" level=info msg="Loading containers: start." Jan 13 20:56:17.111269 kernel: Initializing XFRM netlink socket Jan 13 20:56:17.141603 (udev-worker)[2404]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:56:17.234855 systemd-networkd[1569]: docker0: Link UP Jan 13 20:56:17.264492 dockerd[2382]: time="2025-01-13T20:56:17.264446088Z" level=info msg="Loading containers: done." Jan 13 20:56:17.291378 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2718246034-merged.mount: Deactivated successfully. Jan 13 20:56:17.293047 dockerd[2382]: time="2025-01-13T20:56:17.291856190Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:56:17.293047 dockerd[2382]: time="2025-01-13T20:56:17.292017677Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:56:17.293047 dockerd[2382]: time="2025-01-13T20:56:17.292168231Z" level=info msg="Daemon has completed initialization" Jan 13 20:56:17.338883 dockerd[2382]: time="2025-01-13T20:56:17.338825395Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:56:17.339840 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:56:18.620655 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:56:18.651802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:56:18.892450 containerd[2014]: time="2025-01-13T20:56:18.892326972Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:56:19.112281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
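Docker's bring-up above creates the docker0 bridge and settles on the overlay2 storage driver without native diff (this kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, hence the degraded-performance warning). Both facts are checkable after start:

    # Confirm the storage driver the daemon selected
    docker info --format '{{.Driver}}'
    # Show the docker0 bridge that systemd-networkd reported as Link UP
    ip link show docker0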
Jan 13 20:56:19.121622 (kubelet)[2587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:56:19.193843 kubelet[2587]: E0113 20:56:19.193715 2587 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:56:19.201013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:56:19.201314 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:56:19.522828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427328407.mount: Deactivated successfully. Jan 13 20:56:21.433358 containerd[2014]: time="2025-01-13T20:56:21.433308418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:21.435105 containerd[2014]: time="2025-01-13T20:56:21.435033618Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 20:56:21.437227 containerd[2014]: time="2025-01-13T20:56:21.437173687Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:21.440610 containerd[2014]: time="2025-01-13T20:56:21.440556382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:21.443545 containerd[2014]: time="2025-01-13T20:56:21.442174307Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.549796037s" Jan 13 20:56:21.443545 containerd[2014]: time="2025-01-13T20:56:21.442246014Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 20:56:21.468584 containerd[2014]: time="2025-01-13T20:56:21.468358653Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:56:23.701950 containerd[2014]: time="2025-01-13T20:56:23.701905669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:23.703649 containerd[2014]: time="2025-01-13T20:56:23.703544202Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Jan 13 20:56:23.706739 containerd[2014]: time="2025-01-13T20:56:23.705140746Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:23.708983 containerd[2014]: time="2025-01-13T20:56:23.708946904Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:23.710636 containerd[2014]: time="2025-01-13T20:56:23.710522942Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.242129665s" Jan 13 20:56:23.710871 containerd[2014]: time="2025-01-13T20:56:23.710845125Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 20:56:23.740792 containerd[2014]: time="2025-01-13T20:56:23.740756397Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:56:25.160613 containerd[2014]: time="2025-01-13T20:56:25.160546921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:25.162375 containerd[2014]: time="2025-01-13T20:56:25.162326095Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Jan 13 20:56:25.167106 containerd[2014]: time="2025-01-13T20:56:25.166751960Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:25.171993 containerd[2014]: time="2025-01-13T20:56:25.171932358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:25.173520 containerd[2014]: time="2025-01-13T20:56:25.173334958Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.432348509s" Jan 13 20:56:25.173520 containerd[2014]: time="2025-01-13T20:56:25.173374410Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 20:56:25.211658 containerd[2014]: time="2025-01-13T20:56:25.211569032Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:56:26.547735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3905514610.mount: Deactivated successfully. 
Jan 13 20:56:27.296176 containerd[2014]: time="2025-01-13T20:56:27.296120252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:27.298784 containerd[2014]: time="2025-01-13T20:56:27.297873025Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 20:56:27.300048 containerd[2014]: time="2025-01-13T20:56:27.299987118Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:27.302855 containerd[2014]: time="2025-01-13T20:56:27.302794067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:27.304641 containerd[2014]: time="2025-01-13T20:56:27.304022681Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.092352768s" Jan 13 20:56:27.304641 containerd[2014]: time="2025-01-13T20:56:27.304066754Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 20:56:27.336968 containerd[2014]: time="2025-01-13T20:56:27.336940744Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:56:27.975019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362142441.mount: Deactivated successfully. 
Jan 13 20:56:29.319906 containerd[2014]: time="2025-01-13T20:56:29.319741053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:29.332285 containerd[2014]: time="2025-01-13T20:56:29.332210631Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 20:56:29.336687 containerd[2014]: time="2025-01-13T20:56:29.336607451Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:29.342482 containerd[2014]: time="2025-01-13T20:56:29.342105867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:29.343318 containerd[2014]: time="2025-01-13T20:56:29.343281269Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.006148767s" Jan 13 20:56:29.343410 containerd[2014]: time="2025-01-13T20:56:29.343328549Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:56:29.378032 containerd[2014]: time="2025-01-13T20:56:29.377988110Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:56:29.450340 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:56:29.458329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:56:30.331542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:56:30.348710 (kubelet)[2745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:56:30.358929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1750465422.mount: Deactivated successfully. 
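The "Scheduled restart job, restart counter is at 2" line shows systemd's Restart= policy relaunching the still-unconfigured kubelet on its own, rather than an operator intervening. The policy and counter driving it can be read back with:

    # Show the restart policy, backoff interval, and restart count for the unit
    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts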
Jan 13 20:56:30.366953 containerd[2014]: time="2025-01-13T20:56:30.366901008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:30.368963 containerd[2014]: time="2025-01-13T20:56:30.368849386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 20:56:30.373303 containerd[2014]: time="2025-01-13T20:56:30.370988186Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:30.377585 containerd[2014]: time="2025-01-13T20:56:30.377541781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:30.383117 containerd[2014]: time="2025-01-13T20:56:30.382054507Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.00402407s" Jan 13 20:56:30.383117 containerd[2014]: time="2025-01-13T20:56:30.382594147Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 20:56:30.429905 containerd[2014]: time="2025-01-13T20:56:30.429861841Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:56:30.480343 kubelet[2745]: E0113 20:56:30.480259 2745 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:56:30.485591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:56:30.486095 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:56:31.013871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount822291839.mount: Deactivated successfully. 
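
The crash loop above is the expected state of a node that has not yet joined a cluster: kubelet.service starts, finds no /var/lib/kubelet/config.yaml, and exits with status 1 until the bootstrap tooling (kubeadm, judging by the /etc/kubernetes paths later in the log) writes that file. For illustration only, the rough shape of the file it is waiting for, a minimal KubeletConfiguration; the real one on this host is generated rather than hand-written:

    minimal_kubelet_config = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs              # matches "CgroupDriver":"cgroupfs" logged below
    staticPodPath: /etc/kubernetes/manifests
    """
    print(minimal_kubelet_config)
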
Jan 13 20:56:33.808660 containerd[2014]: time="2025-01-13T20:56:33.808607412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:33.810866 containerd[2014]: time="2025-01-13T20:56:33.810793549Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 13 20:56:33.813199 containerd[2014]: time="2025-01-13T20:56:33.813144705Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:33.817907 containerd[2014]: time="2025-01-13T20:56:33.817545922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:56:33.818750 containerd[2014]: time="2025-01-13T20:56:33.818714504Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.388490933s" Jan 13 20:56:33.818821 containerd[2014]: time="2025-01-13T20:56:33.818757008Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 20:56:36.238738 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 20:56:37.964798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:56:37.974422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:56:38.005561 systemd[1]: Reloading requested from client PID 2880 ('systemctl') (unit session-7.scope)... Jan 13 20:56:38.005575 systemd[1]: Reloading... Jan 13 20:56:38.112137 zram_generator::config[2919]: No configuration found. Jan 13 20:56:38.261521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:56:38.358593 systemd[1]: Reloading finished in 352 ms. Jan 13 20:56:38.394246 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:56:38.394440 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:56:38.395037 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:56:38.402548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:56:38.846292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:56:38.858711 (kubelet)[2987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:56:38.927163 kubelet[2987]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:56:38.927684 kubelet[2987]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 13 20:56:38.927684 kubelet[2987]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:56:38.933071 kubelet[2987]: I0113 20:56:38.931613 2987 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:56:39.151415 kubelet[2987]: I0113 20:56:39.150874 2987 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:56:39.151415 kubelet[2987]: I0113 20:56:39.150912 2987 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:56:39.151415 kubelet[2987]: I0113 20:56:39.151314 2987 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:56:39.190626 kubelet[2987]: E0113 20:56:39.190536 2987 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.31.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:39.191006 kubelet[2987]: I0113 20:56:39.190819 2987 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:56:39.208107 kubelet[2987]: I0113 20:56:39.207860 2987 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:56:39.208342 kubelet[2987]: I0113 20:56:39.208321 2987 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:56:39.209877 kubelet[2987]: I0113 20:56:39.209839 2987 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:56:39.210465 kubelet[2987]: I0113 20:56:39.210445 2987 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:56:39.210531 kubelet[2987]: I0113 20:56:39.210470 2987 container_manager_linux.go:301] "Creating device plugin manager" Jan 
13 20:56:39.210631 kubelet[2987]: I0113 20:56:39.210612 2987 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:56:39.211687 kubelet[2987]: I0113 20:56:39.211667 2987 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:56:39.211768 kubelet[2987]: I0113 20:56:39.211695 2987 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:56:39.211768 kubelet[2987]: I0113 20:56:39.211729 2987 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:56:39.211768 kubelet[2987]: I0113 20:56:39.211745 2987 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:56:39.214352 kubelet[2987]: W0113 20:56:39.213350 2987 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.31.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:39.214352 kubelet[2987]: E0113 20:56:39.213415 2987 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:39.214352 kubelet[2987]: W0113 20:56:39.213506 2987 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.31.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-120&limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:39.214352 kubelet[2987]: E0113 20:56:39.213553 2987 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-120&limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:39.215537 kubelet[2987]: I0113 20:56:39.215203 2987 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:56:39.221831 kubelet[2987]: I0113 20:56:39.221802 2987 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:56:39.221955 kubelet[2987]: W0113 20:56:39.221887 2987 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
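
The paired W/E reflector messages above are client-go's list-then-watch loop failing at the LIST step: nothing answers on 172.31.31.120:6443 yet, because the kubelet itself is about to start the apiserver as a static pod. Roughly, the loop behind those messages looks like this (hypothetical list_fn/watch_fn/handle stand in for the typed clients; the real reflector lives in vendor/k8s.io/client-go):

    import time

    def reflect(list_fn, watch_fn, handle, backoff=0.8):
        """Caricature of a client-go reflector: LIST a snapshot, WATCH from its
        resourceVersion, and fall back to LIST after a delay on any failure."""
        while True:
            try:
                _items, rv = list_fn(limit=500, resource_version="0")
                for event in watch_fn(since=rv):
                    handle(event)
            except ConnectionRefusedError:
                time.sleep(backoff)  # the real reflector uses a managed backoff
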
Jan 13 20:56:39.222628 kubelet[2987]: I0113 20:56:39.222608 2987 server.go:1256] "Started kubelet" Jan 13 20:56:39.224110 kubelet[2987]: I0113 20:56:39.223301 2987 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:56:39.225117 kubelet[2987]: I0113 20:56:39.225071 2987 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:56:39.225311 kubelet[2987]: I0113 20:56:39.225296 2987 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:56:39.229360 kubelet[2987]: I0113 20:56:39.229332 2987 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:56:39.229586 kubelet[2987]: I0113 20:56:39.229569 2987 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:56:39.237011 kubelet[2987]: I0113 20:56:39.236976 2987 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:56:39.241663 kubelet[2987]: I0113 20:56:39.241628 2987 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:56:39.241804 kubelet[2987]: I0113 20:56:39.241703 2987 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:56:39.245655 kubelet[2987]: E0113 20:56:39.244808 2987 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.120:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.120:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-120.181a5c08cfeb6582 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-120,UID:ip-172-31-31-120,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-120,},FirstTimestamp:2025-01-13 20:56:39.222551938 +0000 UTC m=+0.357729921,LastTimestamp:2025-01-13 20:56:39.222551938 +0000 UTC m=+0.357729921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-120,}" Jan 13 20:56:39.245655 kubelet[2987]: E0113 20:56:39.244992 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-120?timeout=10s\": dial tcp 172.31.31.120:6443: connect: connection refused" interval="200ms" Jan 13 20:56:39.247277 kubelet[2987]: I0113 20:56:39.247248 2987 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:56:39.250173 kubelet[2987]: W0113 20:56:39.250125 2987 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.31.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:39.250173 kubelet[2987]: E0113 20:56:39.250177 2987 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:39.252059 kubelet[2987]: I0113 20:56:39.251439 2987 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:56:39.252059 
kubelet[2987]: I0113 20:56:39.251460 2987 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:56:39.260323 kubelet[2987]: I0113 20:56:39.260183 2987 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:56:39.262737 kubelet[2987]: I0113 20:56:39.262714 2987 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:56:39.263174 kubelet[2987]: I0113 20:56:39.262884 2987 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:56:39.263174 kubelet[2987]: I0113 20:56:39.262911 2987 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:56:39.263174 kubelet[2987]: E0113 20:56:39.262970 2987 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:56:39.272856 kubelet[2987]: W0113 20:56:39.272798 2987 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.31.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:39.275632 kubelet[2987]: E0113 20:56:39.275608 2987 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.31.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:39.276481 kubelet[2987]: E0113 20:56:39.276200 2987 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:56:39.295037 kubelet[2987]: I0113 20:56:39.295016 2987 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:56:39.295472 kubelet[2987]: I0113 20:56:39.295251 2987 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:56:39.295472 kubelet[2987]: I0113 20:56:39.295275 2987 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:56:39.297954 kubelet[2987]: I0113 20:56:39.297857 2987 policy_none.go:49] "None policy: Start" Jan 13 20:56:39.298724 kubelet[2987]: I0113 20:56:39.298705 2987 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:56:39.299076 kubelet[2987]: I0113 20:56:39.298809 2987 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:56:39.307118 kubelet[2987]: I0113 20:56:39.305731 2987 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:56:39.307118 kubelet[2987]: I0113 20:56:39.305947 2987 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:56:39.310915 kubelet[2987]: E0113 20:56:39.310896 2987 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-120\" not found" Jan 13 20:56:39.338925 kubelet[2987]: I0113 20:56:39.338900 2987 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-120" Jan 13 20:56:39.339481 kubelet[2987]: E0113 20:56:39.339458 2987 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.120:6443/api/v1/nodes\": dial tcp 172.31.31.120:6443: connect: connection refused" node="ip-172-31-31-120" Jan 13 20:56:39.364025 kubelet[2987]: I0113 20:56:39.363988 2987 topology_manager.go:215] "Topology Admit Handler" 
podUID="e5b4533c936ca422bc6990e6098c24ed" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-31-120" Jan 13 20:56:39.365760 kubelet[2987]: I0113 20:56:39.365554 2987 topology_manager.go:215] "Topology Admit Handler" podUID="24eae76b6eceeb61a91291b092406096" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-31-120" Jan 13 20:56:39.367067 kubelet[2987]: I0113 20:56:39.366888 2987 topology_manager.go:215] "Topology Admit Handler" podUID="533b0db962a5bd3fb57c71e0e986da36" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-31-120" Jan 13 20:56:39.444051 kubelet[2987]: I0113 20:56:39.443738 2987 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/24eae76b6eceeb61a91291b092406096-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-120\" (UID: \"24eae76b6eceeb61a91291b092406096\") " pod="kube-system/kube-controller-manager-ip-172-31-31-120" Jan 13 20:56:39.444051 kubelet[2987]: I0113 20:56:39.443787 2987 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24eae76b6eceeb61a91291b092406096-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-120\" (UID: \"24eae76b6eceeb61a91291b092406096\") " pod="kube-system/kube-controller-manager-ip-172-31-31-120" Jan 13 20:56:39.444051 kubelet[2987]: I0113 20:56:39.443818 2987 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5b4533c936ca422bc6990e6098c24ed-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-120\" (UID: \"e5b4533c936ca422bc6990e6098c24ed\") " pod="kube-system/kube-apiserver-ip-172-31-31-120" Jan 13 20:56:39.444051 kubelet[2987]: I0113 20:56:39.443860 2987 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5b4533c936ca422bc6990e6098c24ed-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-120\" (UID: \"e5b4533c936ca422bc6990e6098c24ed\") " pod="kube-system/kube-apiserver-ip-172-31-31-120" Jan 13 20:56:39.444051 kubelet[2987]: I0113 20:56:39.443905 2987 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24eae76b6eceeb61a91291b092406096-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-120\" (UID: \"24eae76b6eceeb61a91291b092406096\") " pod="kube-system/kube-controller-manager-ip-172-31-31-120" Jan 13 20:56:39.444288 kubelet[2987]: I0113 20:56:39.443927 2987 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24eae76b6eceeb61a91291b092406096-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-120\" (UID: \"24eae76b6eceeb61a91291b092406096\") " pod="kube-system/kube-controller-manager-ip-172-31-31-120" Jan 13 20:56:39.444288 kubelet[2987]: I0113 20:56:39.443955 2987 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24eae76b6eceeb61a91291b092406096-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-120\" (UID: \"24eae76b6eceeb61a91291b092406096\") " pod="kube-system/kube-controller-manager-ip-172-31-31-120" Jan 13 20:56:39.444288 kubelet[2987]: I0113 
20:56:39.443995 2987 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/533b0db962a5bd3fb57c71e0e986da36-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-120\" (UID: \"533b0db962a5bd3fb57c71e0e986da36\") " pod="kube-system/kube-scheduler-ip-172-31-31-120" Jan 13 20:56:39.444288 kubelet[2987]: I0113 20:56:39.444034 2987 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5b4533c936ca422bc6990e6098c24ed-ca-certs\") pod \"kube-apiserver-ip-172-31-31-120\" (UID: \"e5b4533c936ca422bc6990e6098c24ed\") " pod="kube-system/kube-apiserver-ip-172-31-31-120" Jan 13 20:56:39.446025 kubelet[2987]: E0113 20:56:39.445965 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-120?timeout=10s\": dial tcp 172.31.31.120:6443: connect: connection refused" interval="400ms" Jan 13 20:56:39.541687 kubelet[2987]: I0113 20:56:39.541655 2987 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-120" Jan 13 20:56:39.545270 kubelet[2987]: E0113 20:56:39.545044 2987 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.120:6443/api/v1/nodes\": dial tcp 172.31.31.120:6443: connect: connection refused" node="ip-172-31-31-120" Jan 13 20:56:39.675613 containerd[2014]: time="2025-01-13T20:56:39.675566189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-120,Uid:533b0db962a5bd3fb57c71e0e986da36,Namespace:kube-system,Attempt:0,}" Jan 13 20:56:39.680223 containerd[2014]: time="2025-01-13T20:56:39.680145123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-120,Uid:e5b4533c936ca422bc6990e6098c24ed,Namespace:kube-system,Attempt:0,}" Jan 13 20:56:39.680745 containerd[2014]: time="2025-01-13T20:56:39.680701909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-120,Uid:24eae76b6eceeb61a91291b092406096,Namespace:kube-system,Attempt:0,}" Jan 13 20:56:39.847989 kubelet[2987]: E0113 20:56:39.847309 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-120?timeout=10s\": dial tcp 172.31.31.120:6443: connect: connection refused" interval="800ms" Jan 13 20:56:39.947146 kubelet[2987]: I0113 20:56:39.947114 2987 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-120" Jan 13 20:56:39.947671 kubelet[2987]: E0113 20:56:39.947650 2987 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.120:6443/api/v1/nodes\": dial tcp 172.31.31.120:6443: connect: connection refused" node="ip-172-31-31-120" Jan 13 20:56:40.318121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054497373.mount: Deactivated successfully. 
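
Note the interval on the repeated "Failed to ensure lease exists, will retry" errors: 200ms, then 400ms, then 800ms above (and 1.6s further down), i.e. the node-lease controller doubles its retry delay after each failure. The schedule, sketched with an assumed cap for illustration:

    from itertools import islice

    def lease_retry_intervals(base=0.2, cap=7.0):
        """Exponential backoff matching the doubling intervals seen in the log."""
        interval = base
        while True:
            yield min(interval, cap)
            interval *= 2

    print(list(islice(lease_retry_intervals(), 6)))
    # [0.2, 0.4, 0.8, 1.6, 3.2, 6.4]
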
Jan 13 20:56:40.335125 kubelet[2987]: W0113 20:56:40.335008 2987 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.31.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:40.335125 kubelet[2987]: E0113 20:56:40.335102 2987 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:40.339495 containerd[2014]: time="2025-01-13T20:56:40.339107334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:56:40.344674 containerd[2014]: time="2025-01-13T20:56:40.344621006Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:56:40.346909 containerd[2014]: time="2025-01-13T20:56:40.346871324Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:56:40.348880 containerd[2014]: time="2025-01-13T20:56:40.348838638Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:56:40.353633 containerd[2014]: time="2025-01-13T20:56:40.352422161Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:56:40.354673 containerd[2014]: time="2025-01-13T20:56:40.354636792Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:56:40.356936 containerd[2014]: time="2025-01-13T20:56:40.356893595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:56:40.357796 containerd[2014]: time="2025-01-13T20:56:40.357765223Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 676.968897ms" Jan 13 20:56:40.359584 containerd[2014]: time="2025-01-13T20:56:40.358809827Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:56:40.363153 containerd[2014]: time="2025-01-13T20:56:40.363118958Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 682.875145ms" Jan 13 20:56:40.367713 containerd[2014]: time="2025-01-13T20:56:40.367668982Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 690.635033ms" Jan 13 20:56:40.424910 kubelet[2987]: W0113 20:56:40.424823 2987 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.31.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-120&limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:40.424910 kubelet[2987]: E0113 20:56:40.424914 2987 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-120&limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:40.594018 containerd[2014]: time="2025-01-13T20:56:40.592826924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:56:40.594018 containerd[2014]: time="2025-01-13T20:56:40.592909903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:56:40.594018 containerd[2014]: time="2025-01-13T20:56:40.592927464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:56:40.594018 containerd[2014]: time="2025-01-13T20:56:40.593027112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:56:40.598108 containerd[2014]: time="2025-01-13T20:56:40.596519152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:56:40.598108 containerd[2014]: time="2025-01-13T20:56:40.596586507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:56:40.598108 containerd[2014]: time="2025-01-13T20:56:40.596612119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:56:40.598108 containerd[2014]: time="2025-01-13T20:56:40.596721841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:56:40.609647 containerd[2014]: time="2025-01-13T20:56:40.609314457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:56:40.609647 containerd[2014]: time="2025-01-13T20:56:40.609393014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:56:40.609647 containerd[2014]: time="2025-01-13T20:56:40.609413422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:56:40.609647 containerd[2014]: time="2025-01-13T20:56:40.609538995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:56:40.648393 kubelet[2987]: E0113 20:56:40.647756 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-120?timeout=10s\": dial tcp 172.31.31.120:6443: connect: connection refused" interval="1.6s" Jan 13 20:56:40.678104 kubelet[2987]: W0113 20:56:40.673316 2987 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.31.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:40.678104 kubelet[2987]: E0113 20:56:40.673443 2987 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:40.749545 containerd[2014]: time="2025-01-13T20:56:40.749493914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-120,Uid:24eae76b6eceeb61a91291b092406096,Namespace:kube-system,Attempt:0,} returns sandbox id \"71dcbdd32019c2984173499565e42c5c3b5ee7995b16c6dfbb397641c2dc0e2b\"" Jan 13 20:56:40.760284 kubelet[2987]: I0113 20:56:40.760254 2987 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-120" Jan 13 20:56:40.760585 kubelet[2987]: E0113 20:56:40.760558 2987 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.120:6443/api/v1/nodes\": dial tcp 172.31.31.120:6443: connect: connection refused" node="ip-172-31-31-120" Jan 13 20:56:40.761372 containerd[2014]: time="2025-01-13T20:56:40.761320031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-120,Uid:533b0db962a5bd3fb57c71e0e986da36,Namespace:kube-system,Attempt:0,} returns sandbox id \"6921ea3cb5e676ae2cf24d9309714314ffc88bbe8c64af66c6f902d7d0bbc128\"" Jan 13 20:56:40.763373 containerd[2014]: time="2025-01-13T20:56:40.763335152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-120,Uid:e5b4533c936ca422bc6990e6098c24ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"b18e06524bc983bbb98040460b35f4b78ab4a5d061fa94525fc7d2a4fbfd6df6\"" Jan 13 20:56:40.769613 kubelet[2987]: W0113 20:56:40.764730 2987 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.31.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:40.769613 kubelet[2987]: E0113 20:56:40.769280 2987 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.31.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:40.769859 containerd[2014]: time="2025-01-13T20:56:40.769460578Z" level=info msg="CreateContainer within sandbox \"71dcbdd32019c2984173499565e42c5c3b5ee7995b16c6dfbb397641c2dc0e2b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:56:40.773306 containerd[2014]: time="2025-01-13T20:56:40.773273328Z" level=info msg="CreateContainer within sandbox 
\"6921ea3cb5e676ae2cf24d9309714314ffc88bbe8c64af66c6f902d7d0bbc128\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:56:40.773564 containerd[2014]: time="2025-01-13T20:56:40.773512194Z" level=info msg="CreateContainer within sandbox \"b18e06524bc983bbb98040460b35f4b78ab4a5d061fa94525fc7d2a4fbfd6df6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:56:40.830222 containerd[2014]: time="2025-01-13T20:56:40.829923378Z" level=info msg="CreateContainer within sandbox \"71dcbdd32019c2984173499565e42c5c3b5ee7995b16c6dfbb397641c2dc0e2b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b95b9f0ed478fcb995f61398b4465f99c7b85b400376720e91f28e52f2bfc7df\"" Jan 13 20:56:40.831892 containerd[2014]: time="2025-01-13T20:56:40.831866070Z" level=info msg="CreateContainer within sandbox \"b18e06524bc983bbb98040460b35f4b78ab4a5d061fa94525fc7d2a4fbfd6df6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"05a80cb16f8423a52cd36a37833fedcce4b9c3f152c1d9592b7f447b5048539d\"" Jan 13 20:56:40.833373 containerd[2014]: time="2025-01-13T20:56:40.832256711Z" level=info msg="StartContainer for \"b95b9f0ed478fcb995f61398b4465f99c7b85b400376720e91f28e52f2bfc7df\"" Jan 13 20:56:40.835249 containerd[2014]: time="2025-01-13T20:56:40.835215867Z" level=info msg="CreateContainer within sandbox \"6921ea3cb5e676ae2cf24d9309714314ffc88bbe8c64af66c6f902d7d0bbc128\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3e555d4584cc59368dd12ba2596d42d195e8031097904899d0a63fba28807697\"" Jan 13 20:56:40.835421 containerd[2014]: time="2025-01-13T20:56:40.835394634Z" level=info msg="StartContainer for \"05a80cb16f8423a52cd36a37833fedcce4b9c3f152c1d9592b7f447b5048539d\"" Jan 13 20:56:40.848304 containerd[2014]: time="2025-01-13T20:56:40.846863049Z" level=info msg="StartContainer for \"3e555d4584cc59368dd12ba2596d42d195e8031097904899d0a63fba28807697\"" Jan 13 20:56:40.982768 containerd[2014]: time="2025-01-13T20:56:40.982720698Z" level=info msg="StartContainer for \"05a80cb16f8423a52cd36a37833fedcce4b9c3f152c1d9592b7f447b5048539d\" returns successfully" Jan 13 20:56:41.001534 containerd[2014]: time="2025-01-13T20:56:41.001397404Z" level=info msg="StartContainer for \"b95b9f0ed478fcb995f61398b4465f99c7b85b400376720e91f28e52f2bfc7df\" returns successfully" Jan 13 20:56:41.017711 containerd[2014]: time="2025-01-13T20:56:41.017592219Z" level=info msg="StartContainer for \"3e555d4584cc59368dd12ba2596d42d195e8031097904899d0a63fba28807697\" returns successfully" Jan 13 20:56:41.242417 kubelet[2987]: E0113 20:56:41.242385 2987 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.120:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.120:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-120.181a5c08cfeb6582 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-120,UID:ip-172-31-31-120,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-120,},FirstTimestamp:2025-01-13 20:56:39.222551938 +0000 UTC m=+0.357729921,LastTimestamp:2025-01-13 20:56:39.222551938 +0000 UTC m=+0.357729921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-120,}" Jan 13 20:56:41.337072 kubelet[2987]: 
E0113 20:56:41.337031 2987 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.31.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.31.120:6443: connect: connection refused Jan 13 20:56:42.364337 kubelet[2987]: I0113 20:56:42.364307 2987 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-120" Jan 13 20:56:43.791922 kubelet[2987]: E0113 20:56:43.791889 2987 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-120\" not found" node="ip-172-31-31-120" Jan 13 20:56:43.846102 kubelet[2987]: I0113 20:56:43.844996 2987 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-31-120" Jan 13 20:56:44.217118 kubelet[2987]: I0113 20:56:44.216954 2987 apiserver.go:52] "Watching apiserver" Jan 13 20:56:44.242424 kubelet[2987]: I0113 20:56:44.242372 2987 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:56:47.039799 systemd[1]: Reloading requested from client PID 3258 ('systemctl') (unit session-7.scope)... Jan 13 20:56:47.039879 systemd[1]: Reloading... Jan 13 20:56:47.152122 zram_generator::config[3298]: No configuration found. Jan 13 20:56:47.369597 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:56:47.474281 systemd[1]: Reloading finished in 433 ms. Jan 13 20:56:47.520564 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:56:47.523103 kubelet[2987]: I0113 20:56:47.521213 2987 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:56:47.530732 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:56:47.531363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:56:47.538859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:56:47.966369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:56:47.966617 (kubelet)[3365]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:56:48.080566 kubelet[3365]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:56:48.080566 kubelet[3365]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:56:48.080566 kubelet[3365]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
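
Unlike kubelet 2987, which died without ever reaching the API server, the restarted instance (3365) just below finds an existing cert/key pair at /var/lib/kubelet/pki/kubelet-client-current.pem, so client rotation can bootstrap in the background instead of failing CSR posts. A quick way to inspect that rotated certificate on such a host, assuming the openssl CLI is available:

    import subprocess

    pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
    # Print the subject and expiry of the kubelet's current client certificate.
    subprocess.run(["openssl", "x509", "-in", pem, "-noout", "-subject", "-enddate"], check=True)
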
Jan 13 20:56:48.082149 kubelet[3365]: I0113 20:56:48.082075 3365 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:56:48.088437 kubelet[3365]: I0113 20:56:48.088414 3365 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:56:48.089137 kubelet[3365]: I0113 20:56:48.088552 3365 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:56:48.089137 kubelet[3365]: I0113 20:56:48.088764 3365 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:56:48.092921 kubelet[3365]: I0113 20:56:48.092889 3365 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:56:48.109593 kubelet[3365]: I0113 20:56:48.107586 3365 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:56:48.116280 sudo[3377]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:56:48.117287 sudo[3377]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:56:48.121510 kubelet[3365]: I0113 20:56:48.121035 3365 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:56:48.121818 kubelet[3365]: I0113 20:56:48.121702 3365 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:56:48.121978 kubelet[3365]: I0113 20:56:48.121950 3365 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:56:48.122141 kubelet[3365]: I0113 20:56:48.121986 3365 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:56:48.122141 kubelet[3365]: I0113 20:56:48.122001 3365 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:56:48.122141 kubelet[3365]: I0113 20:56:48.122045 3365 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:56:48.122831 kubelet[3365]: I0113 20:56:48.122442 3365 kubelet.go:396] "Attempting to sync node with API server" Jan 13 
20:56:48.122831 kubelet[3365]: I0113 20:56:48.122576 3365 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:56:48.122831 kubelet[3365]: I0113 20:56:48.122655 3365 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:56:48.122831 kubelet[3365]: I0113 20:56:48.122676 3365 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:56:48.131224 kubelet[3365]: I0113 20:56:48.129260 3365 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:56:48.131224 kubelet[3365]: I0113 20:56:48.129484 3365 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:56:48.131224 kubelet[3365]: I0113 20:56:48.129968 3365 server.go:1256] "Started kubelet" Jan 13 20:56:48.135278 kubelet[3365]: I0113 20:56:48.135254 3365 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:56:48.143889 kubelet[3365]: I0113 20:56:48.143814 3365 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:56:48.146752 kubelet[3365]: I0113 20:56:48.146647 3365 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:56:48.159407 kubelet[3365]: I0113 20:56:48.157345 3365 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:56:48.159407 kubelet[3365]: I0113 20:56:48.157574 3365 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:56:48.162416 kubelet[3365]: I0113 20:56:48.161846 3365 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:56:48.162416 kubelet[3365]: I0113 20:56:48.162264 3365 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:56:48.162586 kubelet[3365]: I0113 20:56:48.162424 3365 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:56:48.178065 kubelet[3365]: I0113 20:56:48.178028 3365 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:56:48.178557 kubelet[3365]: I0113 20:56:48.178529 3365 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:56:48.187448 kubelet[3365]: I0113 20:56:48.187426 3365 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:56:48.188186 kubelet[3365]: I0113 20:56:48.188169 3365 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:56:48.197960 kubelet[3365]: I0113 20:56:48.197811 3365 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:56:48.198379 kubelet[3365]: I0113 20:56:48.198342 3365 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:56:48.198697 kubelet[3365]: I0113 20:56:48.198626 3365 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:56:48.199825 kubelet[3365]: E0113 20:56:48.199200 3365 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:56:48.273144 kubelet[3365]: I0113 20:56:48.273116 3365 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-120" Jan 13 20:56:48.290247 kubelet[3365]: I0113 20:56:48.288657 3365 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-31-120" Jan 13 20:56:48.290247 kubelet[3365]: I0113 20:56:48.288739 3365 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-31-120" Jan 13 20:56:48.300421 kubelet[3365]: E0113 20:56:48.299805 3365 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:56:48.362680 kubelet[3365]: I0113 20:56:48.361977 3365 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:56:48.362680 kubelet[3365]: I0113 20:56:48.362001 3365 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:56:48.362680 kubelet[3365]: I0113 20:56:48.362019 3365 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:56:48.362680 kubelet[3365]: I0113 20:56:48.362231 3365 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:56:48.362680 kubelet[3365]: I0113 20:56:48.362259 3365 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:56:48.362680 kubelet[3365]: I0113 20:56:48.362269 3365 policy_none.go:49] "None policy: Start" Jan 13 20:56:48.365267 kubelet[3365]: I0113 20:56:48.363829 3365 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:56:48.365267 kubelet[3365]: I0113 20:56:48.363856 3365 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:56:48.365267 kubelet[3365]: I0113 20:56:48.364029 3365 state_mem.go:75] "Updated machine memory state" Jan 13 20:56:48.367296 kubelet[3365]: I0113 20:56:48.367278 3365 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:56:48.371010 kubelet[3365]: I0113 20:56:48.370991 3365 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:56:48.500307 kubelet[3365]: I0113 20:56:48.500269 3365 topology_manager.go:215] "Topology Admit Handler" podUID="e5b4533c936ca422bc6990e6098c24ed" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-31-120" Jan 13 20:56:48.500454 kubelet[3365]: I0113 20:56:48.500375 3365 topology_manager.go:215] "Topology Admit Handler" podUID="24eae76b6eceeb61a91291b092406096" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-31-120" Jan 13 20:56:48.500454 kubelet[3365]: I0113 20:56:48.500435 3365 topology_manager.go:215] "Topology Admit Handler" podUID="533b0db962a5bd3fb57c71e0e986da36" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-31-120" Jan 13 20:56:48.564349 kubelet[3365]: I0113 20:56:48.564244 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24eae76b6eceeb61a91291b092406096-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-120\" (UID: \"24eae76b6eceeb61a91291b092406096\") " 
pod="kube-system/kube-controller-manager-ip-172-31-31-120" Jan 13 20:56:48.564349 kubelet[3365]: I0113 20:56:48.564303 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5b4533c936ca422bc6990e6098c24ed-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-120\" (UID: \"e5b4533c936ca422bc6990e6098c24ed\") " pod="kube-system/kube-apiserver-ip-172-31-31-120" Jan 13 20:56:48.566095 kubelet[3365]: I0113 20:56:48.564331 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24eae76b6eceeb61a91291b092406096-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-120\" (UID: \"24eae76b6eceeb61a91291b092406096\") " pod="kube-system/kube-controller-manager-ip-172-31-31-120" Jan 13 20:56:48.566095 kubelet[3365]: I0113 20:56:48.564673 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/24eae76b6eceeb61a91291b092406096-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-120\" (UID: \"24eae76b6eceeb61a91291b092406096\") " pod="kube-system/kube-controller-manager-ip-172-31-31-120" Jan 13 20:56:48.566095 kubelet[3365]: I0113 20:56:48.564834 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24eae76b6eceeb61a91291b092406096-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-120\" (UID: \"24eae76b6eceeb61a91291b092406096\") " pod="kube-system/kube-controller-manager-ip-172-31-31-120" Jan 13 20:56:48.566095 kubelet[3365]: I0113 20:56:48.564973 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/533b0db962a5bd3fb57c71e0e986da36-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-120\" (UID: \"533b0db962a5bd3fb57c71e0e986da36\") " pod="kube-system/kube-scheduler-ip-172-31-31-120" Jan 13 20:56:48.566095 kubelet[3365]: I0113 20:56:48.565044 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5b4533c936ca422bc6990e6098c24ed-ca-certs\") pod \"kube-apiserver-ip-172-31-31-120\" (UID: \"e5b4533c936ca422bc6990e6098c24ed\") " pod="kube-system/kube-apiserver-ip-172-31-31-120" Jan 13 20:56:48.566379 kubelet[3365]: I0113 20:56:48.565229 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5b4533c936ca422bc6990e6098c24ed-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-120\" (UID: \"e5b4533c936ca422bc6990e6098c24ed\") " pod="kube-system/kube-apiserver-ip-172-31-31-120" Jan 13 20:56:48.566379 kubelet[3365]: I0113 20:56:48.565288 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24eae76b6eceeb61a91291b092406096-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-120\" (UID: \"24eae76b6eceeb61a91291b092406096\") " pod="kube-system/kube-controller-manager-ip-172-31-31-120" Jan 13 20:56:48.876677 sudo[3377]: pam_unix(sudo:session): session closed for user root Jan 13 20:56:49.134184 kubelet[3365]: I0113 20:56:49.134059 3365 apiserver.go:52] "Watching apiserver" 
Jan 13 20:56:49.163099 kubelet[3365]: I0113 20:56:49.163045 3365 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:56:49.255922 kubelet[3365]: E0113 20:56:49.255886 3365 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-120\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-120" Jan 13 20:56:49.303926 kubelet[3365]: I0113 20:56:49.303888 3365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-120" podStartSLOduration=1.30383182 podStartE2EDuration="1.30383182s" podCreationTimestamp="2025-01-13 20:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:56:49.302601238 +0000 UTC m=+1.309959842" watchObservedRunningTime="2025-01-13 20:56:49.30383182 +0000 UTC m=+1.311190401" Jan 13 20:56:49.322873 kubelet[3365]: I0113 20:56:49.320948 3365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-120" podStartSLOduration=1.320900261 podStartE2EDuration="1.320900261s" podCreationTimestamp="2025-01-13 20:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:56:49.320901362 +0000 UTC m=+1.328259963" watchObservedRunningTime="2025-01-13 20:56:49.320900261 +0000 UTC m=+1.328258860" Jan 13 20:56:49.361590 kubelet[3365]: I0113 20:56:49.361537 3365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-120" podStartSLOduration=1.361497038 podStartE2EDuration="1.361497038s" podCreationTimestamp="2025-01-13 20:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:56:49.337451094 +0000 UTC m=+1.344809694" watchObservedRunningTime="2025-01-13 20:56:49.361497038 +0000 UTC m=+1.368855639" Jan 13 20:56:50.343784 update_engine[1991]: I20250113 20:56:50.343714 1991 update_attempter.cc:509] Updating boot flags... Jan 13 20:56:50.456103 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3431) Jan 13 20:56:50.708878 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3433) Jan 13 20:56:50.940234 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3433) Jan 13 20:56:51.287739 sudo[2365]: pam_unix(sudo:session): session closed for user root Jan 13 20:56:51.310277 sshd[2364]: Connection closed by 139.178.89.65 port 52808 Jan 13 20:56:51.312254 sshd-session[2361]: pam_unix(sshd:session): session closed for user core Jan 13 20:56:51.322562 systemd[1]: sshd@6-172.31.31.120:22-139.178.89.65:52808.service: Deactivated successfully. Jan 13 20:56:51.339587 systemd-logind[1987]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:56:51.340973 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:56:51.347238 systemd-logind[1987]: Removed session 7. 
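
The podStartSLOduration figures above line up with plain timestamp subtraction: for these static control-plane pods no image pull happens (both pull timestamps are the zero time), so the tracker reports watchObservedRunningTime minus podCreationTimestamp. Checking the scheduler pod's numbers:

    from datetime import datetime, timezone

    created = datetime(2025, 1, 13, 20, 56, 48, tzinfo=timezone.utc)
    observed = datetime(2025, 1, 13, 20, 56, 49, 303832, tzinfo=timezone.utc)  # 20:56:49.30383182, to µs
    print((observed - created).total_seconds())  # 1.303832 ~= podStartSLOduration=1.30383182s
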
Jan 13 20:56:59.037328 kubelet[3365]: I0113 20:56:59.037290 3365 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:56:59.038300 containerd[2014]: time="2025-01-13T20:56:59.037844732Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:56:59.039978 kubelet[3365]: I0113 20:56:59.039028 3365 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:56:59.928746 kubelet[3365]: I0113 20:56:59.928702 3365 topology_manager.go:215] "Topology Admit Handler" podUID="ff79c9c2-f54b-4e7a-af5a-e25e046fecae" podNamespace="kube-system" podName="kube-proxy-xmdz9" Jan 13 20:56:59.942181 kubelet[3365]: I0113 20:56:59.939480 3365 topology_manager.go:215] "Topology Admit Handler" podUID="56ccc1a6-1542-49ad-8b9d-0457be27a4b3" podNamespace="kube-system" podName="cilium-c2q6s" Jan 13 20:56:59.959932 kubelet[3365]: I0113 20:56:59.955855 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cilium-cgroup\") pod \"cilium-c2q6s\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") " pod="kube-system/cilium-c2q6s" Jan 13 20:56:59.959932 kubelet[3365]: I0113 20:56:59.955902 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-lib-modules\") pod \"cilium-c2q6s\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") " pod="kube-system/cilium-c2q6s" Jan 13 20:56:59.959932 kubelet[3365]: I0113 20:56:59.955959 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-clustermesh-secrets\") pod \"cilium-c2q6s\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") " pod="kube-system/cilium-c2q6s" Jan 13 20:56:59.959932 kubelet[3365]: I0113 20:56:59.955990 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff79c9c2-f54b-4e7a-af5a-e25e046fecae-kube-proxy\") pod \"kube-proxy-xmdz9\" (UID: \"ff79c9c2-f54b-4e7a-af5a-e25e046fecae\") " pod="kube-system/kube-proxy-xmdz9" Jan 13 20:56:59.959932 kubelet[3365]: I0113 20:56:59.956020 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cni-path\") pod \"cilium-c2q6s\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") " pod="kube-system/cilium-c2q6s" Jan 13 20:56:59.959932 kubelet[3365]: I0113 20:56:59.956115 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-etc-cni-netd\") pod \"cilium-c2q6s\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") " pod="kube-system/cilium-c2q6s" Jan 13 20:56:59.960446 kubelet[3365]: I0113 20:56:59.956142 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cilium-config-path\") pod \"cilium-c2q6s\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") " pod="kube-system/cilium-c2q6s" Jan 
13 20:56:59.960446 kubelet[3365]: I0113 20:56:59.956167 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff79c9c2-f54b-4e7a-af5a-e25e046fecae-lib-modules\") pod \"kube-proxy-xmdz9\" (UID: \"ff79c9c2-f54b-4e7a-af5a-e25e046fecae\") " pod="kube-system/kube-proxy-xmdz9" Jan 13 20:56:59.960446 kubelet[3365]: I0113 20:56:59.956195 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff79c9c2-f54b-4e7a-af5a-e25e046fecae-xtables-lock\") pod \"kube-proxy-xmdz9\" (UID: \"ff79c9c2-f54b-4e7a-af5a-e25e046fecae\") " pod="kube-system/kube-proxy-xmdz9" Jan 13 20:56:59.960446 kubelet[3365]: I0113 20:56:59.956225 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-host-proc-sys-kernel\") pod \"cilium-c2q6s\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") " pod="kube-system/cilium-c2q6s" Jan 13 20:56:59.960446 kubelet[3365]: I0113 20:56:59.956258 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krppj\" (UniqueName: \"kubernetes.io/projected/ff79c9c2-f54b-4e7a-af5a-e25e046fecae-kube-api-access-krppj\") pod \"kube-proxy-xmdz9\" (UID: \"ff79c9c2-f54b-4e7a-af5a-e25e046fecae\") " pod="kube-system/kube-proxy-xmdz9" Jan 13 20:56:59.960657 kubelet[3365]: I0113 20:56:59.956357 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cilium-run\") pod \"cilium-c2q6s\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") " pod="kube-system/cilium-c2q6s" Jan 13 20:56:59.960657 kubelet[3365]: I0113 20:56:59.956386 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-hubble-tls\") pod \"cilium-c2q6s\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") " pod="kube-system/cilium-c2q6s" Jan 13 20:56:59.960657 kubelet[3365]: I0113 20:56:59.956415 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-bpf-maps\") pod \"cilium-c2q6s\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") " pod="kube-system/cilium-c2q6s" Jan 13 20:56:59.960657 kubelet[3365]: I0113 20:56:59.956440 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-xtables-lock\") pod \"cilium-c2q6s\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") " pod="kube-system/cilium-c2q6s" Jan 13 20:56:59.960657 kubelet[3365]: I0113 20:56:59.956473 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75ccr\" (UniqueName: \"kubernetes.io/projected/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-kube-api-access-75ccr\") pod \"cilium-c2q6s\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") " pod="kube-system/cilium-c2q6s" Jan 13 20:56:59.960657 kubelet[3365]: I0113 20:56:59.956501 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-hostproc\") pod \"cilium-c2q6s\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") " pod="kube-system/cilium-c2q6s" Jan 13 20:56:59.961035 kubelet[3365]: I0113 20:56:59.956533 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-host-proc-sys-net\") pod \"cilium-c2q6s\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") " pod="kube-system/cilium-c2q6s" Jan 13 20:57:00.249393 containerd[2014]: time="2025-01-13T20:57:00.249351222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xmdz9,Uid:ff79c9c2-f54b-4e7a-af5a-e25e046fecae,Namespace:kube-system,Attempt:0,}" Jan 13 20:57:00.267880 containerd[2014]: time="2025-01-13T20:57:00.267827285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c2q6s,Uid:56ccc1a6-1542-49ad-8b9d-0457be27a4b3,Namespace:kube-system,Attempt:0,}" Jan 13 20:57:00.323928 kubelet[3365]: I0113 20:57:00.323549 3365 topology_manager.go:215] "Topology Admit Handler" podUID="ea5d2761-b18b-4191-bd76-c3732e2cf6c5" podNamespace="kube-system" podName="cilium-operator-5cc964979-q9d6q" Jan 13 20:57:00.368616 kubelet[3365]: I0113 20:57:00.368431 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea5d2761-b18b-4191-bd76-c3732e2cf6c5-cilium-config-path\") pod \"cilium-operator-5cc964979-q9d6q\" (UID: \"ea5d2761-b18b-4191-bd76-c3732e2cf6c5\") " pod="kube-system/cilium-operator-5cc964979-q9d6q" Jan 13 20:57:00.368616 kubelet[3365]: I0113 20:57:00.368496 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26x4p\" (UniqueName: \"kubernetes.io/projected/ea5d2761-b18b-4191-bd76-c3732e2cf6c5-kube-api-access-26x4p\") pod \"cilium-operator-5cc964979-q9d6q\" (UID: \"ea5d2761-b18b-4191-bd76-c3732e2cf6c5\") " pod="kube-system/cilium-operator-5cc964979-q9d6q" Jan 13 20:57:00.401155 containerd[2014]: time="2025-01-13T20:57:00.400557870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:57:00.401155 containerd[2014]: time="2025-01-13T20:57:00.400656397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:57:00.401155 containerd[2014]: time="2025-01-13T20:57:00.400676995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:57:00.401155 containerd[2014]: time="2025-01-13T20:57:00.400801744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:57:00.414356 containerd[2014]: time="2025-01-13T20:57:00.414240225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:57:00.414356 containerd[2014]: time="2025-01-13T20:57:00.414312241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:57:00.415099 containerd[2014]: time="2025-01-13T20:57:00.414334790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:57:00.415099 containerd[2014]: time="2025-01-13T20:57:00.415014684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:57:00.532613 containerd[2014]: time="2025-01-13T20:57:00.531629279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c2q6s,Uid:56ccc1a6-1542-49ad-8b9d-0457be27a4b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\"" Jan 13 20:57:00.535616 containerd[2014]: time="2025-01-13T20:57:00.535581560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xmdz9,Uid:ff79c9c2-f54b-4e7a-af5a-e25e046fecae,Namespace:kube-system,Attempt:0,} returns sandbox id \"35c3f84e69aaeb5a4256376238ad343341e978e4c66b1cae3a922af01fb19758\"" Jan 13 20:57:00.537073 containerd[2014]: time="2025-01-13T20:57:00.535900226Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:57:00.540280 containerd[2014]: time="2025-01-13T20:57:00.540145836Z" level=info msg="CreateContainer within sandbox \"35c3f84e69aaeb5a4256376238ad343341e978e4c66b1cae3a922af01fb19758\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:57:00.572471 containerd[2014]: time="2025-01-13T20:57:00.572432587Z" level=info msg="CreateContainer within sandbox \"35c3f84e69aaeb5a4256376238ad343341e978e4c66b1cae3a922af01fb19758\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2fee15eaf0a93d11b79d36579d37aced39d107781fb9c17d2f5834f16d9d0998\"" Jan 13 20:57:00.573955 containerd[2014]: time="2025-01-13T20:57:00.573194520Z" level=info msg="StartContainer for \"2fee15eaf0a93d11b79d36579d37aced39d107781fb9c17d2f5834f16d9d0998\"" Jan 13 20:57:00.637413 containerd[2014]: time="2025-01-13T20:57:00.637361510Z" level=info msg="StartContainer for \"2fee15eaf0a93d11b79d36579d37aced39d107781fb9c17d2f5834f16d9d0998\" returns successfully" Jan 13 20:57:00.654218 containerd[2014]: time="2025-01-13T20:57:00.654175971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-q9d6q,Uid:ea5d2761-b18b-4191-bd76-c3732e2cf6c5,Namespace:kube-system,Attempt:0,}" Jan 13 20:57:00.696672 containerd[2014]: time="2025-01-13T20:57:00.696560730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:57:00.696672 containerd[2014]: time="2025-01-13T20:57:00.696620070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:57:00.696672 containerd[2014]: time="2025-01-13T20:57:00.696635705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:57:00.697234 containerd[2014]: time="2025-01-13T20:57:00.696809684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:57:00.778660 containerd[2014]: time="2025-01-13T20:57:00.778623458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-q9d6q,Uid:ea5d2761-b18b-4191-bd76-c3732e2cf6c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e\"" Jan 13 20:57:08.278027 kubelet[3365]: I0113 20:57:08.277986 3365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xmdz9" podStartSLOduration=9.277945824 podStartE2EDuration="9.277945824s" podCreationTimestamp="2025-01-13 20:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:57:01.304607 +0000 UTC m=+13.311965602" watchObservedRunningTime="2025-01-13 20:57:08.277945824 +0000 UTC m=+20.285304424" Jan 13 20:57:08.634974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22454834.mount: Deactivated successfully. Jan 13 20:57:15.921483 containerd[2014]: time="2025-01-13T20:57:15.921428441Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:57:15.982604 containerd[2014]: time="2025-01-13T20:57:15.982555581Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166733503" Jan 13 20:57:15.997980 containerd[2014]: time="2025-01-13T20:57:15.997924362Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:57:16.000205 containerd[2014]: time="2025-01-13T20:57:16.000158644Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.464005454s" Jan 13 20:57:16.000205 containerd[2014]: time="2025-01-13T20:57:16.000207263Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 20:57:16.002250 containerd[2014]: time="2025-01-13T20:57:16.001532419Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:57:16.002992 containerd[2014]: time="2025-01-13T20:57:16.002959190Z" level=info msg="CreateContainer within sandbox \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:57:16.110287 containerd[2014]: time="2025-01-13T20:57:16.110230118Z" level=info msg="CreateContainer within sandbox \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da\"" Jan 13 20:57:16.114355 containerd[2014]: time="2025-01-13T20:57:16.114129392Z" level=info 
msg="StartContainer for \"1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da\"" Jan 13 20:57:16.297459 containerd[2014]: time="2025-01-13T20:57:16.297218188Z" level=info msg="StartContainer for \"1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da\" returns successfully" Jan 13 20:57:16.360971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da-rootfs.mount: Deactivated successfully. Jan 13 20:57:17.144502 containerd[2014]: time="2025-01-13T20:57:17.105394364Z" level=info msg="shim disconnected" id=1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da namespace=k8s.io Jan 13 20:57:17.145430 containerd[2014]: time="2025-01-13T20:57:17.144510105Z" level=warning msg="cleaning up after shim disconnected" id=1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da namespace=k8s.io Jan 13 20:57:17.145430 containerd[2014]: time="2025-01-13T20:57:17.144530632Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:57:17.431938 containerd[2014]: time="2025-01-13T20:57:17.431690399Z" level=info msg="CreateContainer within sandbox \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:57:17.535487 containerd[2014]: time="2025-01-13T20:57:17.535438701Z" level=info msg="CreateContainer within sandbox \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544\"" Jan 13 20:57:17.541587 containerd[2014]: time="2025-01-13T20:57:17.541214244Z" level=info msg="StartContainer for \"823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544\"" Jan 13 20:57:17.551517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1336870450.mount: Deactivated successfully. Jan 13 20:57:17.704939 containerd[2014]: time="2025-01-13T20:57:17.704825323Z" level=info msg="StartContainer for \"823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544\" returns successfully" Jan 13 20:57:17.729573 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:57:17.730058 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:57:17.730164 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:57:17.742171 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:57:17.802694 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 20:57:17.862924 containerd[2014]: time="2025-01-13T20:57:17.862860836Z" level=info msg="shim disconnected" id=823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544 namespace=k8s.io Jan 13 20:57:17.862924 containerd[2014]: time="2025-01-13T20:57:17.862915356Z" level=warning msg="cleaning up after shim disconnected" id=823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544 namespace=k8s.io Jan 13 20:57:17.862924 containerd[2014]: time="2025-01-13T20:57:17.862928630Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:57:18.420143 containerd[2014]: time="2025-01-13T20:57:18.420101221Z" level=info msg="CreateContainer within sandbox \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:57:18.497921 containerd[2014]: time="2025-01-13T20:57:18.497792572Z" level=info msg="CreateContainer within sandbox \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322\"" Jan 13 20:57:18.499895 containerd[2014]: time="2025-01-13T20:57:18.499365112Z" level=info msg="StartContainer for \"12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322\"" Jan 13 20:57:18.514622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544-rootfs.mount: Deactivated successfully. Jan 13 20:57:18.674308 containerd[2014]: time="2025-01-13T20:57:18.671750331Z" level=info msg="StartContainer for \"12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322\" returns successfully" Jan 13 20:57:18.730279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322-rootfs.mount: Deactivated successfully. 
Jan 13 20:57:18.760721 containerd[2014]: time="2025-01-13T20:57:18.760588393Z" level=info msg="shim disconnected" id=12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322 namespace=k8s.io Jan 13 20:57:18.760721 containerd[2014]: time="2025-01-13T20:57:18.760648395Z" level=warning msg="cleaning up after shim disconnected" id=12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322 namespace=k8s.io Jan 13 20:57:18.760721 containerd[2014]: time="2025-01-13T20:57:18.760660242Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:57:18.887967 containerd[2014]: time="2025-01-13T20:57:18.887920431Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:57:18.889782 containerd[2014]: time="2025-01-13T20:57:18.889609482Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907213" Jan 13 20:57:18.892378 containerd[2014]: time="2025-01-13T20:57:18.892036098Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:57:18.893697 containerd[2014]: time="2025-01-13T20:57:18.893661836Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.892095573s" Jan 13 20:57:18.893785 containerd[2014]: time="2025-01-13T20:57:18.893696278Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 20:57:18.896576 containerd[2014]: time="2025-01-13T20:57:18.896539699Z" level=info msg="CreateContainer within sandbox \"bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:57:18.915263 containerd[2014]: time="2025-01-13T20:57:18.915215342Z" level=info msg="CreateContainer within sandbox \"bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55\"" Jan 13 20:57:18.917442 containerd[2014]: time="2025-01-13T20:57:18.915931659Z" level=info msg="StartContainer for \"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55\"" Jan 13 20:57:19.030873 containerd[2014]: time="2025-01-13T20:57:19.030830495Z" level=info msg="StartContainer for \"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55\" returns successfully" Jan 13 20:57:19.441170 containerd[2014]: time="2025-01-13T20:57:19.440144578Z" level=info msg="CreateContainer within sandbox \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:57:19.480220 containerd[2014]: time="2025-01-13T20:57:19.479701536Z" level=info msg="CreateContainer within sandbox 
\"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b\"" Jan 13 20:57:19.484104 containerd[2014]: time="2025-01-13T20:57:19.481265525Z" level=info msg="StartContainer for \"b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b\"" Jan 13 20:57:19.731139 containerd[2014]: time="2025-01-13T20:57:19.731095040Z" level=info msg="StartContainer for \"b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b\" returns successfully" Jan 13 20:57:19.771386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b-rootfs.mount: Deactivated successfully. Jan 13 20:57:19.784630 containerd[2014]: time="2025-01-13T20:57:19.784228241Z" level=info msg="shim disconnected" id=b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b namespace=k8s.io Jan 13 20:57:19.784630 containerd[2014]: time="2025-01-13T20:57:19.784293244Z" level=warning msg="cleaning up after shim disconnected" id=b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b namespace=k8s.io Jan 13 20:57:19.784630 containerd[2014]: time="2025-01-13T20:57:19.784305587Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:57:19.818781 kubelet[3365]: I0113 20:57:19.816851 3365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-q9d6q" podStartSLOduration=1.703021896 podStartE2EDuration="19.816795514s" podCreationTimestamp="2025-01-13 20:57:00 +0000 UTC" firstStartedPulling="2025-01-13 20:57:00.780443487 +0000 UTC m=+12.787802066" lastFinishedPulling="2025-01-13 20:57:18.894217086 +0000 UTC m=+30.901575684" observedRunningTime="2025-01-13 20:57:19.569211708 +0000 UTC m=+31.576570311" watchObservedRunningTime="2025-01-13 20:57:19.816795514 +0000 UTC m=+31.824154139" Jan 13 20:57:20.484450 containerd[2014]: time="2025-01-13T20:57:20.484407757Z" level=info msg="CreateContainer within sandbox \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:57:20.574196 containerd[2014]: time="2025-01-13T20:57:20.565608744Z" level=info msg="CreateContainer within sandbox \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d\"" Jan 13 20:57:20.579992 containerd[2014]: time="2025-01-13T20:57:20.579929857Z" level=info msg="StartContainer for \"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d\"" Jan 13 20:57:20.659371 containerd[2014]: time="2025-01-13T20:57:20.659321928Z" level=info msg="StartContainer for \"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d\" returns successfully" Jan 13 20:57:21.025440 kubelet[3365]: I0113 20:57:21.025388 3365 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:57:21.100822 kubelet[3365]: I0113 20:57:21.100771 3365 topology_manager.go:215] "Topology Admit Handler" podUID="50d7d21f-2266-445c-a977-0a37d4b5701d" podNamespace="kube-system" podName="coredns-76f75df574-4f5nt" Jan 13 20:57:21.113508 kubelet[3365]: I0113 20:57:21.113469 3365 topology_manager.go:215] "Topology Admit Handler" podUID="44fe1692-17ea-44ab-8346-41c3a470b6d5" podNamespace="kube-system" podName="coredns-76f75df574-bbj7g" 
Jan 13 20:57:21.295361 kubelet[3365]: I0113 20:57:21.293676 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44fe1692-17ea-44ab-8346-41c3a470b6d5-config-volume\") pod \"coredns-76f75df574-bbj7g\" (UID: \"44fe1692-17ea-44ab-8346-41c3a470b6d5\") " pod="kube-system/coredns-76f75df574-bbj7g" Jan 13 20:57:21.297175 kubelet[3365]: I0113 20:57:21.296955 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50d7d21f-2266-445c-a977-0a37d4b5701d-config-volume\") pod \"coredns-76f75df574-4f5nt\" (UID: \"50d7d21f-2266-445c-a977-0a37d4b5701d\") " pod="kube-system/coredns-76f75df574-4f5nt" Jan 13 20:57:21.297175 kubelet[3365]: I0113 20:57:21.297062 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97fcw\" (UniqueName: \"kubernetes.io/projected/44fe1692-17ea-44ab-8346-41c3a470b6d5-kube-api-access-97fcw\") pod \"coredns-76f75df574-bbj7g\" (UID: \"44fe1692-17ea-44ab-8346-41c3a470b6d5\") " pod="kube-system/coredns-76f75df574-bbj7g" Jan 13 20:57:21.297718 kubelet[3365]: I0113 20:57:21.297655 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppw5l\" (UniqueName: \"kubernetes.io/projected/50d7d21f-2266-445c-a977-0a37d4b5701d-kube-api-access-ppw5l\") pod \"coredns-76f75df574-4f5nt\" (UID: \"50d7d21f-2266-445c-a977-0a37d4b5701d\") " pod="kube-system/coredns-76f75df574-4f5nt" Jan 13 20:57:21.437637 containerd[2014]: time="2025-01-13T20:57:21.437396293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bbj7g,Uid:44fe1692-17ea-44ab-8346-41c3a470b6d5,Namespace:kube-system,Attempt:0,}" Jan 13 20:57:21.596616 systemd[1]: run-containerd-runc-k8s.io-578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d-runc.Fa7wYQ.mount: Deactivated successfully. Jan 13 20:57:21.720394 containerd[2014]: time="2025-01-13T20:57:21.720329535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4f5nt,Uid:50d7d21f-2266-445c-a977-0a37d4b5701d,Namespace:kube-system,Attempt:0,}" Jan 13 20:57:23.549898 systemd-networkd[1569]: cilium_host: Link UP Jan 13 20:57:23.550466 (udev-worker)[4393]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:57:23.551131 systemd-networkd[1569]: cilium_net: Link UP Jan 13 20:57:23.551386 systemd-networkd[1569]: cilium_net: Gained carrier Jan 13 20:57:23.551597 systemd-networkd[1569]: cilium_host: Gained carrier Jan 13 20:57:23.553258 (udev-worker)[4413]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:57:23.688750 (udev-worker)[4463]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:57:23.699505 systemd-networkd[1569]: cilium_vxlan: Link UP Jan 13 20:57:23.699517 systemd-networkd[1569]: cilium_vxlan: Gained carrier Jan 13 20:57:23.867007 systemd-networkd[1569]: cilium_host: Gained IPv6LL Jan 13 20:57:24.200606 kernel: NET: Registered PF_ALG protocol family Jan 13 20:57:24.403345 systemd-networkd[1569]: cilium_net: Gained IPv6LL Jan 13 20:57:25.102828 systemd-networkd[1569]: lxc_health: Link UP Jan 13 20:57:25.106650 (udev-worker)[4466]: Network interface NamePolicy= disabled on kernel command line. 
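The systemd-networkd entries that follow trace Cilium's datapath being assembled: the cilium_host/cilium_net veth pair comes up first, then the cilium_vxlan overlay device, then one lxc* veth per endpoint (lxc_health, plus one per coredns pod). A sketch that turns these messages into a link-state timeline, with the regex again fitted to the journal format above:

```python
import re

EVENT = re.compile(
    r"(?P<ts>\w{3} \d+ [\d:.]+) systemd-networkd\[\d+\]: "
    r"(?P<link>[\w.]+): (?P<state>Link UP|Gained carrier|Lost carrier|Gained IPv6LL)"
)

log = """\
Jan 13 20:57:23.549898 systemd-networkd[1569]: cilium_host: Link UP
Jan 13 20:57:23.551131 systemd-networkd[1569]: cilium_net: Link UP
Jan 13 20:57:23.551386 systemd-networkd[1569]: cilium_net: Gained carrier
Jan 13 20:57:23.551597 systemd-networkd[1569]: cilium_host: Gained carrier
Jan 13 20:57:23.699517 systemd-networkd[1569]: cilium_vxlan: Gained carrier
"""
for m in EVENT.finditer(log):
    print(m["ts"], m["link"], "->", m["state"])
```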
Jan 13 20:57:25.117650 systemd-networkd[1569]: lxc_health: Gained carrier Jan 13 20:57:25.390988 systemd-networkd[1569]: lxcae09ebf63efe: Link UP Jan 13 20:57:25.394239 kernel: eth0: renamed from tmpe697a Jan 13 20:57:25.401484 systemd-networkd[1569]: lxcae09ebf63efe: Gained carrier Jan 13 20:57:25.682872 systemd-networkd[1569]: cilium_vxlan: Gained IPv6LL Jan 13 20:57:25.749359 systemd-networkd[1569]: lxc9b7d471c2a0f: Link UP Jan 13 20:57:25.754103 kernel: eth0: renamed from tmp56d4c Jan 13 20:57:25.772106 systemd-networkd[1569]: lxc9b7d471c2a0f: Gained carrier Jan 13 20:57:26.311374 kubelet[3365]: I0113 20:57:26.311327 3365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-c2q6s" podStartSLOduration=11.845543765 podStartE2EDuration="27.311224866s" podCreationTimestamp="2025-01-13 20:56:59 +0000 UTC" firstStartedPulling="2025-01-13 20:57:00.534961843 +0000 UTC m=+12.542320424" lastFinishedPulling="2025-01-13 20:57:16.000642905 +0000 UTC m=+28.008001525" observedRunningTime="2025-01-13 20:57:21.60831218 +0000 UTC m=+33.615670784" watchObservedRunningTime="2025-01-13 20:57:26.311224866 +0000 UTC m=+38.318583469" Jan 13 20:57:26.581838 systemd-networkd[1569]: lxc_health: Gained IPv6LL Jan 13 20:57:26.641227 systemd-networkd[1569]: lxcae09ebf63efe: Gained IPv6LL Jan 13 20:57:27.025247 systemd-networkd[1569]: lxc9b7d471c2a0f: Gained IPv6LL Jan 13 20:57:29.185340 ntpd[1966]: Listen normally on 6 cilium_host 192.168.0.52:123 Jan 13 20:57:29.186223 ntpd[1966]: 13 Jan 20:57:29 ntpd[1966]: Listen normally on 6 cilium_host 192.168.0.52:123 Jan 13 20:57:29.186223 ntpd[1966]: 13 Jan 20:57:29 ntpd[1966]: Listen normally on 7 cilium_net [fe80::34bc:5aff:fe65:4bcc%4]:123 Jan 13 20:57:29.186223 ntpd[1966]: 13 Jan 20:57:29 ntpd[1966]: Listen normally on 8 cilium_host [fe80::2499:baff:fe19:e6d5%5]:123 Jan 13 20:57:29.186223 ntpd[1966]: 13 Jan 20:57:29 ntpd[1966]: Listen normally on 9 cilium_vxlan [fe80::c41c:15ff:fe62:53a8%6]:123 Jan 13 20:57:29.186223 ntpd[1966]: 13 Jan 20:57:29 ntpd[1966]: Listen normally on 10 lxc_health [fe80::587a:b8ff:fe78:584%8]:123 Jan 13 20:57:29.186223 ntpd[1966]: 13 Jan 20:57:29 ntpd[1966]: Listen normally on 11 lxcae09ebf63efe [fe80::a41d:e6ff:fef0:1b8a%10]:123 Jan 13 20:57:29.186223 ntpd[1966]: 13 Jan 20:57:29 ntpd[1966]: Listen normally on 12 lxc9b7d471c2a0f [fe80::e860:d0ff:fee7:9889%12]:123 Jan 13 20:57:29.185418 ntpd[1966]: Listen normally on 7 cilium_net [fe80::34bc:5aff:fe65:4bcc%4]:123 Jan 13 20:57:29.185460 ntpd[1966]: Listen normally on 8 cilium_host [fe80::2499:baff:fe19:e6d5%5]:123 Jan 13 20:57:29.185488 ntpd[1966]: Listen normally on 9 cilium_vxlan [fe80::c41c:15ff:fe62:53a8%6]:123 Jan 13 20:57:29.185515 ntpd[1966]: Listen normally on 10 lxc_health [fe80::587a:b8ff:fe78:584%8]:123 Jan 13 20:57:29.185542 ntpd[1966]: Listen normally on 11 lxcae09ebf63efe [fe80::a41d:e6ff:fef0:1b8a%10]:123 Jan 13 20:57:29.185568 ntpd[1966]: Listen normally on 12 lxc9b7d471c2a0f [fe80::e860:d0ff:fee7:9889%12]:123 Jan 13 20:57:31.367129 containerd[2014]: time="2025-01-13T20:57:31.365856587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:57:31.367129 containerd[2014]: time="2025-01-13T20:57:31.366935038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:57:31.367676 containerd[2014]: time="2025-01-13T20:57:31.367365757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:57:31.371863 containerd[2014]: time="2025-01-13T20:57:31.370940586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:57:31.418102 containerd[2014]: time="2025-01-13T20:57:31.412120186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:57:31.418102 containerd[2014]: time="2025-01-13T20:57:31.412203715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:57:31.418102 containerd[2014]: time="2025-01-13T20:57:31.412222216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:57:31.418102 containerd[2014]: time="2025-01-13T20:57:31.412345778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:57:31.428955 systemd[1]: run-containerd-runc-k8s.io-e697ab3ff3a3c83b781036c149414167c533c60600ba63bf80218866bc420079-runc.Osyk7C.mount: Deactivated successfully. Jan 13 20:57:31.621337 containerd[2014]: time="2025-01-13T20:57:31.618780953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bbj7g,Uid:44fe1692-17ea-44ab-8346-41c3a470b6d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"56d4ca83cb5f638307443470502ff3f4bd9cbf468323d5cef318eaed963a8a3d\"" Jan 13 20:57:31.628399 containerd[2014]: time="2025-01-13T20:57:31.626574109Z" level=info msg="CreateContainer within sandbox \"56d4ca83cb5f638307443470502ff3f4bd9cbf468323d5cef318eaed963a8a3d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:57:31.641880 containerd[2014]: time="2025-01-13T20:57:31.641660968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4f5nt,Uid:50d7d21f-2266-445c-a977-0a37d4b5701d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e697ab3ff3a3c83b781036c149414167c533c60600ba63bf80218866bc420079\"" Jan 13 20:57:31.650675 containerd[2014]: time="2025-01-13T20:57:31.650428187Z" level=info msg="CreateContainer within sandbox \"e697ab3ff3a3c83b781036c149414167c533c60600ba63bf80218866bc420079\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:57:31.701564 containerd[2014]: time="2025-01-13T20:57:31.701359334Z" level=info msg="CreateContainer within sandbox \"56d4ca83cb5f638307443470502ff3f4bd9cbf468323d5cef318eaed963a8a3d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b582f177aa71d880224501af330859b0227c492ad9706bb5e8ea22c0102b46f5\"" Jan 13 20:57:31.704561 containerd[2014]: time="2025-01-13T20:57:31.703736035Z" level=info msg="StartContainer for \"b582f177aa71d880224501af330859b0227c492ad9706bb5e8ea22c0102b46f5\"" Jan 13 20:57:31.710963 containerd[2014]: time="2025-01-13T20:57:31.710778775Z" level=info msg="CreateContainer within sandbox \"e697ab3ff3a3c83b781036c149414167c533c60600ba63bf80218866bc420079\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d530d6820f739d0002016bd0d67a5afea88944906a43915c65b31c1a91fcc5d8\"" Jan 13 20:57:31.712773 containerd[2014]: 
time="2025-01-13T20:57:31.712742806Z" level=info msg="StartContainer for \"d530d6820f739d0002016bd0d67a5afea88944906a43915c65b31c1a91fcc5d8\"" Jan 13 20:57:31.828687 containerd[2014]: time="2025-01-13T20:57:31.828636409Z" level=info msg="StartContainer for \"d530d6820f739d0002016bd0d67a5afea88944906a43915c65b31c1a91fcc5d8\" returns successfully" Jan 13 20:57:31.828813 containerd[2014]: time="2025-01-13T20:57:31.828733672Z" level=info msg="StartContainer for \"b582f177aa71d880224501af330859b0227c492ad9706bb5e8ea22c0102b46f5\" returns successfully" Jan 13 20:57:32.388503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403316538.mount: Deactivated successfully. Jan 13 20:57:32.549631 kubelet[3365]: I0113 20:57:32.549592 3365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4f5nt" podStartSLOduration=32.549548344 podStartE2EDuration="32.549548344s" podCreationTimestamp="2025-01-13 20:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:57:32.547314054 +0000 UTC m=+44.554672657" watchObservedRunningTime="2025-01-13 20:57:32.549548344 +0000 UTC m=+44.556906943" Jan 13 20:57:33.183488 systemd[1]: Started sshd@7-172.31.31.120:22-139.178.89.65:60586.service - OpenSSH per-connection server daemon (139.178.89.65:60586). Jan 13 20:57:33.421098 sshd[4980]: Accepted publickey for core from 139.178.89.65 port 60586 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:57:33.422251 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:57:33.446075 systemd-logind[1987]: New session 8 of user core. Jan 13 20:57:33.453449 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:57:34.296999 sshd[4983]: Connection closed by 139.178.89.65 port 60586 Jan 13 20:57:34.299188 sshd-session[4980]: pam_unix(sshd:session): session closed for user core Jan 13 20:57:34.305375 systemd[1]: sshd@7-172.31.31.120:22-139.178.89.65:60586.service: Deactivated successfully. Jan 13 20:57:34.311691 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:57:34.313543 systemd-logind[1987]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:57:34.317156 systemd-logind[1987]: Removed session 8. Jan 13 20:57:39.333959 systemd[1]: Started sshd@8-172.31.31.120:22-139.178.89.65:60588.service - OpenSSH per-connection server daemon (139.178.89.65:60588). Jan 13 20:57:39.525534 sshd[4999]: Accepted publickey for core from 139.178.89.65 port 60588 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:57:39.527596 sshd-session[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:57:39.547053 systemd-logind[1987]: New session 9 of user core. Jan 13 20:57:39.554372 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:57:39.762215 sshd[5002]: Connection closed by 139.178.89.65 port 60588 Jan 13 20:57:39.763950 sshd-session[4999]: pam_unix(sshd:session): session closed for user core Jan 13 20:57:39.769042 systemd[1]: sshd@8-172.31.31.120:22-139.178.89.65:60588.service: Deactivated successfully. Jan 13 20:57:39.773936 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:57:39.775168 systemd-logind[1987]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:57:39.776726 systemd-logind[1987]: Removed session 9. 
Jan 13 20:57:44.799224 systemd[1]: Started sshd@9-172.31.31.120:22-139.178.89.65:48548.service - OpenSSH per-connection server daemon (139.178.89.65:48548). Jan 13 20:57:44.995473 sshd[5014]: Accepted publickey for core from 139.178.89.65 port 48548 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:57:44.997463 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:57:45.006446 systemd-logind[1987]: New session 10 of user core. Jan 13 20:57:45.012580 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:57:45.229840 sshd[5017]: Connection closed by 139.178.89.65 port 48548 Jan 13 20:57:45.231377 sshd-session[5014]: pam_unix(sshd:session): session closed for user core Jan 13 20:57:45.244316 systemd[1]: sshd@9-172.31.31.120:22-139.178.89.65:48548.service: Deactivated successfully. Jan 13 20:57:45.244393 systemd-logind[1987]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:57:45.249892 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:57:45.251372 systemd-logind[1987]: Removed session 10. Jan 13 20:57:50.262661 systemd[1]: Started sshd@10-172.31.31.120:22-139.178.89.65:48550.service - OpenSSH per-connection server daemon (139.178.89.65:48550). Jan 13 20:57:50.459027 sshd[5031]: Accepted publickey for core from 139.178.89.65 port 48550 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:57:50.459694 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:57:50.466747 systemd-logind[1987]: New session 11 of user core. Jan 13 20:57:50.472705 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:57:50.731766 sshd[5034]: Connection closed by 139.178.89.65 port 48550 Jan 13 20:57:50.731611 sshd-session[5031]: pam_unix(sshd:session): session closed for user core Jan 13 20:57:50.759132 systemd[1]: sshd@10-172.31.31.120:22-139.178.89.65:48550.service: Deactivated successfully. Jan 13 20:57:50.759367 systemd-logind[1987]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:57:50.768898 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:57:50.779882 systemd[1]: Started sshd@11-172.31.31.120:22-139.178.89.65:48562.service - OpenSSH per-connection server daemon (139.178.89.65:48562). Jan 13 20:57:50.782591 systemd-logind[1987]: Removed session 11. Jan 13 20:57:50.962844 sshd[5046]: Accepted publickey for core from 139.178.89.65 port 48562 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:57:50.964425 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:57:50.970884 systemd-logind[1987]: New session 12 of user core. Jan 13 20:57:50.978646 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:57:51.245899 sshd[5049]: Connection closed by 139.178.89.65 port 48562 Jan 13 20:57:51.247035 sshd-session[5046]: pam_unix(sshd:session): session closed for user core Jan 13 20:57:51.258993 systemd[1]: sshd@11-172.31.31.120:22-139.178.89.65:48562.service: Deactivated successfully. Jan 13 20:57:51.270429 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:57:51.274794 systemd-logind[1987]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:57:51.285475 systemd[1]: Started sshd@12-172.31.31.120:22-139.178.89.65:33584.service - OpenSSH per-connection server daemon (139.178.89.65:33584). Jan 13 20:57:51.287838 systemd-logind[1987]: Removed session 12. 
Jan 13 20:57:51.477064 sshd[5058]: Accepted publickey for core from 139.178.89.65 port 33584 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:57:51.479022 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:57:51.491445 systemd-logind[1987]: New session 13 of user core. Jan 13 20:57:51.500580 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:57:51.735473 sshd[5061]: Connection closed by 139.178.89.65 port 33584 Jan 13 20:57:51.736245 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Jan 13 20:57:51.742631 systemd[1]: sshd@12-172.31.31.120:22-139.178.89.65:33584.service: Deactivated successfully. Jan 13 20:57:51.747348 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:57:51.749041 systemd-logind[1987]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:57:51.750613 systemd-logind[1987]: Removed session 13. Jan 13 20:57:56.770371 systemd[1]: Started sshd@13-172.31.31.120:22-139.178.89.65:33588.service - OpenSSH per-connection server daemon (139.178.89.65:33588). Jan 13 20:57:56.936043 sshd[5073]: Accepted publickey for core from 139.178.89.65 port 33588 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:57:56.936709 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:57:56.942371 systemd-logind[1987]: New session 14 of user core. Jan 13 20:57:56.945478 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:57:57.204919 sshd[5076]: Connection closed by 139.178.89.65 port 33588 Jan 13 20:57:57.205588 sshd-session[5073]: pam_unix(sshd:session): session closed for user core Jan 13 20:57:57.212903 systemd[1]: sshd@13-172.31.31.120:22-139.178.89.65:33588.service: Deactivated successfully. Jan 13 20:57:57.218112 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:57:57.228378 systemd-logind[1987]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:57:57.231036 systemd-logind[1987]: Removed session 14. Jan 13 20:58:02.234952 systemd[1]: Started sshd@14-172.31.31.120:22-139.178.89.65:45376.service - OpenSSH per-connection server daemon (139.178.89.65:45376). Jan 13 20:58:02.448856 sshd[5091]: Accepted publickey for core from 139.178.89.65 port 45376 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:58:02.449777 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:58:02.460619 systemd-logind[1987]: New session 15 of user core. Jan 13 20:58:02.471613 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:58:02.848844 sshd[5094]: Connection closed by 139.178.89.65 port 45376 Jan 13 20:58:02.852960 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Jan 13 20:58:02.863267 systemd[1]: sshd@14-172.31.31.120:22-139.178.89.65:45376.service: Deactivated successfully. Jan 13 20:58:02.876598 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:58:02.877135 systemd-logind[1987]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:58:02.879457 systemd-logind[1987]: Removed session 15. Jan 13 20:58:07.887060 systemd[1]: Started sshd@15-172.31.31.120:22-139.178.89.65:45386.service - OpenSSH per-connection server daemon (139.178.89.65:45386). 
Jan 13 20:58:08.091777 sshd[5105]: Accepted publickey for core from 139.178.89.65 port 45386 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:58:08.093827 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:58:08.099726 systemd-logind[1987]: New session 16 of user core. Jan 13 20:58:08.105465 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:58:08.359970 sshd[5108]: Connection closed by 139.178.89.65 port 45386 Jan 13 20:58:08.361583 sshd-session[5105]: pam_unix(sshd:session): session closed for user core Jan 13 20:58:08.365877 systemd-logind[1987]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:58:08.367633 systemd[1]: sshd@15-172.31.31.120:22-139.178.89.65:45386.service: Deactivated successfully. Jan 13 20:58:08.372833 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:58:08.374853 systemd-logind[1987]: Removed session 16. Jan 13 20:58:08.389488 systemd[1]: Started sshd@16-172.31.31.120:22-139.178.89.65:45388.service - OpenSSH per-connection server daemon (139.178.89.65:45388). Jan 13 20:58:08.581638 sshd[5119]: Accepted publickey for core from 139.178.89.65 port 45388 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:58:08.583655 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:58:08.603361 systemd-logind[1987]: New session 17 of user core. Jan 13 20:58:08.612628 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:58:09.484445 sshd[5122]: Connection closed by 139.178.89.65 port 45388 Jan 13 20:58:09.488718 sshd-session[5119]: pam_unix(sshd:session): session closed for user core Jan 13 20:58:09.502114 systemd-logind[1987]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:58:09.503986 systemd[1]: sshd@16-172.31.31.120:22-139.178.89.65:45388.service: Deactivated successfully. Jan 13 20:58:09.530385 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:58:09.548880 systemd[1]: Started sshd@17-172.31.31.120:22-139.178.89.65:45404.service - OpenSSH per-connection server daemon (139.178.89.65:45404). Jan 13 20:58:09.550059 systemd-logind[1987]: Removed session 17. Jan 13 20:58:09.724285 sshd[5131]: Accepted publickey for core from 139.178.89.65 port 45404 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:58:09.725750 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:58:09.731536 systemd-logind[1987]: New session 18 of user core. Jan 13 20:58:09.736590 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:58:11.965005 sshd[5134]: Connection closed by 139.178.89.65 port 45404 Jan 13 20:58:11.966119 sshd-session[5131]: pam_unix(sshd:session): session closed for user core Jan 13 20:58:11.974893 systemd[1]: sshd@17-172.31.31.120:22-139.178.89.65:45404.service: Deactivated successfully. Jan 13 20:58:11.983052 systemd-logind[1987]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:58:11.990021 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:58:11.999193 systemd[1]: Started sshd@18-172.31.31.120:22-139.178.89.65:51412.service - OpenSSH per-connection server daemon (139.178.89.65:51412). Jan 13 20:58:12.001620 systemd-logind[1987]: Removed session 18. 
Jan 13 20:58:12.172120 sshd[5151]: Accepted publickey for core from 139.178.89.65 port 51412 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:58:12.174004 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:58:12.180158 systemd-logind[1987]: New session 19 of user core. Jan 13 20:58:12.186789 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:58:12.605040 sshd[5154]: Connection closed by 139.178.89.65 port 51412 Jan 13 20:58:12.606321 sshd-session[5151]: pam_unix(sshd:session): session closed for user core Jan 13 20:58:12.610777 systemd[1]: sshd@18-172.31.31.120:22-139.178.89.65:51412.service: Deactivated successfully. Jan 13 20:58:12.625331 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:58:12.627399 systemd-logind[1987]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:58:12.640648 systemd[1]: Started sshd@19-172.31.31.120:22-139.178.89.65:51414.service - OpenSSH per-connection server daemon (139.178.89.65:51414). Jan 13 20:58:12.644656 systemd-logind[1987]: Removed session 19. Jan 13 20:58:12.828175 sshd[5163]: Accepted publickey for core from 139.178.89.65 port 51414 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:58:12.829988 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:58:12.844901 systemd-logind[1987]: New session 20 of user core. Jan 13 20:58:12.852033 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:58:13.067464 sshd[5166]: Connection closed by 139.178.89.65 port 51414 Jan 13 20:58:13.068283 sshd-session[5163]: pam_unix(sshd:session): session closed for user core Jan 13 20:58:13.073950 systemd[1]: sshd@19-172.31.31.120:22-139.178.89.65:51414.service: Deactivated successfully. Jan 13 20:58:13.081289 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:58:13.082843 systemd-logind[1987]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:58:13.084839 systemd-logind[1987]: Removed session 20. Jan 13 20:58:18.095463 systemd[1]: Started sshd@20-172.31.31.120:22-139.178.89.65:51418.service - OpenSSH per-connection server daemon (139.178.89.65:51418). Jan 13 20:58:18.258894 sshd[5177]: Accepted publickey for core from 139.178.89.65 port 51418 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:58:18.259656 sshd-session[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:58:18.264458 systemd-logind[1987]: New session 21 of user core. Jan 13 20:58:18.269851 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:58:18.546278 sshd[5180]: Connection closed by 139.178.89.65 port 51418 Jan 13 20:58:18.550711 sshd-session[5177]: pam_unix(sshd:session): session closed for user core Jan 13 20:58:18.559835 systemd[1]: sshd@20-172.31.31.120:22-139.178.89.65:51418.service: Deactivated successfully. Jan 13 20:58:18.579926 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:58:18.582558 systemd-logind[1987]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:58:18.584442 systemd-logind[1987]: Removed session 21. Jan 13 20:58:23.574958 systemd[1]: Started sshd@21-172.31.31.120:22-139.178.89.65:46610.service - OpenSSH per-connection server daemon (139.178.89.65:46610). 
Jan 13 20:58:23.747760 sshd[5194]: Accepted publickey for core from 139.178.89.65 port 46610 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:58:23.748551 sshd-session[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:58:23.756718 systemd-logind[1987]: New session 22 of user core. Jan 13 20:58:23.760438 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:58:23.963949 sshd[5197]: Connection closed by 139.178.89.65 port 46610 Jan 13 20:58:23.965709 sshd-session[5194]: pam_unix(sshd:session): session closed for user core Jan 13 20:58:23.971874 systemd[1]: sshd@21-172.31.31.120:22-139.178.89.65:46610.service: Deactivated successfully. Jan 13 20:58:23.977051 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:58:23.978133 systemd-logind[1987]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:58:23.979354 systemd-logind[1987]: Removed session 22. Jan 13 20:58:29.005750 systemd[1]: Started sshd@22-172.31.31.120:22-139.178.89.65:46616.service - OpenSSH per-connection server daemon (139.178.89.65:46616). Jan 13 20:58:29.178883 sshd[5208]: Accepted publickey for core from 139.178.89.65 port 46616 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:58:29.180737 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:58:29.188991 systemd-logind[1987]: New session 23 of user core. Jan 13 20:58:29.192524 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:58:29.420817 sshd[5211]: Connection closed by 139.178.89.65 port 46616 Jan 13 20:58:29.421529 sshd-session[5208]: pam_unix(sshd:session): session closed for user core Jan 13 20:58:29.426912 systemd[1]: sshd@22-172.31.31.120:22-139.178.89.65:46616.service: Deactivated successfully. Jan 13 20:58:29.433567 systemd-logind[1987]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:58:29.433965 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:58:29.438588 systemd-logind[1987]: Removed session 23. Jan 13 20:58:34.452431 systemd[1]: Started sshd@23-172.31.31.120:22-139.178.89.65:38900.service - OpenSSH per-connection server daemon (139.178.89.65:38900). Jan 13 20:58:34.621949 sshd[5225]: Accepted publickey for core from 139.178.89.65 port 38900 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:58:34.625905 sshd-session[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:58:34.631468 systemd-logind[1987]: New session 24 of user core. Jan 13 20:58:34.637364 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:58:34.857233 sshd[5228]: Connection closed by 139.178.89.65 port 38900 Jan 13 20:58:34.858983 sshd-session[5225]: pam_unix(sshd:session): session closed for user core Jan 13 20:58:34.863743 systemd[1]: sshd@23-172.31.31.120:22-139.178.89.65:38900.service: Deactivated successfully. Jan 13 20:58:34.869560 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:58:34.870775 systemd-logind[1987]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:58:34.872193 systemd-logind[1987]: Removed session 24. Jan 13 20:58:34.884449 systemd[1]: Started sshd@24-172.31.31.120:22-139.178.89.65:38902.service - OpenSSH per-connection server daemon (139.178.89.65:38902). 
Jan 13 20:58:35.063780 sshd[5239]: Accepted publickey for core from 139.178.89.65 port 38902 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:58:35.065272 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:58:35.070828 systemd-logind[1987]: New session 25 of user core.
Jan 13 20:58:35.075440 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 20:58:36.697585 kubelet[3365]: I0113 20:58:36.697484 3365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-bbj7g" podStartSLOduration=96.697429897 podStartE2EDuration="1m36.697429897s" podCreationTimestamp="2025-01-13 20:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:57:32.617748967 +0000 UTC m=+44.625107569" watchObservedRunningTime="2025-01-13 20:58:36.697429897 +0000 UTC m=+108.704788520"
Jan 13 20:58:36.764842 containerd[2014]: time="2025-01-13T20:58:36.764764973Z" level=info msg="StopContainer for \"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55\" with timeout 30 (s)"
Jan 13 20:58:36.773398 containerd[2014]: time="2025-01-13T20:58:36.772896194Z" level=info msg="Stop container \"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55\" with signal terminated"
Jan 13 20:58:36.892978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55-rootfs.mount: Deactivated successfully.
Jan 13 20:58:36.905219 containerd[2014]: time="2025-01-13T20:58:36.905144807Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:58:36.911820 containerd[2014]: time="2025-01-13T20:58:36.911543176Z" level=info msg="StopContainer for \"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d\" with timeout 2 (s)"
Jan 13 20:58:36.913059 containerd[2014]: time="2025-01-13T20:58:36.912861738Z" level=info msg="Stop container \"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d\" with signal terminated"
Jan 13 20:58:36.925254 systemd-networkd[1569]: lxc_health: Link DOWN
Jan 13 20:58:36.925263 systemd-networkd[1569]: lxc_health: Lost carrier
Jan 13 20:58:36.936675 containerd[2014]: time="2025-01-13T20:58:36.936239115Z" level=info msg="shim disconnected" id=ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55 namespace=k8s.io
Jan 13 20:58:36.938145 containerd[2014]: time="2025-01-13T20:58:36.937416444Z" level=warning msg="cleaning up after shim disconnected" id=ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55 namespace=k8s.io
Jan 13 20:58:36.938368 containerd[2014]: time="2025-01-13T20:58:36.938177683Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:58:37.011012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d-rootfs.mount: Deactivated successfully.
Jan 13 20:58:37.028523 containerd[2014]: time="2025-01-13T20:58:37.028283667Z" level=info msg="shim disconnected" id=578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d namespace=k8s.io
Jan 13 20:58:37.028523 containerd[2014]: time="2025-01-13T20:58:37.028343920Z" level=warning msg="cleaning up after shim disconnected" id=578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d namespace=k8s.io
Jan 13 20:58:37.028523 containerd[2014]: time="2025-01-13T20:58:37.028356436Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:58:37.034122 containerd[2014]: time="2025-01-13T20:58:37.033317461Z" level=info msg="StopContainer for \"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55\" returns successfully"
Jan 13 20:58:37.034488 containerd[2014]: time="2025-01-13T20:58:37.034416011Z" level=info msg="StopPodSandbox for \"bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e\""
Jan 13 20:58:37.048888 containerd[2014]: time="2025-01-13T20:58:37.048841534Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:58:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:58:37.055501 containerd[2014]: time="2025-01-13T20:58:37.041552007Z" level=info msg="Container to stop \"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:58:37.060828 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e-shm.mount: Deactivated successfully.
Jan 13 20:58:37.061309 containerd[2014]: time="2025-01-13T20:58:37.060978368Z" level=info msg="StopContainer for \"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d\" returns successfully"
Jan 13 20:58:37.063958 containerd[2014]: time="2025-01-13T20:58:37.063699678Z" level=info msg="StopPodSandbox for \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\""
Jan 13 20:58:37.068100 containerd[2014]: time="2025-01-13T20:58:37.063845797Z" level=info msg="Container to stop \"1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:58:37.068100 containerd[2014]: time="2025-01-13T20:58:37.064237264Z" level=info msg="Container to stop \"12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:58:37.068100 containerd[2014]: time="2025-01-13T20:58:37.064252207Z" level=info msg="Container to stop \"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:58:37.068100 containerd[2014]: time="2025-01-13T20:58:37.064264728Z" level=info msg="Container to stop \"823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:58:37.068100 containerd[2014]: time="2025-01-13T20:58:37.064277740Z" level=info msg="Container to stop \"b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:58:37.070963 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8-shm.mount: Deactivated successfully.
Jan 13 20:58:37.159876 containerd[2014]: time="2025-01-13T20:58:37.159547505Z" level=info msg="shim disconnected" id=ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8 namespace=k8s.io
Jan 13 20:58:37.159876 containerd[2014]: time="2025-01-13T20:58:37.159769288Z" level=warning msg="cleaning up after shim disconnected" id=ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8 namespace=k8s.io
Jan 13 20:58:37.159876 containerd[2014]: time="2025-01-13T20:58:37.159781444Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:58:37.160804 containerd[2014]: time="2025-01-13T20:58:37.160410267Z" level=info msg="shim disconnected" id=bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e namespace=k8s.io
Jan 13 20:58:37.160804 containerd[2014]: time="2025-01-13T20:58:37.160454216Z" level=warning msg="cleaning up after shim disconnected" id=bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e namespace=k8s.io
Jan 13 20:58:37.160804 containerd[2014]: time="2025-01-13T20:58:37.160465252Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:58:37.202936 containerd[2014]: time="2025-01-13T20:58:37.202881399Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:58:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:58:37.204078 containerd[2014]: time="2025-01-13T20:58:37.204041626Z" level=info msg="TearDown network for sandbox \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" successfully"
Jan 13 20:58:37.204078 containerd[2014]: time="2025-01-13T20:58:37.204073320Z" level=info msg="StopPodSandbox for \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" returns successfully"
Jan 13 20:58:37.205496 containerd[2014]: time="2025-01-13T20:58:37.205457773Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:58:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:58:37.206912 containerd[2014]: time="2025-01-13T20:58:37.206880990Z" level=info msg="TearDown network for sandbox \"bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e\" successfully"
Jan 13 20:58:37.206912 containerd[2014]: time="2025-01-13T20:58:37.206909409Z" level=info msg="StopPodSandbox for \"bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e\" returns successfully"
Jan 13 20:58:37.424123 kubelet[3365]: I0113 20:58:37.423527 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-host-proc-sys-kernel\") pod \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") "
Jan 13 20:58:37.424123 kubelet[3365]: I0113 20:58:37.423900 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cilium-cgroup\") pod \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") "
Jan 13 20:58:37.424123 kubelet[3365]: I0113 20:58:37.423943 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-hostproc\") pod \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") "
Jan 13 20:58:37.424123 kubelet[3365]: I0113 20:58:37.423978 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75ccr\" (UniqueName: \"kubernetes.io/projected/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-kube-api-access-75ccr\") pod \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") "
Jan 13 20:58:37.424123 kubelet[3365]: I0113 20:58:37.424006 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-lib-modules\") pod \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") "
Jan 13 20:58:37.424123 kubelet[3365]: I0113 20:58:37.424027 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cni-path\") pod \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") "
Jan 13 20:58:37.425102 kubelet[3365]: I0113 20:58:37.424058 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cilium-config-path\") pod \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") "
Jan 13 20:58:37.425102 kubelet[3365]: I0113 20:58:37.424095 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cilium-run\") pod \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") "
Jan 13 20:58:37.425102 kubelet[3365]: I0113 20:58:37.424120 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-xtables-lock\") pod \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") "
Jan 13 20:58:37.425102 kubelet[3365]: I0113 20:58:37.424147 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-etc-cni-netd\") pod \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") "
Jan 13 20:58:37.425102 kubelet[3365]: I0113 20:58:37.424174 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-hubble-tls\") pod \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") "
Jan 13 20:58:37.425102 kubelet[3365]: I0113 20:58:37.424429 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea5d2761-b18b-4191-bd76-c3732e2cf6c5-cilium-config-path\") pod \"ea5d2761-b18b-4191-bd76-c3732e2cf6c5\" (UID: \"ea5d2761-b18b-4191-bd76-c3732e2cf6c5\") "
Jan 13 20:58:37.425625 kubelet[3365]: I0113 20:58:37.424464 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-host-proc-sys-net\") pod \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") "
Jan 13 20:58:37.425625 kubelet[3365]: I0113 20:58:37.424489 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-bpf-maps\") pod \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") "
Jan 13 20:58:37.425625 kubelet[3365]: I0113 20:58:37.424528 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-clustermesh-secrets\") pod \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\" (UID: \"56ccc1a6-1542-49ad-8b9d-0457be27a4b3\") "
Jan 13 20:58:37.425625 kubelet[3365]: I0113 20:58:37.424556 3365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26x4p\" (UniqueName: \"kubernetes.io/projected/ea5d2761-b18b-4191-bd76-c3732e2cf6c5-kube-api-access-26x4p\") pod \"ea5d2761-b18b-4191-bd76-c3732e2cf6c5\" (UID: \"ea5d2761-b18b-4191-bd76-c3732e2cf6c5\") "
Jan 13 20:58:37.433728 kubelet[3365]: I0113 20:58:37.423890 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "56ccc1a6-1542-49ad-8b9d-0457be27a4b3" (UID: "56ccc1a6-1542-49ad-8b9d-0457be27a4b3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:58:37.433728 kubelet[3365]: I0113 20:58:37.433380 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "56ccc1a6-1542-49ad-8b9d-0457be27a4b3" (UID: "56ccc1a6-1542-49ad-8b9d-0457be27a4b3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:58:37.433728 kubelet[3365]: I0113 20:58:37.433405 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "56ccc1a6-1542-49ad-8b9d-0457be27a4b3" (UID: "56ccc1a6-1542-49ad-8b9d-0457be27a4b3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:58:37.433728 kubelet[3365]: I0113 20:58:37.433419 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-hostproc" (OuterVolumeSpecName: "hostproc") pod "56ccc1a6-1542-49ad-8b9d-0457be27a4b3" (UID: "56ccc1a6-1542-49ad-8b9d-0457be27a4b3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:58:37.439398 kubelet[3365]: I0113 20:58:37.439352 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-kube-api-access-75ccr" (OuterVolumeSpecName: "kube-api-access-75ccr") pod "56ccc1a6-1542-49ad-8b9d-0457be27a4b3" (UID: "56ccc1a6-1542-49ad-8b9d-0457be27a4b3"). InnerVolumeSpecName "kube-api-access-75ccr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:58:37.439610 kubelet[3365]: I0113 20:58:37.439587 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "56ccc1a6-1542-49ad-8b9d-0457be27a4b3" (UID: "56ccc1a6-1542-49ad-8b9d-0457be27a4b3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:58:37.439734 kubelet[3365]: I0113 20:58:37.439718 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cni-path" (OuterVolumeSpecName: "cni-path") pod "56ccc1a6-1542-49ad-8b9d-0457be27a4b3" (UID: "56ccc1a6-1542-49ad-8b9d-0457be27a4b3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:58:37.440208 kubelet[3365]: I0113 20:58:37.440165 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea5d2761-b18b-4191-bd76-c3732e2cf6c5-kube-api-access-26x4p" (OuterVolumeSpecName: "kube-api-access-26x4p") pod "ea5d2761-b18b-4191-bd76-c3732e2cf6c5" (UID: "ea5d2761-b18b-4191-bd76-c3732e2cf6c5"). InnerVolumeSpecName "kube-api-access-26x4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:58:37.440307 kubelet[3365]: I0113 20:58:37.440213 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "56ccc1a6-1542-49ad-8b9d-0457be27a4b3" (UID: "56ccc1a6-1542-49ad-8b9d-0457be27a4b3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:58:37.440307 kubelet[3365]: I0113 20:58:37.440238 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "56ccc1a6-1542-49ad-8b9d-0457be27a4b3" (UID: "56ccc1a6-1542-49ad-8b9d-0457be27a4b3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:58:37.444951 kubelet[3365]: I0113 20:58:37.444865 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "56ccc1a6-1542-49ad-8b9d-0457be27a4b3" (UID: "56ccc1a6-1542-49ad-8b9d-0457be27a4b3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:58:37.446974 kubelet[3365]: I0113 20:58:37.446564 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "56ccc1a6-1542-49ad-8b9d-0457be27a4b3" (UID: "56ccc1a6-1542-49ad-8b9d-0457be27a4b3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:58:37.446974 kubelet[3365]: I0113 20:58:37.446648 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "56ccc1a6-1542-49ad-8b9d-0457be27a4b3" (UID: "56ccc1a6-1542-49ad-8b9d-0457be27a4b3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:58:37.446974 kubelet[3365]: I0113 20:58:37.446675 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "56ccc1a6-1542-49ad-8b9d-0457be27a4b3" (UID: "56ccc1a6-1542-49ad-8b9d-0457be27a4b3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:58:37.448547 kubelet[3365]: I0113 20:58:37.448512 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea5d2761-b18b-4191-bd76-c3732e2cf6c5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ea5d2761-b18b-4191-bd76-c3732e2cf6c5" (UID: "ea5d2761-b18b-4191-bd76-c3732e2cf6c5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:58:37.451453 kubelet[3365]: I0113 20:58:37.451417 3365 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "56ccc1a6-1542-49ad-8b9d-0457be27a4b3" (UID: "56ccc1a6-1542-49ad-8b9d-0457be27a4b3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 20:58:37.525128 kubelet[3365]: I0113 20:58:37.525073 3365 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-bpf-maps\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.525128 kubelet[3365]: I0113 20:58:37.525130 3365 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-host-proc-sys-net\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.525466 kubelet[3365]: I0113 20:58:37.525145 3365 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-clustermesh-secrets\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.525466 kubelet[3365]: I0113 20:58:37.525166 3365 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-26x4p\" (UniqueName: \"kubernetes.io/projected/ea5d2761-b18b-4191-bd76-c3732e2cf6c5-kube-api-access-26x4p\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.525466 kubelet[3365]: I0113 20:58:37.525179 3365 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-host-proc-sys-kernel\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.525466 kubelet[3365]: I0113 20:58:37.525224 3365 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-hostproc\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.525466 kubelet[3365]: I0113 20:58:37.525240 3365 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cilium-cgroup\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.525466 kubelet[3365]: I0113 20:58:37.525254 3365 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-lib-modules\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.525466 kubelet[3365]: I0113 20:58:37.525267 3365 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cni-path\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.525466 kubelet[3365]: I0113 20:58:37.525290 3365 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-75ccr\" (UniqueName: \"kubernetes.io/projected/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-kube-api-access-75ccr\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.526305 kubelet[3365]: I0113 20:58:37.525385 3365 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cilium-config-path\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.526305 kubelet[3365]: I0113 20:58:37.525442 3365 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-cilium-run\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.526305 kubelet[3365]: I0113 20:58:37.525461 3365 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-xtables-lock\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.526305 kubelet[3365]: I0113 20:58:37.525477 3365 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-etc-cni-netd\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.526305 kubelet[3365]: I0113 20:58:37.525491 3365 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56ccc1a6-1542-49ad-8b9d-0457be27a4b3-hubble-tls\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.526305 kubelet[3365]: I0113 20:58:37.525508 3365 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea5d2761-b18b-4191-bd76-c3732e2cf6c5-cilium-config-path\") on node \"ip-172-31-31-120\" DevicePath \"\""
Jan 13 20:58:37.838929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e-rootfs.mount: Deactivated successfully.
Jan 13 20:58:37.839629 systemd[1]: var-lib-kubelet-pods-ea5d2761\x2db18b\x2d4191\x2dbd76\x2dc3732e2cf6c5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d26x4p.mount: Deactivated successfully.
Jan 13 20:58:37.840444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8-rootfs.mount: Deactivated successfully.
Jan 13 20:58:37.840787 systemd[1]: var-lib-kubelet-pods-56ccc1a6\x2d1542\x2d49ad\x2d8b9d\x2d0457be27a4b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d75ccr.mount: Deactivated successfully.
Jan 13 20:58:37.841039 systemd[1]: var-lib-kubelet-pods-56ccc1a6\x2d1542\x2d49ad\x2d8b9d\x2d0457be27a4b3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 20:58:37.841286 kubelet[3365]: I0113 20:58:37.841255 3365 scope.go:117] "RemoveContainer" containerID="ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55"
Jan 13 20:58:37.842360 systemd[1]: var-lib-kubelet-pods-56ccc1a6\x2d1542\x2d49ad\x2d8b9d\x2d0457be27a4b3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 20:58:37.879027 containerd[2014]: time="2025-01-13T20:58:37.878970772Z" level=info msg="RemoveContainer for \"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55\""
Jan 13 20:58:37.886531 containerd[2014]: time="2025-01-13T20:58:37.886488738Z" level=info msg="RemoveContainer for \"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55\" returns successfully"
Jan 13 20:58:37.888660 kubelet[3365]: I0113 20:58:37.888168 3365 scope.go:117] "RemoveContainer" containerID="ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55"
Jan 13 20:58:37.889631 containerd[2014]: time="2025-01-13T20:58:37.889589290Z" level=error msg="ContainerStatus for \"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55\": not found"
Jan 13 20:58:37.909950 kubelet[3365]: E0113 20:58:37.909903 3365 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55\": not found" containerID="ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55"
Jan 13 20:58:37.931071 kubelet[3365]: I0113 20:58:37.931028 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55"} err="failed to get container status \"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff1aecc63dc2a460a64dd42333825d7caed415b192725909c076b83f8c201f55\": not found"
Jan 13 20:58:37.931071 kubelet[3365]: I0113 20:58:37.931124 3365 scope.go:117] "RemoveContainer" containerID="578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d"
Jan 13 20:58:37.932573 containerd[2014]: time="2025-01-13T20:58:37.932531616Z" level=info msg="RemoveContainer for \"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d\""
Jan 13 20:58:37.936576 containerd[2014]: time="2025-01-13T20:58:37.936533317Z" level=info msg="RemoveContainer for \"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d\" returns successfully"
Jan 13 20:58:37.936781 kubelet[3365]: I0113 20:58:37.936732 3365 scope.go:117] "RemoveContainer" containerID="b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b"
Jan 13 20:58:37.938646 containerd[2014]: time="2025-01-13T20:58:37.938616520Z" level=info msg="RemoveContainer for \"b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b\""
Jan 13 20:58:37.943785 containerd[2014]: time="2025-01-13T20:58:37.943746801Z" level=info msg="RemoveContainer for \"b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b\" returns successfully"
Jan 13 20:58:37.944104 kubelet[3365]: I0113 20:58:37.944056 3365 scope.go:117] "RemoveContainer" containerID="12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322"
Jan 13 20:58:37.945468 containerd[2014]: time="2025-01-13T20:58:37.945194457Z" level=info msg="RemoveContainer for \"12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322\""
Jan 13 20:58:37.950054 containerd[2014]: time="2025-01-13T20:58:37.950013169Z" level=info msg="RemoveContainer for \"12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322\" returns successfully"
Jan 13 20:58:37.950374 kubelet[3365]: I0113 20:58:37.950341 3365 scope.go:117] "RemoveContainer" containerID="823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544"
Jan 13 20:58:37.952906 containerd[2014]: time="2025-01-13T20:58:37.952553220Z" level=info msg="RemoveContainer for \"823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544\""
Jan 13 20:58:37.956943 containerd[2014]: time="2025-01-13T20:58:37.956912746Z" level=info msg="RemoveContainer for \"823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544\" returns successfully"
Jan 13 20:58:37.957165 kubelet[3365]: I0113 20:58:37.957135 3365 scope.go:117] "RemoveContainer" containerID="1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da"
Jan 13 20:58:37.958273 containerd[2014]: time="2025-01-13T20:58:37.958240511Z" level=info msg="RemoveContainer for \"1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da\""
Jan 13 20:58:37.962252 containerd[2014]: time="2025-01-13T20:58:37.962132135Z" level=info msg="RemoveContainer for \"1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da\" returns successfully"
Jan 13 20:58:37.962622 kubelet[3365]: I0113 20:58:37.962509 3365 scope.go:117] "RemoveContainer" containerID="578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d"
Jan 13 20:58:37.962780 containerd[2014]: time="2025-01-13T20:58:37.962753229Z" level=error msg="ContainerStatus for \"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d\": not found"
Jan 13 20:58:37.963209 kubelet[3365]: E0113 20:58:37.963022 3365 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d\": not found" containerID="578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d"
Jan 13 20:58:37.963209 kubelet[3365]: I0113 20:58:37.963062 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d"} err="failed to get container status \"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d\": rpc error: code = NotFound desc = an error occurred when try to find container \"578a1d8cff89b05a5dbab69f411b9d1dfd3a2e39ec411f9eec8f28fa649ad68d\": not found"
Jan 13 20:58:37.963209 kubelet[3365]: I0113 20:58:37.963073 3365 scope.go:117] "RemoveContainer" containerID="b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b"
Jan 13 20:58:37.963609 containerd[2014]: time="2025-01-13T20:58:37.963335119Z" level=error msg="ContainerStatus for \"b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b\": not found"
Jan 13 20:58:37.964370 kubelet[3365]: E0113 20:58:37.964345 3365 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b\": not found" containerID="b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b"
Jan 13 20:58:37.964452 kubelet[3365]: I0113 20:58:37.964388 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b"} err="failed to get container status \"b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b529667aee55c60309586891f93b5f8826e6cd73229ddd4d793a5a4101b7a73b\": not found"
Jan 13 20:58:37.964452 kubelet[3365]: I0113 20:58:37.964420 3365 scope.go:117] "RemoveContainer" containerID="12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322"
Jan 13 20:58:37.964664 containerd[2014]: time="2025-01-13T20:58:37.964627799Z" level=error msg="ContainerStatus for \"12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322\": not found"
Jan 13 20:58:37.964898 kubelet[3365]: E0113 20:58:37.964874 3365 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322\": not found" containerID="12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322"
Jan 13 20:58:37.964959 kubelet[3365]: I0113 20:58:37.964908 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322"} err="failed to get container status \"12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322\": rpc error: code = NotFound desc = an error occurred when try to find container \"12078aa618d309f0b95b9663a71c2d4c38b7c82f6245efaf82e4d75b037d1322\": not found"
Jan 13 20:58:37.964959 kubelet[3365]: I0113 20:58:37.964921 3365 scope.go:117] "RemoveContainer" containerID="823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544"
Jan 13 20:58:37.965154 containerd[2014]: time="2025-01-13T20:58:37.965120171Z" level=error msg="ContainerStatus for \"823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544\": not found"
Jan 13 20:58:37.965289 kubelet[3365]: E0113 20:58:37.965268 3365 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544\": not found" containerID="823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544"
Jan 13 20:58:37.965364 kubelet[3365]: I0113 20:58:37.965301 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544"} err="failed to get container status \"823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544\": rpc error: code = NotFound desc = an error occurred when try to find container \"823aa74c6c186c568d113c2f26566b689d47226583c3f916e0cfc4f31d2a6544\": not found"
Jan 13 20:58:37.965364 kubelet[3365]: I0113 20:58:37.965314 3365 scope.go:117] "RemoveContainer" containerID="1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da"
Jan 13 20:58:37.965504 containerd[2014]: time="2025-01-13T20:58:37.965473654Z" level=error msg="ContainerStatus for \"1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da\": not found"
Jan 13 20:58:37.966009 kubelet[3365]: E0113 20:58:37.965589 3365 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da\": not found" containerID="1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da"
Jan 13 20:58:37.966009 kubelet[3365]: I0113 20:58:37.965614 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da"} err="failed to get container status \"1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ca4edaebf4a93cd43b266d2cd4d1d7135c2281d8d495ba4695be02ec9ccb0da\": not found"
Jan 13 20:58:38.202258 kubelet[3365]: I0113 20:58:38.202151 3365 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="56ccc1a6-1542-49ad-8b9d-0457be27a4b3" path="/var/lib/kubelet/pods/56ccc1a6-1542-49ad-8b9d-0457be27a4b3/volumes"
Jan 13 20:58:38.203023 kubelet[3365]: I0113 20:58:38.202995 3365 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ea5d2761-b18b-4191-bd76-c3732e2cf6c5" path="/var/lib/kubelet/pods/ea5d2761-b18b-4191-bd76-c3732e2cf6c5/volumes"
Jan 13 20:58:38.427976 kubelet[3365]: E0113 20:58:38.427927 3365 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:58:38.621402 sshd[5242]: Connection closed by 139.178.89.65 port 38902
Jan 13 20:58:38.622199 sshd-session[5239]: pam_unix(sshd:session): session closed for user core
Jan 13 20:58:38.625875 systemd[1]: sshd@24-172.31.31.120:22-139.178.89.65:38902.service: Deactivated successfully.
Jan 13 20:58:38.632430 systemd-logind[1987]: Session 25 logged out. Waiting for processes to exit.
Jan 13 20:58:38.633416 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 20:58:38.635050 systemd-logind[1987]: Removed session 25.
Jan 13 20:58:38.651793 systemd[1]: Started sshd@25-172.31.31.120:22-139.178.89.65:38908.service - OpenSSH per-connection server daemon (139.178.89.65:38908).
Jan 13 20:58:38.831457 sshd[5409]: Accepted publickey for core from 139.178.89.65 port 38908 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:58:38.845587 sshd-session[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:58:38.860958 systemd-logind[1987]: New session 26 of user core.
Jan 13 20:58:38.866604 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 20:58:39.185472 ntpd[1966]: Deleting interface #10 lxc_health, fe80::587a:b8ff:fe78:584%8#123, interface stats: received=0, sent=0, dropped=0, active_time=70 secs
Jan 13 20:58:39.186052 ntpd[1966]: 13 Jan 20:58:39 ntpd[1966]: Deleting interface #10 lxc_health, fe80::587a:b8ff:fe78:584%8#123, interface stats: received=0, sent=0, dropped=0, active_time=70 secs
Jan 13 20:58:39.528109 sshd[5412]: Connection closed by 139.178.89.65 port 38908
Jan 13 20:58:39.531337 sshd-session[5409]: pam_unix(sshd:session): session closed for user core
Jan 13 20:58:39.545715 systemd[1]: sshd@25-172.31.31.120:22-139.178.89.65:38908.service: Deactivated successfully.
Jan 13 20:58:39.546219 systemd-logind[1987]: Session 26 logged out. Waiting for processes to exit.
Jan 13 20:58:39.556409 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 20:58:39.569728 systemd-logind[1987]: Removed session 26.
Jan 13 20:58:39.584508 systemd[1]: Started sshd@26-172.31.31.120:22-139.178.89.65:38910.service - OpenSSH per-connection server daemon (139.178.89.65:38910).
Jan 13 20:58:39.618749 kubelet[3365]: I0113 20:58:39.618709 3365 topology_manager.go:215] "Topology Admit Handler" podUID="483b7b15-689b-4aac-b0ff-d9e6d60f2ce7" podNamespace="kube-system" podName="cilium-xj6w7"
Jan 13 20:58:39.623574 kubelet[3365]: E0113 20:58:39.623427 3365 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56ccc1a6-1542-49ad-8b9d-0457be27a4b3" containerName="apply-sysctl-overwrites"
Jan 13 20:58:39.623574 kubelet[3365]: E0113 20:58:39.623479 3365 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56ccc1a6-1542-49ad-8b9d-0457be27a4b3" containerName="mount-bpf-fs"
Jan 13 20:58:39.623574 kubelet[3365]: E0113 20:58:39.623491 3365 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea5d2761-b18b-4191-bd76-c3732e2cf6c5" containerName="cilium-operator"
Jan 13 20:58:39.623574 kubelet[3365]: E0113 20:58:39.623501 3365 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56ccc1a6-1542-49ad-8b9d-0457be27a4b3" containerName="clean-cilium-state"
Jan 13 20:58:39.623574 kubelet[3365]: E0113 20:58:39.623512 3365 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56ccc1a6-1542-49ad-8b9d-0457be27a4b3" containerName="mount-cgroup"
Jan 13 20:58:39.623574 kubelet[3365]: E0113 20:58:39.623521 3365 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56ccc1a6-1542-49ad-8b9d-0457be27a4b3" containerName="cilium-agent"
Jan 13 20:58:39.625160 kubelet[3365]: I0113 20:58:39.625130 3365 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ccc1a6-1542-49ad-8b9d-0457be27a4b3" containerName="cilium-agent"
Jan 13 20:58:39.625160 kubelet[3365]: I0113 20:58:39.625161 3365 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea5d2761-b18b-4191-bd76-c3732e2cf6c5" containerName="cilium-operator"
Jan 13 20:58:39.784643 kubelet[3365]: I0113 20:58:39.784524 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-cni-path\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.786183 kubelet[3365]: I0113 20:58:39.785243 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-etc-cni-netd\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.786183 kubelet[3365]: I0113 20:58:39.785998 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-host-proc-sys-kernel\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.786183 kubelet[3365]: I0113 20:58:39.786060 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmlsd\" (UniqueName: \"kubernetes.io/projected/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-kube-api-access-pmlsd\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.786812 kubelet[3365]: I0113 20:58:39.786593 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-bpf-maps\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.786812 kubelet[3365]: I0113 20:58:39.786683 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-hostproc\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.786812 kubelet[3365]: I0113 20:58:39.786747 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-host-proc-sys-net\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.787243 kubelet[3365]: I0113 20:58:39.786906 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-hubble-tls\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.787243 kubelet[3365]: I0113 20:58:39.787058 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-cilium-cgroup\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.787587 kubelet[3365]: I0113 20:58:39.787359 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-lib-modules\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.787587 kubelet[3365]: I0113 20:58:39.787431 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-clustermesh-secrets\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.787587 kubelet[3365]: I0113 20:58:39.787459 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-cilium-ipsec-secrets\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.787587 kubelet[3365]: I0113 20:58:39.787519 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-xtables-lock\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.787867 kubelet[3365]: I0113 20:58:39.787546 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-cilium-config-path\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.787997 kubelet[3365]: I0113 20:58:39.787928 3365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/483b7b15-689b-4aac-b0ff-d9e6d60f2ce7-cilium-run\") pod \"cilium-xj6w7\" (UID: \"483b7b15-689b-4aac-b0ff-d9e6d60f2ce7\") " pod="kube-system/cilium-xj6w7"
Jan 13 20:58:39.844814 sshd[5423]: Accepted publickey for core from 139.178.89.65 port 38910 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:58:39.847418 sshd-session[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:58:39.853345 systemd-logind[1987]: New session 27 of user core.
Jan 13 20:58:39.857802 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 13 20:58:39.972571 sshd[5426]: Connection closed by 139.178.89.65 port 38910
Jan 13 20:58:39.974177 sshd-session[5423]: pam_unix(sshd:session): session closed for user core
Jan 13 20:58:39.978157 systemd[1]: sshd@26-172.31.31.120:22-139.178.89.65:38910.service: Deactivated successfully.
Jan 13 20:58:39.983196 systemd-logind[1987]: Session 27 logged out. Waiting for processes to exit.
Jan 13 20:58:39.983905 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 20:58:39.985694 systemd-logind[1987]: Removed session 27.
Jan 13 20:58:39.986578 containerd[2014]: time="2025-01-13T20:58:39.986539991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xj6w7,Uid:483b7b15-689b-4aac-b0ff-d9e6d60f2ce7,Namespace:kube-system,Attempt:0,}"
Jan 13 20:58:40.003547 systemd[1]: Started sshd@27-172.31.31.120:22-139.178.89.65:38926.service - OpenSSH per-connection server daemon (139.178.89.65:38926).
Jan 13 20:58:40.027019 containerd[2014]: time="2025-01-13T20:58:40.026849070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:58:40.027322 containerd[2014]: time="2025-01-13T20:58:40.027257141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:58:40.027491 containerd[2014]: time="2025-01-13T20:58:40.027444003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:58:40.029171 containerd[2014]: time="2025-01-13T20:58:40.029125923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:58:40.081750 containerd[2014]: time="2025-01-13T20:58:40.080203298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xj6w7,Uid:483b7b15-689b-4aac-b0ff-d9e6d60f2ce7,Namespace:kube-system,Attempt:0,} returns sandbox id \"02774e57c7aee916a5c92a4dbb0b2a72415ae169ec6aad0eb560d6d5d62c253c\""
Jan 13 20:58:40.085689 containerd[2014]: time="2025-01-13T20:58:40.085511117Z" level=info msg="CreateContainer within sandbox \"02774e57c7aee916a5c92a4dbb0b2a72415ae169ec6aad0eb560d6d5d62c253c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:58:40.107810 containerd[2014]: time="2025-01-13T20:58:40.107770813Z" level=info msg="CreateContainer within sandbox \"02774e57c7aee916a5c92a4dbb0b2a72415ae169ec6aad0eb560d6d5d62c253c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"484b29b042357160e61c764f60b168708a43074c8682387957ad373a9cb87e54\""
Jan 13 20:58:40.108438 containerd[2014]: time="2025-01-13T20:58:40.108415001Z" level=info msg="StartContainer for \"484b29b042357160e61c764f60b168708a43074c8682387957ad373a9cb87e54\""
Jan 13 20:58:40.173809 containerd[2014]: time="2025-01-13T20:58:40.173749969Z" level=info msg="StartContainer for \"484b29b042357160e61c764f60b168708a43074c8682387957ad373a9cb87e54\" returns successfully"
Jan 13 20:58:40.195057 sshd[5436]: Accepted publickey for core from 139.178.89.65 port 38926 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:58:40.194754 sshd-session[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:58:40.211004 systemd-logind[1987]: New session 28 of user core.
Jan 13 20:58:40.215783 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 13 20:58:40.261228 containerd[2014]: time="2025-01-13T20:58:40.261159779Z" level=info msg="shim disconnected" id=484b29b042357160e61c764f60b168708a43074c8682387957ad373a9cb87e54 namespace=k8s.io
Jan 13 20:58:40.261228 containerd[2014]: time="2025-01-13T20:58:40.261223926Z" level=warning msg="cleaning up after shim disconnected" id=484b29b042357160e61c764f60b168708a43074c8682387957ad373a9cb87e54 namespace=k8s.io
Jan 13 20:58:40.261658 containerd[2014]: time="2025-01-13T20:58:40.261238684Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:58:40.854596 containerd[2014]: time="2025-01-13T20:58:40.853777020Z" level=info msg="CreateContainer within sandbox \"02774e57c7aee916a5c92a4dbb0b2a72415ae169ec6aad0eb560d6d5d62c253c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:58:40.873105 containerd[2014]: time="2025-01-13T20:58:40.870158561Z" level=info msg="CreateContainer within sandbox \"02774e57c7aee916a5c92a4dbb0b2a72415ae169ec6aad0eb560d6d5d62c253c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1eb59d3b8507725288cc73c98277d5f81475fe66a980756e7d1edaa7349b0a9d\""
Jan 13 20:58:40.873105 containerd[2014]: time="2025-01-13T20:58:40.871385071Z" level=info msg="StartContainer for \"1eb59d3b8507725288cc73c98277d5f81475fe66a980756e7d1edaa7349b0a9d\""
Jan 13 20:58:40.970670 containerd[2014]: time="2025-01-13T20:58:40.970611700Z" level=info msg="StartContainer for \"1eb59d3b8507725288cc73c98277d5f81475fe66a980756e7d1edaa7349b0a9d\" returns successfully"
Jan 13 20:58:41.015730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1eb59d3b8507725288cc73c98277d5f81475fe66a980756e7d1edaa7349b0a9d-rootfs.mount: Deactivated successfully.
Jan 13 20:58:41.026034 containerd[2014]: time="2025-01-13T20:58:41.025949889Z" level=info msg="shim disconnected" id=1eb59d3b8507725288cc73c98277d5f81475fe66a980756e7d1edaa7349b0a9d namespace=k8s.io
Jan 13 20:58:41.026034 containerd[2014]: time="2025-01-13T20:58:41.026014946Z" level=warning msg="cleaning up after shim disconnected" id=1eb59d3b8507725288cc73c98277d5f81475fe66a980756e7d1edaa7349b0a9d namespace=k8s.io
Jan 13 20:58:41.026034 containerd[2014]: time="2025-01-13T20:58:41.026029382Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:58:41.253884 kubelet[3365]: I0113 20:58:41.253846 3365 setters.go:568] "Node became not ready" node="ip-172-31-31-120" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:58:41Z","lastTransitionTime":"2025-01-13T20:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:58:41.857515 containerd[2014]: time="2025-01-13T20:58:41.857333978Z" level=info msg="CreateContainer within sandbox \"02774e57c7aee916a5c92a4dbb0b2a72415ae169ec6aad0eb560d6d5d62c253c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:58:41.888811 containerd[2014]: time="2025-01-13T20:58:41.888765216Z" level=info msg="CreateContainer within sandbox \"02774e57c7aee916a5c92a4dbb0b2a72415ae169ec6aad0eb560d6d5d62c253c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ebffb2fa3446d49808840dd7e32a59af83a05ae6654188939bc376203aa8e337\""
Jan 13 20:58:41.889889 containerd[2014]: time="2025-01-13T20:58:41.889617415Z" level=info msg="StartContainer for \"ebffb2fa3446d49808840dd7e32a59af83a05ae6654188939bc376203aa8e337\""
Jan 13 20:58:41.963546 containerd[2014]: time="2025-01-13T20:58:41.963498762Z" level=info msg="StartContainer for \"ebffb2fa3446d49808840dd7e32a59af83a05ae6654188939bc376203aa8e337\" returns successfully"
Jan 13 20:58:42.003833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebffb2fa3446d49808840dd7e32a59af83a05ae6654188939bc376203aa8e337-rootfs.mount: Deactivated successfully.
Jan 13 20:58:42.007470 containerd[2014]: time="2025-01-13T20:58:42.007392054Z" level=info msg="shim disconnected" id=ebffb2fa3446d49808840dd7e32a59af83a05ae6654188939bc376203aa8e337 namespace=k8s.io
Jan 13 20:58:42.007470 containerd[2014]: time="2025-01-13T20:58:42.007467070Z" level=warning msg="cleaning up after shim disconnected" id=ebffb2fa3446d49808840dd7e32a59af83a05ae6654188939bc376203aa8e337 namespace=k8s.io
Jan 13 20:58:42.007793 containerd[2014]: time="2025-01-13T20:58:42.007484609Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:58:42.863154 containerd[2014]: time="2025-01-13T20:58:42.863109019Z" level=info msg="CreateContainer within sandbox \"02774e57c7aee916a5c92a4dbb0b2a72415ae169ec6aad0eb560d6d5d62c253c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:58:42.889179 containerd[2014]: time="2025-01-13T20:58:42.889133180Z" level=info msg="CreateContainer within sandbox \"02774e57c7aee916a5c92a4dbb0b2a72415ae169ec6aad0eb560d6d5d62c253c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"64ee84b7c4e626c30ba076f7d71eb3e804d8711aa6d5eca518dddc22cb663ac7\""
Jan 13 20:58:42.889982 containerd[2014]: time="2025-01-13T20:58:42.889844544Z" level=info msg="StartContainer for \"64ee84b7c4e626c30ba076f7d71eb3e804d8711aa6d5eca518dddc22cb663ac7\""
Jan 13 20:58:42.982211 containerd[2014]: time="2025-01-13T20:58:42.981767070Z" level=info msg="StartContainer for \"64ee84b7c4e626c30ba076f7d71eb3e804d8711aa6d5eca518dddc22cb663ac7\" returns successfully"
Jan 13 20:58:43.005340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64ee84b7c4e626c30ba076f7d71eb3e804d8711aa6d5eca518dddc22cb663ac7-rootfs.mount: Deactivated successfully.
Jan 13 20:58:43.017734 containerd[2014]: time="2025-01-13T20:58:43.017609047Z" level=info msg="shim disconnected" id=64ee84b7c4e626c30ba076f7d71eb3e804d8711aa6d5eca518dddc22cb663ac7 namespace=k8s.io
Jan 13 20:58:43.018160 containerd[2014]: time="2025-01-13T20:58:43.017731220Z" level=warning msg="cleaning up after shim disconnected" id=64ee84b7c4e626c30ba076f7d71eb3e804d8711aa6d5eca518dddc22cb663ac7 namespace=k8s.io
Jan 13 20:58:43.018160 containerd[2014]: time="2025-01-13T20:58:43.017778907Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:58:43.429689 kubelet[3365]: E0113 20:58:43.429657 3365 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:58:43.876800 containerd[2014]: time="2025-01-13T20:58:43.876616559Z" level=info msg="CreateContainer within sandbox \"02774e57c7aee916a5c92a4dbb0b2a72415ae169ec6aad0eb560d6d5d62c253c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:58:43.900510 containerd[2014]: time="2025-01-13T20:58:43.900460287Z" level=info msg="CreateContainer within sandbox \"02774e57c7aee916a5c92a4dbb0b2a72415ae169ec6aad0eb560d6d5d62c253c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2214159ae6418489c4815d776435b30b0377cedd3a545518feb65be75df36dd6\""
Jan 13 20:58:43.902784 containerd[2014]: time="2025-01-13T20:58:43.901250827Z" level=info msg="StartContainer for \"2214159ae6418489c4815d776435b30b0377cedd3a545518feb65be75df36dd6\""
Jan 13 20:58:43.988106 containerd[2014]: time="2025-01-13T20:58:43.987243782Z" level=info msg="StartContainer for \"2214159ae6418489c4815d776435b30b0377cedd3a545518feb65be75df36dd6\" returns successfully"
Jan 13 20:58:44.017843 systemd[1]: run-containerd-runc-k8s.io-2214159ae6418489c4815d776435b30b0377cedd3a545518feb65be75df36dd6-runc.WM8g3w.mount: Deactivated successfully.
Jan 13 20:58:44.779139 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 20:58:46.200251 kubelet[3365]: E0113 20:58:46.199984 3365 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-4f5nt" podUID="50d7d21f-2266-445c-a977-0a37d4b5701d"
Jan 13 20:58:47.140860 systemd[1]: run-containerd-runc-k8s.io-2214159ae6418489c4815d776435b30b0377cedd3a545518feb65be75df36dd6-runc.Nm7Qii.mount: Deactivated successfully.
Jan 13 20:58:48.200511 kubelet[3365]: E0113 20:58:48.200471 3365 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-4f5nt" podUID="50d7d21f-2266-445c-a977-0a37d4b5701d"
Jan 13 20:58:48.203135 kubelet[3365]: E0113 20:58:48.202068 3365 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-bbj7g" podUID="44fe1692-17ea-44ab-8346-41c3a470b6d5"
Jan 13 20:58:48.220226 containerd[2014]: time="2025-01-13T20:58:48.220182653Z" level=info msg="StopPodSandbox for \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\""
Jan 13 20:58:48.220712 containerd[2014]: time="2025-01-13T20:58:48.220290683Z" level=info msg="TearDown network for sandbox \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" successfully"
Jan 13 20:58:48.220712 containerd[2014]: time="2025-01-13T20:58:48.220308151Z" level=info msg="StopPodSandbox for \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" returns successfully"
Jan 13 20:58:48.222349 containerd[2014]: time="2025-01-13T20:58:48.221264275Z" level=info msg="RemovePodSandbox for \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\""
Jan 13 20:58:48.222349 containerd[2014]: time="2025-01-13T20:58:48.221310563Z" level=info msg="Forcibly stopping sandbox \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\""
Jan 13 20:58:48.222349 containerd[2014]: time="2025-01-13T20:58:48.221373451Z" level=info msg="TearDown network for sandbox \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" successfully"
Jan 13 20:58:48.232758 containerd[2014]: time="2025-01-13T20:58:48.232707136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
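The repeated "network is not ready" errors come from the kubelet polling the runtime's Status RPC until Cilium writes its CNI configuration. A hedged sketch (socket path per this host's containerd; not kubelet code) that queries the same CRI Status endpoint with the cri-api Go bindings:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint over its unix socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.Status(context.TODO(), &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatal(err)
	}

	// Until the CNI plugin initializes, NetworkReady reports false with
	// reason NetworkPluginNotReady, matching the kubelet errors above.
	for _, cond := range resp.Status.Conditions {
		fmt.Printf("%s=%v reason=%q message=%q\n",
			cond.Type, cond.Status, cond.Reason, cond.Message)
	}
}
```

Once the cilium-agent container starts and installs its CNI config, NetworkReady flips to true and the coredns pods become schedulable again.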
Jan 13 20:58:48.232924 containerd[2014]: time="2025-01-13T20:58:48.232791814Z" level=info msg="RemovePodSandbox \"ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8\" returns successfully"
Jan 13 20:58:48.234288 containerd[2014]: time="2025-01-13T20:58:48.234252942Z" level=info msg="StopPodSandbox for \"bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e\""
Jan 13 20:58:48.234412 containerd[2014]: time="2025-01-13T20:58:48.234352970Z" level=info msg="TearDown network for sandbox \"bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e\" successfully"
Jan 13 20:58:48.234412 containerd[2014]: time="2025-01-13T20:58:48.234370324Z" level=info msg="StopPodSandbox for \"bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e\" returns successfully"
Jan 13 20:58:48.237400 containerd[2014]: time="2025-01-13T20:58:48.237221187Z" level=info msg="RemovePodSandbox for \"bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e\""
Jan 13 20:58:48.237518 containerd[2014]: time="2025-01-13T20:58:48.237412464Z" level=info msg="Forcibly stopping sandbox \"bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e\""
Jan 13 20:58:48.237575 containerd[2014]: time="2025-01-13T20:58:48.237487174Z" level=info msg="TearDown network for sandbox \"bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e\" successfully"
Jan 13 20:58:48.250618 containerd[2014]: time="2025-01-13T20:58:48.250562743Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:58:48.250761 containerd[2014]: time="2025-01-13T20:58:48.250654785Z" level=info msg="RemovePodSandbox \"bc34a64490ff16c37afdc865dc6b22a6b885cd1854b1ddfc6e9c58532e61f41e\" returns successfully"
Jan 13 20:58:48.259748 systemd-networkd[1569]: lxc_health: Link UP
Jan 13 20:58:48.268135 systemd-networkd[1569]: lxc_health: Gained carrier
Jan 13 20:58:48.289993 (udev-worker)[6298]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:58:49.831256 systemd[1]: run-containerd-runc-k8s.io-2214159ae6418489c4815d776435b30b0377cedd3a545518feb65be75df36dd6-runc.2i0npm.mount: Deactivated successfully.
Jan 13 20:58:50.021963 kubelet[3365]: I0113 20:58:50.021749 3365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xj6w7" podStartSLOduration=11.021673737 podStartE2EDuration="11.021673737s" podCreationTimestamp="2025-01-13 20:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:58:44.924141477 +0000 UTC m=+116.931500080" watchObservedRunningTime="2025-01-13 20:58:50.021673737 +0000 UTC m=+122.029032339"
Jan 13 20:58:50.033338 systemd-networkd[1569]: lxc_health: Gained IPv6LL
Jan 13 20:58:52.186306 ntpd[1966]: Listen normally on 13 lxc_health [fe80::b02f:eaff:fee2:8d1c%14]:123
Jan 13 20:58:52.188579 ntpd[1966]: 13 Jan 20:58:52 ntpd[1966]: Listen normally on 13 lxc_health [fe80::b02f:eaff:fee2:8d1c%14]:123
Jan 13 20:58:59.291794 sshd[5527]: Connection closed by 139.178.89.65 port 38926
Jan 13 20:58:59.292967 sshd-session[5436]: pam_unix(sshd:session): session closed for user core
Jan 13 20:58:59.297628 systemd[1]: sshd@27-172.31.31.120:22-139.178.89.65:38926.service: Deactivated successfully.
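The StopPodSandbox / TearDown / RemovePodSandbox sequences above are the kubelet garbage-collecting stale sandboxes through the CRI. A hedged sketch of the equivalent direct CRI calls (sandbox ID taken from the log; normally only the kubelet should drive these RPCs):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	const sandboxID = "ab41aac2128d1a7c0327ac25c1179acc39a7ae085302109497d89ab3d290b3d8"

	// StopPodSandbox tears down the sandbox's network namespace first
	// ("TearDown network ... successfully" in the log), then stops it.
	if _, err := rt.StopPodSandbox(context.TODO(),
		&runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatal(err)
	}

	// RemovePodSandbox deletes the sandbox record. containerd logs a warning
	// when the sandbox is already gone, as seen above, yet still returns success,
	// which keeps the operation idempotent for the kubelet's retries.
	if _, err := rt.RemovePodSandbox(context.TODO(),
		&runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatal(err)
	}
}
```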
Jan 13 20:58:59.304355 systemd-logind[1987]: Session 28 logged out. Waiting for processes to exit.
Jan 13 20:58:59.305822 systemd[1]: session-28.scope: Deactivated successfully.
Jan 13 20:58:59.309404 systemd-logind[1987]: Removed session 28.
Jan 13 20:59:12.962737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b95b9f0ed478fcb995f61398b4465f99c7b85b400376720e91f28e52f2bfc7df-rootfs.mount: Deactivated successfully.
Jan 13 20:59:12.979267 containerd[2014]: time="2025-01-13T20:59:12.979186238Z" level=info msg="shim disconnected" id=b95b9f0ed478fcb995f61398b4465f99c7b85b400376720e91f28e52f2bfc7df namespace=k8s.io
Jan 13 20:59:12.979267 containerd[2014]: time="2025-01-13T20:59:12.979243380Z" level=warning msg="cleaning up after shim disconnected" id=b95b9f0ed478fcb995f61398b4465f99c7b85b400376720e91f28e52f2bfc7df namespace=k8s.io
Jan 13 20:59:12.979267 containerd[2014]: time="2025-01-13T20:59:12.979256133Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:59:13.033098 kubelet[3365]: I0113 20:59:13.033051 3365 scope.go:117] "RemoveContainer" containerID="b95b9f0ed478fcb995f61398b4465f99c7b85b400376720e91f28e52f2bfc7df"
Jan 13 20:59:13.041421 containerd[2014]: time="2025-01-13T20:59:13.041380013Z" level=info msg="CreateContainer within sandbox \"71dcbdd32019c2984173499565e42c5c3b5ee7995b16c6dfbb397641c2dc0e2b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 20:59:13.062196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount48185787.mount: Deactivated successfully.
Jan 13 20:59:13.063479 containerd[2014]: time="2025-01-13T20:59:13.063226394Z" level=info msg="CreateContainer within sandbox \"71dcbdd32019c2984173499565e42c5c3b5ee7995b16c6dfbb397641c2dc0e2b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ec17f56447998d4cabc2c6949015afbe2755805bc89c782cf26fc6a52c890da3\""
Jan 13 20:59:13.064107 containerd[2014]: time="2025-01-13T20:59:13.063984071Z" level=info msg="StartContainer for \"ec17f56447998d4cabc2c6949015afbe2755805bc89c782cf26fc6a52c890da3\""
Jan 13 20:59:13.153149 containerd[2014]: time="2025-01-13T20:59:13.153064274Z" level=info msg="StartContainer for \"ec17f56447998d4cabc2c6949015afbe2755805bc89c782cf26fc6a52c890da3\" returns successfully"
Jan 13 20:59:18.170817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e555d4584cc59368dd12ba2596d42d195e8031097904899d0a63fba28807697-rootfs.mount: Deactivated successfully.
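The "RemoveContainer" and Attempt:1 pair above is the kubelet replacing a dead kube-controller-manager container: the shim exits, the rootfs mount is released, the old container record is removed, and a new container is created inside the existing sandbox with an incremented attempt counter. Each such re-create surfaces as a restart count in the pod status; a hedged client-go sketch (kubeconfig path is an assumption) that lists those counts:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location on the node.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Every CRI-level re-create (Attempt:1, Attempt:2, ...) shows up as an
	// incremented restartCount in the pod's container statuses.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			fmt.Printf("%s/%s restarts=%d\n", pod.Name, cs.Name, cs.RestartCount)
		}
	}
}
```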
Jan 13 20:59:18.187937 containerd[2014]: time="2025-01-13T20:59:18.187873811Z" level=info msg="shim disconnected" id=3e555d4584cc59368dd12ba2596d42d195e8031097904899d0a63fba28807697 namespace=k8s.io
Jan 13 20:59:18.187937 containerd[2014]: time="2025-01-13T20:59:18.187931149Z" level=warning msg="cleaning up after shim disconnected" id=3e555d4584cc59368dd12ba2596d42d195e8031097904899d0a63fba28807697 namespace=k8s.io
Jan 13 20:59:18.187937 containerd[2014]: time="2025-01-13T20:59:18.187942464Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:59:19.064223 kubelet[3365]: I0113 20:59:19.064178 3365 scope.go:117] "RemoveContainer" containerID="3e555d4584cc59368dd12ba2596d42d195e8031097904899d0a63fba28807697"
Jan 13 20:59:19.069918 containerd[2014]: time="2025-01-13T20:59:19.069257107Z" level=info msg="CreateContainer within sandbox \"6921ea3cb5e676ae2cf24d9309714314ffc88bbe8c64af66c6f902d7d0bbc128\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 20:59:19.101970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074332305.mount: Deactivated successfully.
Jan 13 20:59:19.104047 containerd[2014]: time="2025-01-13T20:59:19.104009074Z" level=info msg="CreateContainer within sandbox \"6921ea3cb5e676ae2cf24d9309714314ffc88bbe8c64af66c6f902d7d0bbc128\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"54db008850ea75c1bab89d30365eeff5c17193a6d2f94bc6ed0847ed8a9ca9f2\""
Jan 13 20:59:19.105107 containerd[2014]: time="2025-01-13T20:59:19.104725274Z" level=info msg="StartContainer for \"54db008850ea75c1bab89d30365eeff5c17193a6d2f94bc6ed0847ed8a9ca9f2\""
Jan 13 20:59:19.222432 containerd[2014]: time="2025-01-13T20:59:19.222274716Z" level=info msg="StartContainer for \"54db008850ea75c1bab89d30365eeff5c17193a6d2f94bc6ed0847ed8a9ca9f2\" returns successfully"
Jan 13 20:59:21.048148 kubelet[3365]: E0113 20:59:21.048104 3365 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-120?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 20:59:31.048825 kubelet[3365]: E0113 20:59:31.048555 3365 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-120?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
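The closing "Failed to update lease" errors show the kubelet timing out against the API server at 172.31.31.120:6443 while renewing its node heartbeat, a coordination.k8s.io/v1 Lease named after the node in the kube-node-lease namespace (exactly the PUT URL in the error). If renewals keep failing past the node-monitor grace period, the node-lifecycle controller marks the node NotReady. A hedged client-go sketch (kubeconfig path is an assumption) that inspects the lease:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location on the node.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// The lease object the failing PUT above was trying to renew.
	lease, err := clientset.CoordinationV1().Leases("kube-node-lease").Get(
		context.TODO(), "ip-172-31-31-120", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// A stale renewTime here means the kubelet's heartbeats are not landing.
	if lease.Spec.HolderIdentity != nil && lease.Spec.RenewTime != nil {
		fmt.Printf("holder=%s lastRenew=%s\n",
			*lease.Spec.HolderIdentity, lease.Spec.RenewTime.Time)
	}
}
```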