Mar 17 17:59:02.748529 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:09:25 -00 2025
Mar 17 17:59:02.748576 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:59:02.748593 kernel: BIOS-provided physical RAM map:
Mar 17 17:59:02.750726 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 17:59:02.750747 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 17:59:02.750760 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 17:59:02.753302 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Mar 17 17:59:02.753320 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Mar 17 17:59:02.753333 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Mar 17 17:59:02.753346 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 17:59:02.753358 kernel: NX (Execute Disable) protection: active
Mar 17 17:59:02.753371 kernel: APIC: Static calls initialized
Mar 17 17:59:02.753383 kernel: SMBIOS 2.7 present.
Mar 17 17:59:02.753397 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 17 17:59:02.753417 kernel: Hypervisor detected: KVM
Mar 17 17:59:02.753431 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:59:02.753445 kernel: kvm-clock: using sched offset of 8133931601 cycles
Mar 17 17:59:02.753461 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:59:02.753477 kernel: tsc: Detected 2499.996 MHz processor
Mar 17 17:59:02.753526 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:59:02.753545 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:59:02.753564 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Mar 17 17:59:02.753580 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 17:59:02.753737 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:59:02.753755 kernel: Using GB pages for direct mapping
Mar 17 17:59:02.753769 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:59:02.753784 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Mar 17 17:59:02.753799 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Mar 17 17:59:02.753813 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 17 17:59:02.753897 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 17 17:59:02.753919 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Mar 17 17:59:02.753995 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 17 17:59:02.754015 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 17 17:59:02.754030 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 17 17:59:02.754044 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 17 17:59:02.754058 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 17 17:59:02.754113 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 17 17:59:02.754127 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 17 17:59:02.754142 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Mar 17 17:59:02.754162 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Mar 17 17:59:02.754184 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Mar 17 17:59:02.754258 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Mar 17 17:59:02.754275 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Mar 17 17:59:02.754292 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Mar 17 17:59:02.754311 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Mar 17 17:59:02.754326 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Mar 17 17:59:02.754341 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Mar 17 17:59:02.754356 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Mar 17 17:59:02.754371 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 17:59:02.754386 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 17:59:02.754401 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 17 17:59:02.754417 kernel: NUMA: Initialized distance table, cnt=1
Mar 17 17:59:02.754432 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Mar 17 17:59:02.754450 kernel: Zone ranges:
Mar 17 17:59:02.754465 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:59:02.754480 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Mar 17 17:59:02.754526 kernel: Normal empty
Mar 17 17:59:02.754544 kernel: Movable zone start for each node
Mar 17 17:59:02.754559 kernel: Early memory node ranges
Mar 17 17:59:02.754573 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 17:59:02.754585 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Mar 17 17:59:02.754598 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Mar 17 17:59:02.755433 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:59:02.755457 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 17:59:02.757681 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Mar 17 17:59:02.757712 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 17 17:59:02.757728 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:59:02.757744 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 17 17:59:02.757759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:59:02.757773 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:59:02.757788 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:59:02.757803 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:59:02.757824 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:59:02.757839 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:59:02.757853 kernel: TSC deadline timer available
Mar 17 17:59:02.757868 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 17:59:02.757883 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:59:02.757898 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Mar 17 17:59:02.757913 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:59:02.757928 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:59:02.758015 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 17 17:59:02.758038 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 17 17:59:02.758052 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 17 17:59:02.758067 kernel: pcpu-alloc: [0] 0 1
Mar 17 17:59:02.758081 kernel: kvm-guest: PV spinlocks enabled
Mar 17 17:59:02.758096 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 17:59:02.758113 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:59:02.758129 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:59:02.758143 kernel: random: crng init done
Mar 17 17:59:02.758160 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:59:02.758175 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 17:59:02.758254 kernel: Fallback order for Node 0: 0
Mar 17 17:59:02.758272 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Mar 17 17:59:02.758287 kernel: Policy zone: DMA32
Mar 17 17:59:02.758302 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:59:02.758318 kernel: Memory: 1930300K/2057760K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43476K init, 1596K bss, 127200K reserved, 0K cma-reserved)
Mar 17 17:59:02.758333 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:59:02.758347 kernel: Kernel/User page tables isolation: enabled
Mar 17 17:59:02.758366 kernel: ftrace: allocating 37910 entries in 149 pages
Mar 17 17:59:02.758380 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:59:02.758395 kernel: Dynamic Preempt: voluntary
Mar 17 17:59:02.758409 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:59:02.758425 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:59:02.758440 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:59:02.758454 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:59:02.758469 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:59:02.758483 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:59:02.758533 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:59:02.758550 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:59:02.758566 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 17:59:02.758581 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:59:02.758595 kernel: Console: colour VGA+ 80x25
Mar 17 17:59:02.758628 kernel: printk: console [ttyS0] enabled
Mar 17 17:59:02.758643 kernel: ACPI: Core revision 20230628
Mar 17 17:59:02.758658 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 17 17:59:02.758673 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:59:02.758691 kernel: x2apic enabled
Mar 17 17:59:02.758706 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:59:02.758733 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Mar 17 17:59:02.758752 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Mar 17 17:59:02.758767 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 17 17:59:02.758782 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 17 17:59:02.759098 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:59:02.759227 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:59:02.759245 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:59:02.759260 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:59:02.759276 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 17 17:59:02.759291 kernel: RETBleed: Vulnerable
Mar 17 17:59:02.759307 kernel: Speculative Store Bypass: Vulnerable
Mar 17 17:59:02.759327 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 17:59:02.759342 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 17:59:02.759358 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 17 17:59:02.759373 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:59:02.759388 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:59:02.759411 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:59:02.759430 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 17 17:59:02.759446 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 17 17:59:02.759462 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 17 17:59:02.759478 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 17 17:59:02.759524 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 17 17:59:02.759543 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 17 17:59:02.759559 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:59:02.759574 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 17 17:59:02.759589 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 17 17:59:02.759639 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 17 17:59:02.759656 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 17 17:59:02.759677 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 17 17:59:02.759692 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 17 17:59:02.759708 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 17 17:59:02.759724 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:59:02.759739 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:59:02.759755 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:59:02.759770 kernel: landlock: Up and running.
Mar 17 17:59:02.759786 kernel: SELinux: Initializing.
Mar 17 17:59:02.759801 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:59:02.759817 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:59:02.759833 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 17 17:59:02.759852 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:59:02.759868 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:59:02.759884 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:59:02.759900 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 17 17:59:02.759916 kernel: signal: max sigframe size: 3632
Mar 17 17:59:02.759931 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:59:02.759948 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:59:02.759964 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 17:59:02.759979 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:59:02.759998 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:59:02.760013 kernel: .... node #0, CPUs: #1
Mar 17 17:59:02.760031 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 17 17:59:02.760048 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 17 17:59:02.760063 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:59:02.760079 kernel: smpboot: Max logical packages: 1
Mar 17 17:59:02.760094 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Mar 17 17:59:02.760110 kernel: devtmpfs: initialized
Mar 17 17:59:02.760126 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:59:02.760145 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:59:02.760161 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:59:02.760177 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:59:02.760193 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:59:02.760208 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:59:02.760224 kernel: audit: type=2000 audit(1742234340.880:1): state=initialized audit_enabled=0 res=1
Mar 17 17:59:02.760239 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:59:02.760254 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:59:02.760273 kernel: cpuidle: using governor menu
Mar 17 17:59:02.760289 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:59:02.760305 kernel: dca service started, version 1.12.1
Mar 17 17:59:02.760321 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:59:02.760336 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:59:02.760352 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:59:02.760368 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:59:02.760383 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:59:02.760399 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:59:02.760418 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:59:02.760434 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:59:02.760450 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:59:02.760466 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:59:02.760481 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 17 17:59:02.760525 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:59:02.760544 kernel: ACPI: Interpreter enabled
Mar 17 17:59:02.760561 kernel: ACPI: PM: (supports S0 S5)
Mar 17 17:59:02.760577 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:59:02.760592 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:59:02.760622 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 17:59:02.760637 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Mar 17 17:59:02.760653 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:59:02.760910 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:59:02.761127 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 17 17:59:02.761267 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 17 17:59:02.761286 kernel: acpiphp: Slot [3] registered
Mar 17 17:59:02.761306 kernel: acpiphp: Slot [4] registered
Mar 17 17:59:02.761322 kernel: acpiphp: Slot [5] registered
Mar 17 17:59:02.761337 kernel: acpiphp: Slot [6] registered
Mar 17 17:59:02.761352 kernel: acpiphp: Slot [7] registered
Mar 17 17:59:02.761368 kernel: acpiphp: Slot [8] registered
Mar 17 17:59:02.761383 kernel: acpiphp: Slot [9] registered
Mar 17 17:59:02.761398 kernel: acpiphp: Slot [10] registered
Mar 17 17:59:02.761414 kernel: acpiphp: Slot [11] registered
Mar 17 17:59:02.761428 kernel: acpiphp: Slot [12] registered
Mar 17 17:59:02.761447 kernel: acpiphp: Slot [13] registered
Mar 17 17:59:02.761462 kernel: acpiphp: Slot [14] registered
Mar 17 17:59:02.761477 kernel: acpiphp: Slot [15] registered
Mar 17 17:59:02.761523 kernel: acpiphp: Slot [16] registered
Mar 17 17:59:02.761541 kernel: acpiphp: Slot [17] registered
Mar 17 17:59:02.761556 kernel: acpiphp: Slot [18] registered
Mar 17 17:59:02.761572 kernel: acpiphp: Slot [19] registered
Mar 17 17:59:02.761587 kernel: acpiphp: Slot [20] registered
Mar 17 17:59:02.761681 kernel: acpiphp: Slot [21] registered
Mar 17 17:59:02.761697 kernel: acpiphp: Slot [22] registered
Mar 17 17:59:02.761718 kernel: acpiphp: Slot [23] registered
Mar 17 17:59:02.761733 kernel: acpiphp: Slot [24] registered
Mar 17 17:59:02.761749 kernel: acpiphp: Slot [25] registered
Mar 17 17:59:02.761764 kernel: acpiphp: Slot [26] registered
Mar 17 17:59:02.761780 kernel: acpiphp: Slot [27] registered
Mar 17 17:59:02.761796 kernel: acpiphp: Slot [28] registered
Mar 17 17:59:02.761812 kernel: acpiphp: Slot [29] registered
Mar 17 17:59:02.761827 kernel: acpiphp: Slot [30] registered
Mar 17 17:59:02.761843 kernel: acpiphp: Slot [31] registered
Mar 17 17:59:02.761862 kernel: PCI host bridge to bus 0000:00
Mar 17 17:59:02.762314 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 17:59:02.762531 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 17:59:02.762679 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:59:02.762800 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 17 17:59:02.762920 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:59:02.763080 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 17:59:02.764051 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 17 17:59:02.764667 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Mar 17 17:59:02.764833 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 17 17:59:02.765043 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Mar 17 17:59:02.765181 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 17 17:59:02.765315 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 17 17:59:02.765547 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 17 17:59:02.767976 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 17 17:59:02.768136 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 17 17:59:02.768289 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 17 17:59:02.768448 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Mar 17 17:59:02.768596 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Mar 17 17:59:02.771858 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Mar 17 17:59:02.772006 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 17:59:02.772159 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 17 17:59:02.772297 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Mar 17 17:59:02.772439 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 17 17:59:02.772576 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Mar 17 17:59:02.772596 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:59:02.772655 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:59:02.772675 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:59:02.772692 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:59:02.772708 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 17:59:02.772725 kernel: iommu: Default domain type: Translated
Mar 17 17:59:02.772740 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:59:02.772757 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:59:02.772773 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:59:02.772789 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 17:59:02.772804 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Mar 17 17:59:02.773137 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 17 17:59:02.773287 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 17 17:59:02.773452 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 17:59:02.773473 kernel: vgaarb: loaded
Mar 17 17:59:02.773490 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 17 17:59:02.773506 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 17 17:59:02.773523 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:59:02.773539 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:59:02.773555 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:59:02.773576 kernel: pnp: PnP ACPI init
Mar 17 17:59:02.773694 kernel: pnp: PnP ACPI: found 5 devices
Mar 17 17:59:02.773714 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:59:02.773731 kernel: NET: Registered PF_INET protocol family
Mar 17 17:59:02.773747 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:59:02.773764 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 17:59:02.773780 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:59:02.773796 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 17:59:02.773817 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 17 17:59:02.773834 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 17:59:02.773850 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:59:02.773866 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:59:02.773882 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:59:02.773898 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:59:02.774041 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 17:59:02.774268 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 17:59:02.774404 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:59:02.774545 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 17 17:59:02.777925 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 17:59:02.777967 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:59:02.777984 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 17:59:02.778002 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Mar 17 17:59:02.778018 kernel: clocksource: Switched to clocksource tsc
Mar 17 17:59:02.778035 kernel: Initialise system trusted keyrings
Mar 17 17:59:02.778051 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 17 17:59:02.778074 kernel: Key type asymmetric registered
Mar 17 17:59:02.778090 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:59:02.778106 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:59:02.778122 kernel: io scheduler mq-deadline registered
Mar 17 17:59:02.778139 kernel: io scheduler kyber registered
Mar 17 17:59:02.778155 kernel: io scheduler bfq registered
Mar 17 17:59:02.778172 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:59:02.778188 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:59:02.778205 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 17:59:02.778224 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 17:59:02.778239 kernel: i8042: Warning: Keylock active
Mar 17 17:59:02.778254 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 17:59:02.778269 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 17:59:02.778426 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 17 17:59:02.778549 kernel: rtc_cmos 00:00: registered as rtc0
Mar 17 17:59:02.778687 kernel: rtc_cmos 00:00: setting system clock to 2025-03-17T17:59:01 UTC (1742234341)
Mar 17 17:59:02.778854 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 17 17:59:02.778879 kernel: intel_pstate: CPU model not supported
Mar 17 17:59:02.778896 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:59:02.778911 kernel: Segment Routing with IPv6
Mar 17 17:59:02.778926 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:59:02.778941 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:59:02.778957 kernel: Key type dns_resolver registered
Mar 17 17:59:02.778973 kernel: IPI shorthand broadcast: enabled
Mar 17 17:59:02.778988 kernel: sched_clock: Marking stable (1340003559, 182684559)->(1621517310, -98829192)
Mar 17 17:59:02.779003 kernel: registered taskstats version 1
Mar 17 17:59:02.779022 kernel: Loading compiled-in X.509 certificates
Mar 17 17:59:02.779037 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2d438fc13e28f87f3f580874887bade2e2b0c7dd'
Mar 17 17:59:02.779052 kernel: Key type .fscrypt registered
Mar 17 17:59:02.779067 kernel: Key type fscrypt-provisioning registered
Mar 17 17:59:02.779082 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:59:02.779098 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:59:02.779112 kernel: ima: No architecture policies found
Mar 17 17:59:02.779128 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 17:59:02.779143 kernel: clk: Disabling unused clocks
Mar 17 17:59:02.779161 kernel: Freeing unused kernel image (initmem) memory: 43476K
Mar 17 17:59:02.779176 kernel: Write protecting the kernel read-only data: 38912k
Mar 17 17:59:02.779191 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K
Mar 17 17:59:02.779206 kernel: Run /init as init process
Mar 17 17:59:02.779222 kernel: with arguments:
Mar 17 17:59:02.779236 kernel: /init
Mar 17 17:59:02.779251 kernel: with environment:
Mar 17 17:59:02.779265 kernel: HOME=/
Mar 17 17:59:02.779279 kernel: TERM=linux
Mar 17 17:59:02.779299 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:59:02.779339 systemd[1]: Successfully made /usr/ read-only.
Mar 17 17:59:02.779358 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:59:02.779373 systemd[1]: Detected virtualization amazon.
Mar 17 17:59:02.779389 systemd[1]: Detected architecture x86-64.
Mar 17 17:59:02.779412 systemd[1]: Running in initrd.
Mar 17 17:59:02.779428 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:59:02.779461 systemd[1]: Hostname set to .
Mar 17 17:59:02.779478 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:59:02.779493 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:59:02.779507 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:59:02.779522 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:59:02.779541 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:59:02.779556 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:59:02.779575 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:59:02.779594 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:59:02.781719 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:59:02.781743 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:59:02.781761 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:59:02.781786 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:59:02.781802 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:59:02.781823 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:59:02.781841 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:59:02.781856 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:59:02.781874 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:59:02.781890 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:59:02.781906 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:59:02.781923 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 17:59:02.781941 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:59:02.781958 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:59:02.781981 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:59:02.781998 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:59:02.782016 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:59:02.782072 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:59:02.782094 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:59:02.782119 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:59:02.782138 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:59:02.782156 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:59:02.782174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:59:02.782191 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:59:02.782246 systemd-journald[180]: Collecting audit messages is disabled.
Mar 17 17:59:02.782290 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:59:02.782310 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:59:02.782330 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:59:02.782352 systemd-journald[180]: Journal started
Mar 17 17:59:02.782389 systemd-journald[180]: Runtime Journal (/run/log/journal/ec2890b56b3c137117c4a93d57684ffe) is 4.8M, max 38.5M, 33.7M free.
Mar 17 17:59:02.777637 systemd-modules-load[182]: Inserted module 'overlay'
Mar 17 17:59:02.790779 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:59:02.826633 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:59:02.829112 systemd-modules-load[182]: Inserted module 'br_netfilter'
Mar 17 17:59:02.998459 kernel: Bridge firewalling registered
Mar 17 17:59:03.001774 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:59:03.005508 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:59:03.019520 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:59:03.037406 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:59:03.042930 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:59:03.058854 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:59:03.071082 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:59:03.084540 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:59:03.107880 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:59:03.120915 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:59:03.127637 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:59:03.151985 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:59:03.172214 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:59:03.220023 dracut-cmdline[214]: dracut-dracut-053
Mar 17 17:59:03.250598 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:59:03.322407 systemd-resolved[215]: Positive Trust Anchors:
Mar 17 17:59:03.323050 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:59:03.324101 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:59:03.339819 systemd-resolved[215]: Defaulting to hostname 'linux'.
Mar 17 17:59:03.341579 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:59:03.345804 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:59:03.445644 kernel: SCSI subsystem initialized
Mar 17 17:59:03.457637 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:59:03.470647 kernel: iscsi: registered transport (tcp)
Mar 17 17:59:03.496657 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:59:03.496735 kernel: QLogic iSCSI HBA Driver
Mar 17 17:59:03.545055 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:59:03.552870 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:59:03.589856 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:59:03.589945 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:59:03.589967 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:59:03.676721 kernel: raid6: avx512x4 gen() 8392 MB/s
Mar 17 17:59:03.695112 kernel: raid6: avx512x2 gen() 7984 MB/s
Mar 17 17:59:03.713658 kernel: raid6: avx512x1 gen() 6511 MB/s
Mar 17 17:59:03.730639 kernel: raid6: avx2x4 gen() 13300 MB/s
Mar 17 17:59:03.747630 kernel: raid6: avx2x2 gen() 16042 MB/s
Mar 17 17:59:03.764750 kernel: raid6: avx2x1 gen() 12191 MB/s
Mar 17 17:59:03.764827 kernel: raid6: using algorithm avx2x2 gen() 16042 MB/s
Mar 17 17:59:03.784645 kernel: raid6: .... xor() 7938 MB/s, rmw enabled
Mar 17 17:59:03.784794 kernel: raid6: using avx512x2 recovery algorithm
Mar 17 17:59:03.831635 kernel: xor: automatically using best checksumming function avx
Mar 17 17:59:04.062638 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:59:04.075379 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:59:04.088887 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:59:04.134159 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Mar 17 17:59:04.142067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:59:04.152826 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:59:04.188863 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Mar 17 17:59:04.257072 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:59:04.267826 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:59:04.371104 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:59:04.381917 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:59:04.431378 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:59:04.439837 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:59:04.443835 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:59:04.445669 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:59:04.458295 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:59:04.496172 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:59:04.546689 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 17:59:04.577132 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 17 17:59:04.610351 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 17 17:59:04.610550 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 17 17:59:04.610780 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 17 17:59:04.610802 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 17:59:04.610819 kernel: AES CTR mode by8 optimization enabled
Mar 17 17:59:04.610843 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 17 17:59:04.611018 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 17 17:59:04.611164 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:1e:a8:7b:da:a5
Mar 17 17:59:04.611342 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:59:04.611518 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:59:04.616908 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:59:04.616943 kernel: GPT:9289727 != 16777215
Mar 17 17:59:04.616962 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:59:04.616979 kernel: GPT:9289727 != 16777215
Mar 17 17:59:04.616996 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:59:04.617013 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:59:04.618995 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:59:04.621957 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:59:04.622205 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:59:04.622550 (udev-worker)[454]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:59:04.625201 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:59:04.632959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:59:04.637359 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:59:04.733700 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (445)
Mar 17 17:59:04.785628 kernel: BTRFS: device fsid 16b3954e-2e86-4c7f-a948-d3d3817b1bdc devid 1 transid 42 /dev/nvme0n1p3 scanned by (udev-worker) (458)
Mar 17 17:59:04.875646 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 17 17:59:04.901516 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:59:04.911631 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 17 17:59:04.911786 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 17 17:59:04.941680 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 17 17:59:04.966925 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 17 17:59:05.005066 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:59:05.018805 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:59:05.025441 disk-uuid[619]: Primary Header is updated.
Mar 17 17:59:05.025441 disk-uuid[619]: Secondary Entries is updated.
Mar 17 17:59:05.025441 disk-uuid[619]: Secondary Header is updated.
Mar 17 17:59:05.049918 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:59:05.138348 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:59:06.102630 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:59:06.106578 disk-uuid[620]: The operation has completed successfully.
Mar 17 17:59:06.302877 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:59:06.303020 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:59:06.374821 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:59:06.384740 sh[888]: Success
Mar 17 17:59:06.418675 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 17 17:59:06.586660 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:59:06.602247 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:59:06.613684 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:59:06.676486 kernel: BTRFS info (device dm-0): first mount of filesystem 16b3954e-2e86-4c7f-a948-d3d3817b1bdc
Mar 17 17:59:06.676563 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:59:06.676646 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:59:06.678882 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:59:06.678948 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:59:06.717799 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 17 17:59:06.722133 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:59:06.724985 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:59:06.732234 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:59:06.743580 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:59:06.784655 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:59:06.784725 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:59:06.784753 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:59:06.796475 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:59:06.834678 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:59:06.836088 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:59:06.854042 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:59:06.866442 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:59:06.959476 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:59:06.970922 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:59:07.047519 systemd-networkd[1082]: lo: Link UP
Mar 17 17:59:07.047532 systemd-networkd[1082]: lo: Gained carrier
Mar 17 17:59:07.049586 systemd-networkd[1082]: Enumeration completed
Mar 17 17:59:07.049724 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:59:07.050267 systemd-networkd[1082]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:59:07.050273 systemd-networkd[1082]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:59:07.052259 systemd[1]: Reached target network.target - Network.
Mar 17 17:59:07.068374 systemd-networkd[1082]: eth0: Link UP
Mar 17 17:59:07.068383 systemd-networkd[1082]: eth0: Gained carrier
Mar 17 17:59:07.068400 systemd-networkd[1082]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:59:07.082758 systemd-networkd[1082]: eth0: DHCPv4 address 172.31.20.178/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 17:59:07.171313 ignition[1020]: Ignition 2.20.0
Mar 17 17:59:07.171328 ignition[1020]: Stage: fetch-offline
Mar 17 17:59:07.172098 ignition[1020]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:59:07.172196 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:59:07.175002 ignition[1020]: Ignition finished successfully
Mar 17 17:59:07.178240 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:59:07.188835 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:59:07.216207 ignition[1093]: Ignition 2.20.0
Mar 17 17:59:07.216221 ignition[1093]: Stage: fetch
Mar 17 17:59:07.217559 ignition[1093]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:59:07.217575 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:59:07.219273 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:59:07.230831 ignition[1093]: PUT result: OK
Mar 17 17:59:07.234564 ignition[1093]: parsed url from cmdline: ""
Mar 17 17:59:07.234694 ignition[1093]: no config URL provided
Mar 17 17:59:07.234704 ignition[1093]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:59:07.234717 ignition[1093]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:59:07.234734 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:59:07.236040 ignition[1093]: PUT result: OK
Mar 17 17:59:07.236098 ignition[1093]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 17 17:59:07.238761 ignition[1093]: GET result: OK
Mar 17 17:59:07.238846 ignition[1093]: parsing config with SHA512: 668b50443b701dc3c22a9ec21e17bdcbdd0925e4f2d4eca4fc74a4df7b664c3467d83dcc1e3863851f2ed094b0490a072ff807db1c6b6b0fe6290ec8675d8a8e
Mar 17 17:59:07.246291 unknown[1093]: fetched base config from "system"
Mar 17 17:59:07.246307 unknown[1093]: fetched base config from "system"
Mar 17 17:59:07.247145 ignition[1093]: fetch: fetch complete
Mar 17 17:59:07.246316 unknown[1093]: fetched user config from "aws"
Mar 17 17:59:07.247152 ignition[1093]: fetch: fetch passed
Mar 17 17:59:07.247210 ignition[1093]: Ignition finished successfully
Mar 17 17:59:07.253578 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:59:07.261910 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:59:07.295015 ignition[1099]: Ignition 2.20.0
Mar 17 17:59:07.295030 ignition[1099]: Stage: kargs
Mar 17 17:59:07.296807 ignition[1099]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:59:07.296827 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:59:07.296957 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:59:07.298995 ignition[1099]: PUT result: OK
Mar 17 17:59:07.310693 ignition[1099]: kargs: kargs passed
Mar 17 17:59:07.310783 ignition[1099]: Ignition finished successfully
Mar 17 17:59:07.318189 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:59:07.329943 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:59:07.353314 ignition[1107]: Ignition 2.20.0
Mar 17 17:59:07.353328 ignition[1107]: Stage: disks
Mar 17 17:59:07.353823 ignition[1107]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:59:07.353835 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:59:07.353953 ignition[1107]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:59:07.355939 ignition[1107]: PUT result: OK
Mar 17 17:59:07.363285 ignition[1107]: disks: disks passed
Mar 17 17:59:07.364468 ignition[1107]: Ignition finished successfully
Mar 17 17:59:07.367465 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:59:07.371210 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:59:07.372435 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:59:07.375263 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:59:07.378300 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:59:07.379542 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:59:07.396171 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:59:07.440494 systemd-fsck[1115]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:59:07.445595 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:59:07.680774 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:59:07.817842 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 21764504-a65e-45eb-84e1-376b55b62aba r/w with ordered data mode. Quota mode: none.
Mar 17 17:59:07.819884 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:59:07.820990 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:59:07.835777 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:59:07.839736 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:59:07.843953 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:59:07.844029 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:59:07.844065 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:59:07.863592 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:59:07.870632 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1134)
Mar 17 17:59:07.872630 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:59:07.872696 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:59:07.873743 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:59:07.873817 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:59:07.884659 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:59:07.886898 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:59:08.104792 initrd-setup-root[1158]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:59:08.111780 initrd-setup-root[1165]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:59:08.119788 initrd-setup-root[1172]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:59:08.130260 initrd-setup-root[1179]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:59:08.379748 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:59:08.391771 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:59:08.394741 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:59:08.409633 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:59:08.464015 ignition[1246]: INFO : Ignition 2.20.0
Mar 17 17:59:08.465868 ignition[1246]: INFO : Stage: mount
Mar 17 17:59:08.465868 ignition[1246]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:59:08.465868 ignition[1246]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:59:08.465868 ignition[1246]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:59:08.472457 ignition[1246]: INFO : PUT result: OK
Mar 17 17:59:08.472124 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:59:08.476135 ignition[1246]: INFO : mount: mount passed
Mar 17 17:59:08.476135 ignition[1246]: INFO : Ignition finished successfully
Mar 17 17:59:08.481883 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:59:08.490780 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:59:08.672777 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:59:08.678932 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:59:08.708633 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1259)
Mar 17 17:59:08.709706 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:59:08.709763 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:59:08.716560 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:59:08.725672 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:59:08.729377 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:59:08.759980 ignition[1276]: INFO : Ignition 2.20.0
Mar 17 17:59:08.759980 ignition[1276]: INFO : Stage: files
Mar 17 17:59:08.761909 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:59:08.761909 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:59:08.761909 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:59:08.767724 ignition[1276]: INFO : PUT result: OK
Mar 17 17:59:08.771388 ignition[1276]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:59:08.773034 ignition[1276]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:59:08.773034 ignition[1276]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:59:08.775947 systemd-networkd[1082]: eth0: Gained IPv6LL
Mar 17 17:59:08.793095 ignition[1276]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:59:08.794900 ignition[1276]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:59:08.796812 unknown[1276]: wrote ssh authorized keys file for user: core
Mar 17 17:59:08.798250 ignition[1276]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:59:08.802257 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:59:08.802257 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 17:59:08.904776 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:59:09.057662 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:59:09.057662 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:59:09.062983 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 17:59:09.527734 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:59:09.663226 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:59:09.665408 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:59:09.665408 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:59:09.669101 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:59:09.669101 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:59:09.679354 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:59:09.679354 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:59:09.679354 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:59:09.679354 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:59:09.679354 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:59:09.679354 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:59:09.679354 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:59:09.679354 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:59:09.679354 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:59:09.679354 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 17:59:10.052646 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:59:10.442730 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:59:10.442730 ignition[1276]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:59:10.449864 ignition[1276]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:59:10.454152 ignition[1276]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:59:10.454152 ignition[1276]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:59:10.454152 ignition[1276]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:59:10.454152 ignition[1276]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:59:10.454152 ignition[1276]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:59:10.454152 ignition[1276]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:59:10.454152 ignition[1276]: INFO : files: files passed
Mar 17 17:59:10.454152 ignition[1276]: INFO : Ignition finished successfully
Mar 17 17:59:10.474449 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:59:10.482919 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:59:10.504948 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:59:10.522585 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:59:10.522743 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:59:10.536532 initrd-setup-root-after-ignition[1305]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:59:10.536532 initrd-setup-root-after-ignition[1305]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:59:10.542452 initrd-setup-root-after-ignition[1309]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:59:10.546389 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:59:10.546751 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:59:10.561327 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:59:10.625347 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:59:10.625473 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:59:10.627594 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:59:10.627710 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:59:10.631656 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:59:10.664029 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:59:10.699514 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:59:10.709857 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:59:10.728499 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:59:10.731056 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:59:10.735975 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:59:10.739497 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:59:10.739866 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:59:10.744037 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:59:10.746659 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:59:10.753783 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:59:10.756928 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:59:10.761798 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:59:10.762105 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:59:10.769576 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:59:10.771854 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:59:10.776081 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:59:10.780466 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:59:10.781661 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:59:10.781941 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:59:10.785904 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:59:10.790625 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:59:10.793651 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:59:10.794448 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:59:10.794769 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:59:10.794981 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:59:10.801382 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:59:10.801554 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:59:10.807083 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:59:10.807403 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:59:10.821989 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:59:10.825457 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:59:10.829899 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:59:10.844177 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:59:10.844336 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:59:10.847812 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:59:10.850629 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:59:10.852268 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:59:10.867888 ignition[1329]: INFO : Ignition 2.20.0
Mar 17 17:59:10.869498 ignition[1329]: INFO : Stage: umount
Mar 17 17:59:10.869498 ignition[1329]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:59:10.869498 ignition[1329]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:59:10.869498 ignition[1329]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:59:10.877942 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:59:10.878099 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:59:10.883564 ignition[1329]: INFO : PUT result: OK
Mar 17 17:59:10.888827 ignition[1329]: INFO : umount: umount passed
Mar 17 17:59:10.892722 ignition[1329]: INFO : Ignition finished successfully
Mar 17 17:59:10.893374 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:59:10.894758 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:59:10.917323 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:59:10.917463 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:59:10.923527 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:59:10.923629 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:59:10.927323 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:59:10.927411 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:59:10.930233 systemd[1]: Stopped target network.target - Network.
Mar 17 17:59:10.941159 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:59:10.943934 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:59:10.948412 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:59:10.956647 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:59:10.956752 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:59:10.960994 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:59:10.963118 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:59:10.970131 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:59:10.972037 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:59:10.980846 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:59:10.981169 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:59:10.984966 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:59:10.985046 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:59:10.986676 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:59:10.986754 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:59:10.987218 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:59:10.987814 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:59:10.990144 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:59:10.991058 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:59:10.991328 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:59:10.994308 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:59:10.994427 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:59:11.017192 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:59:11.017427 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:59:11.033522 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 17 17:59:11.034381 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:59:11.035722 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:59:11.044662 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 17 17:59:11.047294 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:59:11.047487 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:59:11.057335 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:59:11.057460 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:59:11.060613 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:59:11.062908 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:59:11.063524 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:59:11.066726 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:59:11.066795 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:59:11.069565 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:59:11.069661 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:59:11.074056 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:59:11.084375 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 17:59:11.084476 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:59:11.121310 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:59:11.121665 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:59:11.143363 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:59:11.145081 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:59:11.151621 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:59:11.151734 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:59:11.156481 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:59:11.156571 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:59:11.160110 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:59:11.160194 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:59:11.175133 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:59:11.175218 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:59:11.188853 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:59:11.188942 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:59:11.191989 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:59:11.194026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:59:11.194084 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:59:11.203880 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 17:59:11.203961 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:59:11.204537 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:59:11.204651 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:59:11.209814 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:59:11.209903 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:59:11.212936 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:59:11.223621 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:59:11.235351 systemd[1]: Switching root.
Mar 17 17:59:11.290460 systemd-journald[180]: Journal stopped
Mar 17 17:59:13.325001 systemd-journald[180]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:59:13.325092 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:59:13.325201 kernel: SELinux: policy capability open_perms=1
Mar 17 17:59:13.325222 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:59:13.325248 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:59:13.325269 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:59:13.325290 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:59:13.325316 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:59:13.325342 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:59:13.325363 kernel: audit: type=1403 audit(1742234351.706:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:59:13.325386 systemd[1]: Successfully loaded SELinux policy in 78.585ms.
Mar 17 17:59:13.325420 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.361ms.
Mar 17 17:59:13.325445 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:59:13.325467 systemd[1]: Detected virtualization amazon.
Mar 17 17:59:13.325489 systemd[1]: Detected architecture x86-64.
Mar 17 17:59:13.325510 systemd[1]: Detected first boot.
Mar 17 17:59:13.325533 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:59:13.325555 zram_generator::config[1373]: No configuration found.
Mar 17 17:59:13.325582 kernel: Guest personality initialized and is inactive
Mar 17 17:59:13.330674 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 17 17:59:13.330716 kernel: Initialized host personality
Mar 17 17:59:13.330744 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 17:59:13.330765 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:59:13.330790 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 17 17:59:13.330809 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:59:13.330827 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:59:13.330846 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:59:13.330865 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:59:13.330891 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:59:13.330912 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:59:13.330935 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:59:13.330954 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:59:13.330975 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:59:13.330998 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:59:13.331021 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:59:13.331043 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:59:13.331070 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:59:13.331093 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:59:13.331115 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:59:13.331137 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:59:13.331161 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:59:13.331181 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:59:13.331269 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:59:13.331298 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:59:13.331326 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:59:13.331346 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:59:13.331367 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:59:13.331387 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:59:13.331416 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:59:13.331437 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:59:13.331465 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:59:13.331486 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:59:13.331506 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:59:13.331531 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 17 17:59:13.331552 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:59:13.331573 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:59:13.331597 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:59:13.331636 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:59:13.331659 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:59:13.331682 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:59:13.331706 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:59:13.331729 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:13.331755 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:59:13.331837 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:59:13.331864 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:59:13.331891 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:59:13.331913 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:59:13.331936 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:59:13.331959 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:59:13.331982 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:59:13.332008 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:59:13.332030 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:59:13.332050 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:59:13.332070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:59:13.332091 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:59:13.332111 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:59:13.332500 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:59:13.332528 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:59:13.332549 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:59:13.332717 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:59:13.332743 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:59:13.332763 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:59:13.332781 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:59:13.332799 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:59:13.332817 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:59:13.332836 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:59:13.332855 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 17 17:59:13.332878 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:59:13.332896 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:59:13.332916 systemd[1]: Stopped verity-setup.service.
Mar 17 17:59:13.332939 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:13.333022 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:59:13.333052 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:59:13.333074 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:59:13.333095 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:59:13.333163 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:59:13.333191 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:59:13.333213 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:59:13.333233 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:59:13.333254 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:59:13.333276 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:59:13.333299 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:59:13.333322 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:59:13.333344 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:59:13.333365 kernel: loop: module loaded
Mar 17 17:59:13.333452 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:59:13.333486 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:59:13.333509 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:59:13.333532 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:59:13.333555 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:59:13.333577 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:59:13.333599 kernel: fuse: init (API version 7.39)
Mar 17 17:59:13.333630 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:59:13.333649 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:59:13.333667 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:59:13.333690 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 17 17:59:13.333749 systemd-journald[1453]: Collecting audit messages is disabled.
Mar 17 17:59:13.333794 systemd-journald[1453]: Journal started
Mar 17 17:59:13.333834 systemd-journald[1453]: Runtime Journal (/run/log/journal/ec2890b56b3c137117c4a93d57684ffe) is 4.8M, max 38.5M, 33.7M free.
Mar 17 17:59:13.333987 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:59:12.755308 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:59:12.767015 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 17 17:59:12.768841 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:59:13.361640 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:59:13.361738 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:59:13.376491 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:59:13.390631 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:59:13.417520 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:59:13.417586 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:59:13.430802 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:59:13.435377 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:59:13.448784 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:59:13.451543 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:59:13.455282 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:59:13.455540 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:59:13.457144 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 17 17:59:13.459330 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:59:13.461087 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:59:13.476053 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:59:13.489039 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:59:13.499744 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:59:13.515897 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:59:13.523311 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 17 17:59:13.536831 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:59:13.540364 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:59:13.549638 kernel: ACPI: bus type drm_connector registered
Mar 17 17:59:13.555632 kernel: loop0: detected capacity change from 0 to 62832
Mar 17 17:59:13.578587 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:59:13.579926 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:59:13.600729 systemd-journald[1453]: Time spent on flushing to /var/log/journal/ec2890b56b3c137117c4a93d57684ffe is 114.493ms for 981 entries.
Mar 17 17:59:13.600729 systemd-journald[1453]: System Journal (/var/log/journal/ec2890b56b3c137117c4a93d57684ffe) is 8M, max 195.6M, 187.6M free.
Mar 17 17:59:13.724758 systemd-journald[1453]: Received client request to flush runtime journal.
Mar 17 17:59:13.725026 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:59:13.725064 kernel: loop1: detected capacity change from 0 to 210664
Mar 17 17:59:13.613545 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:59:13.705143 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:59:13.715157 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:59:13.728426 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:59:13.761347 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 17 17:59:13.764971 udevadm[1519]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 17:59:13.775248 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:59:13.776139 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:59:13.789803 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:59:13.836063 systemd-tmpfiles[1525]: ACLs are not supported, ignoring.
Mar 17 17:59:13.836090 systemd-tmpfiles[1525]: ACLs are not supported, ignoring.
Mar 17 17:59:13.846868 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:59:13.850777 kernel: loop2: detected capacity change from 0 to 138176
Mar 17 17:59:13.954638 kernel: loop3: detected capacity change from 0 to 147912
Mar 17 17:59:14.050639 kernel: loop4: detected capacity change from 0 to 62832
Mar 17 17:59:14.079296 kernel: loop5: detected capacity change from 0 to 210664
Mar 17 17:59:14.131763 kernel: loop6: detected capacity change from 0 to 138176
Mar 17 17:59:14.162642 kernel: loop7: detected capacity change from 0 to 147912
Mar 17 17:59:14.195447 (sd-merge)[1531]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 17 17:59:14.196853 (sd-merge)[1531]: Merged extensions into '/usr'.
Mar 17 17:59:14.213411 systemd[1]: Reload requested from client PID 1485 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:59:14.213661 systemd[1]: Reloading...
Mar 17 17:59:14.397648 zram_generator::config[1559]: No configuration found.
Mar 17 17:59:14.807193 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:59:14.920629 ldconfig[1481]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:59:14.968033 systemd[1]: Reloading finished in 753 ms.
Mar 17 17:59:14.993036 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:59:14.998277 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:59:15.014537 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:59:15.023060 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:59:15.053925 systemd[1]: Reload requested from client PID 1608 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:59:15.054029 systemd[1]: Reloading...
Mar 17 17:59:15.087084 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:59:15.088391 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:59:15.093275 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:59:15.095022 systemd-tmpfiles[1609]: ACLs are not supported, ignoring.
Mar 17 17:59:15.095146 systemd-tmpfiles[1609]: ACLs are not supported, ignoring.
Mar 17 17:59:15.105560 systemd-tmpfiles[1609]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:59:15.105582 systemd-tmpfiles[1609]: Skipping /boot
Mar 17 17:59:15.143448 systemd-tmpfiles[1609]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:59:15.143477 systemd-tmpfiles[1609]: Skipping /boot
Mar 17 17:59:15.269769 zram_generator::config[1635]: No configuration found.
Mar 17 17:59:15.474501 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:59:15.576113 systemd[1]: Reloading finished in 521 ms.
Mar 17 17:59:15.592939 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:59:15.607889 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:59:15.622001 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:59:15.630591 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:59:15.637106 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:59:15.664013 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:59:15.669654 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:59:15.680062 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:59:15.689716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:15.690446 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:59:15.708160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:59:15.718176 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:59:15.726981 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:59:15.728497 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:59:15.729169 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:59:15.729340 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:15.749791 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:59:15.767998 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:15.768419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:59:15.768920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:59:15.769191 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:59:15.769504 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:59:15.772469 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:59:15.794477 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:59:15.795305 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:59:15.802178 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:59:15.808572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:59:15.808798 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:59:15.809330 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:59:15.810881 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:59:15.812417 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:59:15.812668 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:59:15.822489 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:59:15.822807 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Mar 17 17:59:15.837012 systemd[1]: Finished ensure-sysext.service. Mar 17 17:59:15.839040 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:59:15.851130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:59:15.853209 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:59:15.855529 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:59:15.860051 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:59:15.867884 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:59:15.869153 systemd-udevd[1697]: Using default interface naming scheme 'v255'. Mar 17 17:59:15.869851 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:59:15.870108 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:59:15.898248 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:59:15.913321 augenrules[1728]: No rules Mar 17 17:59:15.914943 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:59:15.915189 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:59:15.936442 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:59:15.953938 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:59:15.955815 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:59:15.974855 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Mar 17 17:59:15.977157 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:59:16.131949 systemd-networkd[1741]: lo: Link UP Mar 17 17:59:16.131960 systemd-networkd[1741]: lo: Gained carrier Mar 17 17:59:16.133074 systemd-networkd[1741]: Enumeration completed Mar 17 17:59:16.135738 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:59:16.152979 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 17 17:59:16.160933 (udev-worker)[1743]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:59:16.166794 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:59:16.174822 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 17 17:59:16.241657 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 17:59:16.251444 systemd-resolved[1692]: Positive Trust Anchors: Mar 17 17:59:16.251472 systemd-resolved[1692]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:59:16.251526 systemd-resolved[1692]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:59:16.262209 systemd-resolved[1692]: Defaulting to hostname 'linux'. 
Mar 17 17:59:16.266070 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:59:16.267779 systemd[1]: Reached target network.target - Network. Mar 17 17:59:16.268843 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:59:16.292530 systemd-networkd[1741]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:59:16.292543 systemd-networkd[1741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:59:16.296828 systemd-networkd[1741]: eth0: Link UP Mar 17 17:59:16.298401 systemd-networkd[1741]: eth0: Gained carrier Mar 17 17:59:16.298598 systemd-networkd[1741]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:59:16.309088 systemd-networkd[1741]: eth0: DHCPv4 address 172.31.20.178/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 17 17:59:16.330656 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 17 17:59:16.333630 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Mar 17 17:59:16.347624 kernel: ACPI: button: Power Button [PWRF] Mar 17 17:59:16.347721 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Mar 17 17:59:16.363678 kernel: ACPI: button: Sleep Button [SLPF] Mar 17 17:59:16.377633 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Mar 17 17:59:16.408008 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:59:16.446701 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1753) Mar 17 17:59:16.446795 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 17:59:16.643939 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Mar 17 17:59:16.711297 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:59:16.713059 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:59:16.738803 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:59:16.755886 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:59:16.773567 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:59:16.776644 lvm[1862]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:59:16.804704 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:59:16.806452 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:59:16.807736 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:59:16.812776 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:59:16.817397 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:59:16.824233 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:59:16.828092 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:59:16.836549 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:59:16.837886 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:59:16.837941 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:59:16.838848 systemd[1]: Reached target timers.target - Timer Units. 
Mar 17 17:59:16.841325 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:59:16.844332 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:59:16.852449 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 17:59:16.854669 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 17:59:16.857296 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 17:59:16.870496 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:59:16.872473 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 17:59:16.881832 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:59:16.884792 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:59:16.886914 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:59:16.889293 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:59:16.890453 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:59:16.890494 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:59:16.891422 lvm[1869]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:59:16.901954 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:59:16.913934 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 17:59:16.933631 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:59:16.941789 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:59:16.953975 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Mar 17 17:59:16.955587 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:59:16.973971 jq[1873]: false Mar 17 17:59:16.961853 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:59:16.977953 systemd[1]: Started ntpd.service - Network Time Service. Mar 17 17:59:17.049744 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:59:17.052763 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 17 17:59:17.066763 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:59:17.076872 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:59:17.089676 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:59:17.092972 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:59:17.094105 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:59:17.101924 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 17 17:59:17.109328 extend-filesystems[1874]: Found loop4 Mar 17 17:59:17.109328 extend-filesystems[1874]: Found loop5 Mar 17 17:59:17.109328 extend-filesystems[1874]: Found loop6 Mar 17 17:59:17.109328 extend-filesystems[1874]: Found loop7 Mar 17 17:59:17.109328 extend-filesystems[1874]: Found nvme0n1 Mar 17 17:59:17.109328 extend-filesystems[1874]: Found nvme0n1p1 Mar 17 17:59:17.109328 extend-filesystems[1874]: Found nvme0n1p2 Mar 17 17:59:17.109328 extend-filesystems[1874]: Found nvme0n1p3 Mar 17 17:59:17.109328 extend-filesystems[1874]: Found usr Mar 17 17:59:17.109328 extend-filesystems[1874]: Found nvme0n1p4 Mar 17 17:59:17.109328 extend-filesystems[1874]: Found nvme0n1p6 Mar 17 17:59:17.109328 extend-filesystems[1874]: Found nvme0n1p7 Mar 17 17:59:17.109328 extend-filesystems[1874]: Found nvme0n1p9 Mar 17 17:59:17.109328 extend-filesystems[1874]: Checking size of /dev/nvme0n1p9 Mar 17 17:59:17.118777 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:59:17.119450 dbus-daemon[1872]: [system] SELinux support is enabled Mar 17 17:59:17.149952 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:20 UTC 2025 (1): Starting Mar 17 17:59:17.149952 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 17 17:59:17.149952 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: ---------------------------------------------------- Mar 17 17:59:17.149952 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: ntp-4 is maintained by Network Time Foundation, Mar 17 17:59:17.149952 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 17 17:59:17.149952 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: corporation. Support and training for ntp-4 are Mar 17 17:59:17.149952 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: available at https://www.nwtime.org/support Mar 17 17:59:17.149952 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: ---------------------------------------------------- Mar 17 17:59:17.149952 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: proto: precision = 0.065 usec (-24) Mar 17 17:59:17.149952 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: basedate set to 2025-03-05 Mar 17 17:59:17.149952 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: gps base set to 2025-03-09 (week 2357) Mar 17 17:59:17.121814 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:59:17.127152 ntpd[1876]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:20 UTC 2025 (1): Starting Mar 17 17:59:17.131661 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:59:17.127179 ntpd[1876]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 17 17:59:17.148571 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Mar 17 17:59:17.127190 ntpd[1876]: ---------------------------------------------------- Mar 17 17:59:17.169394 extend-filesystems[1874]: Resized partition /dev/nvme0n1p9 Mar 17 17:59:17.170772 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: Listen and drop on 0 v6wildcard [::]:123 Mar 17 17:59:17.170772 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 17 17:59:17.170772 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: Listen normally on 2 lo 127.0.0.1:123 Mar 17 17:59:17.170772 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: Listen normally on 3 eth0 172.31.20.178:123 Mar 17 17:59:17.170772 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: Listen normally on 4 lo [::1]:123 Mar 17 17:59:17.170772 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: bind(21) AF_INET6 fe80::41e:a8ff:fe7b:daa5%2#123 flags 0x11 failed: Cannot assign requested address Mar 17 17:59:17.170772 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: unable to create socket on eth0 (5) for fe80::41e:a8ff:fe7b:daa5%2#123 Mar 17 17:59:17.170772 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: failed to init interface for address fe80::41e:a8ff:fe7b:daa5%2 Mar 17 17:59:17.170772 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: Listening on routing socket on fd #21 for interface updates Mar 17 17:59:17.148879 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:59:17.127202 ntpd[1876]: ntp-4 is maintained by Network Time Foundation, Mar 17 17:59:17.149329 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:59:17.127211 ntpd[1876]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 17 17:59:17.151960 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:59:17.127220 ntpd[1876]: corporation. Support and training for ntp-4 are Mar 17 17:59:17.127231 ntpd[1876]: available at https://www.nwtime.org/support Mar 17 17:59:17.196545 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:59:17.196545 ntpd[1876]: 17 Mar 17:59:17 ntpd[1876]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:59:17.181171 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:59:17.127241 ntpd[1876]: ---------------------------------------------------- Mar 17 17:59:17.181449 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:59:17.133678 dbus-daemon[1872]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1741 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 17 17:59:17.137132 ntpd[1876]: proto: precision = 0.065 usec (-24) Mar 17 17:59:17.141241 ntpd[1876]: basedate set to 2025-03-05 Mar 17 17:59:17.205492 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:59:17.206274 extend-filesystems[1911]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:59:17.221571 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Mar 17 17:59:17.141264 ntpd[1876]: gps base set to 2025-03-09 (week 2357) Mar 17 17:59:17.205572 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 17 17:59:17.222027 jq[1894]: true Mar 17 17:59:17.162906 ntpd[1876]: Listen and drop on 0 v6wildcard [::]:123 Mar 17 17:59:17.210804 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:59:17.162966 ntpd[1876]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 17 17:59:17.210832 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:59:17.165813 ntpd[1876]: Listen normally on 2 lo 127.0.0.1:123 Mar 17 17:59:17.165861 ntpd[1876]: Listen normally on 3 eth0 172.31.20.178:123 Mar 17 17:59:17.165906 ntpd[1876]: Listen normally on 4 lo [::1]:123 Mar 17 17:59:17.165961 ntpd[1876]: bind(21) AF_INET6 fe80::41e:a8ff:fe7b:daa5%2#123 flags 0x11 failed: Cannot assign requested address Mar 17 17:59:17.165983 ntpd[1876]: unable to create socket on eth0 (5) for fe80::41e:a8ff:fe7b:daa5%2#123 Mar 17 17:59:17.165998 ntpd[1876]: failed to init interface for address fe80::41e:a8ff:fe7b:daa5%2 Mar 17 17:59:17.166030 ntpd[1876]: Listening on routing socket on fd #21 for interface updates Mar 17 17:59:17.181293 ntpd[1876]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:59:17.181331 ntpd[1876]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:59:17.214514 dbus-daemon[1872]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 17:59:17.248853 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 17 17:59:17.258290 update_engine[1892]: I20250317 17:59:17.255191 1892 main.cc:92] Flatcar Update Engine starting Mar 17 17:59:17.258715 tar[1901]: linux-amd64/helm Mar 17 17:59:17.287240 update_engine[1892]: I20250317 17:59:17.284516 1892 update_check_scheduler.cc:74] Next update check in 5m56s Mar 17 17:59:17.283583 systemd[1]: Started update-engine.service - Update Engine. 
Mar 17 17:59:17.284045 (ntainerd)[1918]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:59:17.290560 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:59:17.296737 jq[1916]: true Mar 17 17:59:17.337436 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Mar 17 17:59:17.353946 extend-filesystems[1911]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 17 17:59:17.353946 extend-filesystems[1911]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:59:17.353946 extend-filesystems[1911]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Mar 17 17:59:17.353544 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:59:17.360469 extend-filesystems[1874]: Resized filesystem in /dev/nvme0n1p9 Mar 17 17:59:17.353842 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:59:17.418627 coreos-metadata[1871]: Mar 17 17:59:17.416 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 17:59:17.425459 coreos-metadata[1871]: Mar 17 17:59:17.422 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 17 17:59:17.427528 coreos-metadata[1871]: Mar 17 17:59:17.426 INFO Fetch successful Mar 17 17:59:17.427528 coreos-metadata[1871]: Mar 17 17:59:17.426 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 17 17:59:17.430427 coreos-metadata[1871]: Mar 17 17:59:17.429 INFO Fetch successful Mar 17 17:59:17.430427 coreos-metadata[1871]: Mar 17 17:59:17.429 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 17 17:59:17.432666 coreos-metadata[1871]: Mar 17 17:59:17.432 INFO Fetch successful Mar 17 17:59:17.432666 coreos-metadata[1871]: Mar 17 17:59:17.432 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 17 17:59:17.436925 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 17 17:59:17.442890 coreos-metadata[1871]: Mar 17 17:59:17.442 INFO Fetch successful Mar 17 17:59:17.442890 coreos-metadata[1871]: Mar 17 17:59:17.442 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 17 17:59:17.447377 coreos-metadata[1871]: Mar 17 17:59:17.444 INFO Fetch failed with 404: resource not found Mar 17 17:59:17.447377 coreos-metadata[1871]: Mar 17 17:59:17.444 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 17 17:59:17.447377 coreos-metadata[1871]: Mar 17 17:59:17.445 INFO Fetch successful Mar 17 17:59:17.447377 coreos-metadata[1871]: Mar 17 17:59:17.445 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 17 17:59:17.447999 coreos-metadata[1871]: Mar 17 17:59:17.447 INFO Fetch successful Mar 17 17:59:17.447999 coreos-metadata[1871]: Mar 17 17:59:17.447 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 17 17:59:17.450755 coreos-metadata[1871]: Mar 17 17:59:17.448 INFO Fetch successful Mar 17 17:59:17.450755 coreos-metadata[1871]: Mar 17 17:59:17.448 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 17 17:59:17.452666 coreos-metadata[1871]: Mar 17 17:59:17.451 INFO Fetch successful Mar 17 17:59:17.452666 coreos-metadata[1871]: Mar 17 17:59:17.451 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 17 17:59:17.461753 coreos-metadata[1871]: Mar 17 17:59:17.455 INFO Fetch successful Mar 17 17:59:17.484633 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1750) Mar 17 17:59:17.485521 systemd-logind[1891]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 17:59:17.493978 systemd-logind[1891]: Watching system buttons on /dev/input/event2 (Sleep Button) Mar 17 17:59:17.502315 systemd-logind[1891]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 17:59:17.506095 systemd-logind[1891]: New seat seat0. Mar 17 17:59:17.522751 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:59:17.606958 bash[1956]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:59:17.617905 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:59:17.621053 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 17:59:17.638406 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:59:17.640196 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:59:17.651173 systemd[1]: Starting sshkeys.service... Mar 17 17:59:17.670720 systemd-networkd[1741]: eth0: Gained IPv6LL Mar 17 17:59:17.680588 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:59:17.687746 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:59:17.702987 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 17 17:59:17.716920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:59:17.723104 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:59:17.757397 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 17 17:59:17.789234 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 17 17:59:17.901227 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:59:18.030802 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Mar 17 17:59:18.044251 dbus-daemon[1872]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 17 17:59:18.056626 amazon-ssm-agent[1980]: Initializing new seelog logger Mar 17 17:59:18.056626 amazon-ssm-agent[1980]: New Seelog Logger Creation Complete Mar 17 17:59:18.056626 amazon-ssm-agent[1980]: 2025/03/17 17:59:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:59:18.056626 amazon-ssm-agent[1980]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:59:18.056626 amazon-ssm-agent[1980]: 2025/03/17 17:59:18 processing appconfig overrides Mar 17 17:59:18.053910 dbus-daemon[1872]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1920 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 17 17:59:18.064632 amazon-ssm-agent[1980]: 2025/03/17 17:59:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:59:18.064632 amazon-ssm-agent[1980]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:59:18.064632 amazon-ssm-agent[1980]: 2025/03/17 17:59:18 processing appconfig overrides Mar 17 17:59:18.064632 amazon-ssm-agent[1980]: 2025/03/17 17:59:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:59:18.064632 amazon-ssm-agent[1980]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:59:18.064632 amazon-ssm-agent[1980]: 2025/03/17 17:59:18 processing appconfig overrides Mar 17 17:59:18.064632 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO Proxy environment variables: Mar 17 17:59:18.075668 amazon-ssm-agent[1980]: 2025/03/17 17:59:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:59:18.075668 amazon-ssm-agent[1980]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 17 17:59:18.075668 amazon-ssm-agent[1980]: 2025/03/17 17:59:18 processing appconfig overrides Mar 17 17:59:18.072685 systemd[1]: Starting polkit.service - Authorization Manager... Mar 17 17:59:18.166633 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO https_proxy: Mar 17 17:59:18.211196 polkitd[2056]: Started polkitd version 121 Mar 17 17:59:18.262121 polkitd[2056]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 17:59:18.264783 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO http_proxy: Mar 17 17:59:18.262221 polkitd[2056]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 17:59:18.266926 locksmithd[1921]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:59:18.288241 polkitd[2056]: Finished loading, compiling and executing 2 rules Mar 17 17:59:18.308269 dbus-daemon[1872]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 17:59:18.308712 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 17 17:59:18.309522 polkitd[2056]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 17 17:59:18.315730 coreos-metadata[1987]: Mar 17 17:59:18.314 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 17:59:18.327010 coreos-metadata[1987]: Mar 17 17:59:18.324 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 17 17:59:18.327723 coreos-metadata[1987]: Mar 17 17:59:18.327 INFO Fetch successful Mar 17 17:59:18.327723 coreos-metadata[1987]: Mar 17 17:59:18.327 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 17:59:18.331019 coreos-metadata[1987]: Mar 17 17:59:18.329 INFO Fetch successful Mar 17 17:59:18.335507 unknown[1987]: wrote ssh authorized keys file for user: core Mar 17 17:59:18.373631 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO no_proxy: Mar 17 17:59:18.408765 systemd-hostnamed[1920]: Hostname set to (transient) Mar 17 17:59:18.412546 systemd-resolved[1692]: System hostname changed to 'ip-172-31-20-178'. Mar 17 17:59:18.429446 update-ssh-keys[2078]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:59:18.424198 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 17 17:59:18.431088 systemd[1]: Finished sshkeys.service. 
Mar 17 17:59:18.463987 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO Checking if agent identity type OnPrem can be assumed Mar 17 17:59:18.562481 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO Checking if agent identity type EC2 can be assumed Mar 17 17:59:18.635038 containerd[1918]: time="2025-03-17T17:59:18.634527126Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:59:18.664935 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO Agent will take identity from EC2 Mar 17 17:59:18.679185 sshd_keygen[1914]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:59:18.756118 containerd[1918]: time="2025-03-17T17:59:18.756025717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:59:18.765623 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:59:18.766492 containerd[1918]: time="2025-03-17T17:59:18.766443136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:59:18.766492 containerd[1918]: time="2025-03-17T17:59:18.766491814Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:59:18.766628 containerd[1918]: time="2025-03-17T17:59:18.766514838Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:59:18.766755 containerd[1918]: time="2025-03-17T17:59:18.766732625Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:59:18.766796 containerd[1918]: time="2025-03-17T17:59:18.766763587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:59:18.766864 containerd[1918]: time="2025-03-17T17:59:18.766843179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:59:18.766913 containerd[1918]: time="2025-03-17T17:59:18.766866899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:59:18.767163 containerd[1918]: time="2025-03-17T17:59:18.767136016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:59:18.767209 containerd[1918]: time="2025-03-17T17:59:18.767165579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:59:18.767209 containerd[1918]: time="2025-03-17T17:59:18.767186680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:59:18.767209 containerd[1918]: time="2025-03-17T17:59:18.767202185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:59:18.767315 containerd[1918]: time="2025-03-17T17:59:18.767302764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:59:18.768621 containerd[1918]: time="2025-03-17T17:59:18.767765775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:59:18.768621 containerd[1918]: time="2025-03-17T17:59:18.767976902Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:59:18.768621 containerd[1918]: time="2025-03-17T17:59:18.767998235Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:59:18.768621 containerd[1918]: time="2025-03-17T17:59:18.768100375Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:59:18.768621 containerd[1918]: time="2025-03-17T17:59:18.768153411Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:59:18.778291 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:59:18.785239 containerd[1918]: time="2025-03-17T17:59:18.783523796Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:59:18.785239 containerd[1918]: time="2025-03-17T17:59:18.783623481Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:59:18.785239 containerd[1918]: time="2025-03-17T17:59:18.783647667Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:59:18.785239 containerd[1918]: time="2025-03-17T17:59:18.783679212Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:59:18.785239 containerd[1918]: time="2025-03-17T17:59:18.783699691Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Mar 17 17:59:18.785239 containerd[1918]: time="2025-03-17T17:59:18.783873650Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:59:18.785239 containerd[1918]: time="2025-03-17T17:59:18.784178651Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:59:18.785239 containerd[1918]: time="2025-03-17T17:59:18.784304119Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:59:18.785239 containerd[1918]: time="2025-03-17T17:59:18.784350883Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:59:18.785239 containerd[1918]: time="2025-03-17T17:59:18.784374330Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:59:18.789625 containerd[1918]: time="2025-03-17T17:59:18.787618030Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:59:18.789625 containerd[1918]: time="2025-03-17T17:59:18.787672499Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:59:18.789625 containerd[1918]: time="2025-03-17T17:59:18.787693135Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:59:18.789625 containerd[1918]: time="2025-03-17T17:59:18.787719321Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:59:18.789625 containerd[1918]: time="2025-03-17T17:59:18.787742155Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Mar 17 17:59:18.789625 containerd[1918]: time="2025-03-17T17:59:18.787762993Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:59:18.789625 containerd[1918]: time="2025-03-17T17:59:18.787781773Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:59:18.789625 containerd[1918]: time="2025-03-17T17:59:18.787801536Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:59:18.789625 containerd[1918]: time="2025-03-17T17:59:18.787833649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.789625 containerd[1918]: time="2025-03-17T17:59:18.787866882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.789625 containerd[1918]: time="2025-03-17T17:59:18.787994776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.789625 containerd[1918]: time="2025-03-17T17:59:18.788018175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.789625 containerd[1918]: time="2025-03-17T17:59:18.788036643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.789625 containerd[1918]: time="2025-03-17T17:59:18.788058023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.790457 containerd[1918]: time="2025-03-17T17:59:18.788075419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.790457 containerd[1918]: time="2025-03-17T17:59:18.788095022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Mar 17 17:59:18.790457 containerd[1918]: time="2025-03-17T17:59:18.788114656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.790457 containerd[1918]: time="2025-03-17T17:59:18.788140143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.790457 containerd[1918]: time="2025-03-17T17:59:18.788160956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.790457 containerd[1918]: time="2025-03-17T17:59:18.788177721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.790457 containerd[1918]: time="2025-03-17T17:59:18.788195522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.790457 containerd[1918]: time="2025-03-17T17:59:18.788310723Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:59:18.790457 containerd[1918]: time="2025-03-17T17:59:18.788351509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.790457 containerd[1918]: time="2025-03-17T17:59:18.788374446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.790457 containerd[1918]: time="2025-03-17T17:59:18.788391512Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:59:18.792797 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:59:18.797981 containerd[1918]: time="2025-03-17T17:59:18.790581483Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Mar 17 17:59:18.797981 containerd[1918]: time="2025-03-17T17:59:18.795816021Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:59:18.795707 systemd[1]: Started sshd@0-172.31.20.178:22-139.178.89.65:55722.service - OpenSSH per-connection server daemon (139.178.89.65:55722). Mar 17 17:59:18.798360 containerd[1918]: time="2025-03-17T17:59:18.795856791Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:59:18.800689 containerd[1918]: time="2025-03-17T17:59:18.798432800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:59:18.800689 containerd[1918]: time="2025-03-17T17:59:18.798461896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:59:18.800689 containerd[1918]: time="2025-03-17T17:59:18.798621838Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:59:18.800689 containerd[1918]: time="2025-03-17T17:59:18.798649056Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:59:18.800689 containerd[1918]: time="2025-03-17T17:59:18.798670826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 17:59:18.800985 containerd[1918]: time="2025-03-17T17:59:18.799235472Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:59:18.800985 containerd[1918]: time="2025-03-17T17:59:18.799316675Z" level=info msg="Connect containerd service" Mar 17 17:59:18.800985 containerd[1918]: time="2025-03-17T17:59:18.799381581Z" level=info msg="using legacy CRI server" Mar 17 17:59:18.800985 containerd[1918]: time="2025-03-17T17:59:18.799393616Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:59:18.804995 containerd[1918]: time="2025-03-17T17:59:18.803626783Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:59:18.804995 containerd[1918]: time="2025-03-17T17:59:18.804584955Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:59:18.805251 containerd[1918]: time="2025-03-17T17:59:18.805213793Z" level=info msg="Start subscribing containerd event" Mar 17 17:59:18.805367 containerd[1918]: time="2025-03-17T17:59:18.805348939Z" level=info msg="Start recovering state" Mar 17 17:59:18.805504 containerd[1918]: time="2025-03-17T17:59:18.805489333Z" level=info msg="Start event monitor" Mar 17 17:59:18.805569 containerd[1918]: time="2025-03-17T17:59:18.805557950Z" level=info msg="Start 
snapshots syncer" Mar 17 17:59:18.805653 containerd[1918]: time="2025-03-17T17:59:18.805640146Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:59:18.805780 containerd[1918]: time="2025-03-17T17:59:18.805765630Z" level=info msg="Start streaming server" Mar 17 17:59:18.809138 containerd[1918]: time="2025-03-17T17:59:18.809103949Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:59:18.810127 containerd[1918]: time="2025-03-17T17:59:18.810097188Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:59:18.812078 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:59:18.822761 containerd[1918]: time="2025-03-17T17:59:18.822720817Z" level=info msg="containerd successfully booted in 0.197636s" Mar 17 17:59:18.850166 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:59:18.850680 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:59:18.865936 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:59:18.873673 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:59:18.918678 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:59:18.930859 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:59:18.941195 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:59:18.943407 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 17 17:59:18.972908 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:59:18.996169 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 17 17:59:18.996404 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Mar 17 17:59:18.996513 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO [amazon-ssm-agent] Starting Core Agent Mar 17 17:59:18.996593 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 17 17:59:18.996683 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO [Registrar] Starting registrar module Mar 17 17:59:18.996758 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 17 17:59:18.996852 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO [EC2Identity] EC2 registration was successful. Mar 17 17:59:18.996928 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO [CredentialRefresher] credentialRefresher has started Mar 17 17:59:18.997003 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO [CredentialRefresher] Starting credentials refresher loop Mar 17 17:59:18.997077 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 17 17:59:19.071626 amazon-ssm-agent[1980]: 2025-03-17 17:59:18 INFO [CredentialRefresher] Next credential rotation will be in 31.008312364583333 minutes Mar 17 17:59:19.102766 sshd[2106]: Accepted publickey for core from 139.178.89.65 port 55722 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:59:19.109736 sshd-session[2106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:59:19.143520 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 17 17:59:19.161803 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:59:19.176824 systemd-logind[1891]: New session 1 of user core. Mar 17 17:59:19.211364 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:59:19.225190 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:59:19.244167 (systemd)[2117]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:59:19.249758 systemd-logind[1891]: New session c1 of user core. Mar 17 17:59:19.285278 tar[1901]: linux-amd64/LICENSE Mar 17 17:59:19.285278 tar[1901]: linux-amd64/README.md Mar 17 17:59:19.322419 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:59:19.544273 systemd[2117]: Queued start job for default target default.target. Mar 17 17:59:19.557921 systemd[2117]: Created slice app.slice - User Application Slice. Mar 17 17:59:19.558509 systemd[2117]: Reached target paths.target - Paths. Mar 17 17:59:19.558590 systemd[2117]: Reached target timers.target - Timers. Mar 17 17:59:19.561279 systemd[2117]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:59:19.592834 systemd[2117]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:59:19.593154 systemd[2117]: Reached target sockets.target - Sockets. Mar 17 17:59:19.593299 systemd[2117]: Reached target basic.target - Basic System. Mar 17 17:59:19.593733 systemd[2117]: Reached target default.target - Main User Target. Mar 17 17:59:19.593773 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:59:19.593879 systemd[2117]: Startup finished in 330ms. Mar 17 17:59:19.603881 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:59:19.774040 systemd[1]: Started sshd@1-172.31.20.178:22-139.178.89.65:55726.service - OpenSSH per-connection server daemon (139.178.89.65:55726). 
Mar 17 17:59:19.952012 sshd[2131]: Accepted publickey for core from 139.178.89.65 port 55726 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:59:19.954303 sshd-session[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:59:19.961277 systemd-logind[1891]: New session 2 of user core. Mar 17 17:59:19.966797 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:59:20.044491 amazon-ssm-agent[1980]: 2025-03-17 17:59:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 17 17:59:20.112462 sshd[2133]: Connection closed by 139.178.89.65 port 55726 Mar 17 17:59:20.114716 sshd-session[2131]: pam_unix(sshd:session): session closed for user core Mar 17 17:59:20.128257 systemd[1]: sshd@1-172.31.20.178:22-139.178.89.65:55726.service: Deactivated successfully. Mar 17 17:59:20.132221 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:59:20.132988 ntpd[1876]: Listen normally on 6 eth0 [fe80::41e:a8ff:fe7b:daa5%2]:123 Mar 17 17:59:20.134126 ntpd[1876]: 17 Mar 17:59:20 ntpd[1876]: Listen normally on 6 eth0 [fe80::41e:a8ff:fe7b:daa5%2]:123 Mar 17 17:59:20.139674 systemd-logind[1891]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:59:20.144930 amazon-ssm-agent[1980]: 2025-03-17 17:59:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2135) started Mar 17 17:59:20.167723 systemd[1]: Started sshd@2-172.31.20.178:22-139.178.89.65:55740.service - OpenSSH per-connection server daemon (139.178.89.65:55740). Mar 17 17:59:20.170727 systemd-logind[1891]: Removed session 2. Mar 17 17:59:20.246754 amazon-ssm-agent[1980]: 2025-03-17 17:59:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 17 17:59:20.262850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:59:20.265567 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:59:20.267279 systemd[1]: Startup finished in 1.829s (kernel) + 9.536s (initrd) + 8.635s (userspace) = 20.001s. Mar 17 17:59:20.403350 sshd[2144]: Accepted publickey for core from 139.178.89.65 port 55740 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:59:20.405939 sshd-session[2144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:59:20.416850 systemd-logind[1891]: New session 3 of user core. Mar 17 17:59:20.421898 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:59:20.546669 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:59:20.558146 sshd[2159]: Connection closed by 139.178.89.65 port 55740 Mar 17 17:59:20.554327 sshd-session[2144]: pam_unix(sshd:session): session closed for user core Mar 17 17:59:20.580285 systemd[1]: sshd@2-172.31.20.178:22-139.178.89.65:55740.service: Deactivated successfully. Mar 17 17:59:20.589374 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:59:20.591053 systemd-logind[1891]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:59:20.595824 systemd-logind[1891]: Removed session 3. Mar 17 17:59:21.540180 kubelet[2154]: E0317 17:59:21.540094 2154 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:59:21.543570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:59:21.543859 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 17:59:21.544431 systemd[1]: kubelet.service: Consumed 1.022s CPU time, 246.8M memory peak. Mar 17 17:59:24.970479 systemd-resolved[1692]: Clock change detected. Flushing caches. Mar 17 17:59:31.434065 systemd[1]: Started sshd@3-172.31.20.178:22-139.178.89.65:57208.service - OpenSSH per-connection server daemon (139.178.89.65:57208). Mar 17 17:59:31.608942 sshd[2175]: Accepted publickey for core from 139.178.89.65 port 57208 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:59:31.610923 sshd-session[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:59:31.617386 systemd-logind[1891]: New session 4 of user core. Mar 17 17:59:31.637538 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:59:31.756245 sshd[2177]: Connection closed by 139.178.89.65 port 57208 Mar 17 17:59:31.757002 sshd-session[2175]: pam_unix(sshd:session): session closed for user core Mar 17 17:59:31.760994 systemd[1]: sshd@3-172.31.20.178:22-139.178.89.65:57208.service: Deactivated successfully. Mar 17 17:59:31.771467 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:59:31.776448 systemd-logind[1891]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:59:31.797594 systemd-logind[1891]: Removed session 4. Mar 17 17:59:31.811109 systemd[1]: Started sshd@4-172.31.20.178:22-139.178.89.65:57210.service - OpenSSH per-connection server daemon (139.178.89.65:57210). Mar 17 17:59:31.989391 sshd[2182]: Accepted publickey for core from 139.178.89.65 port 57210 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:59:31.991037 sshd-session[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:59:31.997565 systemd-logind[1891]: New session 5 of user core. Mar 17 17:59:32.010510 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 17 17:59:32.128587 sshd[2185]: Connection closed by 139.178.89.65 port 57210 Mar 17 17:59:32.129318 sshd-session[2182]: pam_unix(sshd:session): session closed for user core Mar 17 17:59:32.135880 systemd-logind[1891]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:59:32.137642 systemd[1]: sshd@4-172.31.20.178:22-139.178.89.65:57210.service: Deactivated successfully. Mar 17 17:59:32.140763 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:59:32.142072 systemd-logind[1891]: Removed session 5. Mar 17 17:59:32.173703 systemd[1]: Started sshd@5-172.31.20.178:22-139.178.89.65:57214.service - OpenSSH per-connection server daemon (139.178.89.65:57214). Mar 17 17:59:32.382928 sshd[2191]: Accepted publickey for core from 139.178.89.65 port 57214 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:59:32.384804 sshd-session[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:59:32.388883 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:59:32.395577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:59:32.402379 systemd-logind[1891]: New session 6 of user core. Mar 17 17:59:32.414406 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:59:32.539676 sshd[2196]: Connection closed by 139.178.89.65 port 57214 Mar 17 17:59:32.541511 sshd-session[2191]: pam_unix(sshd:session): session closed for user core Mar 17 17:59:32.548151 systemd-logind[1891]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:59:32.549324 systemd[1]: sshd@5-172.31.20.178:22-139.178.89.65:57214.service: Deactivated successfully. Mar 17 17:59:32.552964 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:59:32.557666 systemd-logind[1891]: Removed session 6. 
Mar 17 17:59:32.587403 systemd[1]: Started sshd@6-172.31.20.178:22-139.178.89.65:57224.service - OpenSSH per-connection server daemon (139.178.89.65:57224). Mar 17 17:59:32.642392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:59:32.656193 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:59:32.721198 kubelet[2209]: E0317 17:59:32.721080 2209 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:59:32.728601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:59:32.728805 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:59:32.729256 systemd[1]: kubelet.service: Consumed 172ms CPU time, 97.6M memory peak. Mar 17 17:59:32.761861 sshd[2202]: Accepted publickey for core from 139.178.89.65 port 57224 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:59:32.763535 sshd-session[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:59:32.768961 systemd-logind[1891]: New session 7 of user core. Mar 17 17:59:32.786594 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 17 17:59:32.896060 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:59:32.896489 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:59:32.912946 sudo[2218]: pam_unix(sudo:session): session closed for user root Mar 17 17:59:32.935364 sshd[2217]: Connection closed by 139.178.89.65 port 57224 Mar 17 17:59:32.936389 sshd-session[2202]: pam_unix(sshd:session): session closed for user core Mar 17 17:59:32.946805 systemd[1]: sshd@6-172.31.20.178:22-139.178.89.65:57224.service: Deactivated successfully. Mar 17 17:59:32.955107 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:59:32.956241 systemd-logind[1891]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:59:32.978638 systemd[1]: Started sshd@7-172.31.20.178:22-139.178.89.65:57240.service - OpenSSH per-connection server daemon (139.178.89.65:57240). Mar 17 17:59:32.983501 systemd-logind[1891]: Removed session 7. Mar 17 17:59:33.146625 sshd[2223]: Accepted publickey for core from 139.178.89.65 port 57240 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:59:33.148938 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:59:33.155589 systemd-logind[1891]: New session 8 of user core. Mar 17 17:59:33.162547 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 17 17:59:33.271461 sudo[2228]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:59:33.271862 sudo[2228]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:59:33.277634 sudo[2228]: pam_unix(sudo:session): session closed for user root Mar 17 17:59:33.285518 sudo[2227]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:59:33.285907 sudo[2227]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:59:33.308748 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:59:33.359806 augenrules[2250]: No rules Mar 17 17:59:33.361764 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:59:33.361988 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:59:33.364042 sudo[2227]: pam_unix(sudo:session): session closed for user root Mar 17 17:59:33.386950 sshd[2226]: Connection closed by 139.178.89.65 port 57240 Mar 17 17:59:33.387613 sshd-session[2223]: pam_unix(sshd:session): session closed for user core Mar 17 17:59:33.393955 systemd-logind[1891]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:59:33.394938 systemd[1]: sshd@7-172.31.20.178:22-139.178.89.65:57240.service: Deactivated successfully. Mar 17 17:59:33.397666 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:59:33.399250 systemd-logind[1891]: Removed session 8. Mar 17 17:59:33.430762 systemd[1]: Started sshd@8-172.31.20.178:22-139.178.89.65:57256.service - OpenSSH per-connection server daemon (139.178.89.65:57256). 
Mar 17 17:59:33.604446 sshd[2259]: Accepted publickey for core from 139.178.89.65 port 57256 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:59:33.606478 sshd-session[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:59:33.617781 systemd-logind[1891]: New session 9 of user core. Mar 17 17:59:33.625629 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:59:33.726026 sudo[2262]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:59:33.726584 sudo[2262]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:59:34.527643 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:59:34.527775 (dockerd)[2280]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:59:35.172384 dockerd[2280]: time="2025-03-17T17:59:35.170819557Z" level=info msg="Starting up" Mar 17 17:59:35.377673 dockerd[2280]: time="2025-03-17T17:59:35.377620652Z" level=info msg="Loading containers: start." Mar 17 17:59:35.657292 kernel: Initializing XFRM netlink socket Mar 17 17:59:35.722154 (udev-worker)[2301]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:59:35.839334 systemd-networkd[1741]: docker0: Link UP Mar 17 17:59:35.906681 dockerd[2280]: time="2025-03-17T17:59:35.906633801Z" level=info msg="Loading containers: done." Mar 17 17:59:35.933452 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3081109135-merged.mount: Deactivated successfully. 
Mar 17 17:59:35.936963 dockerd[2280]: time="2025-03-17T17:59:35.936911625Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:59:35.937089 dockerd[2280]: time="2025-03-17T17:59:35.937032497Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 17:59:35.937192 dockerd[2280]: time="2025-03-17T17:59:35.937169463Z" level=info msg="Daemon has completed initialization" Mar 17 17:59:35.978713 dockerd[2280]: time="2025-03-17T17:59:35.978549029Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:59:35.978655 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:59:37.374475 containerd[1918]: time="2025-03-17T17:59:37.374428145Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 17:59:38.104354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3797119889.mount: Deactivated successfully. 
Mar 17 17:59:40.261095 containerd[1918]: time="2025-03-17T17:59:40.261041183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:40.262467 containerd[1918]: time="2025-03-17T17:59:40.262413282Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674573" Mar 17 17:59:40.263883 containerd[1918]: time="2025-03-17T17:59:40.263500628Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:40.268100 containerd[1918]: time="2025-03-17T17:59:40.266713925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:40.268100 containerd[1918]: time="2025-03-17T17:59:40.267886357Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 2.893414396s" Mar 17 17:59:40.268100 containerd[1918]: time="2025-03-17T17:59:40.267928757Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 17 17:59:40.294142 containerd[1918]: time="2025-03-17T17:59:40.294098502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 17:59:42.856981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 17 17:59:42.867572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:59:43.145579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:59:43.149219 (kubelet)[2545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:59:43.229778 kubelet[2545]: E0317 17:59:43.229402 2545 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:59:43.232832 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:59:43.233035 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:59:43.233489 systemd[1]: kubelet.service: Consumed 179ms CPU time, 95.8M memory peak. 
Mar 17 17:59:43.305097 containerd[1918]: time="2025-03-17T17:59:43.305047670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:43.306367 containerd[1918]: time="2025-03-17T17:59:43.306315562Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619772" Mar 17 17:59:43.307623 containerd[1918]: time="2025-03-17T17:59:43.307236527Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:43.310315 containerd[1918]: time="2025-03-17T17:59:43.310240102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:43.311573 containerd[1918]: time="2025-03-17T17:59:43.311536623Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 3.017391889s" Mar 17 17:59:43.311722 containerd[1918]: time="2025-03-17T17:59:43.311700482Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 17 17:59:43.339131 containerd[1918]: time="2025-03-17T17:59:43.339086997Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 17:59:45.336885 containerd[1918]: time="2025-03-17T17:59:45.336766581Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:45.338878 containerd[1918]: time="2025-03-17T17:59:45.338821449Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903309" Mar 17 17:59:45.340774 containerd[1918]: time="2025-03-17T17:59:45.340200436Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:45.343471 containerd[1918]: time="2025-03-17T17:59:45.343431919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:45.344819 containerd[1918]: time="2025-03-17T17:59:45.344781722Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 2.005650236s" Mar 17 17:59:45.344965 containerd[1918]: time="2025-03-17T17:59:45.344945628Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 17 17:59:45.376738 containerd[1918]: time="2025-03-17T17:59:45.376702591Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 17:59:46.679870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217346164.mount: Deactivated successfully. 
Mar 17 17:59:47.411212 containerd[1918]: time="2025-03-17T17:59:47.411149764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:47.412789 containerd[1918]: time="2025-03-17T17:59:47.412556807Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372" Mar 17 17:59:47.415635 containerd[1918]: time="2025-03-17T17:59:47.414180452Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:47.417085 containerd[1918]: time="2025-03-17T17:59:47.417046406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:47.417883 containerd[1918]: time="2025-03-17T17:59:47.417841780Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 2.040920969s" Mar 17 17:59:47.417972 containerd[1918]: time="2025-03-17T17:59:47.417888470Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 17 17:59:47.450340 containerd[1918]: time="2025-03-17T17:59:47.450252268Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:59:48.198984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3292496568.mount: Deactivated successfully. Mar 17 17:59:49.287193 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Mar 17 17:59:49.388023 containerd[1918]: time="2025-03-17T17:59:49.387969664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:49.389487 containerd[1918]: time="2025-03-17T17:59:49.389439504Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Mar 17 17:59:49.390952 containerd[1918]: time="2025-03-17T17:59:49.390563878Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:49.396591 containerd[1918]: time="2025-03-17T17:59:49.396522980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:49.400004 containerd[1918]: time="2025-03-17T17:59:49.399953301Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.949494249s" Mar 17 17:59:49.400180 containerd[1918]: time="2025-03-17T17:59:49.400009943Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 17:59:49.430215 containerd[1918]: time="2025-03-17T17:59:49.430169737Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 17:59:49.916451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1949853592.mount: Deactivated successfully. 
Mar 17 17:59:49.925891 containerd[1918]: time="2025-03-17T17:59:49.922338520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:49.926814 containerd[1918]: time="2025-03-17T17:59:49.926746024Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Mar 17 17:59:49.928417 containerd[1918]: time="2025-03-17T17:59:49.928381203Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:49.932054 containerd[1918]: time="2025-03-17T17:59:49.931983038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:49.933167 containerd[1918]: time="2025-03-17T17:59:49.933122093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 502.906082ms" Mar 17 17:59:49.933167 containerd[1918]: time="2025-03-17T17:59:49.933158892Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 17 17:59:49.968461 containerd[1918]: time="2025-03-17T17:59:49.968426236Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 17:59:50.533158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339302768.mount: Deactivated successfully. Mar 17 17:59:53.355207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Mar 17 17:59:53.363820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:59:53.503941 containerd[1918]: time="2025-03-17T17:59:53.503307613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:53.508508 containerd[1918]: time="2025-03-17T17:59:53.508420735Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Mar 17 17:59:53.514297 containerd[1918]: time="2025-03-17T17:59:53.512779424Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:53.520643 containerd[1918]: time="2025-03-17T17:59:53.520458060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:53.523898 containerd[1918]: time="2025-03-17T17:59:53.523828653Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.555362788s" Mar 17 17:59:53.523898 containerd[1918]: time="2025-03-17T17:59:53.523889835Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 17 17:59:54.107573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:59:54.109757 (kubelet)[2701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:59:54.205912 kubelet[2701]: E0317 17:59:54.205806 2701 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:59:54.208496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:59:54.208685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:59:54.209406 systemd[1]: kubelet.service: Consumed 181ms CPU time, 97.8M memory peak. Mar 17 17:59:58.625663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:59:58.625928 systemd[1]: kubelet.service: Consumed 181ms CPU time, 97.8M memory peak. Mar 17 17:59:58.633641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:59:58.665653 systemd[1]: Reload requested from client PID 2767 ('systemctl') (unit session-9.scope)... Mar 17 17:59:58.665676 systemd[1]: Reloading... Mar 17 17:59:58.820335 zram_generator::config[2812]: No configuration found. Mar 17 17:59:58.999956 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:59:59.126945 systemd[1]: Reloading finished in 460 ms. Mar 17 17:59:59.181199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:59:59.188110 (kubelet)[2863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:59:59.194896 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:59:59.195365 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:59:59.195654 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:59:59.195725 systemd[1]: kubelet.service: Consumed 125ms CPU time, 84.6M memory peak. Mar 17 17:59:59.206820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:59:59.385639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:59:59.399150 (kubelet)[2875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:59:59.461071 kubelet[2875]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:59:59.461071 kubelet[2875]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:59:59.461071 kubelet[2875]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:59:59.466113 kubelet[2875]: I0317 17:59:59.466009 2875 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:00:00.321137 kubelet[2875]: I0317 18:00:00.321093 2875 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:00:00.321137 kubelet[2875]: I0317 18:00:00.321127 2875 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:00:00.321543 kubelet[2875]: I0317 18:00:00.321517 2875 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:00:00.355303 kubelet[2875]: I0317 18:00:00.354686 2875 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:00:00.355939 kubelet[2875]: E0317 18:00:00.355911 2875 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.178:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.178:6443: connect: connection refused Mar 17 18:00:00.370639 kubelet[2875]: I0317 18:00:00.370420 2875 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:00:00.374668 kubelet[2875]: I0317 18:00:00.374605 2875 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:00:00.374910 kubelet[2875]: I0317 18:00:00.374666 2875 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-178","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:00:00.375082 kubelet[2875]: I0317 18:00:00.374928 2875 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 
18:00:00.375082 kubelet[2875]: I0317 18:00:00.374945 2875 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:00:00.375163 kubelet[2875]: I0317 18:00:00.375113 2875 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:00:00.376840 kubelet[2875]: W0317 18:00:00.376773 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-178&limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused Mar 17 18:00:00.376949 kubelet[2875]: E0317 18:00:00.376847 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-178&limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused Mar 17 18:00:00.381374 kubelet[2875]: I0317 18:00:00.381189 2875 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:00:00.381653 kubelet[2875]: I0317 18:00:00.381424 2875 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:00:00.381653 kubelet[2875]: I0317 18:00:00.381473 2875 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:00:00.381653 kubelet[2875]: I0317 18:00:00.381628 2875 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:00:00.385988 kubelet[2875]: W0317 18:00:00.385804 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.178:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused Mar 17 18:00:00.385988 kubelet[2875]: E0317 18:00:00.385871 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.178:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.20.178:6443: connect: connection refused Mar 17 18:00:00.386333 kubelet[2875]: I0317 18:00:00.386315 2875 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 18:00:00.389732 kubelet[2875]: I0317 18:00:00.388749 2875 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:00:00.389732 kubelet[2875]: W0317 18:00:00.388837 2875 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:00:00.389732 kubelet[2875]: I0317 18:00:00.389678 2875 server.go:1264] "Started kubelet" Mar 17 18:00:00.405745 kubelet[2875]: I0317 18:00:00.405710 2875 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:00:00.406937 kubelet[2875]: E0317 18:00:00.406809 2875 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.178:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.178:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-178.182da8f4d6b0c187 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-178,UID:ip-172-31-20-178,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-178,},FirstTimestamp:2025-03-17 18:00:00.389644679 +0000 UTC m=+0.982981692,LastTimestamp:2025-03-17 18:00:00.389644679 +0000 UTC m=+0.982981692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-178,}" Mar 17 18:00:00.409022 kubelet[2875]: I0317 18:00:00.408934 2875 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:00:00.413334 kubelet[2875]: I0317 18:00:00.413099 2875 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 
18:00:00.413334 kubelet[2875]: I0317 18:00:00.413072 2875 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:00:00.415322 kubelet[2875]: I0317 18:00:00.415096 2875 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:00:00.418902 kubelet[2875]: I0317 18:00:00.418877 2875 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:00:00.425895 kubelet[2875]: I0317 18:00:00.425851 2875 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:00:00.426255 kubelet[2875]: I0317 18:00:00.425977 2875 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:00:00.434348 kubelet[2875]: E0317 18:00:00.429530 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-178?timeout=10s\": dial tcp 172.31.20.178:6443: connect: connection refused" interval="200ms" Mar 17 18:00:00.440139 kubelet[2875]: I0317 18:00:00.439842 2875 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:00:00.440139 kubelet[2875]: I0317 18:00:00.439916 2875 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:00:00.442152 kubelet[2875]: W0317 18:00:00.442074 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused Mar 17 18:00:00.442579 kubelet[2875]: E0317 18:00:00.442561 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://172.31.20.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused Mar 17 18:00:00.445240 kubelet[2875]: E0317 18:00:00.443032 2875 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:00:00.445240 kubelet[2875]: I0317 18:00:00.443239 2875 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:00:00.498662 kubelet[2875]: I0317 18:00:00.498505 2875 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:00:00.503572 kubelet[2875]: I0317 18:00:00.503533 2875 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:00:00.503572 kubelet[2875]: I0317 18:00:00.503581 2875 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:00:00.503757 kubelet[2875]: I0317 18:00:00.503603 2875 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:00:00.503757 kubelet[2875]: E0317 18:00:00.503654 2875 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:00:00.507399 kubelet[2875]: W0317 18:00:00.507343 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused Mar 17 18:00:00.507905 kubelet[2875]: E0317 18:00:00.507885 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused Mar 17 18:00:00.522437 kubelet[2875]: I0317 18:00:00.522312 2875 
cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:00:00.522616 kubelet[2875]: I0317 18:00:00.522451 2875 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:00:00.522616 kubelet[2875]: I0317 18:00:00.522493 2875 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:00:00.526017 kubelet[2875]: I0317 18:00:00.525977 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-178" Mar 17 18:00:00.526713 kubelet[2875]: E0317 18:00:00.526680 2875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.178:6443/api/v1/nodes\": dial tcp 172.31.20.178:6443: connect: connection refused" node="ip-172-31-20-178" Mar 17 18:00:00.530174 kubelet[2875]: I0317 18:00:00.530140 2875 policy_none.go:49] "None policy: Start" Mar 17 18:00:00.531501 kubelet[2875]: I0317 18:00:00.531470 2875 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:00:00.531501 kubelet[2875]: I0317 18:00:00.531502 2875 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:00:00.543505 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 18:00:00.592167 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 18:00:00.598495 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 17 18:00:00.606530 kubelet[2875]: E0317 18:00:00.605974 2875 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 18:00:00.610516 kubelet[2875]: I0317 18:00:00.608940 2875 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:00:00.610516 kubelet[2875]: I0317 18:00:00.609987 2875 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:00:00.610760 kubelet[2875]: I0317 18:00:00.610703 2875 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:00:00.616229 kubelet[2875]: E0317 18:00:00.616190 2875 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-178\" not found"
Mar 17 18:00:00.630612 kubelet[2875]: E0317 18:00:00.630555 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-178?timeout=10s\": dial tcp 172.31.20.178:6443: connect: connection refused" interval="400ms"
Mar 17 18:00:00.729056 kubelet[2875]: I0317 18:00:00.729026 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-178"
Mar 17 18:00:00.729700 kubelet[2875]: E0317 18:00:00.729661 2875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.178:6443/api/v1/nodes\": dial tcp 172.31.20.178:6443: connect: connection refused" node="ip-172-31-20-178"
Mar 17 18:00:00.807022 kubelet[2875]: I0317 18:00:00.806628 2875 topology_manager.go:215] "Topology Admit Handler" podUID="724823bc622a169087b53db9151c31c9" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-178"
Mar 17 18:00:00.813067 kubelet[2875]: I0317 18:00:00.813026 2875 topology_manager.go:215] "Topology Admit Handler" podUID="dc4d2c90345ea28c3b096f608f6b1a86" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-178"
Mar 17 18:00:00.816875 kubelet[2875]: I0317 18:00:00.816838 2875 topology_manager.go:215] "Topology Admit Handler" podUID="35c88deedb53cb68f6b9d12a105958ad" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-178"
Mar 17 18:00:00.832190 systemd[1]: Created slice kubepods-burstable-pod724823bc622a169087b53db9151c31c9.slice - libcontainer container kubepods-burstable-pod724823bc622a169087b53db9151c31c9.slice.
Mar 17 18:00:00.844200 kubelet[2875]: I0317 18:00:00.842215 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc4d2c90345ea28c3b096f608f6b1a86-ca-certs\") pod \"kube-apiserver-ip-172-31-20-178\" (UID: \"dc4d2c90345ea28c3b096f608f6b1a86\") " pod="kube-system/kube-apiserver-ip-172-31-20-178"
Mar 17 18:00:00.844200 kubelet[2875]: I0317 18:00:00.842274 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35c88deedb53cb68f6b9d12a105958ad-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-178\" (UID: \"35c88deedb53cb68f6b9d12a105958ad\") " pod="kube-system/kube-controller-manager-ip-172-31-20-178"
Mar 17 18:00:00.844200 kubelet[2875]: I0317 18:00:00.842307 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35c88deedb53cb68f6b9d12a105958ad-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-178\" (UID: \"35c88deedb53cb68f6b9d12a105958ad\") " pod="kube-system/kube-controller-manager-ip-172-31-20-178"
Mar 17 18:00:00.844200 kubelet[2875]: I0317 18:00:00.842334 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35c88deedb53cb68f6b9d12a105958ad-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-178\" (UID: \"35c88deedb53cb68f6b9d12a105958ad\") " pod="kube-system/kube-controller-manager-ip-172-31-20-178"
Mar 17 18:00:00.844200 kubelet[2875]: I0317 18:00:00.842505 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/724823bc622a169087b53db9151c31c9-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-178\" (UID: \"724823bc622a169087b53db9151c31c9\") " pod="kube-system/kube-scheduler-ip-172-31-20-178"
Mar 17 18:00:00.844521 kubelet[2875]: I0317 18:00:00.842534 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc4d2c90345ea28c3b096f608f6b1a86-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-178\" (UID: \"dc4d2c90345ea28c3b096f608f6b1a86\") " pod="kube-system/kube-apiserver-ip-172-31-20-178"
Mar 17 18:00:00.844521 kubelet[2875]: I0317 18:00:00.842564 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc4d2c90345ea28c3b096f608f6b1a86-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-178\" (UID: \"dc4d2c90345ea28c3b096f608f6b1a86\") " pod="kube-system/kube-apiserver-ip-172-31-20-178"
Mar 17 18:00:00.844521 kubelet[2875]: I0317 18:00:00.842589 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35c88deedb53cb68f6b9d12a105958ad-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-178\" (UID: \"35c88deedb53cb68f6b9d12a105958ad\") " pod="kube-system/kube-controller-manager-ip-172-31-20-178"
Mar 17 18:00:00.844521 kubelet[2875]: I0317 18:00:00.842617 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35c88deedb53cb68f6b9d12a105958ad-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-178\" (UID: \"35c88deedb53cb68f6b9d12a105958ad\") " pod="kube-system/kube-controller-manager-ip-172-31-20-178"
Mar 17 18:00:00.848112 systemd[1]: Created slice kubepods-burstable-poddc4d2c90345ea28c3b096f608f6b1a86.slice - libcontainer container kubepods-burstable-poddc4d2c90345ea28c3b096f608f6b1a86.slice.
Mar 17 18:00:00.863001 systemd[1]: Created slice kubepods-burstable-pod35c88deedb53cb68f6b9d12a105958ad.slice - libcontainer container kubepods-burstable-pod35c88deedb53cb68f6b9d12a105958ad.slice.
Mar 17 18:00:01.031405 kubelet[2875]: E0317 18:00:01.031346 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-178?timeout=10s\": dial tcp 172.31.20.178:6443: connect: connection refused" interval="800ms"
Mar 17 18:00:01.131990 kubelet[2875]: I0317 18:00:01.131952 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-178"
Mar 17 18:00:01.132534 kubelet[2875]: E0317 18:00:01.132499 2875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.178:6443/api/v1/nodes\": dial tcp 172.31.20.178:6443: connect: connection refused" node="ip-172-31-20-178"
Mar 17 18:00:01.145063 containerd[1918]: time="2025-03-17T18:00:01.145018816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-178,Uid:724823bc622a169087b53db9151c31c9,Namespace:kube-system,Attempt:0,}"
Mar 17 18:00:01.159404 containerd[1918]: time="2025-03-17T18:00:01.159360824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-178,Uid:dc4d2c90345ea28c3b096f608f6b1a86,Namespace:kube-system,Attempt:0,}"
Mar 17 18:00:01.168829 containerd[1918]: time="2025-03-17T18:00:01.168785732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-178,Uid:35c88deedb53cb68f6b9d12a105958ad,Namespace:kube-system,Attempt:0,}"
Mar 17 18:00:01.283483 kubelet[2875]: W0317 18:00:01.282794 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-178&limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:01.283483 kubelet[2875]: E0317 18:00:01.283485 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-178&limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:01.739490 kubelet[2875]: W0317 18:00:01.739427 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:01.739490 kubelet[2875]: E0317 18:00:01.739499 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:01.740824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185082142.mount: Deactivated successfully.
Mar 17 18:00:01.776909 containerd[1918]: time="2025-03-17T18:00:01.776840887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 18:00:01.787604 containerd[1918]: time="2025-03-17T18:00:01.787535687Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 17 18:00:01.790238 containerd[1918]: time="2025-03-17T18:00:01.789515166Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 18:00:01.792306 containerd[1918]: time="2025-03-17T18:00:01.792238589Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 18:00:01.797008 containerd[1918]: time="2025-03-17T18:00:01.796956017Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 18:00:01.801056 containerd[1918]: time="2025-03-17T18:00:01.800984769Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 18:00:01.804411 containerd[1918]: time="2025-03-17T18:00:01.802955751Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 18:00:01.807511 containerd[1918]: time="2025-03-17T18:00:01.807151321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 18:00:01.810701 containerd[1918]: time="2025-03-17T18:00:01.810408619Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 665.267914ms"
Mar 17 18:00:01.826153 containerd[1918]: time="2025-03-17T18:00:01.825876012Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 656.980001ms"
Mar 17 18:00:01.828230 containerd[1918]: time="2025-03-17T18:00:01.828174506Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 668.70673ms"
Mar 17 18:00:01.833029 kubelet[2875]: E0317 18:00:01.832830 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-178?timeout=10s\": dial tcp 172.31.20.178:6443: connect: connection refused" interval="1.6s"
Mar 17 18:00:01.946918 kubelet[2875]: W0317 18:00:01.946819 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.178:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:01.946918 kubelet[2875]: E0317 18:00:01.946922 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.178:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:01.986197 kubelet[2875]: W0317 18:00:01.986141 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:01.986403 kubelet[2875]: E0317 18:00:01.986383 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:02.064625 kubelet[2875]: I0317 18:00:02.062926 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-178"
Mar 17 18:00:02.065292 kubelet[2875]: E0317 18:00:02.065210 2875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.178:6443/api/v1/nodes\": dial tcp 172.31.20.178:6443: connect: connection refused" node="ip-172-31-20-178"
Mar 17 18:00:02.516294 kubelet[2875]: E0317 18:00:02.515666 2875 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.178:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:02.539924 containerd[1918]: time="2025-03-17T18:00:02.532190387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:00:02.543867 containerd[1918]: time="2025-03-17T18:00:02.543748279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:00:02.543867 containerd[1918]: time="2025-03-17T18:00:02.543836419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:00:02.544187 containerd[1918]: time="2025-03-17T18:00:02.543858500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:00:02.544323 containerd[1918]: time="2025-03-17T18:00:02.544162250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:00:02.545119 containerd[1918]: time="2025-03-17T18:00:02.545069901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:00:02.545436 containerd[1918]: time="2025-03-17T18:00:02.545386394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:00:02.553517 containerd[1918]: time="2025-03-17T18:00:02.553383601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:00:02.589191 containerd[1918]: time="2025-03-17T18:00:02.588827411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:00:02.589191 containerd[1918]: time="2025-03-17T18:00:02.589062727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:00:02.589191 containerd[1918]: time="2025-03-17T18:00:02.589125989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:00:02.589462 containerd[1918]: time="2025-03-17T18:00:02.589379655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:00:02.642917 systemd[1]: Started cri-containerd-2643ce11d35f6129a3cd4824ef6b1969611156cb69605770d88ee828b426ae75.scope - libcontainer container 2643ce11d35f6129a3cd4824ef6b1969611156cb69605770d88ee828b426ae75.
Mar 17 18:00:02.741556 systemd[1]: Started cri-containerd-dd30027d66cef9d796df1e205de0445b5664420299538d5d8cefcf8819850b29.scope - libcontainer container dd30027d66cef9d796df1e205de0445b5664420299538d5d8cefcf8819850b29.
Mar 17 18:00:02.764681 systemd[1]: Started cri-containerd-de1151ef33511b266c9d60e5e593a4b0fd79141dc888dddafe162ab0278b77dd.scope - libcontainer container de1151ef33511b266c9d60e5e593a4b0fd79141dc888dddafe162ab0278b77dd.
Mar 17 18:00:03.074650 containerd[1918]: time="2025-03-17T18:00:03.074448286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-178,Uid:35c88deedb53cb68f6b9d12a105958ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd30027d66cef9d796df1e205de0445b5664420299538d5d8cefcf8819850b29\""
Mar 17 18:00:03.109882 update_engine[1892]: I20250317 18:00:03.109093 1892 update_attempter.cc:509] Updating boot flags...
Mar 17 18:00:03.112830 containerd[1918]: time="2025-03-17T18:00:03.110747095Z" level=info msg="CreateContainer within sandbox \"dd30027d66cef9d796df1e205de0445b5664420299538d5d8cefcf8819850b29\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 18:00:03.249578 containerd[1918]: time="2025-03-17T18:00:03.249526430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-178,Uid:dc4d2c90345ea28c3b096f608f6b1a86,Namespace:kube-system,Attempt:0,} returns sandbox id \"2643ce11d35f6129a3cd4824ef6b1969611156cb69605770d88ee828b426ae75\""
Mar 17 18:00:03.261506 containerd[1918]: time="2025-03-17T18:00:03.260477400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-178,Uid:724823bc622a169087b53db9151c31c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"de1151ef33511b266c9d60e5e593a4b0fd79141dc888dddafe162ab0278b77dd\""
Mar 17 18:00:03.261544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154933372.mount: Deactivated successfully.
Mar 17 18:00:03.314996 containerd[1918]: time="2025-03-17T18:00:03.314398437Z" level=info msg="CreateContainer within sandbox \"2643ce11d35f6129a3cd4824ef6b1969611156cb69605770d88ee828b426ae75\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 18:00:03.334140 containerd[1918]: time="2025-03-17T18:00:03.332813283Z" level=info msg="CreateContainer within sandbox \"de1151ef33511b266c9d60e5e593a4b0fd79141dc888dddafe162ab0278b77dd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 18:00:03.340143 containerd[1918]: time="2025-03-17T18:00:03.340021089Z" level=info msg="CreateContainer within sandbox \"dd30027d66cef9d796df1e205de0445b5664420299538d5d8cefcf8819850b29\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c151aa2d24756051155cbf6ec6193148189f4059a3cd7c6891f39dd1175841d2\""
Mar 17 18:00:03.356295 containerd[1918]: time="2025-03-17T18:00:03.356146728Z" level=info msg="StartContainer for \"c151aa2d24756051155cbf6ec6193148189f4059a3cd7c6891f39dd1175841d2\""
Mar 17 18:00:03.436983 kubelet[2875]: E0317 18:00:03.434481 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-178?timeout=10s\": dial tcp 172.31.20.178:6443: connect: connection refused" interval="3.2s"
Mar 17 18:00:03.462320 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3053)
Mar 17 18:00:03.464694 containerd[1918]: time="2025-03-17T18:00:03.464639707Z" level=info msg="CreateContainer within sandbox \"2643ce11d35f6129a3cd4824ef6b1969611156cb69605770d88ee828b426ae75\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0cfb9188a2e14fb7d4dc69b6bd0fb98b004295a2ebfdc26f42db0779f4cdb34a\""
Mar 17 18:00:03.469454 containerd[1918]: time="2025-03-17T18:00:03.469400634Z" level=info msg="StartContainer for \"0cfb9188a2e14fb7d4dc69b6bd0fb98b004295a2ebfdc26f42db0779f4cdb34a\""
Mar 17 18:00:03.474920 kubelet[2875]: W0317 18:00:03.474816 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-178&limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:03.475073 kubelet[2875]: E0317 18:00:03.474940 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-178&limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:03.489850 containerd[1918]: time="2025-03-17T18:00:03.489798916Z" level=info msg="CreateContainer within sandbox \"de1151ef33511b266c9d60e5e593a4b0fd79141dc888dddafe162ab0278b77dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"768975dbce4d6ba8e2610e6cc867aa674fa8730268e4c96662ee244367bbfcd9\""
Mar 17 18:00:03.495346 containerd[1918]: time="2025-03-17T18:00:03.494253823Z" level=info msg="StartContainer for \"768975dbce4d6ba8e2610e6cc867aa674fa8730268e4c96662ee244367bbfcd9\""
Mar 17 18:00:03.705582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215579965.mount: Deactivated successfully.
Mar 17 18:00:03.715744 kubelet[2875]: I0317 18:00:03.715631 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-178"
Mar 17 18:00:03.717302 kubelet[2875]: E0317 18:00:03.716031 2875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.178:6443/api/v1/nodes\": dial tcp 172.31.20.178:6443: connect: connection refused" node="ip-172-31-20-178"
Mar 17 18:00:04.005391 systemd[1]: Started cri-containerd-0cfb9188a2e14fb7d4dc69b6bd0fb98b004295a2ebfdc26f42db0779f4cdb34a.scope - libcontainer container 0cfb9188a2e14fb7d4dc69b6bd0fb98b004295a2ebfdc26f42db0779f4cdb34a.
Mar 17 18:00:04.017714 systemd[1]: Started cri-containerd-c151aa2d24756051155cbf6ec6193148189f4059a3cd7c6891f39dd1175841d2.scope - libcontainer container c151aa2d24756051155cbf6ec6193148189f4059a3cd7c6891f39dd1175841d2.
Mar 17 18:00:04.304202 kubelet[2875]: W0317 18:00:04.304023 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.178:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:04.304202 kubelet[2875]: E0317 18:00:04.304111 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.178:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:04.342448 systemd[1]: Started cri-containerd-768975dbce4d6ba8e2610e6cc867aa674fa8730268e4c96662ee244367bbfcd9.scope - libcontainer container 768975dbce4d6ba8e2610e6cc867aa674fa8730268e4c96662ee244367bbfcd9.
Mar 17 18:00:04.414794 kubelet[2875]: W0317 18:00:04.414701 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:04.414794 kubelet[2875]: E0317 18:00:04.414800 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:04.705641 kubelet[2875]: W0317 18:00:04.705456 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:04.705641 kubelet[2875]: E0317 18:00:04.705644 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:04.712231 containerd[1918]: time="2025-03-17T18:00:04.712182447Z" level=info msg="StartContainer for \"0cfb9188a2e14fb7d4dc69b6bd0fb98b004295a2ebfdc26f42db0779f4cdb34a\" returns successfully"
Mar 17 18:00:04.826530 containerd[1918]: time="2025-03-17T18:00:04.808379956Z" level=info msg="StartContainer for \"c151aa2d24756051155cbf6ec6193148189f4059a3cd7c6891f39dd1175841d2\" returns successfully"
Mar 17 18:00:05.030085 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3059)
Mar 17 18:00:05.435914 containerd[1918]: time="2025-03-17T18:00:05.435859046Z" level=info msg="StartContainer for \"768975dbce4d6ba8e2610e6cc867aa674fa8730268e4c96662ee244367bbfcd9\" returns successfully"
Mar 17 18:00:06.646286 kubelet[2875]: E0317 18:00:06.632122 2875 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.178:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.178:6443: connect: connection refused
Mar 17 18:00:06.646286 kubelet[2875]: E0317 18:00:06.638849 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-178?timeout=10s\": dial tcp 172.31.20.178:6443: connect: connection refused" interval="6.4s"
Mar 17 18:00:06.920506 kubelet[2875]: I0317 18:00:06.920384 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-178"
Mar 17 18:00:06.924198 kubelet[2875]: E0317 18:00:06.924148 2875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.178:6443/api/v1/nodes\": dial tcp 172.31.20.178:6443: connect: connection refused" node="ip-172-31-20-178"
Mar 17 18:00:10.616763 kubelet[2875]: E0317 18:00:10.616717 2875 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-178\" not found"
Mar 17 18:00:13.328915 kubelet[2875]: E0317 18:00:13.328843 2875 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-178\" not found" node="ip-172-31-20-178"
Mar 17 18:00:13.331666 kubelet[2875]: I0317 18:00:13.329251 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-178"
Mar 17 18:00:13.355342 kubelet[2875]: E0317 18:00:13.355179 2875 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-178.182da8f4d6b0c187 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-178,UID:ip-172-31-20-178,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-178,},FirstTimestamp:2025-03-17 18:00:00.389644679 +0000 UTC m=+0.982981692,LastTimestamp:2025-03-17 18:00:00.389644679 +0000 UTC m=+0.982981692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-178,}"
Mar 17 18:00:13.410358 kubelet[2875]: I0317 18:00:13.410311 2875 apiserver.go:52] "Watching apiserver"
Mar 17 18:00:13.429865 kubelet[2875]: E0317 18:00:13.416890 2875 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-178.182da8f4d9deda78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-178,UID:ip-172-31-20-178,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-20-178,},FirstTimestamp:2025-03-17 18:00:00.442997368 +0000 UTC m=+1.036334377,LastTimestamp:2025-03-17 18:00:00.442997368 +0000 UTC m=+1.036334377,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-178,}"
Mar 17 18:00:13.441151 kubelet[2875]: I0317 18:00:13.440894 2875 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 18:00:13.453270 kubelet[2875]: I0317 18:00:13.450908 2875 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-178"
Mar 17 18:00:13.492065 kubelet[2875]: E0317 18:00:13.491961 2875 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-178.182da8f4de7ada5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-178,UID:ip-172-31-20-178,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-20-178 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-20-178,},FirstTimestamp:2025-03-17 18:00:00.520329821 +0000 UTC m=+1.113666822,LastTimestamp:2025-03-17 18:00:00.520329821 +0000 UTC m=+1.113666822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-178,}"
Mar 17 18:00:16.366343 systemd[1]: Reload requested from client PID 3340 ('systemctl') (unit session-9.scope)...
Mar 17 18:00:16.366369 systemd[1]: Reloading...
Mar 17 18:00:16.560378 zram_generator::config[3388]: No configuration found.
Mar 17 18:00:16.784363 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:00:17.039994 systemd[1]: Reloading finished in 673 ms.
Mar 17 18:00:17.097739 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:00:17.114732 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:00:17.116705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:00:17.116806 systemd[1]: kubelet.service: Consumed 1.421s CPU time, 114.3M memory peak.
Mar 17 18:00:17.128750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:00:17.488614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:00:17.501309 (kubelet)[3442]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 18:00:17.645451 kubelet[3442]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:00:17.645451 kubelet[3442]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:00:17.645451 kubelet[3442]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:00:17.660958 kubelet[3442]: I0317 18:00:17.659397 3442 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:00:17.695997 kubelet[3442]: I0317 18:00:17.695959 3442 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:00:17.695997 kubelet[3442]: I0317 18:00:17.695988 3442 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:00:17.698644 kubelet[3442]: I0317 18:00:17.698551 3442 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:00:17.702842 kubelet[3442]: I0317 18:00:17.702805 3442 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 18:00:17.704407 kubelet[3442]: I0317 18:00:17.704131 3442 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:00:17.723713 kubelet[3442]: I0317 18:00:17.723675 3442 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:00:17.724036 kubelet[3442]: I0317 18:00:17.723991 3442 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:00:17.724550 kubelet[3442]: I0317 18:00:17.724036 3442 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-178","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:00:17.724700 kubelet[3442]: I0317 18:00:17.724572 3442 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 
18:00:17.724700 kubelet[3442]: I0317 18:00:17.724589 3442 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:00:17.724700 kubelet[3442]: I0317 18:00:17.724696 3442 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:00:17.726740 kubelet[3442]: I0317 18:00:17.725822 3442 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:00:17.726740 kubelet[3442]: I0317 18:00:17.725852 3442 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:00:17.726740 kubelet[3442]: I0317 18:00:17.725885 3442 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:00:17.726740 kubelet[3442]: I0317 18:00:17.725906 3442 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:00:17.738935 sudo[3455]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:00:17.739439 sudo[3455]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 18:00:17.749636 kubelet[3442]: I0317 18:00:17.746185 3442 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 18:00:17.749636 kubelet[3442]: I0317 18:00:17.746453 3442 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:00:17.749636 kubelet[3442]: I0317 18:00:17.747059 3442 server.go:1264] "Started kubelet" Mar 17 18:00:17.755936 kubelet[3442]: I0317 18:00:17.755901 3442 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:00:17.760034 kubelet[3442]: I0317 18:00:17.759951 3442 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:00:17.765622 kubelet[3442]: I0317 18:00:17.764782 3442 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:00:17.773871 kubelet[3442]: I0317 18:00:17.773448 3442 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 
18:00:17.785966 kubelet[3442]: I0317 18:00:17.785778 3442 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:00:17.786095 kubelet[3442]: I0317 18:00:17.782476 3442 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:00:17.786095 kubelet[3442]: I0317 18:00:17.780284 3442 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:00:17.787444 kubelet[3442]: I0317 18:00:17.786259 3442 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:00:17.796760 kubelet[3442]: I0317 18:00:17.796509 3442 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:00:17.796760 kubelet[3442]: I0317 18:00:17.796657 3442 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:00:17.835863 kubelet[3442]: I0317 18:00:17.835831 3442 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:00:17.838022 kubelet[3442]: E0317 18:00:17.837885 3442 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:00:17.868424 kubelet[3442]: I0317 18:00:17.865766 3442 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:00:17.871328 kubelet[3442]: I0317 18:00:17.870952 3442 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:00:17.871328 kubelet[3442]: I0317 18:00:17.871016 3442 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:00:17.871328 kubelet[3442]: I0317 18:00:17.871103 3442 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:00:17.871328 kubelet[3442]: E0317 18:00:17.871184 3442 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:00:17.915954 kubelet[3442]: I0317 18:00:17.915917 3442 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-178" Mar 17 18:00:17.946052 kubelet[3442]: I0317 18:00:17.946021 3442 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-20-178" Mar 17 18:00:17.946197 kubelet[3442]: I0317 18:00:17.946116 3442 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-178" Mar 17 18:00:17.971345 kubelet[3442]: E0317 18:00:17.971314 3442 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:00:18.003432 kubelet[3442]: I0317 18:00:18.003318 3442 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:00:18.003432 kubelet[3442]: I0317 18:00:18.003346 3442 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:00:18.003432 kubelet[3442]: I0317 18:00:18.003376 3442 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:00:18.004842 kubelet[3442]: I0317 18:00:18.003862 3442 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:00:18.004842 kubelet[3442]: I0317 18:00:18.003893 3442 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:00:18.004842 kubelet[3442]: I0317 18:00:18.003923 3442 policy_none.go:49] "None policy: Start" Mar 17 18:00:18.005878 kubelet[3442]: I0317 18:00:18.005658 3442 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:00:18.005878 kubelet[3442]: I0317 
18:00:18.005682 3442 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:00:18.006954 kubelet[3442]: I0317 18:00:18.005990 3442 state_mem.go:75] "Updated machine memory state" Mar 17 18:00:18.028079 kubelet[3442]: I0317 18:00:18.027443 3442 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:00:18.028079 kubelet[3442]: I0317 18:00:18.027654 3442 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:00:18.030898 kubelet[3442]: I0317 18:00:18.030879 3442 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:00:18.171590 kubelet[3442]: I0317 18:00:18.171533 3442 topology_manager.go:215] "Topology Admit Handler" podUID="dc4d2c90345ea28c3b096f608f6b1a86" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-178" Mar 17 18:00:18.171758 kubelet[3442]: I0317 18:00:18.171664 3442 topology_manager.go:215] "Topology Admit Handler" podUID="35c88deedb53cb68f6b9d12a105958ad" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-178" Mar 17 18:00:18.171758 kubelet[3442]: I0317 18:00:18.171736 3442 topology_manager.go:215] "Topology Admit Handler" podUID="724823bc622a169087b53db9151c31c9" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-178" Mar 17 18:00:18.189612 kubelet[3442]: I0317 18:00:18.188083 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc4d2c90345ea28c3b096f608f6b1a86-ca-certs\") pod \"kube-apiserver-ip-172-31-20-178\" (UID: \"dc4d2c90345ea28c3b096f608f6b1a86\") " pod="kube-system/kube-apiserver-ip-172-31-20-178" Mar 17 18:00:18.189612 kubelet[3442]: I0317 18:00:18.188143 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/35c88deedb53cb68f6b9d12a105958ad-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-178\" (UID: \"35c88deedb53cb68f6b9d12a105958ad\") " pod="kube-system/kube-controller-manager-ip-172-31-20-178" Mar 17 18:00:18.189612 kubelet[3442]: I0317 18:00:18.188180 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35c88deedb53cb68f6b9d12a105958ad-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-178\" (UID: \"35c88deedb53cb68f6b9d12a105958ad\") " pod="kube-system/kube-controller-manager-ip-172-31-20-178" Mar 17 18:00:18.189612 kubelet[3442]: I0317 18:00:18.188209 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35c88deedb53cb68f6b9d12a105958ad-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-178\" (UID: \"35c88deedb53cb68f6b9d12a105958ad\") " pod="kube-system/kube-controller-manager-ip-172-31-20-178" Mar 17 18:00:18.189612 kubelet[3442]: I0317 18:00:18.188233 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc4d2c90345ea28c3b096f608f6b1a86-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-178\" (UID: \"dc4d2c90345ea28c3b096f608f6b1a86\") " pod="kube-system/kube-apiserver-ip-172-31-20-178" Mar 17 18:00:18.190003 kubelet[3442]: I0317 18:00:18.188258 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc4d2c90345ea28c3b096f608f6b1a86-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-178\" (UID: \"dc4d2c90345ea28c3b096f608f6b1a86\") " pod="kube-system/kube-apiserver-ip-172-31-20-178" Mar 17 18:00:18.190003 kubelet[3442]: I0317 18:00:18.188305 3442 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35c88deedb53cb68f6b9d12a105958ad-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-178\" (UID: \"35c88deedb53cb68f6b9d12a105958ad\") " pod="kube-system/kube-controller-manager-ip-172-31-20-178" Mar 17 18:00:18.190003 kubelet[3442]: I0317 18:00:18.188329 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35c88deedb53cb68f6b9d12a105958ad-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-178\" (UID: \"35c88deedb53cb68f6b9d12a105958ad\") " pod="kube-system/kube-controller-manager-ip-172-31-20-178" Mar 17 18:00:18.190003 kubelet[3442]: I0317 18:00:18.188355 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/724823bc622a169087b53db9151c31c9-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-178\" (UID: \"724823bc622a169087b53db9151c31c9\") " pod="kube-system/kube-scheduler-ip-172-31-20-178" Mar 17 18:00:18.763328 kubelet[3442]: I0317 18:00:18.759553 3442 apiserver.go:52] "Watching apiserver" Mar 17 18:00:18.786425 kubelet[3442]: I0317 18:00:18.786383 3442 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:00:18.821650 sudo[3455]: pam_unix(sudo:session): session closed for user root Mar 17 18:00:18.963283 kubelet[3442]: E0317 18:00:18.960607 3442 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-178\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-178" Mar 17 18:00:19.018382 kubelet[3442]: I0317 18:00:19.016090 3442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-178" podStartSLOduration=1.016067671 podStartE2EDuration="1.016067671s" 
podCreationTimestamp="2025-03-17 18:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:00:18.99562681 +0000 UTC m=+1.458662516" watchObservedRunningTime="2025-03-17 18:00:19.016067671 +0000 UTC m=+1.479103394" Mar 17 18:00:19.049843 kubelet[3442]: I0317 18:00:19.049784 3442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-178" podStartSLOduration=1.049054933 podStartE2EDuration="1.049054933s" podCreationTimestamp="2025-03-17 18:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:00:19.016594679 +0000 UTC m=+1.479630384" watchObservedRunningTime="2025-03-17 18:00:19.049054933 +0000 UTC m=+1.512090640" Mar 17 18:00:19.052498 kubelet[3442]: I0317 18:00:19.052137 3442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-178" podStartSLOduration=1.052120333 podStartE2EDuration="1.052120333s" podCreationTimestamp="2025-03-17 18:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:00:19.050567145 +0000 UTC m=+1.513602840" watchObservedRunningTime="2025-03-17 18:00:19.052120333 +0000 UTC m=+1.515156038" Mar 17 18:00:21.032675 sudo[2262]: pam_unix(sudo:session): session closed for user root Mar 17 18:00:21.056408 sshd[2261]: Connection closed by 139.178.89.65 port 57256 Mar 17 18:00:21.057712 sshd-session[2259]: pam_unix(sshd:session): session closed for user core Mar 17 18:00:21.062239 systemd[1]: sshd@8-172.31.20.178:22-139.178.89.65:57256.service: Deactivated successfully. Mar 17 18:00:21.066476 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 17 18:00:21.066736 systemd[1]: session-9.scope: Consumed 5.890s CPU time, 230.5M memory peak. Mar 17 18:00:21.070315 systemd-logind[1891]: Session 9 logged out. Waiting for processes to exit. Mar 17 18:00:21.072080 systemd-logind[1891]: Removed session 9. Mar 17 18:00:30.147411 kubelet[3442]: I0317 18:00:30.147376 3442 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:00:30.147883 containerd[1918]: time="2025-03-17T18:00:30.147791579Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 18:00:30.148186 kubelet[3442]: I0317 18:00:30.147995 3442 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:00:31.176278 kubelet[3442]: I0317 18:00:31.176212 3442 topology_manager.go:215] "Topology Admit Handler" podUID="2b5a22bd-b82c-49e9-b225-e8fb182b304c" podNamespace="kube-system" podName="kube-proxy-smdcv" Mar 17 18:00:31.177247 kubelet[3442]: I0317 18:00:31.177099 3442 topology_manager.go:215] "Topology Admit Handler" podUID="802fd80f-7bae-4e90-a87a-7d931a6f3649" podNamespace="kube-system" podName="cilium-825ws" Mar 17 18:00:31.181519 kubelet[3442]: W0317 18:00:31.181488 3442 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-20-178" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-178' and this object Mar 17 18:00:31.181732 kubelet[3442]: E0317 18:00:31.181535 3442 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-20-178" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-178' and this object Mar 17 
18:00:31.185699 kubelet[3442]: I0317 18:00:31.185666 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-cilium-cgroup\") pod \"cilium-825ws\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") " pod="kube-system/cilium-825ws" Mar 17 18:00:31.185699 kubelet[3442]: I0317 18:00:31.185709 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-lib-modules\") pod \"cilium-825ws\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") " pod="kube-system/cilium-825ws" Mar 17 18:00:31.186061 kubelet[3442]: I0317 18:00:31.185732 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-xtables-lock\") pod \"cilium-825ws\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") " pod="kube-system/cilium-825ws" Mar 17 18:00:31.186061 kubelet[3442]: I0317 18:00:31.185755 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2b5a22bd-b82c-49e9-b225-e8fb182b304c-kube-proxy\") pod \"kube-proxy-smdcv\" (UID: \"2b5a22bd-b82c-49e9-b225-e8fb182b304c\") " pod="kube-system/kube-proxy-smdcv" Mar 17 18:00:31.186061 kubelet[3442]: I0317 18:00:31.185776 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b5a22bd-b82c-49e9-b225-e8fb182b304c-lib-modules\") pod \"kube-proxy-smdcv\" (UID: \"2b5a22bd-b82c-49e9-b225-e8fb182b304c\") " pod="kube-system/kube-proxy-smdcv" Mar 17 18:00:31.186061 kubelet[3442]: I0317 18:00:31.185798 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-5zhw8\" (UniqueName: \"kubernetes.io/projected/2b5a22bd-b82c-49e9-b225-e8fb182b304c-kube-api-access-5zhw8\") pod \"kube-proxy-smdcv\" (UID: \"2b5a22bd-b82c-49e9-b225-e8fb182b304c\") " pod="kube-system/kube-proxy-smdcv" Mar 17 18:00:31.186061 kubelet[3442]: I0317 18:00:31.185822 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/802fd80f-7bae-4e90-a87a-7d931a6f3649-clustermesh-secrets\") pod \"cilium-825ws\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") " pod="kube-system/cilium-825ws" Mar 17 18:00:31.186298 kubelet[3442]: I0317 18:00:31.186008 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-host-proc-sys-net\") pod \"cilium-825ws\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") " pod="kube-system/cilium-825ws" Mar 17 18:00:31.186298 kubelet[3442]: I0317 18:00:31.186042 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-host-proc-sys-kernel\") pod \"cilium-825ws\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") " pod="kube-system/cilium-825ws" Mar 17 18:00:31.186298 kubelet[3442]: I0317 18:00:31.186064 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/802fd80f-7bae-4e90-a87a-7d931a6f3649-hubble-tls\") pod \"cilium-825ws\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") " pod="kube-system/cilium-825ws" Mar 17 18:00:31.186298 kubelet[3442]: I0317 18:00:31.186087 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-cni-path\") pod \"cilium-825ws\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") " pod="kube-system/cilium-825ws" Mar 17 18:00:31.186298 kubelet[3442]: I0317 18:00:31.186111 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b5a22bd-b82c-49e9-b225-e8fb182b304c-xtables-lock\") pod \"kube-proxy-smdcv\" (UID: \"2b5a22bd-b82c-49e9-b225-e8fb182b304c\") " pod="kube-system/kube-proxy-smdcv" Mar 17 18:00:31.186298 kubelet[3442]: I0317 18:00:31.186134 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/802fd80f-7bae-4e90-a87a-7d931a6f3649-cilium-config-path\") pod \"cilium-825ws\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") " pod="kube-system/cilium-825ws" Mar 17 18:00:31.186725 kubelet[3442]: I0317 18:00:31.186160 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-etc-cni-netd\") pod \"cilium-825ws\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") " pod="kube-system/cilium-825ws" Mar 17 18:00:31.186725 kubelet[3442]: I0317 18:00:31.186185 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-hostproc\") pod \"cilium-825ws\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") " pod="kube-system/cilium-825ws" Mar 17 18:00:31.186725 kubelet[3442]: I0317 18:00:31.186208 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v59hs\" (UniqueName: \"kubernetes.io/projected/802fd80f-7bae-4e90-a87a-7d931a6f3649-kube-api-access-v59hs\") pod \"cilium-825ws\" (UID: 
\"802fd80f-7bae-4e90-a87a-7d931a6f3649\") " pod="kube-system/cilium-825ws" Mar 17 18:00:31.186725 kubelet[3442]: I0317 18:00:31.186236 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-cilium-run\") pod \"cilium-825ws\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") " pod="kube-system/cilium-825ws" Mar 17 18:00:31.186725 kubelet[3442]: I0317 18:00:31.186284 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-bpf-maps\") pod \"cilium-825ws\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") " pod="kube-system/cilium-825ws" Mar 17 18:00:31.202624 systemd[1]: Created slice kubepods-besteffort-pod2b5a22bd_b82c_49e9_b225_e8fb182b304c.slice - libcontainer container kubepods-besteffort-pod2b5a22bd_b82c_49e9_b225_e8fb182b304c.slice. Mar 17 18:00:31.225946 systemd[1]: Created slice kubepods-burstable-pod802fd80f_7bae_4e90_a87a_7d931a6f3649.slice - libcontainer container kubepods-burstable-pod802fd80f_7bae_4e90_a87a_7d931a6f3649.slice. Mar 17 18:00:31.321998 kubelet[3442]: I0317 18:00:31.319182 3442 topology_manager.go:215] "Topology Admit Handler" podUID="6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc" podNamespace="kube-system" podName="cilium-operator-599987898-drmcj" Mar 17 18:00:31.348920 systemd[1]: Created slice kubepods-besteffort-pod6a0b737b_3cf8_4e9f_a2f8_fcde55f091fc.slice - libcontainer container kubepods-besteffort-pod6a0b737b_3cf8_4e9f_a2f8_fcde55f091fc.slice. 
Mar 17 18:00:31.490136 kubelet[3442]: I0317 18:00:31.489964 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lttpr\" (UniqueName: \"kubernetes.io/projected/6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc-kube-api-access-lttpr\") pod \"cilium-operator-599987898-drmcj\" (UID: \"6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc\") " pod="kube-system/cilium-operator-599987898-drmcj" Mar 17 18:00:31.490136 kubelet[3442]: I0317 18:00:31.490033 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc-cilium-config-path\") pod \"cilium-operator-599987898-drmcj\" (UID: \"6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc\") " pod="kube-system/cilium-operator-599987898-drmcj" Mar 17 18:00:31.516065 containerd[1918]: time="2025-03-17T18:00:31.516018443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-smdcv,Uid:2b5a22bd-b82c-49e9-b225-e8fb182b304c,Namespace:kube-system,Attempt:0,}" Mar 17 18:00:31.573594 containerd[1918]: time="2025-03-17T18:00:31.573087061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:00:31.573594 containerd[1918]: time="2025-03-17T18:00:31.573144339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:00:31.573594 containerd[1918]: time="2025-03-17T18:00:31.573160074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:00:31.573594 containerd[1918]: time="2025-03-17T18:00:31.573247153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:00:31.598490 systemd[1]: Started cri-containerd-8401a2458f72e4b48e70812c7b194534570d75f81c25d1b962e59c4b489daf89.scope - libcontainer container 8401a2458f72e4b48e70812c7b194534570d75f81c25d1b962e59c4b489daf89. Mar 17 18:00:31.661454 containerd[1918]: time="2025-03-17T18:00:31.661417238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-smdcv,Uid:2b5a22bd-b82c-49e9-b225-e8fb182b304c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8401a2458f72e4b48e70812c7b194534570d75f81c25d1b962e59c4b489daf89\"" Mar 17 18:00:31.679535 containerd[1918]: time="2025-03-17T18:00:31.679461906Z" level=info msg="CreateContainer within sandbox \"8401a2458f72e4b48e70812c7b194534570d75f81c25d1b962e59c4b489daf89\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:00:31.718157 containerd[1918]: time="2025-03-17T18:00:31.718104328Z" level=info msg="CreateContainer within sandbox \"8401a2458f72e4b48e70812c7b194534570d75f81c25d1b962e59c4b489daf89\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c3c5f43d3b01425deeacd7683bed38cf3e21f857096e31816ede8807598eb4ea\"" Mar 17 18:00:31.720292 containerd[1918]: time="2025-03-17T18:00:31.718777428Z" level=info msg="StartContainer for \"c3c5f43d3b01425deeacd7683bed38cf3e21f857096e31816ede8807598eb4ea\"" Mar 17 18:00:31.754482 systemd[1]: Started cri-containerd-c3c5f43d3b01425deeacd7683bed38cf3e21f857096e31816ede8807598eb4ea.scope - libcontainer container c3c5f43d3b01425deeacd7683bed38cf3e21f857096e31816ede8807598eb4ea. 
Mar 17 18:00:31.796775 containerd[1918]: time="2025-03-17T18:00:31.796729364Z" level=info msg="StartContainer for \"c3c5f43d3b01425deeacd7683bed38cf3e21f857096e31816ede8807598eb4ea\" returns successfully"
Mar 17 18:00:32.007357 kubelet[3442]: I0317 18:00:32.006847 3442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-smdcv" podStartSLOduration=1.006826471 podStartE2EDuration="1.006826471s" podCreationTimestamp="2025-03-17 18:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:00:32.006568734 +0000 UTC m=+14.469604437" watchObservedRunningTime="2025-03-17 18:00:32.006826471 +0000 UTC m=+14.469862174"
Mar 17 18:00:32.136725 containerd[1918]: time="2025-03-17T18:00:32.136525572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-825ws,Uid:802fd80f-7bae-4e90-a87a-7d931a6f3649,Namespace:kube-system,Attempt:0,}"
Mar 17 18:00:32.178083 containerd[1918]: time="2025-03-17T18:00:32.177939497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:00:32.178083 containerd[1918]: time="2025-03-17T18:00:32.178028863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:00:32.178083 containerd[1918]: time="2025-03-17T18:00:32.178051579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:00:32.178713 containerd[1918]: time="2025-03-17T18:00:32.178245828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:00:32.202492 systemd[1]: Started cri-containerd-7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5.scope - libcontainer container 7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5.
Mar 17 18:00:32.234712 containerd[1918]: time="2025-03-17T18:00:32.234668885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-825ws,Uid:802fd80f-7bae-4e90-a87a-7d931a6f3649,Namespace:kube-system,Attempt:0,} returns sandbox id \"7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5\""
Mar 17 18:00:32.237570 containerd[1918]: time="2025-03-17T18:00:32.237496937Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 18:00:32.256476 containerd[1918]: time="2025-03-17T18:00:32.256023804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-drmcj,Uid:6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc,Namespace:kube-system,Attempt:0,}"
Mar 17 18:00:32.324163 containerd[1918]: time="2025-03-17T18:00:32.323726750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:00:32.324163 containerd[1918]: time="2025-03-17T18:00:32.323808904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:00:32.324163 containerd[1918]: time="2025-03-17T18:00:32.323833382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:00:32.324163 containerd[1918]: time="2025-03-17T18:00:32.323950210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:00:32.364592 systemd[1]: Started cri-containerd-bc8b8426dc5dda9cc2bd41ae4c071974e73621baaa72dad22fe59a2f64a95a2e.scope - libcontainer container bc8b8426dc5dda9cc2bd41ae4c071974e73621baaa72dad22fe59a2f64a95a2e.
Mar 17 18:00:32.437757 containerd[1918]: time="2025-03-17T18:00:32.437714246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-drmcj,Uid:6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc8b8426dc5dda9cc2bd41ae4c071974e73621baaa72dad22fe59a2f64a95a2e\""
Mar 17 18:00:42.517567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795997152.mount: Deactivated successfully.
Mar 17 18:00:45.545118 containerd[1918]: time="2025-03-17T18:00:45.545003750Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:00:45.546177 containerd[1918]: time="2025-03-17T18:00:45.546122171Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 17 18:00:45.548217 containerd[1918]: time="2025-03-17T18:00:45.548119582Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:00:45.550303 containerd[1918]: time="2025-03-17T18:00:45.550231484Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.312337318s"
Mar 17 18:00:45.550303 containerd[1918]: time="2025-03-17T18:00:45.550284167Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 17 18:00:45.551807 containerd[1918]: time="2025-03-17T18:00:45.551775427Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 17 18:00:45.574523 containerd[1918]: time="2025-03-17T18:00:45.573713072Z" level=info msg="CreateContainer within sandbox \"7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:00:45.671387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050486264.mount: Deactivated successfully.
Mar 17 18:00:45.676600 containerd[1918]: time="2025-03-17T18:00:45.676550529Z" level=info msg="CreateContainer within sandbox \"7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242\""
Mar 17 18:00:45.678112 containerd[1918]: time="2025-03-17T18:00:45.677082889Z" level=info msg="StartContainer for \"72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242\""
Mar 17 18:00:45.829829 systemd[1]: Started cri-containerd-72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242.scope - libcontainer container 72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242.
Mar 17 18:00:45.892540 containerd[1918]: time="2025-03-17T18:00:45.892492691Z" level=info msg="StartContainer for \"72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242\" returns successfully"
Mar 17 18:00:45.904441 systemd[1]: cri-containerd-72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242.scope: Deactivated successfully.
Mar 17 18:00:46.044254 containerd[1918]: time="2025-03-17T18:00:46.019647167Z" level=info msg="shim disconnected" id=72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242 namespace=k8s.io
Mar 17 18:00:46.044535 containerd[1918]: time="2025-03-17T18:00:46.044278348Z" level=warning msg="cleaning up after shim disconnected" id=72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242 namespace=k8s.io
Mar 17 18:00:46.044535 containerd[1918]: time="2025-03-17T18:00:46.044300313Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:00:46.088800 containerd[1918]: time="2025-03-17T18:00:46.088577815Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:00:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 18:00:46.135488 containerd[1918]: time="2025-03-17T18:00:46.135253399Z" level=info msg="CreateContainer within sandbox \"7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:00:46.180796 containerd[1918]: time="2025-03-17T18:00:46.180744595Z" level=info msg="CreateContainer within sandbox \"7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a\""
Mar 17 18:00:46.181671 containerd[1918]: time="2025-03-17T18:00:46.181422306Z" level=info msg="StartContainer for \"5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a\""
Mar 17 18:00:46.211475 systemd[1]: Started cri-containerd-5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a.scope - libcontainer container 5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a.
Mar 17 18:00:46.251664 containerd[1918]: time="2025-03-17T18:00:46.251504717Z" level=info msg="StartContainer for \"5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a\" returns successfully"
Mar 17 18:00:46.267871 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:00:46.268747 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 18:00:46.269192 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 17 18:00:46.278189 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 18:00:46.278536 systemd[1]: cri-containerd-5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a.scope: Deactivated successfully.
Mar 17 18:00:46.367765 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 18:00:46.382227 containerd[1918]: time="2025-03-17T18:00:46.382161862Z" level=info msg="shim disconnected" id=5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a namespace=k8s.io
Mar 17 18:00:46.382227 containerd[1918]: time="2025-03-17T18:00:46.382220546Z" level=warning msg="cleaning up after shim disconnected" id=5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a namespace=k8s.io
Mar 17 18:00:46.382227 containerd[1918]: time="2025-03-17T18:00:46.382232218Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:00:46.665779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242-rootfs.mount: Deactivated successfully.
Mar 17 18:00:47.141497 containerd[1918]: time="2025-03-17T18:00:47.141457755Z" level=info msg="CreateContainer within sandbox \"7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:00:47.186429 containerd[1918]: time="2025-03-17T18:00:47.186389507Z" level=info msg="CreateContainer within sandbox \"7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b\""
Mar 17 18:00:47.189311 containerd[1918]: time="2025-03-17T18:00:47.187018862Z" level=info msg="StartContainer for \"acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b\""
Mar 17 18:00:47.242531 systemd[1]: run-containerd-runc-k8s.io-acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b-runc.LWl5pA.mount: Deactivated successfully.
Mar 17 18:00:47.254659 systemd[1]: Started cri-containerd-acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b.scope - libcontainer container acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b.
Mar 17 18:00:47.326926 containerd[1918]: time="2025-03-17T18:00:47.326879262Z" level=info msg="StartContainer for \"acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b\" returns successfully"
Mar 17 18:00:47.331598 systemd[1]: cri-containerd-acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b.scope: Deactivated successfully.
Mar 17 18:00:47.361486 containerd[1918]: time="2025-03-17T18:00:47.361402889Z" level=info msg="shim disconnected" id=acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b namespace=k8s.io
Mar 17 18:00:47.361486 containerd[1918]: time="2025-03-17T18:00:47.361480475Z" level=warning msg="cleaning up after shim disconnected" id=acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b namespace=k8s.io
Mar 17 18:00:47.361486 containerd[1918]: time="2025-03-17T18:00:47.361493074Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:00:47.665524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b-rootfs.mount: Deactivated successfully.
Mar 17 18:00:48.151785 containerd[1918]: time="2025-03-17T18:00:48.151582310Z" level=info msg="CreateContainer within sandbox \"7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:00:48.188133 containerd[1918]: time="2025-03-17T18:00:48.187201182Z" level=info msg="CreateContainer within sandbox \"7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830\""
Mar 17 18:00:48.190300 containerd[1918]: time="2025-03-17T18:00:48.188489256Z" level=info msg="StartContainer for \"dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830\""
Mar 17 18:00:48.284509 systemd[1]: Started cri-containerd-dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830.scope - libcontainer container dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830.
Mar 17 18:00:48.381317 systemd[1]: cri-containerd-dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830.scope: Deactivated successfully.
Mar 17 18:00:48.400576 containerd[1918]: time="2025-03-17T18:00:48.400529349Z" level=info msg="StartContainer for \"dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830\" returns successfully"
Mar 17 18:00:48.432841 containerd[1918]: time="2025-03-17T18:00:48.432574131Z" level=info msg="shim disconnected" id=dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830 namespace=k8s.io
Mar 17 18:00:48.432841 containerd[1918]: time="2025-03-17T18:00:48.432654571Z" level=warning msg="cleaning up after shim disconnected" id=dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830 namespace=k8s.io
Mar 17 18:00:48.432841 containerd[1918]: time="2025-03-17T18:00:48.432666963Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:00:48.665718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830-rootfs.mount: Deactivated successfully.
Mar 17 18:00:49.154001 containerd[1918]: time="2025-03-17T18:00:49.153123062Z" level=info msg="CreateContainer within sandbox \"7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:00:49.200177 containerd[1918]: time="2025-03-17T18:00:49.198700888Z" level=info msg="CreateContainer within sandbox \"7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01\""
Mar 17 18:00:49.199143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3717660286.mount: Deactivated successfully.
Mar 17 18:00:49.201400 containerd[1918]: time="2025-03-17T18:00:49.201240349Z" level=info msg="StartContainer for \"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01\""
Mar 17 18:00:49.249762 systemd[1]: Started cri-containerd-3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01.scope - libcontainer container 3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01.
Mar 17 18:00:49.293377 containerd[1918]: time="2025-03-17T18:00:49.292042520Z" level=info msg="StartContainer for \"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01\" returns successfully"
Mar 17 18:00:49.536323 kubelet[3442]: I0317 18:00:49.534776 3442 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Mar 17 18:00:49.725806 kubelet[3442]: I0317 18:00:49.725753 3442 topology_manager.go:215] "Topology Admit Handler" podUID="86a0c950-68ef-4119-a173-2395c37d3b0a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xd8sd"
Mar 17 18:00:49.728161 kubelet[3442]: I0317 18:00:49.728117 3442 topology_manager.go:215] "Topology Admit Handler" podUID="8f7bb410-21fc-444c-920c-2ae109631683" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fz4bk"
Mar 17 18:00:49.737700 kubelet[3442]: I0317 18:00:49.737657 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpbxf\" (UniqueName: \"kubernetes.io/projected/8f7bb410-21fc-444c-920c-2ae109631683-kube-api-access-rpbxf\") pod \"coredns-7db6d8ff4d-fz4bk\" (UID: \"8f7bb410-21fc-444c-920c-2ae109631683\") " pod="kube-system/coredns-7db6d8ff4d-fz4bk"
Mar 17 18:00:49.739184 kubelet[3442]: I0317 18:00:49.737710 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86a0c950-68ef-4119-a173-2395c37d3b0a-config-volume\") pod \"coredns-7db6d8ff4d-xd8sd\" (UID: \"86a0c950-68ef-4119-a173-2395c37d3b0a\") " pod="kube-system/coredns-7db6d8ff4d-xd8sd"
Mar 17 18:00:49.739184 kubelet[3442]: I0317 18:00:49.737765 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd49z\" (UniqueName: \"kubernetes.io/projected/86a0c950-68ef-4119-a173-2395c37d3b0a-kube-api-access-kd49z\") pod \"coredns-7db6d8ff4d-xd8sd\" (UID: \"86a0c950-68ef-4119-a173-2395c37d3b0a\") " pod="kube-system/coredns-7db6d8ff4d-xd8sd"
Mar 17 18:00:49.739184 kubelet[3442]: I0317 18:00:49.737858 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f7bb410-21fc-444c-920c-2ae109631683-config-volume\") pod \"coredns-7db6d8ff4d-fz4bk\" (UID: \"8f7bb410-21fc-444c-920c-2ae109631683\") " pod="kube-system/coredns-7db6d8ff4d-fz4bk"
Mar 17 18:00:49.745470 systemd[1]: Created slice kubepods-burstable-pod86a0c950_68ef_4119_a173_2395c37d3b0a.slice - libcontainer container kubepods-burstable-pod86a0c950_68ef_4119_a173_2395c37d3b0a.slice.
Mar 17 18:00:49.761393 systemd[1]: Created slice kubepods-burstable-pod8f7bb410_21fc_444c_920c_2ae109631683.slice - libcontainer container kubepods-burstable-pod8f7bb410_21fc_444c_920c_2ae109631683.slice.
Mar 17 18:00:50.059368 containerd[1918]: time="2025-03-17T18:00:50.058989494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xd8sd,Uid:86a0c950-68ef-4119-a173-2395c37d3b0a,Namespace:kube-system,Attempt:0,}"
Mar 17 18:00:50.072431 containerd[1918]: time="2025-03-17T18:00:50.072170299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fz4bk,Uid:8f7bb410-21fc-444c-920c-2ae109631683,Namespace:kube-system,Attempt:0,}"
Mar 17 18:00:51.035971 containerd[1918]: time="2025-03-17T18:00:51.035916189Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:00:51.037807 containerd[1918]: time="2025-03-17T18:00:51.037660632Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 17 18:00:51.039987 containerd[1918]: time="2025-03-17T18:00:51.039940047Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:00:51.042569 containerd[1918]: time="2025-03-17T18:00:51.042288340Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.490454919s"
Mar 17 18:00:51.042569 containerd[1918]: time="2025-03-17T18:00:51.042333312Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 17 18:00:51.047007 containerd[1918]: time="2025-03-17T18:00:51.046932830Z" level=info msg="CreateContainer within sandbox \"bc8b8426dc5dda9cc2bd41ae4c071974e73621baaa72dad22fe59a2f64a95a2e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 18:00:51.091440 containerd[1918]: time="2025-03-17T18:00:51.091395805Z" level=info msg="CreateContainer within sandbox \"bc8b8426dc5dda9cc2bd41ae4c071974e73621baaa72dad22fe59a2f64a95a2e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de\""
Mar 17 18:00:51.093532 containerd[1918]: time="2025-03-17T18:00:51.092607274Z" level=info msg="StartContainer for \"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de\""
Mar 17 18:00:51.149483 systemd[1]: Started cri-containerd-95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de.scope - libcontainer container 95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de.
Mar 17 18:00:51.199901 containerd[1918]: time="2025-03-17T18:00:51.199821350Z" level=info msg="StartContainer for \"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de\" returns successfully"
Mar 17 18:00:51.668940 systemd[1]: run-containerd-runc-k8s.io-95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de-runc.AVC93o.mount: Deactivated successfully.
Mar 17 18:00:52.335931 kubelet[3442]: I0317 18:00:52.330913 3442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-825ws" podStartSLOduration=8.016137122 podStartE2EDuration="21.330888834s" podCreationTimestamp="2025-03-17 18:00:31 +0000 UTC" firstStartedPulling="2025-03-17 18:00:32.236783291 +0000 UTC m=+14.699818972" lastFinishedPulling="2025-03-17 18:00:45.551534982 +0000 UTC m=+28.014570684" observedRunningTime="2025-03-17 18:00:50.239937895 +0000 UTC m=+32.702973600" watchObservedRunningTime="2025-03-17 18:00:52.330888834 +0000 UTC m=+34.793924561"
Mar 17 18:00:52.335931 kubelet[3442]: I0317 18:00:52.333913 3442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-drmcj" podStartSLOduration=2.730738053 podStartE2EDuration="21.333895957s" podCreationTimestamp="2025-03-17 18:00:31 +0000 UTC" firstStartedPulling="2025-03-17 18:00:32.440884864 +0000 UTC m=+14.903920560" lastFinishedPulling="2025-03-17 18:00:51.044042765 +0000 UTC m=+33.507078464" observedRunningTime="2025-03-17 18:00:52.333759272 +0000 UTC m=+34.796794973" watchObservedRunningTime="2025-03-17 18:00:52.333895957 +0000 UTC m=+34.796931667"
Mar 17 18:00:55.360572 systemd-networkd[1741]: cilium_host: Link UP
Mar 17 18:00:55.362006 systemd-networkd[1741]: cilium_net: Link UP
Mar 17 18:00:55.363666 systemd-networkd[1741]: cilium_net: Gained carrier
Mar 17 18:00:55.363911 systemd-networkd[1741]: cilium_host: Gained carrier
Mar 17 18:00:55.370065 (udev-worker)[4274]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 18:00:55.370868 (udev-worker)[4276]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 18:00:55.920578 systemd-networkd[1741]: cilium_host: Gained IPv6LL
Mar 17 18:00:56.041832 systemd-networkd[1741]: cilium_vxlan: Link UP
Mar 17 18:00:56.041842 systemd-networkd[1741]: cilium_vxlan: Gained carrier
Mar 17 18:00:56.368771 systemd-networkd[1741]: cilium_net: Gained IPv6LL
Mar 17 18:00:57.456560 systemd-networkd[1741]: cilium_vxlan: Gained IPv6LL
Mar 17 18:00:59.970323 ntpd[1876]: Listen normally on 7 cilium_host 192.168.0.56:123
Mar 17 18:00:59.970423 ntpd[1876]: Listen normally on 8 cilium_net [fe80::8e2:83ff:fed3:7275%4]:123
Mar 17 18:00:59.970938 ntpd[1876]: 17 Mar 18:00:59 ntpd[1876]: Listen normally on 7 cilium_host 192.168.0.56:123
Mar 17 18:00:59.970938 ntpd[1876]: 17 Mar 18:00:59 ntpd[1876]: Listen normally on 8 cilium_net [fe80::8e2:83ff:fed3:7275%4]:123
Mar 17 18:00:59.970938 ntpd[1876]: 17 Mar 18:00:59 ntpd[1876]: Listen normally on 9 cilium_host [fe80::e467:2ff:fee1:6b67%5]:123
Mar 17 18:00:59.970938 ntpd[1876]: 17 Mar 18:00:59 ntpd[1876]: Listen normally on 10 cilium_vxlan [fe80::d85e:e0ff:fe8d:b146%6]:123
Mar 17 18:00:59.970485 ntpd[1876]: Listen normally on 9 cilium_host [fe80::e467:2ff:fee1:6b67%5]:123
Mar 17 18:00:59.970529 ntpd[1876]: Listen normally on 10 cilium_vxlan [fe80::d85e:e0ff:fe8d:b146%6]:123
Mar 17 18:01:03.987392 kernel: NET: Registered PF_ALG protocol family
Mar 17 18:01:05.700000 (udev-worker)[4366]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 18:01:05.700070 (udev-worker)[4606]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 18:01:05.712380 systemd-networkd[1741]: lxc_health: Link UP
Mar 17 18:01:05.716442 systemd-networkd[1741]: lxc_health: Gained carrier
Mar 17 18:01:06.314968 kernel: eth0: renamed from tmp6fa7c
Mar 17 18:01:06.318837 systemd-networkd[1741]: lxc9447f274200d: Link UP
Mar 17 18:01:06.320011 (udev-worker)[4618]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 18:01:06.323404 systemd-networkd[1741]: lxc9447f274200d: Gained carrier
Mar 17 18:01:06.375565 systemd-networkd[1741]: lxc34ebed5d64ff: Link UP
Mar 17 18:01:06.378435 kernel: eth0: renamed from tmp6713b
Mar 17 18:01:06.389029 systemd-networkd[1741]: lxc34ebed5d64ff: Gained carrier
Mar 17 18:01:07.569841 systemd-networkd[1741]: lxc_health: Gained IPv6LL
Mar 17 18:01:07.760419 systemd-networkd[1741]: lxc9447f274200d: Gained IPv6LL
Mar 17 18:01:08.214057 systemd-networkd[1741]: lxc34ebed5d64ff: Gained IPv6LL
Mar 17 18:01:10.971412 ntpd[1876]: Listen normally on 11 lxc_health [fe80::e499:17ff:fee8:5627%8]:123
Mar 17 18:01:10.972155 ntpd[1876]: 17 Mar 18:01:10 ntpd[1876]: Listen normally on 11 lxc_health [fe80::e499:17ff:fee8:5627%8]:123
Mar 17 18:01:10.972155 ntpd[1876]: 17 Mar 18:01:10 ntpd[1876]: Listen normally on 12 lxc9447f274200d [fe80::d822:ebff:fe4c:6370%10]:123
Mar 17 18:01:10.972155 ntpd[1876]: 17 Mar 18:01:10 ntpd[1876]: Listen normally on 13 lxc34ebed5d64ff [fe80::3c5a:a1ff:fe5d:cbe2%12]:123
Mar 17 18:01:10.971509 ntpd[1876]: Listen normally on 12 lxc9447f274200d [fe80::d822:ebff:fe4c:6370%10]:123
Mar 17 18:01:10.971558 ntpd[1876]: Listen normally on 13 lxc34ebed5d64ff [fe80::3c5a:a1ff:fe5d:cbe2%12]:123
Mar 17 18:01:11.490680 systemd[1]: Started sshd@9-172.31.20.178:22-139.178.89.65:47708.service - OpenSSH per-connection server daemon (139.178.89.65:47708).
Mar 17 18:01:11.733471 sshd[4649]: Accepted publickey for core from 139.178.89.65 port 47708 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:01:11.735791 sshd-session[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:11.750735 systemd-logind[1891]: New session 10 of user core.
Mar 17 18:01:11.760963 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 17 18:01:12.718685 containerd[1918]: time="2025-03-17T18:01:12.712480097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:01:12.718685 containerd[1918]: time="2025-03-17T18:01:12.712567707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:01:12.718685 containerd[1918]: time="2025-03-17T18:01:12.712593165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:01:12.718685 containerd[1918]: time="2025-03-17T18:01:12.712809546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:01:12.761426 containerd[1918]: time="2025-03-17T18:01:12.759554574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:01:12.761426 containerd[1918]: time="2025-03-17T18:01:12.759633289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:01:12.761426 containerd[1918]: time="2025-03-17T18:01:12.759653633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:01:12.761426 containerd[1918]: time="2025-03-17T18:01:12.759798381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:01:12.850428 systemd[1]: Started cri-containerd-6713b68af7622afb863bda6e9dadbf384416f3fc510b6900abae5a54b973f4d8.scope - libcontainer container 6713b68af7622afb863bda6e9dadbf384416f3fc510b6900abae5a54b973f4d8.
Mar 17 18:01:12.880093 systemd[1]: Started cri-containerd-6fa7c5c33c3bfc4a2576f67305cedb9f1d99411c87fb13c44629bd076c1eb090.scope - libcontainer container 6fa7c5c33c3bfc4a2576f67305cedb9f1d99411c87fb13c44629bd076c1eb090.
Mar 17 18:01:13.012816 sshd[4651]: Connection closed by 139.178.89.65 port 47708
Mar 17 18:01:13.013823 sshd-session[4649]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:13.024471 systemd[1]: sshd@9-172.31.20.178:22-139.178.89.65:47708.service: Deactivated successfully.
Mar 17 18:01:13.025026 systemd-logind[1891]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:01:13.031097 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:01:13.042792 systemd-logind[1891]: Removed session 10.
Mar 17 18:01:13.151778 containerd[1918]: time="2025-03-17T18:01:13.151729826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xd8sd,Uid:86a0c950-68ef-4119-a173-2395c37d3b0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6713b68af7622afb863bda6e9dadbf384416f3fc510b6900abae5a54b973f4d8\""
Mar 17 18:01:13.164648 containerd[1918]: time="2025-03-17T18:01:13.164594991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fz4bk,Uid:8f7bb410-21fc-444c-920c-2ae109631683,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fa7c5c33c3bfc4a2576f67305cedb9f1d99411c87fb13c44629bd076c1eb090\""
Mar 17 18:01:13.197109 containerd[1918]: time="2025-03-17T18:01:13.196738145Z" level=info msg="CreateContainer within sandbox \"6fa7c5c33c3bfc4a2576f67305cedb9f1d99411c87fb13c44629bd076c1eb090\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:01:13.197109 containerd[1918]: time="2025-03-17T18:01:13.197006367Z" level=info msg="CreateContainer within sandbox \"6713b68af7622afb863bda6e9dadbf384416f3fc510b6900abae5a54b973f4d8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:01:13.261783 containerd[1918]: time="2025-03-17T18:01:13.261725886Z" level=info msg="CreateContainer within sandbox \"6713b68af7622afb863bda6e9dadbf384416f3fc510b6900abae5a54b973f4d8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"76b57fc7f83d92ecac7808941da9033fcbd98334099c2d6e91d5511714d78b64\""
Mar 17 18:01:13.263858 containerd[1918]: time="2025-03-17T18:01:13.263680637Z" level=info msg="StartContainer for \"76b57fc7f83d92ecac7808941da9033fcbd98334099c2d6e91d5511714d78b64\""
Mar 17 18:01:13.268771 containerd[1918]: time="2025-03-17T18:01:13.268720119Z" level=info msg="CreateContainer within sandbox \"6fa7c5c33c3bfc4a2576f67305cedb9f1d99411c87fb13c44629bd076c1eb090\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"673a7b8d5cbbb56428fab5287314325e92d173315cb54230f9259fe76c07f45c\""
Mar 17 18:01:13.273333 containerd[1918]: time="2025-03-17T18:01:13.271091835Z" level=info msg="StartContainer for \"673a7b8d5cbbb56428fab5287314325e92d173315cb54230f9259fe76c07f45c\""
Mar 17 18:01:13.369500 systemd[1]: Started cri-containerd-76b57fc7f83d92ecac7808941da9033fcbd98334099c2d6e91d5511714d78b64.scope - libcontainer container 76b57fc7f83d92ecac7808941da9033fcbd98334099c2d6e91d5511714d78b64.
Mar 17 18:01:13.381983 systemd[1]: Started cri-containerd-673a7b8d5cbbb56428fab5287314325e92d173315cb54230f9259fe76c07f45c.scope - libcontainer container 673a7b8d5cbbb56428fab5287314325e92d173315cb54230f9259fe76c07f45c.
Mar 17 18:01:13.463587 containerd[1918]: time="2025-03-17T18:01:13.463542938Z" level=info msg="StartContainer for \"76b57fc7f83d92ecac7808941da9033fcbd98334099c2d6e91d5511714d78b64\" returns successfully"
Mar 17 18:01:13.468700 containerd[1918]: time="2025-03-17T18:01:13.468602079Z" level=info msg="StartContainer for \"673a7b8d5cbbb56428fab5287314325e92d173315cb54230f9259fe76c07f45c\" returns successfully"
Mar 17 18:01:13.738956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1510432008.mount: Deactivated successfully.
Mar 17 18:01:14.371451 kubelet[3442]: I0317 18:01:14.371383 3442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xd8sd" podStartSLOduration=43.371361433 podStartE2EDuration="43.371361433s" podCreationTimestamp="2025-03-17 18:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:01:14.371060555 +0000 UTC m=+56.834096263" watchObservedRunningTime="2025-03-17 18:01:14.371361433 +0000 UTC m=+56.834397136"
Mar 17 18:01:14.372776 kubelet[3442]: I0317 18:01:14.371500 3442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fz4bk" podStartSLOduration=43.371491927 podStartE2EDuration="43.371491927s" podCreationTimestamp="2025-03-17 18:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:01:14.353300918 +0000 UTC m=+56.816336623" watchObservedRunningTime="2025-03-17 18:01:14.371491927 +0000 UTC m=+56.834527631"
Mar 17 18:01:18.057797 systemd[1]: Started sshd@10-172.31.20.178:22-139.178.89.65:47712.service - OpenSSH per-connection server daemon (139.178.89.65:47712).
Mar 17 18:01:18.282253 sshd[4831]: Accepted publickey for core from 139.178.89.65 port 47712 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:01:18.286631 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:18.295902 systemd-logind[1891]: New session 11 of user core.
Mar 17 18:01:18.307292 systemd[1]: Started session-11.scope - Session 11 of User core.
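The kubelet's pod_startup_latency_tracker entries above report a podStartSLOduration of roughly 43.37 s for both coredns pods. That interval is essentially observedRunningTime minus podCreationTimestamp, which can be recomputed from the logged fields. A small sketch (an illustrative parser, not kubelet code; Python datetimes carry microseconds, so the nanosecond digits in the log are truncated):

```python
import re
from datetime import datetime, timezone

def parse_k8s_time(ts: str) -> datetime:
    """Parse a kubelet timestamp like '2025-03-17 18:01:14.371060555 +0000 UTC'.

    The fractional part may have nine digits; only the first six survive,
    since datetime resolution is microseconds.
    """
    m = re.match(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})(?:\.(\d+))?", ts)
    base = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    frac = (m.group(2) or "0")[:6].ljust(6, "0")
    return base.replace(microsecond=int(frac))

# Values copied from the coredns-7db6d8ff4d-xd8sd entry above.
created = parse_k8s_time("2025-03-17 18:00:31 +0000 UTC")
running = parse_k8s_time("2025-03-17 18:01:14.371060555 +0000 UTC")
startup_seconds = (running - created).total_seconds()
```

The recomputed value (about 43.371061 s) differs from the logged podStartSLOduration=43.371361433 by a few hundred microseconds, since the tracker samples its own clock rather than subtracting these two fields exactly.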
Mar 17 18:01:18.761702 sshd[4838]: Connection closed by 139.178.89.65 port 47712
Mar 17 18:01:18.764635 sshd-session[4831]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:18.769029 systemd[1]: sshd@10-172.31.20.178:22-139.178.89.65:47712.service: Deactivated successfully.
Mar 17 18:01:18.772400 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:01:18.773305 systemd-logind[1891]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:01:18.774814 systemd-logind[1891]: Removed session 11.
Mar 17 18:01:23.804645 systemd[1]: Started sshd@11-172.31.20.178:22-139.178.89.65:34956.service - OpenSSH per-connection server daemon (139.178.89.65:34956).
Mar 17 18:01:23.997768 sshd[4860]: Accepted publickey for core from 139.178.89.65 port 34956 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:01:24.001963 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:24.019407 systemd-logind[1891]: New session 12 of user core.
Mar 17 18:01:24.028540 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 17 18:01:24.253394 sshd[4862]: Connection closed by 139.178.89.65 port 34956
Mar 17 18:01:24.255898 sshd-session[4860]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:24.260569 systemd[1]: sshd@11-172.31.20.178:22-139.178.89.65:34956.service: Deactivated successfully.
Mar 17 18:01:24.265570 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:01:24.267002 systemd-logind[1891]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:01:24.270022 systemd-logind[1891]: Removed session 12.
Mar 17 18:01:29.303878 systemd[1]: Started sshd@12-172.31.20.178:22-139.178.89.65:34962.service - OpenSSH per-connection server daemon (139.178.89.65:34962).
Mar 17 18:01:29.503935 sshd[4875]: Accepted publickey for core from 139.178.89.65 port 34962 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:01:29.506774 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:29.513789 systemd-logind[1891]: New session 13 of user core.
Mar 17 18:01:29.519713 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 17 18:01:29.740526 sshd[4877]: Connection closed by 139.178.89.65 port 34962
Mar 17 18:01:29.742257 sshd-session[4875]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:29.746708 systemd-logind[1891]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:01:29.747790 systemd[1]: sshd@12-172.31.20.178:22-139.178.89.65:34962.service: Deactivated successfully.
Mar 17 18:01:29.750484 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:01:29.751740 systemd-logind[1891]: Removed session 13.
Mar 17 18:01:29.776613 systemd[1]: Started sshd@13-172.31.20.178:22-139.178.89.65:34970.service - OpenSSH per-connection server daemon (139.178.89.65:34970).
Mar 17 18:01:29.953825 sshd[4890]: Accepted publickey for core from 139.178.89.65 port 34970 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:01:29.955374 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:29.960470 systemd-logind[1891]: New session 14 of user core.
Mar 17 18:01:29.964440 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 17 18:01:30.342759 sshd[4892]: Connection closed by 139.178.89.65 port 34970
Mar 17 18:01:30.343450 sshd-session[4890]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:30.347936 systemd-logind[1891]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:01:30.350271 systemd[1]: sshd@13-172.31.20.178:22-139.178.89.65:34970.service: Deactivated successfully.
Mar 17 18:01:30.355084 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:01:30.360016 systemd-logind[1891]: Removed session 14.
Mar 17 18:01:30.383664 systemd[1]: Started sshd@14-172.31.20.178:22-139.178.89.65:34984.service - OpenSSH per-connection server daemon (139.178.89.65:34984).
Mar 17 18:01:30.565834 sshd[4902]: Accepted publickey for core from 139.178.89.65 port 34984 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:01:30.567869 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:30.577958 systemd-logind[1891]: New session 15 of user core.
Mar 17 18:01:30.581077 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 18:01:30.807475 sshd[4904]: Connection closed by 139.178.89.65 port 34984
Mar 17 18:01:30.809569 sshd-session[4902]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:30.816842 systemd[1]: sshd@14-172.31.20.178:22-139.178.89.65:34984.service: Deactivated successfully.
Mar 17 18:01:30.819691 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:01:30.821179 systemd-logind[1891]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:01:30.822907 systemd-logind[1891]: Removed session 15.
Mar 17 18:01:35.850659 systemd[1]: Started sshd@15-172.31.20.178:22-139.178.89.65:57876.service - OpenSSH per-connection server daemon (139.178.89.65:57876).
Mar 17 18:01:36.035201 sshd[4918]: Accepted publickey for core from 139.178.89.65 port 57876 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:01:36.038210 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:36.047062 systemd-logind[1891]: New session 16 of user core.
Mar 17 18:01:36.054483 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 18:01:36.247062 sshd[4920]: Connection closed by 139.178.89.65 port 57876
Mar 17 18:01:36.248667 sshd-session[4918]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:36.251711 systemd[1]: sshd@15-172.31.20.178:22-139.178.89.65:57876.service: Deactivated successfully.
Mar 17 18:01:36.254131 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:01:36.256195 systemd-logind[1891]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:01:36.257879 systemd-logind[1891]: Removed session 16.
Mar 17 18:01:41.284814 systemd[1]: Started sshd@16-172.31.20.178:22-139.178.89.65:34500.service - OpenSSH per-connection server daemon (139.178.89.65:34500).
Mar 17 18:01:41.468769 sshd[4935]: Accepted publickey for core from 139.178.89.65 port 34500 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:01:41.469583 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:41.476385 systemd-logind[1891]: New session 17 of user core.
Mar 17 18:01:41.483503 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 18:01:41.717119 sshd[4937]: Connection closed by 139.178.89.65 port 34500
Mar 17 18:01:41.794590 sshd-session[4935]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:41.803797 systemd-logind[1891]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:01:41.806440 systemd[1]: sshd@16-172.31.20.178:22-139.178.89.65:34500.service: Deactivated successfully.
Mar 17 18:01:41.810044 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:01:41.812051 systemd-logind[1891]: Removed session 17.
Mar 17 18:01:46.766823 systemd[1]: Started sshd@17-172.31.20.178:22-139.178.89.65:34516.service - OpenSSH per-connection server daemon (139.178.89.65:34516).
Mar 17 18:01:47.001890 sshd[4949]: Accepted publickey for core from 139.178.89.65 port 34516 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:01:47.003955 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:47.014453 systemd-logind[1891]: New session 18 of user core.
Mar 17 18:01:47.026508 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 18:01:47.256577 sshd[4951]: Connection closed by 139.178.89.65 port 34516
Mar 17 18:01:47.259451 sshd-session[4949]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:47.271559 systemd-logind[1891]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:01:47.272172 systemd[1]: sshd@17-172.31.20.178:22-139.178.89.65:34516.service: Deactivated successfully.
Mar 17 18:01:47.277240 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:01:47.281192 systemd-logind[1891]: Removed session 18.
Mar 17 18:01:52.299213 systemd[1]: Started sshd@18-172.31.20.178:22-139.178.89.65:36538.service - OpenSSH per-connection server daemon (139.178.89.65:36538).
Mar 17 18:01:52.508746 sshd[4962]: Accepted publickey for core from 139.178.89.65 port 36538 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:01:52.517220 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:52.541747 systemd-logind[1891]: New session 19 of user core.
Mar 17 18:01:52.555504 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 18:01:52.762393 sshd[4964]: Connection closed by 139.178.89.65 port 36538
Mar 17 18:01:52.763468 sshd-session[4962]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:52.768457 systemd-logind[1891]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:01:52.769748 systemd[1]: sshd@18-172.31.20.178:22-139.178.89.65:36538.service: Deactivated successfully.
Mar 17 18:01:52.773517 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:01:52.774752 systemd-logind[1891]: Removed session 19.
Mar 17 18:01:52.800679 systemd[1]: Started sshd@19-172.31.20.178:22-139.178.89.65:36542.service - OpenSSH per-connection server daemon (139.178.89.65:36542).
Mar 17 18:01:52.975313 sshd[4977]: Accepted publickey for core from 139.178.89.65 port 36542 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:01:52.977126 sshd-session[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:52.984997 systemd-logind[1891]: New session 20 of user core.
Mar 17 18:01:52.995487 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 17 18:01:53.735756 sshd[4979]: Connection closed by 139.178.89.65 port 36542
Mar 17 18:01:53.737164 sshd-session[4977]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:53.744553 systemd[1]: sshd@19-172.31.20.178:22-139.178.89.65:36542.service: Deactivated successfully.
Mar 17 18:01:53.747214 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:01:53.749962 systemd-logind[1891]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:01:53.751900 systemd-logind[1891]: Removed session 20.
Mar 17 18:01:53.775967 systemd[1]: Started sshd@20-172.31.20.178:22-139.178.89.65:36550.service - OpenSSH per-connection server daemon (139.178.89.65:36550).
Mar 17 18:01:53.985940 sshd[4989]: Accepted publickey for core from 139.178.89.65 port 36550 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:01:53.987021 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:53.992412 systemd-logind[1891]: New session 21 of user core.
Mar 17 18:01:54.000572 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 17 18:01:56.541902 sshd[4991]: Connection closed by 139.178.89.65 port 36550
Mar 17 18:01:56.544546 sshd-session[4989]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:56.561880 systemd[1]: sshd@20-172.31.20.178:22-139.178.89.65:36550.service: Deactivated successfully.
Mar 17 18:01:56.567575 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 18:01:56.605503 systemd-logind[1891]: Session 21 logged out. Waiting for processes to exit.
Mar 17 18:01:56.617997 systemd[1]: Started sshd@21-172.31.20.178:22-139.178.89.65:36556.service - OpenSSH per-connection server daemon (139.178.89.65:36556).
Mar 17 18:01:56.620099 systemd-logind[1891]: Removed session 21.
Mar 17 18:01:56.821479 sshd[5008]: Accepted publickey for core from 139.178.89.65 port 36556 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:01:56.823918 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:56.831338 systemd-logind[1891]: New session 22 of user core.
Mar 17 18:01:56.836567 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 17 18:01:57.425779 sshd[5011]: Connection closed by 139.178.89.65 port 36556
Mar 17 18:01:57.427546 sshd-session[5008]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:57.433012 systemd[1]: sshd@21-172.31.20.178:22-139.178.89.65:36556.service: Deactivated successfully.
Mar 17 18:01:57.436009 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 18:01:57.437295 systemd-logind[1891]: Session 22 logged out. Waiting for processes to exit.
Mar 17 18:01:57.438641 systemd-logind[1891]: Removed session 22.
Mar 17 18:01:57.463844 systemd[1]: Started sshd@22-172.31.20.178:22-139.178.89.65:36562.service - OpenSSH per-connection server daemon (139.178.89.65:36562).
Mar 17 18:01:57.630784 sshd[5021]: Accepted publickey for core from 139.178.89.65 port 36562 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:01:57.634577 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:57.642816 systemd-logind[1891]: New session 23 of user core.
Mar 17 18:01:57.654518 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 17 18:01:57.910489 sshd[5023]: Connection closed by 139.178.89.65 port 36562
Mar 17 18:01:57.911563 sshd-session[5021]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:57.917433 systemd[1]: sshd@22-172.31.20.178:22-139.178.89.65:36562.service: Deactivated successfully.
Mar 17 18:01:57.920642 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 18:01:57.921708 systemd-logind[1891]: Session 23 logged out. Waiting for processes to exit.
Mar 17 18:01:57.924009 systemd-logind[1891]: Removed session 23.
Mar 17 18:02:02.955753 systemd[1]: Started sshd@23-172.31.20.178:22-139.178.89.65:32932.service - OpenSSH per-connection server daemon (139.178.89.65:32932).
Mar 17 18:02:03.200398 sshd[5036]: Accepted publickey for core from 139.178.89.65 port 32932 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:02:03.202339 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:03.216650 systemd-logind[1891]: New session 24 of user core.
Mar 17 18:02:03.225785 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 17 18:02:03.527588 sshd[5038]: Connection closed by 139.178.89.65 port 32932
Mar 17 18:02:03.529871 sshd-session[5036]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:03.535399 systemd-logind[1891]: Session 24 logged out. Waiting for processes to exit.
Mar 17 18:02:03.536608 systemd[1]: sshd@23-172.31.20.178:22-139.178.89.65:32932.service: Deactivated successfully.
Mar 17 18:02:03.539723 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 18:02:03.540868 systemd-logind[1891]: Removed session 24.
Mar 17 18:02:08.567651 systemd[1]: Started sshd@24-172.31.20.178:22-139.178.89.65:32938.service - OpenSSH per-connection server daemon (139.178.89.65:32938).
Mar 17 18:02:08.785734 sshd[5055]: Accepted publickey for core from 139.178.89.65 port 32938 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:02:08.788753 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:08.797036 systemd-logind[1891]: New session 25 of user core.
Mar 17 18:02:08.803574 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 17 18:02:09.005609 sshd[5057]: Connection closed by 139.178.89.65 port 32938
Mar 17 18:02:09.006735 sshd-session[5055]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:09.013097 systemd[1]: sshd@24-172.31.20.178:22-139.178.89.65:32938.service: Deactivated successfully.
Mar 17 18:02:09.017021 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 18:02:09.018292 systemd-logind[1891]: Session 25 logged out. Waiting for processes to exit.
Mar 17 18:02:09.020312 systemd-logind[1891]: Removed session 25.
Mar 17 18:02:14.046693 systemd[1]: Started sshd@25-172.31.20.178:22-139.178.89.65:40312.service - OpenSSH per-connection server daemon (139.178.89.65:40312).
Mar 17 18:02:14.219139 sshd[5069]: Accepted publickey for core from 139.178.89.65 port 40312 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:02:14.220758 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:14.227698 systemd-logind[1891]: New session 26 of user core.
Mar 17 18:02:14.237511 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 17 18:02:14.431285 sshd[5071]: Connection closed by 139.178.89.65 port 40312
Mar 17 18:02:14.432603 sshd-session[5069]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:14.437562 systemd-logind[1891]: Session 26 logged out. Waiting for processes to exit.
Mar 17 18:02:14.438735 systemd[1]: sshd@25-172.31.20.178:22-139.178.89.65:40312.service: Deactivated successfully.
Mar 17 18:02:14.442643 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 18:02:14.443957 systemd-logind[1891]: Removed session 26.
Mar 17 18:02:19.468681 systemd[1]: Started sshd@26-172.31.20.178:22-139.178.89.65:40326.service - OpenSSH per-connection server daemon (139.178.89.65:40326).
Mar 17 18:02:19.865300 sshd[5085]: Accepted publickey for core from 139.178.89.65 port 40326 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:02:19.866875 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:19.873842 systemd-logind[1891]: New session 27 of user core.
Mar 17 18:02:19.883515 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 17 18:02:20.104848 sshd[5087]: Connection closed by 139.178.89.65 port 40326
Mar 17 18:02:20.106148 sshd-session[5085]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:20.116796 systemd[1]: sshd@26-172.31.20.178:22-139.178.89.65:40326.service: Deactivated successfully.
Mar 17 18:02:20.125871 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 18:02:20.129119 systemd-logind[1891]: Session 27 logged out. Waiting for processes to exit.
Mar 17 18:02:20.153824 systemd[1]: Started sshd@27-172.31.20.178:22-139.178.89.65:40336.service - OpenSSH per-connection server daemon (139.178.89.65:40336).
Mar 17 18:02:20.166670 systemd-logind[1891]: Removed session 27.
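The stretch of log above is a long run of short SSH sessions: for each, systemd-logind prints "New session N of user core." and later "Removed session N.". Pairing those two messages gives the lifetime of each session. A small sketch (a hypothetical analysis helper, not part of systemd), fed with two lines copied verbatim from this log; the syslog-style stamps omit the year, so strptime defaults it, which is fine for within-day differences:

```python
import re
from datetime import datetime

# Journal lines for session 19, copied from the log above.
LINES = [
    "Mar 17 18:01:52.541747 systemd-logind[1891]: New session 19 of user core.",
    "Mar 17 18:01:52.774752 systemd-logind[1891]: Removed session 19.",
]

STAMP = "%b %d %H:%M:%S.%f"  # e.g. "Mar 17 18:01:52.541747"

def session_lifetimes(lines):
    """Pair 'New session N' with 'Removed session N' and return seconds between them."""
    opened, lifetimes = {}, {}
    for line in lines:
        ts = datetime.strptime(line[:22], STAMP)  # first 22 chars hold the stamp
        if m := re.search(r"New session (\d+)", line):
            opened[m.group(1)] = ts
        elif m := re.search(r"Removed session (\d+)", line):
            if m.group(1) in opened:
                lifetimes[m.group(1)] = (ts - opened[m.group(1)]).total_seconds()
    return lifetimes
```

Run over the whole journal, this would show most of these sessions lasting well under a second, consistent with some automated client polling the host.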
Mar 17 18:02:20.348343 sshd[5098]: Accepted publickey for core from 139.178.89.65 port 40336 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:02:20.350868 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:20.372280 systemd-logind[1891]: New session 28 of user core.
Mar 17 18:02:20.378818 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 17 18:02:23.444681 containerd[1918]: time="2025-03-17T18:02:23.444626792Z" level=info msg="StopContainer for \"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de\" with timeout 30 (s)"
Mar 17 18:02:23.449167 containerd[1918]: time="2025-03-17T18:02:23.445745783Z" level=info msg="Stop container \"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de\" with signal terminated"
Mar 17 18:02:23.517280 systemd[1]: run-containerd-runc-k8s.io-3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01-runc.u50xhF.mount: Deactivated successfully.
Mar 17 18:02:23.526182 systemd[1]: cri-containerd-95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de.scope: Deactivated successfully.
Mar 17 18:02:23.590783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de-rootfs.mount: Deactivated successfully.
Mar 17 18:02:23.602203 containerd[1918]: time="2025-03-17T18:02:23.601738039Z" level=info msg="shim disconnected" id=95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de namespace=k8s.io
Mar 17 18:02:23.602203 containerd[1918]: time="2025-03-17T18:02:23.601835467Z" level=warning msg="cleaning up after shim disconnected" id=95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de namespace=k8s.io
Mar 17 18:02:23.602203 containerd[1918]: time="2025-03-17T18:02:23.601988543Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:02:23.631619 containerd[1918]: time="2025-03-17T18:02:23.631355113Z" level=info msg="StopContainer for \"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de\" returns successfully"
Mar 17 18:02:23.632236 containerd[1918]: time="2025-03-17T18:02:23.632204892Z" level=info msg="StopPodSandbox for \"bc8b8426dc5dda9cc2bd41ae4c071974e73621baaa72dad22fe59a2f64a95a2e\""
Mar 17 18:02:23.655871 containerd[1918]: time="2025-03-17T18:02:23.644236454Z" level=info msg="Container to stop \"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:02:23.662190 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc8b8426dc5dda9cc2bd41ae4c071974e73621baaa72dad22fe59a2f64a95a2e-shm.mount: Deactivated successfully.
Mar 17 18:02:23.672450 systemd[1]: cri-containerd-bc8b8426dc5dda9cc2bd41ae4c071974e73621baaa72dad22fe59a2f64a95a2e.scope: Deactivated successfully.
Mar 17 18:02:23.691549 containerd[1918]: time="2025-03-17T18:02:23.691409427Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:02:23.727459 containerd[1918]: time="2025-03-17T18:02:23.727210595Z" level=info msg="StopContainer for \"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01\" with timeout 2 (s)"
Mar 17 18:02:23.730330 containerd[1918]: time="2025-03-17T18:02:23.729856807Z" level=info msg="Stop container \"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01\" with signal terminated"
Mar 17 18:02:23.733213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc8b8426dc5dda9cc2bd41ae4c071974e73621baaa72dad22fe59a2f64a95a2e-rootfs.mount: Deactivated successfully.
Mar 17 18:02:23.746662 systemd-networkd[1741]: lxc_health: Link DOWN
Mar 17 18:02:23.746671 systemd-networkd[1741]: lxc_health: Lost carrier
Mar 17 18:02:23.749809 containerd[1918]: time="2025-03-17T18:02:23.749121451Z" level=info msg="shim disconnected" id=bc8b8426dc5dda9cc2bd41ae4c071974e73621baaa72dad22fe59a2f64a95a2e namespace=k8s.io
Mar 17 18:02:23.751026 containerd[1918]: time="2025-03-17T18:02:23.750081880Z" level=warning msg="cleaning up after shim disconnected" id=bc8b8426dc5dda9cc2bd41ae4c071974e73621baaa72dad22fe59a2f64a95a2e namespace=k8s.io
Mar 17 18:02:23.751026 containerd[1918]: time="2025-03-17T18:02:23.750112116Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:02:23.781415 systemd[1]: cri-containerd-3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01.scope: Deactivated successfully.
Mar 17 18:02:23.782407 systemd[1]: cri-containerd-3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01.scope: Consumed 9.212s CPU time, 188.5M memory peak, 68.8M read from disk, 13.3M written to disk.
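When the cilium-agent container's scope exits above, systemd's resource accounting prints a one-line summary ("Consumed 9.212s CPU time, 188.5M memory peak, ..."). A minimal sketch (a hypothetical helper, not a systemd tool) of pulling those figures out of such a message; the unit suffixes are kept as strings rather than normalized to bytes:

```python
import re

# The accounting line systemd printed for the exiting container scope, copied from above.
LINE = (
    "cri-containerd-3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01.scope: "
    "Consumed 9.212s CPU time, 188.5M memory peak, 68.8M read from disk, 13.3M written to disk."
)

def parse_accounting(line):
    """Extract CPU seconds and the sized quantities from a systemd 'Consumed ...' message."""
    cpu = float(re.search(r"Consumed ([\d.]+)s CPU time", line).group(1))
    mem = re.search(r"([\d.]+[KMGT]?) memory peak", line).group(1)
    read = re.search(r"([\d.]+[KMGT]?) read from disk", line).group(1)
    written = re.search(r"([\d.]+[KMGT]?) written to disk", line).group(1)
    return {"cpu_s": cpu, "mem_peak": mem, "disk_read": read, "disk_written": written}
```

Note the exact wording of this summary depends on the systemd version and which accounting options (CPUAccounting, MemoryAccounting, IOAccounting) are enabled, so the regexes are tied to the format seen in this log.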
Mar 17 18:02:23.820140 containerd[1918]: time="2025-03-17T18:02:23.820091023Z" level=info msg="TearDown network for sandbox \"bc8b8426dc5dda9cc2bd41ae4c071974e73621baaa72dad22fe59a2f64a95a2e\" successfully"
Mar 17 18:02:23.820451 containerd[1918]: time="2025-03-17T18:02:23.820313967Z" level=info msg="StopPodSandbox for \"bc8b8426dc5dda9cc2bd41ae4c071974e73621baaa72dad22fe59a2f64a95a2e\" returns successfully"
Mar 17 18:02:23.859281 containerd[1918]: time="2025-03-17T18:02:23.859181785Z" level=info msg="shim disconnected" id=3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01 namespace=k8s.io
Mar 17 18:02:23.859281 containerd[1918]: time="2025-03-17T18:02:23.859246331Z" level=warning msg="cleaning up after shim disconnected" id=3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01 namespace=k8s.io
Mar 17 18:02:23.859281 containerd[1918]: time="2025-03-17T18:02:23.859284404Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:02:23.890419 containerd[1918]: time="2025-03-17T18:02:23.890375582Z" level=info msg="StopContainer for \"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01\" returns successfully"
Mar 17 18:02:23.891279 containerd[1918]: time="2025-03-17T18:02:23.891008617Z" level=info msg="StopPodSandbox for \"7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5\""
Mar 17 18:02:23.891279 containerd[1918]: time="2025-03-17T18:02:23.891066841Z" level=info msg="Container to stop \"72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:02:23.891279 containerd[1918]: time="2025-03-17T18:02:23.891146736Z" level=info msg="Container to stop \"acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:02:23.891279 containerd[1918]: time="2025-03-17T18:02:23.891160995Z" level=info msg="Container to stop \"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:02:23.891279 containerd[1918]: time="2025-03-17T18:02:23.891174974Z" level=info msg="Container to stop \"5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:02:23.891279 containerd[1918]: time="2025-03-17T18:02:23.891186600Z" level=info msg="Container to stop \"dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:02:23.900346 systemd[1]: cri-containerd-7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5.scope: Deactivated successfully.
Mar 17 18:02:23.943777 containerd[1918]: time="2025-03-17T18:02:23.943708502Z" level=info msg="shim disconnected" id=7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5 namespace=k8s.io
Mar 17 18:02:23.943777 containerd[1918]: time="2025-03-17T18:02:23.943773652Z" level=warning msg="cleaning up after shim disconnected" id=7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5 namespace=k8s.io
Mar 17 18:02:23.943777 containerd[1918]: time="2025-03-17T18:02:23.943787081Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:02:23.965105 containerd[1918]: time="2025-03-17T18:02:23.965022824Z" level=info msg="TearDown network for sandbox \"7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5\" successfully"
Mar 17 18:02:23.965105 containerd[1918]: time="2025-03-17T18:02:23.965062290Z" level=info msg="StopPodSandbox for \"7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5\" returns successfully"
Mar 17 18:02:24.042985 kubelet[3442]: I0317 18:02:24.042854 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-etc-cni-netd\") pod \"802fd80f-7bae-4e90-a87a-7d931a6f3649\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") "
Mar 17 18:02:24.042985 kubelet[3442]: I0317 18:02:24.042917 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v59hs\" (UniqueName: \"kubernetes.io/projected/802fd80f-7bae-4e90-a87a-7d931a6f3649-kube-api-access-v59hs\") pod \"802fd80f-7bae-4e90-a87a-7d931a6f3649\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") "
Mar 17 18:02:24.042985 kubelet[3442]: I0317 18:02:24.042944 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc-cilium-config-path\") pod \"6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc\" (UID: \"6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc\") "
Mar 17 18:02:24.044176 kubelet[3442]: I0317 18:02:24.044148 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/802fd80f-7bae-4e90-a87a-7d931a6f3649-clustermesh-secrets\") pod \"802fd80f-7bae-4e90-a87a-7d931a6f3649\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") "
Mar 17 18:02:24.044298 kubelet[3442]: I0317 18:02:24.044192 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-cni-path\") pod \"802fd80f-7bae-4e90-a87a-7d931a6f3649\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") "
Mar 17 18:02:24.044298 kubelet[3442]: I0317 18:02:24.044214 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-host-proc-sys-kernel\") pod \"802fd80f-7bae-4e90-a87a-7d931a6f3649\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") "
Mar 17 18:02:24.044298 kubelet[3442]: I0317 18:02:24.044236 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-bpf-maps\") pod \"802fd80f-7bae-4e90-a87a-7d931a6f3649\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") "
Mar 17 18:02:24.044298 kubelet[3442]: I0317 18:02:24.044256 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-cilium-cgroup\") pod \"802fd80f-7bae-4e90-a87a-7d931a6f3649\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") "
Mar 17 18:02:24.044298 kubelet[3442]: I0317 18:02:24.044292 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-host-proc-sys-net\") pod \"802fd80f-7bae-4e90-a87a-7d931a6f3649\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") "
Mar 17 18:02:24.044599 kubelet[3442]: I0317 18:02:24.044320 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/802fd80f-7bae-4e90-a87a-7d931a6f3649-hubble-tls\") pod \"802fd80f-7bae-4e90-a87a-7d931a6f3649\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") "
Mar 17 18:02:24.044599 kubelet[3442]: I0317 18:02:24.044343 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-lib-modules\") pod \"802fd80f-7bae-4e90-a87a-7d931a6f3649\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") "
Mar 17 18:02:24.044599 kubelet[3442]: I0317 18:02:24.044363 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-hostproc\") pod \"802fd80f-7bae-4e90-a87a-7d931a6f3649\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") "
Mar 17 18:02:24.044599 kubelet[3442]: I0317 18:02:24.044386 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-cilium-run\") pod \"802fd80f-7bae-4e90-a87a-7d931a6f3649\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") "
Mar 17 18:02:24.044599 kubelet[3442]: I0317 18:02:24.044410 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lttpr\" (UniqueName: \"kubernetes.io/projected/6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc-kube-api-access-lttpr\") pod \"6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc\" (UID: \"6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc\") "
Mar 17 18:02:24.044599 kubelet[3442]: I0317 18:02:24.044434 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/802fd80f-7bae-4e90-a87a-7d931a6f3649-cilium-config-path\") pod \"802fd80f-7bae-4e90-a87a-7d931a6f3649\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") "
Mar 17 18:02:24.044761 kubelet[3442]: I0317 18:02:24.044456 3442 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-xtables-lock\") pod \"802fd80f-7bae-4e90-a87a-7d931a6f3649\" (UID: \"802fd80f-7bae-4e90-a87a-7d931a6f3649\") "
Mar 17 18:02:24.046884 kubelet[3442]: I0317 18:02:24.044533 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "802fd80f-7bae-4e90-a87a-7d931a6f3649" (UID: "802fd80f-7bae-4e90-a87a-7d931a6f3649"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:02:24.054036 kubelet[3442]: I0317 18:02:24.053978 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc" (UID: "6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:02:24.060325 kubelet[3442]: I0317 18:02:24.059992 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802fd80f-7bae-4e90-a87a-7d931a6f3649-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "802fd80f-7bae-4e90-a87a-7d931a6f3649" (UID: "802fd80f-7bae-4e90-a87a-7d931a6f3649"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:02:24.060325 kubelet[3442]: I0317 18:02:24.060071 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-cni-path" (OuterVolumeSpecName: "cni-path") pod "802fd80f-7bae-4e90-a87a-7d931a6f3649" (UID: "802fd80f-7bae-4e90-a87a-7d931a6f3649"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:02:24.060325 kubelet[3442]: I0317 18:02:24.060098 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "802fd80f-7bae-4e90-a87a-7d931a6f3649" (UID: "802fd80f-7bae-4e90-a87a-7d931a6f3649"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:02:24.060325 kubelet[3442]: I0317 18:02:24.060119 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "802fd80f-7bae-4e90-a87a-7d931a6f3649" (UID: "802fd80f-7bae-4e90-a87a-7d931a6f3649"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:02:24.060325 kubelet[3442]: I0317 18:02:24.060139 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "802fd80f-7bae-4e90-a87a-7d931a6f3649" (UID: "802fd80f-7bae-4e90-a87a-7d931a6f3649"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:02:24.060671 kubelet[3442]: I0317 18:02:24.060157 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "802fd80f-7bae-4e90-a87a-7d931a6f3649" (UID: "802fd80f-7bae-4e90-a87a-7d931a6f3649"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:02:24.061854 kubelet[3442]: I0317 18:02:24.061239 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802fd80f-7bae-4e90-a87a-7d931a6f3649-kube-api-access-v59hs" (OuterVolumeSpecName: "kube-api-access-v59hs") pod "802fd80f-7bae-4e90-a87a-7d931a6f3649" (UID: "802fd80f-7bae-4e90-a87a-7d931a6f3649"). InnerVolumeSpecName "kube-api-access-v59hs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:02:24.061854 kubelet[3442]: I0317 18:02:24.061426 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "802fd80f-7bae-4e90-a87a-7d931a6f3649" (UID: "802fd80f-7bae-4e90-a87a-7d931a6f3649"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:02:24.061854 kubelet[3442]: I0317 18:02:24.061533 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "802fd80f-7bae-4e90-a87a-7d931a6f3649" (UID: "802fd80f-7bae-4e90-a87a-7d931a6f3649"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:02:24.061854 kubelet[3442]: I0317 18:02:24.061558 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "802fd80f-7bae-4e90-a87a-7d931a6f3649" (UID: "802fd80f-7bae-4e90-a87a-7d931a6f3649"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:02:24.061854 kubelet[3442]: I0317 18:02:24.061582 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-hostproc" (OuterVolumeSpecName: "hostproc") pod "802fd80f-7bae-4e90-a87a-7d931a6f3649" (UID: "802fd80f-7bae-4e90-a87a-7d931a6f3649"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:02:24.066515 kubelet[3442]: I0317 18:02:24.066478 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802fd80f-7bae-4e90-a87a-7d931a6f3649-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "802fd80f-7bae-4e90-a87a-7d931a6f3649" (UID: "802fd80f-7bae-4e90-a87a-7d931a6f3649"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:02:24.072568 kubelet[3442]: I0317 18:02:24.072481 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802fd80f-7bae-4e90-a87a-7d931a6f3649-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "802fd80f-7bae-4e90-a87a-7d931a6f3649" (UID: "802fd80f-7bae-4e90-a87a-7d931a6f3649"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:02:24.074582 kubelet[3442]: I0317 18:02:24.074491 3442 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc-kube-api-access-lttpr" (OuterVolumeSpecName: "kube-api-access-lttpr") pod "6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc" (UID: "6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc"). InnerVolumeSpecName "kube-api-access-lttpr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:02:24.147319 kubelet[3442]: I0317 18:02:24.147273 3442 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-host-proc-sys-kernel\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.147319 kubelet[3442]: I0317 18:02:24.147324 3442 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-cni-path\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.147555 kubelet[3442]: I0317 18:02:24.147337 3442 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-host-proc-sys-net\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.147555 kubelet[3442]: I0317 18:02:24.147349 3442 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/802fd80f-7bae-4e90-a87a-7d931a6f3649-hubble-tls\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.147555 kubelet[3442]: I0317 18:02:24.147361 3442 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-bpf-maps\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.147555 kubelet[3442]: I0317 18:02:24.147371 3442 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-cilium-cgroup\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.147555 kubelet[3442]: I0317 18:02:24.147388 3442 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-lib-modules\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 
18:02:24.147555 kubelet[3442]: I0317 18:02:24.147399 3442 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-hostproc\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.147555 kubelet[3442]: I0317 18:02:24.147410 3442 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-cilium-run\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.147555 kubelet[3442]: I0317 18:02:24.147422 3442 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/802fd80f-7bae-4e90-a87a-7d931a6f3649-cilium-config-path\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.147754 kubelet[3442]: I0317 18:02:24.147433 3442 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lttpr\" (UniqueName: \"kubernetes.io/projected/6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc-kube-api-access-lttpr\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.147754 kubelet[3442]: I0317 18:02:24.147445 3442 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-xtables-lock\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.147754 kubelet[3442]: I0317 18:02:24.147458 3442 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc-cilium-config-path\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.147754 kubelet[3442]: I0317 18:02:24.147472 3442 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/802fd80f-7bae-4e90-a87a-7d931a6f3649-clustermesh-secrets\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.147754 kubelet[3442]: I0317 
18:02:24.147483 3442 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/802fd80f-7bae-4e90-a87a-7d931a6f3649-etc-cni-netd\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.147754 kubelet[3442]: I0317 18:02:24.147495 3442 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v59hs\" (UniqueName: \"kubernetes.io/projected/802fd80f-7bae-4e90-a87a-7d931a6f3649-kube-api-access-v59hs\") on node \"ip-172-31-20-178\" DevicePath \"\"" Mar 17 18:02:24.504543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01-rootfs.mount: Deactivated successfully. Mar 17 18:02:24.505196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5-rootfs.mount: Deactivated successfully. Mar 17 18:02:24.505333 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7098a69ef046531dfa184cd0f682174f00d8268c09785f18c59208e18f0307b5-shm.mount: Deactivated successfully. Mar 17 18:02:24.505427 systemd[1]: var-lib-kubelet-pods-6a0b737b\x2d3cf8\x2d4e9f\x2da2f8\x2dfcde55f091fc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlttpr.mount: Deactivated successfully. Mar 17 18:02:24.505515 systemd[1]: var-lib-kubelet-pods-802fd80f\x2d7bae\x2d4e90\x2da87a\x2d7d931a6f3649-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv59hs.mount: Deactivated successfully. Mar 17 18:02:24.505604 systemd[1]: var-lib-kubelet-pods-802fd80f\x2d7bae\x2d4e90\x2da87a\x2d7d931a6f3649-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:02:24.505693 systemd[1]: var-lib-kubelet-pods-802fd80f\x2d7bae\x2d4e90\x2da87a\x2d7d931a6f3649-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 17 18:02:24.605534 systemd[1]: Removed slice kubepods-besteffort-pod6a0b737b_3cf8_4e9f_a2f8_fcde55f091fc.slice - libcontainer container kubepods-besteffort-pod6a0b737b_3cf8_4e9f_a2f8_fcde55f091fc.slice.
Mar 17 18:02:24.626306 kubelet[3442]: I0317 18:02:24.626251 3442 scope.go:117] "RemoveContainer" containerID="95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de"
Mar 17 18:02:24.655373 containerd[1918]: time="2025-03-17T18:02:24.654894707Z" level=info msg="RemoveContainer for \"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de\""
Mar 17 18:02:24.661359 systemd[1]: Removed slice kubepods-burstable-pod802fd80f_7bae_4e90_a87a_7d931a6f3649.slice - libcontainer container kubepods-burstable-pod802fd80f_7bae_4e90_a87a_7d931a6f3649.slice.
Mar 17 18:02:24.661548 systemd[1]: kubepods-burstable-pod802fd80f_7bae_4e90_a87a_7d931a6f3649.slice: Consumed 9.310s CPU time, 188.8M memory peak, 68.8M read from disk, 13.3M written to disk.
Mar 17 18:02:24.664223 containerd[1918]: time="2025-03-17T18:02:24.664182310Z" level=info msg="RemoveContainer for \"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de\" returns successfully"
Mar 17 18:02:24.664651 kubelet[3442]: I0317 18:02:24.664631 3442 scope.go:117] "RemoveContainer" containerID="95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de"
Mar 17 18:02:24.667786 containerd[1918]: time="2025-03-17T18:02:24.667141851Z" level=error msg="ContainerStatus for \"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de\": not found"
Mar 17 18:02:24.670109 kubelet[3442]: E0317 18:02:24.669829 3442 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de\": not found" containerID="95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de"
Mar 17 18:02:24.673803 kubelet[3442]: I0317 18:02:24.672825 3442 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de"} err="failed to get container status \"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de\": rpc error: code = NotFound desc = an error occurred when try to find container \"95a6c8c7fe17c4ba6c07c3e4d7fd77895a13d63381b416acc3d5e06eff4176de\": not found"
Mar 17 18:02:24.673803 kubelet[3442]: I0317 18:02:24.672991 3442 scope.go:117] "RemoveContainer" containerID="3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01"
Mar 17 18:02:24.675977 containerd[1918]: time="2025-03-17T18:02:24.675941850Z" level=info msg="RemoveContainer for \"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01\""
Mar 17 18:02:24.682840 containerd[1918]: time="2025-03-17T18:02:24.682798570Z" level=info msg="RemoveContainer for \"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01\" returns successfully"
Mar 17 18:02:24.683243 kubelet[3442]: I0317 18:02:24.683213 3442 scope.go:117] "RemoveContainer" containerID="dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830"
Mar 17 18:02:24.687980 containerd[1918]: time="2025-03-17T18:02:24.687830922Z" level=info msg="RemoveContainer for \"dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830\""
Mar 17 18:02:24.697536 containerd[1918]: time="2025-03-17T18:02:24.697296151Z" level=info msg="RemoveContainer for \"dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830\" returns successfully"
Mar 17 18:02:24.697965 kubelet[3442]: I0317 18:02:24.697929 3442 scope.go:117] "RemoveContainer" containerID="acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b"
Mar 17 18:02:24.699428 containerd[1918]: time="2025-03-17T18:02:24.699396622Z" level=info msg="RemoveContainer for \"acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b\""
Mar 17 18:02:24.707603 containerd[1918]: time="2025-03-17T18:02:24.707550867Z" level=info msg="RemoveContainer for \"acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b\" returns successfully"
Mar 17 18:02:24.708575 kubelet[3442]: I0317 18:02:24.708534 3442 scope.go:117] "RemoveContainer" containerID="5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a"
Mar 17 18:02:24.710834 containerd[1918]: time="2025-03-17T18:02:24.710805250Z" level=info msg="RemoveContainer for \"5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a\""
Mar 17 18:02:24.716784 containerd[1918]: time="2025-03-17T18:02:24.716738796Z" level=info msg="RemoveContainer for \"5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a\" returns successfully"
Mar 17 18:02:24.717615 kubelet[3442]: I0317 18:02:24.717585 3442 scope.go:117] "RemoveContainer" containerID="72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242"
Mar 17 18:02:24.719028 containerd[1918]: time="2025-03-17T18:02:24.718994455Z" level=info msg="RemoveContainer for \"72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242\""
Mar 17 18:02:24.724533 containerd[1918]: time="2025-03-17T18:02:24.724495122Z" level=info msg="RemoveContainer for \"72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242\" returns successfully"
Mar 17 18:02:24.724908 kubelet[3442]: I0317 18:02:24.724882 3442 scope.go:117] "RemoveContainer" containerID="3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01"
Mar 17 18:02:24.726709 containerd[1918]: time="2025-03-17T18:02:24.726671411Z" level=error msg="ContainerStatus for \"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01\": not found"
Mar 17 18:02:24.727133 kubelet[3442]: E0317 18:02:24.726964 3442 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01\": not found" containerID="3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01"
Mar 17 18:02:24.727133 kubelet[3442]: I0317 18:02:24.727103 3442 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01"} err="failed to get container status \"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01\": rpc error: code = NotFound desc = an error occurred when try to find container \"3bc6acede1f74f282a00cd699acfc7d8e3837b9e6f64b3a42ba3210a05b92f01\": not found"
Mar 17 18:02:24.727133 kubelet[3442]: I0317 18:02:24.727132 3442 scope.go:117] "RemoveContainer" containerID="dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830"
Mar 17 18:02:24.727610 containerd[1918]: time="2025-03-17T18:02:24.727448721Z" level=error msg="ContainerStatus for \"dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830\": not found"
Mar 17 18:02:24.727988 kubelet[3442]: E0317 18:02:24.727846 3442 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830\": not found" containerID="dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830"
Mar 17 18:02:24.727988 kubelet[3442]: I0317 18:02:24.727876 3442 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830"} err="failed to get container status \"dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd24e3d429cee008f4eed37c081b9da508eebeb0970f46dbe441b66a95d41830\": not found"
Mar 17 18:02:24.727988 kubelet[3442]: I0317 18:02:24.727963 3442 scope.go:117] "RemoveContainer" containerID="acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b"
Mar 17 18:02:24.729597 containerd[1918]: time="2025-03-17T18:02:24.728624242Z" level=error msg="ContainerStatus for \"acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b\": not found"
Mar 17 18:02:24.730660 kubelet[3442]: E0317 18:02:24.730601 3442 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b\": not found" containerID="acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b"
Mar 17 18:02:24.730742 kubelet[3442]: I0317 18:02:24.730666 3442 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b"} err="failed to get container status \"acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"acc4965c1f01504a6152e265c5f85e83d09d1f8f76724cd1c4972d220d698f0b\": not found"
Mar 17 18:02:24.730742 kubelet[3442]: I0317 18:02:24.730690 3442 scope.go:117] "RemoveContainer" containerID="5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a"
Mar 17 18:02:24.730910 containerd[1918]: time="2025-03-17T18:02:24.730865601Z" level=error msg="ContainerStatus for \"5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a\": not found"
Mar 17 18:02:24.731038 kubelet[3442]: E0317 18:02:24.731014 3442 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a\": not found" containerID="5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a"
Mar 17 18:02:24.731284 kubelet[3442]: I0317 18:02:24.731039 3442 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a"} err="failed to get container status \"5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b45f28450eb6f4c2af6624388f95b26ae2df6a310985e2e7dc78ce6cc7d1c0a\": not found"
Mar 17 18:02:24.731284 kubelet[3442]: I0317 18:02:24.731207 3442 scope.go:117] "RemoveContainer" containerID="72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242"
Mar 17 18:02:24.731474 containerd[1918]: time="2025-03-17T18:02:24.731438447Z" level=error msg="ContainerStatus for \"72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242\": not found"
Mar 17 18:02:24.731643 kubelet[3442]: E0317 18:02:24.731592 3442 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242\": not found" containerID="72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242"
Mar 17 18:02:24.731712 kubelet[3442]: I0317 18:02:24.731630 3442 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242"} err="failed to get container status \"72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242\": rpc error: code = NotFound desc = an error occurred when try to find container \"72f07dbba430791fae3d4b3696d1bcc1c31bb28c69a50d29c9b9bd204906e242\": not found"
Mar 17 18:02:25.309830 sshd[5101]: Connection closed by 139.178.89.65 port 40336
Mar 17 18:02:25.311312 sshd-session[5098]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:25.317235 systemd[1]: sshd@27-172.31.20.178:22-139.178.89.65:40336.service: Deactivated successfully.
Mar 17 18:02:25.321528 systemd[1]: session-28.scope: Deactivated successfully.
Mar 17 18:02:25.324478 systemd-logind[1891]: Session 28 logged out. Waiting for processes to exit.
Mar 17 18:02:25.326583 systemd-logind[1891]: Removed session 28.
Mar 17 18:02:25.346913 systemd[1]: Started sshd@28-172.31.20.178:22-139.178.89.65:60370.service - OpenSSH per-connection server daemon (139.178.89.65:60370).
Mar 17 18:02:25.536868 sshd[5267]: Accepted publickey for core from 139.178.89.65 port 60370 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:02:25.541571 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:25.548362 systemd-logind[1891]: New session 29 of user core.
Mar 17 18:02:25.555611 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 17 18:02:25.878294 kubelet[3442]: I0317 18:02:25.877136 3442 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc" path="/var/lib/kubelet/pods/6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc/volumes"
Mar 17 18:02:25.878294 kubelet[3442]: I0317 18:02:25.877727 3442 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802fd80f-7bae-4e90-a87a-7d931a6f3649" path="/var/lib/kubelet/pods/802fd80f-7bae-4e90-a87a-7d931a6f3649/volumes"
Mar 17 18:02:25.973605 ntpd[1876]: Deleting interface #11 lxc_health, fe80::e499:17ff:fee8:5627%8#123, interface stats: received=0, sent=0, dropped=0, active_time=75 secs
Mar 17 18:02:25.974670 ntpd[1876]: 17 Mar 18:02:25 ntpd[1876]: Deleting interface #11 lxc_health, fe80::e499:17ff:fee8:5627%8#123, interface stats: received=0, sent=0, dropped=0, active_time=75 secs
Mar 17 18:02:27.282301 sshd[5269]: Connection closed by 139.178.89.65 port 60370
Mar 17 18:02:27.283138 sshd-session[5267]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:27.292961 systemd[1]: sshd@28-172.31.20.178:22-139.178.89.65:60370.service: Deactivated successfully.
Mar 17 18:02:27.305973 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 18:02:27.312479 systemd-logind[1891]: Session 29 logged out. Waiting for processes to exit.
Mar 17 18:02:27.341467 systemd[1]: Started sshd@29-172.31.20.178:22-139.178.89.65:60386.service - OpenSSH per-connection server daemon (139.178.89.65:60386).
Mar 17 18:02:27.344460 systemd-logind[1891]: Removed session 29.
Mar 17 18:02:27.422765 kubelet[3442]: I0317 18:02:27.418016 3442 topology_manager.go:215] "Topology Admit Handler" podUID="e544dc85-8889-46f4-b942-3bde53a1d8a2" podNamespace="kube-system" podName="cilium-xvd6c"
Mar 17 18:02:27.428681 kubelet[3442]: E0317 18:02:27.426992 3442 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="802fd80f-7bae-4e90-a87a-7d931a6f3649" containerName="apply-sysctl-overwrites"
Mar 17 18:02:27.428681 kubelet[3442]: E0317 18:02:27.427045 3442 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="802fd80f-7bae-4e90-a87a-7d931a6f3649" containerName="clean-cilium-state"
Mar 17 18:02:27.428681 kubelet[3442]: E0317 18:02:27.427055 3442 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="802fd80f-7bae-4e90-a87a-7d931a6f3649" containerName="cilium-agent"
Mar 17 18:02:27.428681 kubelet[3442]: E0317 18:02:27.427066 3442 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc" containerName="cilium-operator"
Mar 17 18:02:27.428681 kubelet[3442]: E0317 18:02:27.427075 3442 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="802fd80f-7bae-4e90-a87a-7d931a6f3649" containerName="mount-cgroup"
Mar 17 18:02:27.428681 kubelet[3442]: E0317 18:02:27.427083 3442 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="802fd80f-7bae-4e90-a87a-7d931a6f3649" containerName="mount-bpf-fs"
Mar 17 18:02:27.428681 kubelet[3442]: I0317 18:02:27.427147 3442 memory_manager.go:354] "RemoveStaleState removing state" podUID="802fd80f-7bae-4e90-a87a-7d931a6f3649" containerName="cilium-agent"
Mar 17 18:02:27.428681 kubelet[3442]: I0317 18:02:27.427156 3442 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a0b737b-3cf8-4e9f-a2f8-fcde55f091fc" containerName="cilium-operator"
Mar 17 18:02:27.471113 systemd[1]: Created slice kubepods-burstable-pode544dc85_8889_46f4_b942_3bde53a1d8a2.slice - libcontainer container kubepods-burstable-pode544dc85_8889_46f4_b942_3bde53a1d8a2.slice.
Mar 17 18:02:27.480851 kubelet[3442]: I0317 18:02:27.480809 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e544dc85-8889-46f4-b942-3bde53a1d8a2-cni-path\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.481014 kubelet[3442]: I0317 18:02:27.480865 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e544dc85-8889-46f4-b942-3bde53a1d8a2-xtables-lock\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.481014 kubelet[3442]: I0317 18:02:27.480891 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e544dc85-8889-46f4-b942-3bde53a1d8a2-host-proc-sys-net\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.481014 kubelet[3442]: I0317 18:02:27.480916 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e544dc85-8889-46f4-b942-3bde53a1d8a2-bpf-maps\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.481014 kubelet[3442]: I0317 18:02:27.480939 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e544dc85-8889-46f4-b942-3bde53a1d8a2-host-proc-sys-kernel\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.481014 kubelet[3442]: I0317 18:02:27.480967 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e544dc85-8889-46f4-b942-3bde53a1d8a2-hostproc\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.481014 kubelet[3442]: I0317 18:02:27.480990 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e544dc85-8889-46f4-b942-3bde53a1d8a2-clustermesh-secrets\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.481255 kubelet[3442]: I0317 18:02:27.481016 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e544dc85-8889-46f4-b942-3bde53a1d8a2-hubble-tls\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.481255 kubelet[3442]: I0317 18:02:27.481047 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e544dc85-8889-46f4-b942-3bde53a1d8a2-cilium-config-path\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.481255 kubelet[3442]: I0317 18:02:27.481074 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e544dc85-8889-46f4-b942-3bde53a1d8a2-lib-modules\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.481255 kubelet[3442]: I0317 18:02:27.481101 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e544dc85-8889-46f4-b942-3bde53a1d8a2-cilium-ipsec-secrets\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.481255 kubelet[3442]: I0317 18:02:27.481127 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j9hn\" (UniqueName: \"kubernetes.io/projected/e544dc85-8889-46f4-b942-3bde53a1d8a2-kube-api-access-2j9hn\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.481255 kubelet[3442]: I0317 18:02:27.481155 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e544dc85-8889-46f4-b942-3bde53a1d8a2-cilium-run\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.481610 kubelet[3442]: I0317 18:02:27.481180 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e544dc85-8889-46f4-b942-3bde53a1d8a2-cilium-cgroup\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.481610 kubelet[3442]: I0317 18:02:27.481205 3442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e544dc85-8889-46f4-b942-3bde53a1d8a2-etc-cni-netd\") pod \"cilium-xvd6c\" (UID: \"e544dc85-8889-46f4-b942-3bde53a1d8a2\") " pod="kube-system/cilium-xvd6c"
Mar 17 18:02:27.567217 sshd[5279]: Accepted publickey for core from 139.178.89.65 port 60386 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:02:27.577276 sshd-session[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:27.605799 
systemd-logind[1891]: New session 30 of user core. Mar 17 18:02:27.637353 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 17 18:02:27.780074 containerd[1918]: time="2025-03-17T18:02:27.780029202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xvd6c,Uid:e544dc85-8889-46f4-b942-3bde53a1d8a2,Namespace:kube-system,Attempt:0,}" Mar 17 18:02:27.789856 sshd[5285]: Connection closed by 139.178.89.65 port 60386 Mar 17 18:02:27.790765 sshd-session[5279]: pam_unix(sshd:session): session closed for user core Mar 17 18:02:27.813604 systemd[1]: sshd@29-172.31.20.178:22-139.178.89.65:60386.service: Deactivated successfully. Mar 17 18:02:27.821695 systemd[1]: session-30.scope: Deactivated successfully. Mar 17 18:02:27.824437 systemd-logind[1891]: Session 30 logged out. Waiting for processes to exit. Mar 17 18:02:27.847567 containerd[1918]: time="2025-03-17T18:02:27.842141647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:02:27.847567 containerd[1918]: time="2025-03-17T18:02:27.842216611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:02:27.847567 containerd[1918]: time="2025-03-17T18:02:27.842242509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:27.847567 containerd[1918]: time="2025-03-17T18:02:27.842369967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:27.855573 systemd[1]: Started sshd@30-172.31.20.178:22-139.178.89.65:60394.service - OpenSSH per-connection server daemon (139.178.89.65:60394). Mar 17 18:02:27.859658 systemd-logind[1891]: Removed session 30. 
Mar 17 18:02:27.898763 systemd[1]: Started cri-containerd-ec67aea978dfe4c73ec74134b156f58cdc2eddfd3f3c778ebebdc7d06752e396.scope - libcontainer container ec67aea978dfe4c73ec74134b156f58cdc2eddfd3f3c778ebebdc7d06752e396.
Mar 17 18:02:27.968336 containerd[1918]: time="2025-03-17T18:02:27.968297132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xvd6c,Uid:e544dc85-8889-46f4-b942-3bde53a1d8a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec67aea978dfe4c73ec74134b156f58cdc2eddfd3f3c778ebebdc7d06752e396\""
Mar 17 18:02:27.982421 containerd[1918]: time="2025-03-17T18:02:27.982380085Z" level=info msg="CreateContainer within sandbox \"ec67aea978dfe4c73ec74134b156f58cdc2eddfd3f3c778ebebdc7d06752e396\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:02:28.040522 containerd[1918]: time="2025-03-17T18:02:28.040465187Z" level=info msg="CreateContainer within sandbox \"ec67aea978dfe4c73ec74134b156f58cdc2eddfd3f3c778ebebdc7d06752e396\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2588c242e5140e2a9fa85aaa86566e57bc3d99ce1fa1a3138289946e7af0f61d\""
Mar 17 18:02:28.041371 containerd[1918]: time="2025-03-17T18:02:28.041275983Z" level=info msg="StartContainer for \"2588c242e5140e2a9fa85aaa86566e57bc3d99ce1fa1a3138289946e7af0f61d\""
Mar 17 18:02:28.078515 systemd[1]: Started cri-containerd-2588c242e5140e2a9fa85aaa86566e57bc3d99ce1fa1a3138289946e7af0f61d.scope - libcontainer container 2588c242e5140e2a9fa85aaa86566e57bc3d99ce1fa1a3138289946e7af0f61d.
Mar 17 18:02:28.099543 kubelet[3442]: E0317 18:02:28.099426 3442 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:02:28.112330 sshd[5311]: Accepted publickey for core from 139.178.89.65 port 60394 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8
Mar 17 18:02:28.115948 sshd-session[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:28.124973 systemd-logind[1891]: New session 31 of user core.
Mar 17 18:02:28.127948 containerd[1918]: time="2025-03-17T18:02:28.127913774Z" level=info msg="StartContainer for \"2588c242e5140e2a9fa85aaa86566e57bc3d99ce1fa1a3138289946e7af0f61d\" returns successfully"
Mar 17 18:02:28.132563 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 17 18:02:28.644908 systemd[1]: cri-containerd-2588c242e5140e2a9fa85aaa86566e57bc3d99ce1fa1a3138289946e7af0f61d.scope: Deactivated successfully.
Mar 17 18:02:28.645922 systemd[1]: cri-containerd-2588c242e5140e2a9fa85aaa86566e57bc3d99ce1fa1a3138289946e7af0f61d.scope: Consumed 25ms CPU time, 9.3M memory peak, 2.9M read from disk.
Mar 17 18:02:28.732136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2588c242e5140e2a9fa85aaa86566e57bc3d99ce1fa1a3138289946e7af0f61d-rootfs.mount: Deactivated successfully.
Mar 17 18:02:28.773494 containerd[1918]: time="2025-03-17T18:02:28.771491514Z" level=info msg="shim disconnected" id=2588c242e5140e2a9fa85aaa86566e57bc3d99ce1fa1a3138289946e7af0f61d namespace=k8s.io
Mar 17 18:02:28.773818 containerd[1918]: time="2025-03-17T18:02:28.773552952Z" level=warning msg="cleaning up after shim disconnected" id=2588c242e5140e2a9fa85aaa86566e57bc3d99ce1fa1a3138289946e7af0f61d namespace=k8s.io
Mar 17 18:02:28.773818 containerd[1918]: time="2025-03-17T18:02:28.773595898Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:02:28.794418 containerd[1918]: time="2025-03-17T18:02:28.794344316Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:02:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 18:02:29.695900 containerd[1918]: time="2025-03-17T18:02:29.695616508Z" level=info msg="CreateContainer within sandbox \"ec67aea978dfe4c73ec74134b156f58cdc2eddfd3f3c778ebebdc7d06752e396\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:02:29.729089 containerd[1918]: time="2025-03-17T18:02:29.729032101Z" level=info msg="CreateContainer within sandbox \"ec67aea978dfe4c73ec74134b156f58cdc2eddfd3f3c778ebebdc7d06752e396\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b6b7f962d6dfd703821601a5ce2276f45c3f5ee0d74e87a147dcbab629a52f2f\""
Mar 17 18:02:29.731353 containerd[1918]: time="2025-03-17T18:02:29.730037513Z" level=info msg="StartContainer for \"b6b7f962d6dfd703821601a5ce2276f45c3f5ee0d74e87a147dcbab629a52f2f\""
Mar 17 18:02:29.810497 systemd[1]: Started cri-containerd-b6b7f962d6dfd703821601a5ce2276f45c3f5ee0d74e87a147dcbab629a52f2f.scope - libcontainer container b6b7f962d6dfd703821601a5ce2276f45c3f5ee0d74e87a147dcbab629a52f2f.
Mar 17 18:02:29.890294 containerd[1918]: time="2025-03-17T18:02:29.890184736Z" level=info msg="StartContainer for \"b6b7f962d6dfd703821601a5ce2276f45c3f5ee0d74e87a147dcbab629a52f2f\" returns successfully"
Mar 17 18:02:30.209962 systemd[1]: cri-containerd-b6b7f962d6dfd703821601a5ce2276f45c3f5ee0d74e87a147dcbab629a52f2f.scope: Deactivated successfully.
Mar 17 18:02:30.210824 systemd[1]: cri-containerd-b6b7f962d6dfd703821601a5ce2276f45c3f5ee0d74e87a147dcbab629a52f2f.scope: Consumed 24ms CPU time, 7.6M memory peak, 2.2M read from disk.
Mar 17 18:02:30.235883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6b7f962d6dfd703821601a5ce2276f45c3f5ee0d74e87a147dcbab629a52f2f-rootfs.mount: Deactivated successfully.
Mar 17 18:02:30.258193 containerd[1918]: time="2025-03-17T18:02:30.258130309Z" level=info msg="shim disconnected" id=b6b7f962d6dfd703821601a5ce2276f45c3f5ee0d74e87a147dcbab629a52f2f namespace=k8s.io
Mar 17 18:02:30.258193 containerd[1918]: time="2025-03-17T18:02:30.258185512Z" level=warning msg="cleaning up after shim disconnected" id=b6b7f962d6dfd703821601a5ce2276f45c3f5ee0d74e87a147dcbab629a52f2f namespace=k8s.io
Mar 17 18:02:30.258193 containerd[1918]: time="2025-03-17T18:02:30.258197249Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:02:30.695484 containerd[1918]: time="2025-03-17T18:02:30.695012194Z" level=info msg="CreateContainer within sandbox \"ec67aea978dfe4c73ec74134b156f58cdc2eddfd3f3c778ebebdc7d06752e396\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:02:30.747622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1176423065.mount: Deactivated successfully.
Mar 17 18:02:30.749104 containerd[1918]: time="2025-03-17T18:02:30.749064950Z" level=info msg="CreateContainer within sandbox \"ec67aea978dfe4c73ec74134b156f58cdc2eddfd3f3c778ebebdc7d06752e396\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"930521a819ca5b79af5eb9d246371c8fc7cecd66b8704bc1acceeadf09932290\""
Mar 17 18:02:30.751520 containerd[1918]: time="2025-03-17T18:02:30.749613103Z" level=info msg="StartContainer for \"930521a819ca5b79af5eb9d246371c8fc7cecd66b8704bc1acceeadf09932290\""
Mar 17 18:02:30.799510 systemd[1]: Started cri-containerd-930521a819ca5b79af5eb9d246371c8fc7cecd66b8704bc1acceeadf09932290.scope - libcontainer container 930521a819ca5b79af5eb9d246371c8fc7cecd66b8704bc1acceeadf09932290.
Mar 17 18:02:30.846322 containerd[1918]: time="2025-03-17T18:02:30.846094379Z" level=info msg="StartContainer for \"930521a819ca5b79af5eb9d246371c8fc7cecd66b8704bc1acceeadf09932290\" returns successfully"
Mar 17 18:02:30.859036 systemd[1]: cri-containerd-930521a819ca5b79af5eb9d246371c8fc7cecd66b8704bc1acceeadf09932290.scope: Deactivated successfully.
Mar 17 18:02:30.900176 containerd[1918]: time="2025-03-17T18:02:30.900107898Z" level=info msg="shim disconnected" id=930521a819ca5b79af5eb9d246371c8fc7cecd66b8704bc1acceeadf09932290 namespace=k8s.io
Mar 17 18:02:30.900176 containerd[1918]: time="2025-03-17T18:02:30.900167795Z" level=warning msg="cleaning up after shim disconnected" id=930521a819ca5b79af5eb9d246371c8fc7cecd66b8704bc1acceeadf09932290 namespace=k8s.io
Mar 17 18:02:30.900176 containerd[1918]: time="2025-03-17T18:02:30.900180363Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:02:31.046378 kubelet[3442]: I0317 18:02:31.046232 3442 setters.go:580] "Node became not ready" node="ip-172-31-20-178" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:02:31Z","lastTransitionTime":"2025-03-17T18:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 18:02:31.701476 containerd[1918]: time="2025-03-17T18:02:31.701432345Z" level=info msg="CreateContainer within sandbox \"ec67aea978dfe4c73ec74134b156f58cdc2eddfd3f3c778ebebdc7d06752e396\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:02:31.727805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-930521a819ca5b79af5eb9d246371c8fc7cecd66b8704bc1acceeadf09932290-rootfs.mount: Deactivated successfully.
Mar 17 18:02:31.732504 containerd[1918]: time="2025-03-17T18:02:31.732174117Z" level=info msg="CreateContainer within sandbox \"ec67aea978dfe4c73ec74134b156f58cdc2eddfd3f3c778ebebdc7d06752e396\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8a3eabab80eaf8ca42f8802edbd362c44c394bad1dd0ca67c787de88e00f51d6\""
Mar 17 18:02:31.733178 containerd[1918]: time="2025-03-17T18:02:31.733062646Z" level=info msg="StartContainer for \"8a3eabab80eaf8ca42f8802edbd362c44c394bad1dd0ca67c787de88e00f51d6\""
Mar 17 18:02:31.774776 systemd[1]: run-containerd-runc-k8s.io-8a3eabab80eaf8ca42f8802edbd362c44c394bad1dd0ca67c787de88e00f51d6-runc.cxJYVi.mount: Deactivated successfully.
Mar 17 18:02:31.785360 systemd[1]: Started cri-containerd-8a3eabab80eaf8ca42f8802edbd362c44c394bad1dd0ca67c787de88e00f51d6.scope - libcontainer container 8a3eabab80eaf8ca42f8802edbd362c44c394bad1dd0ca67c787de88e00f51d6.
Mar 17 18:02:31.817251 systemd[1]: cri-containerd-8a3eabab80eaf8ca42f8802edbd362c44c394bad1dd0ca67c787de88e00f51d6.scope: Deactivated successfully.
Mar 17 18:02:31.820692 containerd[1918]: time="2025-03-17T18:02:31.820570344Z" level=info msg="StartContainer for \"8a3eabab80eaf8ca42f8802edbd362c44c394bad1dd0ca67c787de88e00f51d6\" returns successfully"
Mar 17 18:02:31.859434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a3eabab80eaf8ca42f8802edbd362c44c394bad1dd0ca67c787de88e00f51d6-rootfs.mount: Deactivated successfully.
Mar 17 18:02:31.878665 containerd[1918]: time="2025-03-17T18:02:31.878568946Z" level=info msg="shim disconnected" id=8a3eabab80eaf8ca42f8802edbd362c44c394bad1dd0ca67c787de88e00f51d6 namespace=k8s.io
Mar 17 18:02:31.878665 containerd[1918]: time="2025-03-17T18:02:31.878659392Z" level=warning msg="cleaning up after shim disconnected" id=8a3eabab80eaf8ca42f8802edbd362c44c394bad1dd0ca67c787de88e00f51d6 namespace=k8s.io
Mar 17 18:02:31.879027 containerd[1918]: time="2025-03-17T18:02:31.878671916Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:02:31.924111 containerd[1918]: time="2025-03-17T18:02:31.924066727Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:02:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 18:02:32.745287 containerd[1918]: time="2025-03-17T18:02:32.745231471Z" level=info msg="CreateContainer within sandbox \"ec67aea978dfe4c73ec74134b156f58cdc2eddfd3f3c778ebebdc7d06752e396\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:02:32.774994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2324494750.mount: Deactivated successfully.
Mar 17 18:02:32.777995 containerd[1918]: time="2025-03-17T18:02:32.777948609Z" level=info msg="CreateContainer within sandbox \"ec67aea978dfe4c73ec74134b156f58cdc2eddfd3f3c778ebebdc7d06752e396\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4da8aa7f09d1e6c67d35965c33410797d63d762bc89843bda9252dadcaf7ef7d\""
Mar 17 18:02:32.780592 containerd[1918]: time="2025-03-17T18:02:32.778833990Z" level=info msg="StartContainer for \"4da8aa7f09d1e6c67d35965c33410797d63d762bc89843bda9252dadcaf7ef7d\""
Mar 17 18:02:32.858492 systemd[1]: Started cri-containerd-4da8aa7f09d1e6c67d35965c33410797d63d762bc89843bda9252dadcaf7ef7d.scope - libcontainer container 4da8aa7f09d1e6c67d35965c33410797d63d762bc89843bda9252dadcaf7ef7d.
Mar 17 18:02:32.914832 containerd[1918]: time="2025-03-17T18:02:32.914787414Z" level=info msg="StartContainer for \"4da8aa7f09d1e6c67d35965c33410797d63d762bc89843bda9252dadcaf7ef7d\" returns successfully"
Mar 17 18:02:33.101360 kubelet[3442]: E0317 18:02:33.101214 3442 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:02:33.768607 systemd[1]: run-containerd-runc-k8s.io-4da8aa7f09d1e6c67d35965c33410797d63d762bc89843bda9252dadcaf7ef7d-runc.nbwmCt.mount: Deactivated successfully.
Mar 17 18:02:34.755332 kubelet[3442]: I0317 18:02:34.755162 3442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xvd6c" podStartSLOduration=7.755117813 podStartE2EDuration="7.755117813s" podCreationTimestamp="2025-03-17 18:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:02:34.754655211 +0000 UTC m=+137.217690953" watchObservedRunningTime="2025-03-17 18:02:34.755117813 +0000 UTC m=+137.218153515"
Mar 17 18:02:36.634429 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 18:02:36.872582 kubelet[3442]: E0317 18:02:36.872522 3442 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-xd8sd" podUID="86a0c950-68ef-4119-a173-2395c37d3b0a"
Mar 17 18:02:37.503731 systemd[1]: run-containerd-runc-k8s.io-4da8aa7f09d1e6c67d35965c33410797d63d762bc89843bda9252dadcaf7ef7d-runc.hEcTJE.mount: Deactivated successfully.
Mar 17 18:02:40.221228 systemd-networkd[1741]: lxc_health: Link UP
Mar 17 18:02:40.229806 (udev-worker)[6163]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 18:02:40.231981 systemd-networkd[1741]: lxc_health: Gained carrier
Mar 17 18:02:41.648752 systemd-networkd[1741]: lxc_health: Gained IPv6LL
Mar 17 18:02:43.970428 ntpd[1876]: Listen normally on 14 lxc_health [fe80::7c61:16ff:fe72:a3e5%14]:123
Mar 17 18:02:43.972967 ntpd[1876]: 17 Mar 18:02:43 ntpd[1876]: Listen normally on 14 lxc_health [fe80::7c61:16ff:fe72:a3e5%14]:123
Mar 17 18:02:47.027792 sshd[5370]: Connection closed by 139.178.89.65 port 60394
Mar 17 18:02:47.029578 sshd-session[5311]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:47.036811 systemd[1]: sshd@30-172.31.20.178:22-139.178.89.65:60394.service: Deactivated successfully.
Mar 17 18:02:47.041793 systemd[1]: session-31.scope: Deactivated successfully.
Mar 17 18:02:47.046203 systemd-logind[1891]: Session 31 logged out. Waiting for processes to exit.
Mar 17 18:02:47.048802 systemd-logind[1891]: Removed session 31.
Mar 17 18:03:01.079725 kubelet[3442]: E0317 18:03:01.079652 3442 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-178?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 18:03:02.012735 systemd[1]: cri-containerd-c151aa2d24756051155cbf6ec6193148189f4059a3cd7c6891f39dd1175841d2.scope: Deactivated successfully.
Mar 17 18:03:02.013638 systemd[1]: cri-containerd-c151aa2d24756051155cbf6ec6193148189f4059a3cd7c6891f39dd1175841d2.scope: Consumed 3.573s CPU time, 77.9M memory peak, 24M read from disk.
Mar 17 18:03:02.083132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c151aa2d24756051155cbf6ec6193148189f4059a3cd7c6891f39dd1175841d2-rootfs.mount: Deactivated successfully.
Mar 17 18:03:02.110169 containerd[1918]: time="2025-03-17T18:03:02.110083470Z" level=info msg="shim disconnected" id=c151aa2d24756051155cbf6ec6193148189f4059a3cd7c6891f39dd1175841d2 namespace=k8s.io
Mar 17 18:03:02.110169 containerd[1918]: time="2025-03-17T18:03:02.110153239Z" level=warning msg="cleaning up after shim disconnected" id=c151aa2d24756051155cbf6ec6193148189f4059a3cd7c6891f39dd1175841d2 namespace=k8s.io
Mar 17 18:03:02.110169 containerd[1918]: time="2025-03-17T18:03:02.110167171Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:03:02.885024 kubelet[3442]: I0317 18:03:02.884990 3442 scope.go:117] "RemoveContainer" containerID="c151aa2d24756051155cbf6ec6193148189f4059a3cd7c6891f39dd1175841d2"
Mar 17 18:03:02.892070 containerd[1918]: time="2025-03-17T18:03:02.892025850Z" level=info msg="CreateContainer within sandbox \"dd30027d66cef9d796df1e205de0445b5664420299538d5d8cefcf8819850b29\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 17 18:03:02.924067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2495063791.mount: Deactivated successfully.
Mar 17 18:03:02.933767 containerd[1918]: time="2025-03-17T18:03:02.933713091Z" level=info msg="CreateContainer within sandbox \"dd30027d66cef9d796df1e205de0445b5664420299538d5d8cefcf8819850b29\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"268dd24639a550667d03f4f58e062a923f5171a7fd7f188b4f2526f57dcbcafa\""
Mar 17 18:03:02.936294 containerd[1918]: time="2025-03-17T18:03:02.934723399Z" level=info msg="StartContainer for \"268dd24639a550667d03f4f58e062a923f5171a7fd7f188b4f2526f57dcbcafa\""
Mar 17 18:03:03.011776 systemd[1]: Started cri-containerd-268dd24639a550667d03f4f58e062a923f5171a7fd7f188b4f2526f57dcbcafa.scope - libcontainer container 268dd24639a550667d03f4f58e062a923f5171a7fd7f188b4f2526f57dcbcafa.
Mar 17 18:03:03.082904 systemd[1]: run-containerd-runc-k8s.io-268dd24639a550667d03f4f58e062a923f5171a7fd7f188b4f2526f57dcbcafa-runc.qX7tQk.mount: Deactivated successfully.
Mar 17 18:03:03.107898 containerd[1918]: time="2025-03-17T18:03:03.107842311Z" level=info msg="StartContainer for \"268dd24639a550667d03f4f58e062a923f5171a7fd7f188b4f2526f57dcbcafa\" returns successfully"
Mar 17 18:03:05.737554 systemd[1]: cri-containerd-768975dbce4d6ba8e2610e6cc867aa674fa8730268e4c96662ee244367bbfcd9.scope: Deactivated successfully.
Mar 17 18:03:05.738584 systemd[1]: cri-containerd-768975dbce4d6ba8e2610e6cc867aa674fa8730268e4c96662ee244367bbfcd9.scope: Consumed 2.397s CPU time, 30.5M memory peak, 13.6M read from disk.
Mar 17 18:03:05.798713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-768975dbce4d6ba8e2610e6cc867aa674fa8730268e4c96662ee244367bbfcd9-rootfs.mount: Deactivated successfully.
Mar 17 18:03:05.832890 containerd[1918]: time="2025-03-17T18:03:05.832588529Z" level=info msg="shim disconnected" id=768975dbce4d6ba8e2610e6cc867aa674fa8730268e4c96662ee244367bbfcd9 namespace=k8s.io
Mar 17 18:03:05.832890 containerd[1918]: time="2025-03-17T18:03:05.832676418Z" level=warning msg="cleaning up after shim disconnected" id=768975dbce4d6ba8e2610e6cc867aa674fa8730268e4c96662ee244367bbfcd9 namespace=k8s.io
Mar 17 18:03:05.832890 containerd[1918]: time="2025-03-17T18:03:05.832688778Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:03:05.860467 containerd[1918]: time="2025-03-17T18:03:05.859930903Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:03:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 18:03:05.909452 kubelet[3442]: I0317 18:03:05.909421 3442 scope.go:117] "RemoveContainer" containerID="768975dbce4d6ba8e2610e6cc867aa674fa8730268e4c96662ee244367bbfcd9"
Mar 17 18:03:05.912977 containerd[1918]: time="2025-03-17T18:03:05.912935794Z" level=info msg="CreateContainer within sandbox \"de1151ef33511b266c9d60e5e593a4b0fd79141dc888dddafe162ab0278b77dd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 17 18:03:05.954765 containerd[1918]: time="2025-03-17T18:03:05.954702018Z" level=info msg="CreateContainer within sandbox \"de1151ef33511b266c9d60e5e593a4b0fd79141dc888dddafe162ab0278b77dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"00cde69fd88259ffb9c2e402984041d9e94b0128d325e2efa4b95b89a2f57b77\""
Mar 17 18:03:05.956781 containerd[1918]: time="2025-03-17T18:03:05.956219206Z" level=info msg="StartContainer for \"00cde69fd88259ffb9c2e402984041d9e94b0128d325e2efa4b95b89a2f57b77\""
Mar 17 18:03:06.018657 systemd[1]: Started cri-containerd-00cde69fd88259ffb9c2e402984041d9e94b0128d325e2efa4b95b89a2f57b77.scope - libcontainer container 00cde69fd88259ffb9c2e402984041d9e94b0128d325e2efa4b95b89a2f57b77.
Mar 17 18:03:06.113176 containerd[1918]: time="2025-03-17T18:03:06.112877047Z" level=info msg="StartContainer for \"00cde69fd88259ffb9c2e402984041d9e94b0128d325e2efa4b95b89a2f57b77\" returns successfully"
Mar 17 18:03:11.080604 kubelet[3442]: E0317 18:03:11.080158 3442 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-178?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"