Feb 13 15:08:14.227950 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 15:08:14.228451 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 13:51:50 -00 2025
Feb 13 15:08:14.228478 kernel: KASLR disabled due to lack of seed
Feb 13 15:08:14.228494 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:08:14.228510 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Feb 13 15:08:14.228525 kernel: secureboot: Secure boot disabled
Feb 13 15:08:14.228543 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:08:14.228558 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 15:08:14.228574 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 15:08:14.228589 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:08:14.228610 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 15:08:14.228626 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:08:14.228641 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 15:08:14.228657 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 15:08:14.228675 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 15:08:14.228696 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:08:14.228713 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 15:08:14.228730 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 15:08:14.228746 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 15:08:14.228762 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 15:08:14.228779 kernel: printk: bootconsole [uart0] enabled
Feb 13 15:08:14.228795 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:08:14.228811 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:08:14.228828 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 15:08:14.228844 kernel: Zone ranges:
Feb 13 15:08:14.228860 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 15:08:14.228881 kernel: DMA32 empty
Feb 13 15:08:14.228897 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 15:08:14.228914 kernel: Movable zone start for each node
Feb 13 15:08:14.228930 kernel: Early memory node ranges
Feb 13 15:08:14.228946 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 15:08:14.228962 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 15:08:14.228979 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 15:08:14.228995 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 15:08:14.229011 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 15:08:14.229028 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 15:08:14.229044 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 15:08:14.229060 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 15:08:14.229081 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:08:14.229098 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 15:08:14.229121 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:08:14.229139 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 15:08:14.229156 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:08:14.229177 kernel: psci: Trusted OS migration not required
Feb 13 15:08:14.230269 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:08:14.230299 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:08:14.230317 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:08:14.230335 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:08:14.230353 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:08:14.230371 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:08:14.230388 kernel: CPU features: detected: Spectre-v2
Feb 13 15:08:14.230405 kernel: CPU features: detected: Spectre-v3a
Feb 13 15:08:14.230422 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:08:14.230440 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 15:08:14.230457 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 15:08:14.230484 kernel: alternatives: applying boot alternatives
Feb 13 15:08:14.230503 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:08:14.230523 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:08:14.230540 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:08:14.230558 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:08:14.230575 kernel: Fallback order for Node 0: 0
Feb 13 15:08:14.230592 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 15:08:14.230609 kernel: Policy zone: Normal
Feb 13 15:08:14.230626 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:08:14.230643 kernel: software IO TLB: area num 2.
Feb 13 15:08:14.230665 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 15:08:14.230683 kernel: Memory: 3821240K/4030464K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 209224K reserved, 0K cma-reserved)
Feb 13 15:08:14.230700 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:08:14.230717 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:08:14.230735 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:08:14.230753 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:08:14.230771 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:08:14.230788 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:08:14.230806 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:08:14.230823 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:08:14.230840 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:08:14.230862 kernel: GICv3: 96 SPIs implemented
Feb 13 15:08:14.230880 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:08:14.230897 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:08:14.230914 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 15:08:14.230931 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 15:08:14.230948 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 15:08:14.230965 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:08:14.230983 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:08:14.231000 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 15:08:14.231017 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 15:08:14.231034 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 15:08:14.231052 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:08:14.231073 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 15:08:14.231091 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 15:08:14.231108 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 15:08:14.231126 kernel: Console: colour dummy device 80x25
Feb 13 15:08:14.231143 kernel: printk: console [tty1] enabled
Feb 13 15:08:14.231161 kernel: ACPI: Core revision 20230628
Feb 13 15:08:14.231179 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 15:08:14.231230 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:08:14.231250 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:08:14.231268 kernel: landlock: Up and running.
Feb 13 15:08:14.231292 kernel: SELinux: Initializing.
Feb 13 15:08:14.231310 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:08:14.231328 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:08:14.231345 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:08:14.231363 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:08:14.231381 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:08:14.231399 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:08:14.231417 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 15:08:14.231439 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 15:08:14.231457 kernel: Remapping and enabling EFI services.
Feb 13 15:08:14.231474 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:08:14.231492 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:08:14.231509 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 15:08:14.231527 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 15:08:14.231545 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 15:08:14.231563 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:08:14.231580 kernel: SMP: Total of 2 processors activated.
Feb 13 15:08:14.231598 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:08:14.231621 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 15:08:14.231639 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:08:14.231668 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:08:14.231691 kernel: alternatives: applying system-wide alternatives
Feb 13 15:08:14.231709 kernel: devtmpfs: initialized
Feb 13 15:08:14.231728 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:08:14.231747 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:08:14.231766 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:08:14.231785 kernel: SMBIOS 3.0.0 present.
Feb 13 15:08:14.231809 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 15:08:14.231829 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:08:14.231848 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:08:14.231867 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:08:14.231886 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:08:14.231904 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:08:14.231923 kernel: audit: type=2000 audit(0.224:1): state=initialized audit_enabled=0 res=1
Feb 13 15:08:14.231946 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:08:14.231966 kernel: cpuidle: using governor menu
Feb 13 15:08:14.231984 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:08:14.232003 kernel: ASID allocator initialised with 65536 entries
Feb 13 15:08:14.232021 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:08:14.232039 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:08:14.232058 kernel: Modules: 17760 pages in range for non-PLT usage
Feb 13 15:08:14.232076 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 15:08:14.232095 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:08:14.232118 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:08:14.232138 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:08:14.232157 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:08:14.232176 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:08:14.232596 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:08:14.232622 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:08:14.232642 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:08:14.232660 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:08:14.232761 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:08:14.232790 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:08:14.232810 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:08:14.232828 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:08:14.232847 kernel: ACPI: Interpreter enabled
Feb 13 15:08:14.232865 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:08:14.232884 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:08:14.232903 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 15:08:14.233230 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:08:14.233459 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:08:14.233662 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:08:14.233872 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 15:08:14.234079 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 15:08:14.234104 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 15:08:14.234123 kernel: acpiphp: Slot [1] registered
Feb 13 15:08:14.234143 kernel: acpiphp: Slot [2] registered
Feb 13 15:08:14.234161 kernel: acpiphp: Slot [3] registered
Feb 13 15:08:14.234274 kernel: acpiphp: Slot [4] registered
Feb 13 15:08:14.234299 kernel: acpiphp: Slot [5] registered
Feb 13 15:08:14.234318 kernel: acpiphp: Slot [6] registered
Feb 13 15:08:14.234337 kernel: acpiphp: Slot [7] registered
Feb 13 15:08:14.234355 kernel: acpiphp: Slot [8] registered
Feb 13 15:08:14.234373 kernel: acpiphp: Slot [9] registered
Feb 13 15:08:14.234392 kernel: acpiphp: Slot [10] registered
Feb 13 15:08:14.234411 kernel: acpiphp: Slot [11] registered
Feb 13 15:08:14.234429 kernel: acpiphp: Slot [12] registered
Feb 13 15:08:14.234448 kernel: acpiphp: Slot [13] registered
Feb 13 15:08:14.234473 kernel: acpiphp: Slot [14] registered
Feb 13 15:08:14.234491 kernel: acpiphp: Slot [15] registered
Feb 13 15:08:14.234510 kernel: acpiphp: Slot [16] registered
Feb 13 15:08:14.234528 kernel: acpiphp: Slot [17] registered
Feb 13 15:08:14.234547 kernel: acpiphp: Slot [18] registered
Feb 13 15:08:14.234565 kernel: acpiphp: Slot [19] registered
Feb 13 15:08:14.234583 kernel: acpiphp: Slot [20] registered
Feb 13 15:08:14.234602 kernel: acpiphp: Slot [21] registered
Feb 13 15:08:14.234620 kernel: acpiphp: Slot [22] registered
Feb 13 15:08:14.234643 kernel: acpiphp: Slot [23] registered
Feb 13 15:08:14.234662 kernel: acpiphp: Slot [24] registered
Feb 13 15:08:14.234681 kernel: acpiphp: Slot [25] registered
Feb 13 15:08:14.234700 kernel: acpiphp: Slot [26] registered
Feb 13 15:08:14.234718 kernel: acpiphp: Slot [27] registered
Feb 13 15:08:14.234737 kernel: acpiphp: Slot [28] registered
Feb 13 15:08:14.234755 kernel: acpiphp: Slot [29] registered
Feb 13 15:08:14.234773 kernel: acpiphp: Slot [30] registered
Feb 13 15:08:14.234791 kernel: acpiphp: Slot [31] registered
Feb 13 15:08:14.234809 kernel: PCI host bridge to bus 0000:00
Feb 13 15:08:14.235066 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 15:08:14.235310 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:08:14.235516 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:08:14.235719 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 15:08:14.235970 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 15:08:14.238592 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 15:08:14.238888 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 15:08:14.239116 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:08:14.239369 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 15:08:14.239580 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:08:14.239800 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:08:14.240010 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 15:08:14.240266 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 15:08:14.240510 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 15:08:14.240714 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:08:14.240922 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 15:08:14.241129 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 15:08:14.241379 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 15:08:14.241589 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 15:08:14.241805 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 15:08:14.242002 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 15:08:14.242208 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:08:14.242408 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:08:14.242433 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:08:14.242453 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:08:14.242471 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:08:14.242490 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:08:14.242508 kernel: iommu: Default domain type: Translated
Feb 13 15:08:14.242533 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:08:14.242552 kernel: efivars: Registered efivars operations
Feb 13 15:08:14.242571 kernel: vgaarb: loaded
Feb 13 15:08:14.242589 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:08:14.242607 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:08:14.242625 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:08:14.242644 kernel: pnp: PnP ACPI init
Feb 13 15:08:14.242864 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 15:08:14.242896 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:08:14.242915 kernel: NET: Registered PF_INET protocol family
Feb 13 15:08:14.242933 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:08:14.242952 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:08:14.242971 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:08:14.242989 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:08:14.243008 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:08:14.243026 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:08:14.243045 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:08:14.243068 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:08:14.243087 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:08:14.243105 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:08:14.243123 kernel: kvm [1]: HYP mode not available
Feb 13 15:08:14.243142 kernel: Initialise system trusted keyrings
Feb 13 15:08:14.243160 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:08:14.243179 kernel: Key type asymmetric registered
Feb 13 15:08:14.243225 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:08:14.243246 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:08:14.243270 kernel: io scheduler mq-deadline registered
Feb 13 15:08:14.243289 kernel: io scheduler kyber registered
Feb 13 15:08:14.243307 kernel: io scheduler bfq registered
Feb 13 15:08:14.243541 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 15:08:14.243568 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:08:14.243587 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:08:14.243606 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 15:08:14.243624 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 15:08:14.243648 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:08:14.243667 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 15:08:14.243872 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 15:08:14.243897 kernel: printk: console [ttyS0] disabled
Feb 13 15:08:14.243916 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 15:08:14.243934 kernel: printk: console [ttyS0] enabled
Feb 13 15:08:14.243952 kernel: printk: bootconsole [uart0] disabled
Feb 13 15:08:14.243971 kernel: thunder_xcv, ver 1.0
Feb 13 15:08:14.243989 kernel: thunder_bgx, ver 1.0
Feb 13 15:08:14.244007 kernel: nicpf, ver 1.0
Feb 13 15:08:14.244031 kernel: nicvf, ver 1.0
Feb 13 15:08:14.245624 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:08:14.245841 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:08:13 UTC (1739459293)
Feb 13 15:08:14.245867 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:08:14.245886 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 15:08:14.245905 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:08:14.245923 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:08:14.245951 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:08:14.245970 kernel: Segment Routing with IPv6
Feb 13 15:08:14.245988 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:08:14.246007 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:08:14.246025 kernel: Key type dns_resolver registered
Feb 13 15:08:14.246043 kernel: registered taskstats version 1
Feb 13 15:08:14.246062 kernel: Loading compiled-in X.509 certificates
Feb 13 15:08:14.246080 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 03c2ececc548f4ae45f50171451f5c036e2757d4'
Feb 13 15:08:14.246098 kernel: Key type .fscrypt registered
Feb 13 15:08:14.246119 kernel: Key type fscrypt-provisioning registered
Feb 13 15:08:14.246145 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:08:14.246164 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:08:14.246182 kernel: ima: No architecture policies found
Feb 13 15:08:14.246289 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:08:14.246313 kernel: clk: Disabling unused clocks
Feb 13 15:08:14.246726 kernel: Freeing unused kernel memory: 38336K
Feb 13 15:08:14.246755 kernel: Run /init as init process
Feb 13 15:08:14.246774 kernel: with arguments:
Feb 13 15:08:14.246793 kernel: /init
Feb 13 15:08:14.246831 kernel: with environment:
Feb 13 15:08:14.246854 kernel: HOME=/
Feb 13 15:08:14.246873 kernel: TERM=linux
Feb 13 15:08:14.246892 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:08:14.246914 systemd[1]: Successfully made /usr/ read-only.
Feb 13 15:08:14.246939 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:08:14.246961 systemd[1]: Detected virtualization amazon.
Feb 13 15:08:14.246986 systemd[1]: Detected architecture arm64.
Feb 13 15:08:14.247006 systemd[1]: Running in initrd.
Feb 13 15:08:14.247027 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:08:14.247047 systemd[1]: Hostname set to .
Feb 13 15:08:14.247067 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:08:14.247087 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:08:14.247107 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:08:14.247128 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:08:14.247149 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:08:14.247175 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:08:14.247311 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:08:14.247351 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:08:14.247376 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:08:14.247396 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:08:14.247416 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:08:14.247443 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:08:14.247464 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:08:14.247483 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:08:14.247503 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:08:14.247523 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:08:14.247544 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:08:14.247564 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:08:14.247584 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:08:14.247603 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 15:08:14.247628 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:08:14.247648 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:08:14.247668 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:08:14.247688 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:08:14.247708 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:08:14.247728 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:08:14.247748 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:08:14.247768 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:08:14.247793 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:08:14.247813 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:08:14.247833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:08:14.247853 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:08:14.247873 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:08:14.247894 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:08:14.247919 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:08:14.247939 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:08:14.247959 kernel: Bridge firewalling registered
Feb 13 15:08:14.248027 systemd-journald[252]: Collecting audit messages is disabled.
Feb 13 15:08:14.248076 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:08:14.248098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:08:14.248118 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:08:14.248140 systemd-journald[252]: Journal started
Feb 13 15:08:14.248177 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2303417a61a58d98ffc2deed5d45a4) is 8M, max 75.3M, 67.3M free.
Feb 13 15:08:14.255587 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:08:14.166670 systemd-modules-load[253]: Inserted module 'overlay'
Feb 13 15:08:14.209249 systemd-modules-load[253]: Inserted module 'br_netfilter'
Feb 13 15:08:14.261579 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:08:14.264225 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:08:14.273517 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:08:14.276745 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:08:14.304246 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:08:14.317090 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:08:14.327366 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:08:14.330983 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:08:14.347548 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:08:14.356516 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:08:14.377489 dracut-cmdline[288]: dracut-dracut-053
Feb 13 15:08:14.383671 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:08:14.449667 systemd-resolved[289]: Positive Trust Anchors:
Feb 13 15:08:14.449702 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:08:14.449766 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:08:14.546243 kernel: SCSI subsystem initialized
Feb 13 15:08:14.553225 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:08:14.566244 kernel: iscsi: registered transport (tcp)
Feb 13 15:08:14.589457 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:08:14.589554 kernel: QLogic iSCSI HBA Driver
Feb 13 15:08:14.678255 kernel: random: crng init done
Feb 13 15:08:14.678981 systemd-resolved[289]: Defaulting to hostname 'linux'.
Feb 13 15:08:14.682348 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:08:14.687068 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:08:14.715416 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:08:14.726994 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:08:14.766132 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:08:14.766230 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:08:14.768213 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:08:14.836252 kernel: raid6: neonx8 gen() 6533 MB/s
Feb 13 15:08:14.853245 kernel: raid6: neonx4 gen() 6498 MB/s
Feb 13 15:08:14.870242 kernel: raid6: neonx2 gen() 5422 MB/s
Feb 13 15:08:14.887239 kernel: raid6: neonx1 gen() 3934 MB/s
Feb 13 15:08:14.904238 kernel: raid6: int64x8 gen() 3577 MB/s
Feb 13 15:08:14.921245 kernel: raid6: int64x4 gen() 3673 MB/s
Feb 13 15:08:14.938221 kernel: raid6: int64x2 gen() 3563 MB/s
Feb 13 15:08:14.956000 kernel: raid6: int64x1 gen() 2750 MB/s
Feb 13 15:08:14.956033 kernel: raid6: using algorithm neonx8 gen() 6533 MB/s
Feb 13 15:08:14.974012 kernel: raid6: .... xor() 4720 MB/s, rmw enabled
Feb 13 15:08:14.974060 kernel: raid6: using neon recovery algorithm
Feb 13 15:08:14.982346 kernel: xor: measuring software checksum speed
Feb 13 15:08:14.982416 kernel: 8regs : 12966 MB/sec
Feb 13 15:08:14.983402 kernel: 32regs : 13048 MB/sec
Feb 13 15:08:14.984586 kernel: arm64_neon : 9559 MB/sec
Feb 13 15:08:14.984628 kernel: xor: using function: 32regs (13048 MB/sec)
Feb 13 15:08:15.069235 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:08:15.089304 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:08:15.114483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:08:15.149227 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Feb 13 15:08:15.161263 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:08:15.174837 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:08:15.210730 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Feb 13 15:08:15.274260 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:08:15.284478 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:08:15.408034 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:08:15.421530 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:08:15.462315 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:08:15.465059 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:08:15.467488 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:08:15.470393 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:08:15.501178 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:08:15.548006 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:08:15.638346 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:08:15.638464 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 15:08:15.663012 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 15:08:15.663053 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:08:15.673471 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:08:15.673901 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:08:15.674160 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:2a:de:2e:b1:03
Feb 13 15:08:15.674442 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:08:15.652327 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:08:15.652596 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:08:15.686799 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:08:15.686832 kernel: GPT:9289727 != 16777215
Feb 13 15:08:15.686856 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:08:15.686881 kernel: GPT:9289727 != 16777215
Feb 13 15:08:15.686905 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:08:15.655332 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:08:15.693608 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:08:15.657505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:08:15.659987 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:08:15.663253 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:08:15.689761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:08:15.695842 (udev-worker)[526]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:08:15.744249 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:08:15.755658 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:08:15.806592 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:08:15.818231 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (537)
Feb 13 15:08:15.881223 kernel: BTRFS: device fsid b3d3c5e7-c505-4391-bb7a-de2a572c0855 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (518)
Feb 13 15:08:15.951899 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:08:15.995996 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:08:16.021981 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:08:16.060770 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:08:16.065800 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:08:16.080508 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:08:16.101802 disk-uuid[661]: Primary Header is updated.
Feb 13 15:08:16.101802 disk-uuid[661]: Secondary Entries is updated.
Feb 13 15:08:16.101802 disk-uuid[661]: Secondary Header is updated.
Feb 13 15:08:16.111243 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:08:16.120236 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:08:17.133671 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:08:17.134385 disk-uuid[662]: The operation has completed successfully.
Feb 13 15:08:17.352414 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:08:17.354803 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:08:17.437488 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:08:17.446517 sh[923]: Success
Feb 13 15:08:17.473232 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:08:17.584535 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:08:17.594846 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:08:17.605408 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:08:17.641654 kernel: BTRFS info (device dm-0): first mount of filesystem b3d3c5e7-c505-4391-bb7a-de2a572c0855
Feb 13 15:08:17.641739 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:08:17.641779 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:08:17.644651 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:08:17.644714 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:08:17.754232 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:08:17.777661 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:08:17.782240 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:08:17.796521 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:08:17.802518 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:08:17.833215 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:17.833374 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:08:17.833413 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:08:17.842344 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:08:17.865404 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:08:17.869908 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:17.882258 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:08:17.896620 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:08:18.026148 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:08:18.053626 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:08:18.115602 systemd-networkd[1118]: lo: Link UP
Feb 13 15:08:18.115627 systemd-networkd[1118]: lo: Gained carrier
Feb 13 15:08:18.119841 systemd-networkd[1118]: Enumeration completed
Feb 13 15:08:18.121771 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:08:18.121818 systemd-networkd[1118]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:08:18.121825 systemd-networkd[1118]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:08:18.130502 systemd[1]: Reached target network.target - Network.
Feb 13 15:08:18.133504 systemd-networkd[1118]: eth0: Link UP
Feb 13 15:08:18.133513 systemd-networkd[1118]: eth0: Gained carrier
Feb 13 15:08:18.133533 systemd-networkd[1118]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:08:18.174303 systemd-networkd[1118]: eth0: DHCPv4 address 172.31.30.163/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:08:18.304684 ignition[1027]: Ignition 2.20.0
Feb 13 15:08:18.304706 ignition[1027]: Stage: fetch-offline
Feb 13 15:08:18.305164 ignition[1027]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:18.305213 ignition[1027]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:18.306030 ignition[1027]: Ignition finished successfully
Feb 13 15:08:18.314849 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:08:18.330974 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:08:18.357398 ignition[1128]: Ignition 2.20.0
Feb 13 15:08:18.357924 ignition[1128]: Stage: fetch
Feb 13 15:08:18.358610 ignition[1128]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:18.358681 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:18.358972 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:18.372721 ignition[1128]: PUT result: OK
Feb 13 15:08:18.375949 ignition[1128]: parsed url from cmdline: ""
Feb 13 15:08:18.375967 ignition[1128]: no config URL provided
Feb 13 15:08:18.375986 ignition[1128]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:08:18.376013 ignition[1128]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:08:18.376049 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:18.378038 ignition[1128]: PUT result: OK
Feb 13 15:08:18.378126 ignition[1128]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:08:18.381425 ignition[1128]: GET result: OK
Feb 13 15:08:18.381687 ignition[1128]: parsing config with SHA512: 3a46c229acf3f85d5abee321a0f52cbdb2000071aafba909d61a49c174e5f430416b804293a9789e3406deee1a3462194abe5f831b3622851eade8d43b777b5b
Feb 13 15:08:18.403304 unknown[1128]: fetched base config from "system"
Feb 13 15:08:18.404069 ignition[1128]: fetch: fetch complete
Feb 13 15:08:18.403326 unknown[1128]: fetched base config from "system"
Feb 13 15:08:18.404081 ignition[1128]: fetch: fetch passed
Feb 13 15:08:18.403340 unknown[1128]: fetched user config from "aws"
Feb 13 15:08:18.404176 ignition[1128]: Ignition finished successfully
Feb 13 15:08:18.417256 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:08:18.427490 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:08:18.462352 ignition[1134]: Ignition 2.20.0
Feb 13 15:08:18.462381 ignition[1134]: Stage: kargs
Feb 13 15:08:18.463731 ignition[1134]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:18.463768 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:18.463937 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:18.466011 ignition[1134]: PUT result: OK
Feb 13 15:08:18.475738 ignition[1134]: kargs: kargs passed
Feb 13 15:08:18.475865 ignition[1134]: Ignition finished successfully
Feb 13 15:08:18.481516 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:08:18.493545 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:08:18.519719 ignition[1140]: Ignition 2.20.0
Feb 13 15:08:18.519750 ignition[1140]: Stage: disks
Feb 13 15:08:18.520928 ignition[1140]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:18.520956 ignition[1140]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:18.521122 ignition[1140]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:18.522918 ignition[1140]: PUT result: OK
Feb 13 15:08:18.533155 ignition[1140]: disks: disks passed
Feb 13 15:08:18.533344 ignition[1140]: Ignition finished successfully
Feb 13 15:08:18.537477 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:08:18.542380 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:08:18.546657 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:08:18.551431 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:08:18.553392 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:08:18.557259 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:08:18.572361 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:08:18.612135 systemd-fsck[1148]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:08:18.621663 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:08:18.647372 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:08:18.738235 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f78dcc36-7881-4d16-ad8b-28e23dfbdad0 r/w with ordered data mode. Quota mode: none.
Feb 13 15:08:18.739317 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:08:18.742306 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:08:18.759461 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:08:18.768517 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:08:18.774165 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:08:18.774302 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:08:18.774433 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:08:18.799330 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1167)
Feb 13 15:08:18.808744 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:18.808827 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:08:18.808856 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:08:18.808690 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:08:18.821928 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:08:18.825988 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:08:18.834805 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:08:19.151711 initrd-setup-root[1191]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:08:19.163743 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:08:19.174997 initrd-setup-root[1205]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:08:19.185365 initrd-setup-root[1212]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:08:19.426262 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:08:19.436447 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:08:19.446586 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:08:19.467321 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:19.504745 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:08:19.513804 ignition[1280]: INFO : Ignition 2.20.0
Feb 13 15:08:19.513804 ignition[1280]: INFO : Stage: mount
Feb 13 15:08:19.517324 ignition[1280]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:19.517324 ignition[1280]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:19.517324 ignition[1280]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:19.525611 ignition[1280]: INFO : PUT result: OK
Feb 13 15:08:19.528756 ignition[1280]: INFO : mount: mount passed
Feb 13 15:08:19.531808 ignition[1280]: INFO : Ignition finished successfully
Feb 13 15:08:19.533684 systemd-networkd[1118]: eth0: Gained IPv6LL
Feb 13 15:08:19.534146 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:08:19.552028 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:08:19.639635 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:08:19.647850 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:08:19.682250 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1291)
Feb 13 15:08:19.685903 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:19.686023 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:08:19.687322 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:08:19.693269 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:08:19.697285 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:08:19.736126 ignition[1308]: INFO : Ignition 2.20.0
Feb 13 15:08:19.739127 ignition[1308]: INFO : Stage: files
Feb 13 15:08:19.739127 ignition[1308]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:19.739127 ignition[1308]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:19.739127 ignition[1308]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:19.748509 ignition[1308]: INFO : PUT result: OK
Feb 13 15:08:19.752472 ignition[1308]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:08:19.755527 ignition[1308]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:08:19.755527 ignition[1308]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:08:19.789868 ignition[1308]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:08:19.795384 ignition[1308]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:08:19.799370 unknown[1308]: wrote ssh authorized keys file for user: core
Feb 13 15:08:19.801977 ignition[1308]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:08:19.807353 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:08:19.807353 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:08:19.911250 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:08:20.061457 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:08:20.065444 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:08:20.065444 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:08:20.579269 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:08:20.914314 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:08:20.914314 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:08:20.922521 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:08:20.922521 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:08:20.922521 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:08:20.922521 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:08:20.922521 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:08:20.922521 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:08:20.922521 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:08:20.945570 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:08:20.945570 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:08:20.952714 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:08:20.952714 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:08:20.952714 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:08:20.966445 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 15:08:21.401426 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:08:21.838982 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:08:21.838982 ignition[1308]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:08:21.845622 ignition[1308]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:08:21.845622 ignition[1308]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:08:21.845622 ignition[1308]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:08:21.845622 ignition[1308]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:08:21.845622 ignition[1308]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:08:21.845622 ignition[1308]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:08:21.845622 ignition[1308]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:08:21.845622 ignition[1308]: INFO : files: files passed
Feb 13 15:08:21.845622 ignition[1308]: INFO : Ignition finished successfully
Feb 13 15:08:21.872619 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:08:21.882520 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:08:21.897533 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:08:21.904534 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:08:21.906319 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:08:21.925061 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:08:21.925061 initrd-setup-root-after-ignition[1337]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:08:21.932400 initrd-setup-root-after-ignition[1341]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:08:21.940381 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:08:21.943618 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:08:21.955528 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:08:22.018560 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:08:22.019002 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:08:22.026792 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:08:22.028941 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:08:22.031513 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:08:22.047521 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:08:22.082337 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:08:22.102662 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:08:22.126345 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:08:22.126745 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:08:22.128367 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:08:22.128968 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:08:22.129277 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:08:22.130458 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:08:22.131163 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:08:22.132680 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:08:22.133409 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:08:22.134095 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:08:22.135592 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:08:22.136300 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:08:22.137043 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:08:22.137775 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:08:22.138474 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:08:22.139138 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:08:22.139446 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:08:22.140711 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:08:22.142180 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:08:22.142829 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:08:22.179427 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:08:22.184342 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:08:22.184730 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:08:22.196970 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:08:22.197425 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:08:22.212551 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:08:22.212825 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:08:22.246040 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:08:22.252184 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:08:22.252561 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:08:22.261377 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:08:22.276434 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:08:22.276808 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:08:22.294278 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:08:22.298874 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:08:22.308880 ignition[1361]: INFO : Ignition 2.20.0
Feb 13 15:08:22.308880 ignition[1361]: INFO : Stage: umount
Feb 13 15:08:22.308880 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:22.308880 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:22.319142 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:22.319142 ignition[1361]: INFO : PUT result: OK
Feb 13 15:08:22.329051 ignition[1361]: INFO : umount: umount passed
Feb 13 15:08:22.329051 ignition[1361]: INFO : Ignition finished successfully
Feb 13 15:08:22.336587 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:08:22.336816 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:08:22.342420 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:08:22.342711 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:08:22.356134 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:08:22.356499 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:08:22.370802 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:08:22.370973 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:08:22.377481 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:08:22.377598 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:08:22.381561 systemd[1]: Stopped target network.target - Network.
Feb 13 15:08:22.385480 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:08:22.387705 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:08:22.402305 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:08:22.404003 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:08:22.408501 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:08:22.410939 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:08:22.413351 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:08:22.421639 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:08:22.421740 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:08:22.424588 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:08:22.424677 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:08:22.430235 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:08:22.430392 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:08:22.432806 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:08:22.432913 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:08:22.435149 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:08:22.437341 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:08:22.452030 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:08:22.453919 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:08:22.454136 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:08:22.461565 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:08:22.462082 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:08:22.470798 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 15:08:22.471504 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:08:22.475900 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:08:22.484499 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 15:08:22.488062 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:08:22.489521 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:08:22.499016 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:08:22.499672 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:08:22.515498 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:08:22.519323 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:08:22.519462 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:08:22.522400 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:08:22.522528 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:08:22.525608 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:08:22.525729 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:08:22.545670 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:08:22.545806 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:08:22.553568 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:08:22.560660 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 15:08:22.560827 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:08:22.588688 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:08:22.590929 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:08:22.597814 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:08:22.598032 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:08:22.600664 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:08:22.602159 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:08:22.607754 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:08:22.607915 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:08:22.618600 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:08:22.618732 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:08:22.621385 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:08:22.621502 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:08:22.637290 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:08:22.644692 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:08:22.644989 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:08:22.651578 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:08:22.651707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:08:22.665776 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 15:08:22.665951 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:08:22.677876 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:08:22.678157 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:08:22.696245 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:08:22.696770 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:08:22.706731 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:08:22.718792 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:08:22.744430 systemd[1]: Switching root.
Feb 13 15:08:22.786220 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:08:22.786328 systemd-journald[252]: Journal stopped
Feb 13 15:08:25.114437 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:08:25.114598 kernel: SELinux: policy capability open_perms=1
Feb 13 15:08:25.114655 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:08:25.114703 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:08:25.114735 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:08:25.114764 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:08:25.114796 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:08:25.114825 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:08:25.114856 kernel: audit: type=1403 audit(1739459303.067:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:08:25.114898 systemd[1]: Successfully loaded SELinux policy in 52.120ms.
Feb 13 15:08:25.114949 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 27.750ms.
Feb 13 15:08:25.114988 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:08:25.115021 systemd[1]: Detected virtualization amazon.
Feb 13 15:08:25.115052 systemd[1]: Detected architecture arm64.
Feb 13 15:08:25.115083 systemd[1]: Detected first boot.
Feb 13 15:08:25.115124 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:08:25.115155 zram_generator::config[1413]: No configuration found.
Feb 13 15:08:25.135050 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 15:08:25.135135 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:08:25.135792 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 15:08:25.135891 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:08:25.135926 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:08:25.135960 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:08:25.135991 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:08:25.136026 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:08:25.136060 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:08:25.144152 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:08:25.144421 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:08:25.145019 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:08:25.145079 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:08:25.145111 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:08:25.145146 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:08:25.149168 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:08:25.160790 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:08:25.160848 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:08:25.160880 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:08:25.160911 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:08:25.160955 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:08:25.160985 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:08:25.161014 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:08:25.161046 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:08:25.161078 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:08:25.161114 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:08:25.161148 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:08:25.161182 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:08:25.161277 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:08:25.161313 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:08:25.161344 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:08:25.161375 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:08:25.161405 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 15:08:25.161439 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:08:25.161471 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:08:25.161504 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:08:25.161537 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:08:25.161575 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:08:25.161606 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:08:25.161638 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:08:25.161668 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:08:25.161705 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:08:25.161737 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:08:25.161767 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:08:25.161797 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:08:25.161831 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:08:25.161863 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:08:25.161896 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:08:25.161928 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:08:25.161961 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:08:25.161995 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:08:25.162031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:08:25.162062 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:08:25.162093 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:08:25.162133 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:08:25.162165 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:08:25.169401 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:08:25.169516 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:08:25.169560 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:08:25.169594 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:08:25.169824 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:08:25.169884 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:08:25.169943 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:08:25.169980 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:08:25.170011 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 15:08:25.170049 kernel: ACPI: bus type drm_connector registered
Feb 13 15:08:25.170082 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:08:25.170122 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:08:25.170157 systemd[1]: Stopped verity-setup.service.
Feb 13 15:08:25.170336 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:08:25.170394 kernel: fuse: init (API version 7.39)
Feb 13 15:08:25.170428 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:08:25.170461 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:08:25.170495 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:08:25.170528 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:08:25.170560 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:08:25.170600 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:08:25.170640 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:08:25.170674 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:08:25.170710 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:08:25.170740 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:08:25.170780 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:08:25.170813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:08:25.170843 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:08:25.170872 kernel: loop: module loaded
Feb 13 15:08:25.170901 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:08:25.170935 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:08:25.170970 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:08:25.171004 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:08:25.171035 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:08:25.171071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:08:25.171104 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:08:25.171138 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:08:25.171168 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:08:25.173930 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:08:25.174045 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:08:25.174095 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 15:08:25.174131 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:08:25.174167 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:08:25.175275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:08:25.183906 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:08:25.183951 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:08:25.184057 systemd-journald[1489]: Collecting audit messages is disabled.
Feb 13 15:08:25.184132 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:08:25.184170 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:08:25.188303 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:08:25.188430 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:08:25.188472 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:08:25.188503 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:08:25.188553 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:08:25.188590 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:08:25.188624 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:08:25.188654 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:08:25.188686 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:08:25.188721 systemd-journald[1489]: Journal started
Feb 13 15:08:25.188773 systemd-journald[1489]: Runtime Journal (/run/log/journal/ec2303417a61a58d98ffc2deed5d45a4) is 8M, max 75.3M, 67.3M free.
Feb 13 15:08:25.207309 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 15:08:24.318855 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:08:25.214469 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:08:24.334601 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 15:08:24.335668 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:08:25.292926 kernel: loop0: detected capacity change from 0 to 123192
Feb 13 15:08:25.293389 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:08:25.314555 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:08:25.329967 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:08:25.346668 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 15:08:25.374888 systemd-journald[1489]: Time spent on flushing to /var/log/journal/ec2303417a61a58d98ffc2deed5d45a4 is 130.454ms for 926 entries.
Feb 13 15:08:25.374888 systemd-journald[1489]: System Journal (/var/log/journal/ec2303417a61a58d98ffc2deed5d45a4) is 8M, max 195.6M, 187.6M free.
Feb 13 15:08:25.529909 systemd-journald[1489]: Received client request to flush runtime journal.
Feb 13 15:08:25.530035 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:08:25.530091 kernel: loop1: detected capacity change from 0 to 113512
Feb 13 15:08:25.376911 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:08:25.460292 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:08:25.470617 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:08:25.496841 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:08:25.507608 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:08:25.538236 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:08:25.561682 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:08:25.566273 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 15:08:25.569243 kernel: loop2: detected capacity change from 0 to 194096
Feb 13 15:08:25.574877 udevadm[1557]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:08:25.620249 systemd-tmpfiles[1561]: ACLs are not supported, ignoring.
Feb 13 15:08:25.620283 systemd-tmpfiles[1561]: ACLs are not supported, ignoring.
Feb 13 15:08:25.643987 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:08:25.776257 kernel: loop3: detected capacity change from 0 to 53784
Feb 13 15:08:25.841302 kernel: loop4: detected capacity change from 0 to 123192
Feb 13 15:08:25.873343 kernel: loop5: detected capacity change from 0 to 113512
Feb 13 15:08:25.905520 kernel: loop6: detected capacity change from 0 to 194096
Feb 13 15:08:25.943366 kernel: loop7: detected capacity change from 0 to 53784
Feb 13 15:08:25.963724 (sd-merge)[1569]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 15:08:25.965554 (sd-merge)[1569]: Merged extensions into '/usr'.
Feb 13 15:08:25.980681 systemd[1]: Reload requested from client PID 1522 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:08:25.980707 systemd[1]: Reloading...
Feb 13 15:08:26.179879 ldconfig[1518]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:08:26.265238 zram_generator::config[1600]: No configuration found.
Feb 13 15:08:26.577536 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:08:26.738822 systemd[1]: Reloading finished in 756 ms.
Feb 13 15:08:26.767653 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:08:26.770670 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:08:26.774130 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:08:26.789794 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:08:26.795591 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:08:26.803585 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:08:26.833249 systemd[1]: Reload requested from client PID 1650 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:08:26.833281 systemd[1]: Reloading...
Feb 13 15:08:26.894098 systemd-tmpfiles[1651]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:08:26.897179 systemd-udevd[1652]: Using default interface naming scheme 'v255'.
Feb 13 15:08:26.898834 systemd-tmpfiles[1651]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:08:26.903042 systemd-tmpfiles[1651]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:08:26.903870 systemd-tmpfiles[1651]: ACLs are not supported, ignoring.
Feb 13 15:08:26.904034 systemd-tmpfiles[1651]: ACLs are not supported, ignoring.
Feb 13 15:08:26.912695 systemd-tmpfiles[1651]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:08:26.912721 systemd-tmpfiles[1651]: Skipping /boot
Feb 13 15:08:26.984626 systemd-tmpfiles[1651]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:08:26.984654 systemd-tmpfiles[1651]: Skipping /boot
Feb 13 15:08:27.095232 zram_generator::config[1687]: No configuration found.
Feb 13 15:08:27.307323 (udev-worker)[1684]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:08:27.489125 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:08:27.555244 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1716)
Feb 13 15:08:27.690919 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:08:27.692552 systemd[1]: Reloading finished in 858 ms.
Feb 13 15:08:27.723739 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:08:27.727456 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:08:27.814322 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:08:27.832114 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:08:27.841418 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:08:27.851478 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:08:27.860105 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:08:27.868909 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:08:27.876970 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:08:27.919174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:08:27.938939 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:08:27.949407 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:08:27.969045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:08:27.973537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:08:27.973976 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:08:27.981345 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:08:27.986865 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:08:27.988708 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:08:28.079725 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:08:28.083265 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:08:28.102708 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:08:28.103277 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:08:28.119925 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:08:28.122566 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:08:28.156407 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:08:28.172147 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:08:28.177290 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:08:28.181866 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:08:28.192704 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:08:28.198892 augenrules[1887]: No rules
Feb 13 15:08:28.202578 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:08:28.210942 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:08:28.212965 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:08:28.217682 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:08:28.220408 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:08:28.220513 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:08:28.220587 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:08:28.237282 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:08:28.247752 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:08:28.250310 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:08:28.251982 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:08:28.254562 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:08:28.257317 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:08:28.257795 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:08:28.261025 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:08:28.262459 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:08:28.270869 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:08:28.274068 lvm[1885]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:08:28.293308 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:08:28.303352 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:08:28.311144 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:08:28.343247 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:08:28.347082 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:08:28.359951 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:08:28.382247 lvm[1907]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:08:28.385042 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:08:28.452784 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:08:28.538942 systemd-networkd[1850]: lo: Link UP Feb 13 15:08:28.538963 systemd-networkd[1850]: lo: Gained carrier Feb 13 15:08:28.542242 systemd-networkd[1850]: Enumeration completed Feb 13 15:08:28.542460 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:08:28.544449 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:08:28.544457 systemd-networkd[1850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:08:28.546959 systemd-networkd[1850]: eth0: Link UP Feb 13 15:08:28.547357 systemd-networkd[1850]: eth0: Gained carrier Feb 13 15:08:28.547399 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:08:28.548544 systemd-resolved[1853]: Positive Trust Anchors: Feb 13 15:08:28.548580 systemd-resolved[1853]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:08:28.548645 systemd-resolved[1853]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:08:28.556559 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Feb 13 15:08:28.561343 systemd-networkd[1850]: eth0: DHCPv4 address 172.31.30.163/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:08:28.563363 systemd-resolved[1853]: Defaulting to hostname 'linux'. Feb 13 15:08:28.567547 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:08:28.570104 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:08:28.575516 systemd[1]: Reached target network.target - Network. Feb 13 15:08:28.577395 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:08:28.581394 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:08:28.583513 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:08:28.585872 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:08:28.589688 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:08:28.591963 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:08:28.594631 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:08:28.597009 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:08:28.597063 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:08:28.598891 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:08:28.602685 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:08:28.607883 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:08:28.615503 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Feb 13 15:08:28.618444 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:08:28.620922 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:08:28.627467 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:08:28.630405 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:08:28.634774 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:08:28.639231 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:08:28.643004 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:08:28.645471 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:08:28.648291 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:08:28.648377 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:08:28.659395 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:08:28.665609 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:08:28.674608 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:08:28.686089 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:08:28.693281 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:08:28.693694 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:08:28.700767 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:08:28.719909 jq[1925]: false Feb 13 15:08:28.715642 systemd[1]: Started ntpd.service - Network Time Service. 
Feb 13 15:08:28.734780 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:08:28.751733 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:08:28.758872 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:08:28.767555 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:08:28.778168 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:08:28.783425 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:08:28.784668 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:08:28.788691 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:08:28.793483 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:08:28.801619 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:08:28.803295 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:08:28.894138 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:08:28.896273 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:08:28.920019 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 15:08:28.916115 dbus-daemon[1924]: [system] SELinux support is enabled Feb 13 15:08:28.929162 extend-filesystems[1926]: Found loop4 Feb 13 15:08:28.929162 extend-filesystems[1926]: Found loop5 Feb 13 15:08:28.929162 extend-filesystems[1926]: Found loop6 Feb 13 15:08:28.929162 extend-filesystems[1926]: Found loop7 Feb 13 15:08:28.929162 extend-filesystems[1926]: Found nvme0n1 Feb 13 15:08:28.929162 extend-filesystems[1926]: Found nvme0n1p1 Feb 13 15:08:28.929162 extend-filesystems[1926]: Found nvme0n1p2 Feb 13 15:08:28.929162 extend-filesystems[1926]: Found nvme0n1p3 Feb 13 15:08:28.929162 extend-filesystems[1926]: Found usr Feb 13 15:08:28.929162 extend-filesystems[1926]: Found nvme0n1p4 Feb 13 15:08:28.929162 extend-filesystems[1926]: Found nvme0n1p6 Feb 13 15:08:28.929162 extend-filesystems[1926]: Found nvme0n1p7 Feb 13 15:08:28.929162 extend-filesystems[1926]: Found nvme0n1p9 Feb 13 15:08:28.929162 extend-filesystems[1926]: Checking size of /dev/nvme0n1p9 Feb 13 15:08:29.010145 tar[1940]: linux-arm64/helm Feb 13 15:08:28.931316 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:08:28.992210 dbus-daemon[1924]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1850 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 15:08:29.014935 update_engine[1937]: I20250213 15:08:28.994081 1937 main.cc:92] Flatcar Update Engine starting Feb 13 15:08:28.931376 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Feb 13 15:08:28.936036 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:08:29.023451 jq[1938]: true Feb 13 15:08:28.936089 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:08:29.004010 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:08:29.018410 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 15:08:29.021425 (ntainerd)[1955]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:08:29.041988 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:08:29.060417 extend-filesystems[1926]: Resized partition /dev/nvme0n1p9 Feb 13 15:08:29.068343 update_engine[1937]: I20250213 15:08:29.046288 1937 update_check_scheduler.cc:74] Next update check in 11m7s Feb 13 15:08:29.064818 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:08:29.081017 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:08:29.081538 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:08:29.087903 extend-filesystems[1972]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:08:29.117236 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 15:08:29.113053 systemd[1]: Finished setup-oem.service - Setup OEM. 
Feb 13 15:08:29.147608 jq[1966]: true Feb 13 15:08:29.170655 ntpd[1928]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:12 UTC 2025 (1): Starting Feb 13 15:08:29.174853 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:12 UTC 2025 (1): Starting Feb 13 15:08:29.174853 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:08:29.174853 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: ---------------------------------------------------- Feb 13 15:08:29.174853 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:08:29.174853 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:08:29.174853 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: corporation. Support and training for ntp-4 are Feb 13 15:08:29.174853 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: available at https://www.nwtime.org/support Feb 13 15:08:29.174853 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: ---------------------------------------------------- Feb 13 15:08:29.170716 ntpd[1928]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:08:29.170736 ntpd[1928]: ---------------------------------------------------- Feb 13 15:08:29.170756 ntpd[1928]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:08:29.170774 ntpd[1928]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:08:29.170791 ntpd[1928]: corporation. 
Support and training for ntp-4 are Feb 13 15:08:29.170809 ntpd[1928]: available at https://www.nwtime.org/support Feb 13 15:08:29.170827 ntpd[1928]: ---------------------------------------------------- Feb 13 15:08:29.182437 ntpd[1928]: proto: precision = 0.096 usec (-23) Feb 13 15:08:29.192863 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: proto: precision = 0.096 usec (-23) Feb 13 15:08:29.192863 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: basedate set to 2025-02-01 Feb 13 15:08:29.192863 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: gps base set to 2025-02-02 (week 2352) Feb 13 15:08:29.187669 ntpd[1928]: basedate set to 2025-02-01 Feb 13 15:08:29.187703 ntpd[1928]: gps base set to 2025-02-02 (week 2352) Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetch successful Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetch successful Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetch successful Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetch successful Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetch 
failed with 404: resource not found Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetch successful Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetch successful Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetch successful Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetch successful Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 15:08:29.206645 coreos-metadata[1923]: Feb 13 15:08:29.205 INFO Fetch successful Feb 13 15:08:29.224555 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:08:29.224555 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:08:29.224555 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:08:29.224555 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: Listen normally on 3 eth0 172.31.30.163:123 Feb 13 15:08:29.224555 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: Listen normally on 4 lo [::1]:123 Feb 13 15:08:29.224555 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: bind(21) AF_INET6 fe80::42a:deff:fe2e:b103%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:08:29.224555 ntpd[1928]: 13 Feb 15:08:29 
ntpd[1928]: unable to create socket on eth0 (5) for fe80::42a:deff:fe2e:b103%2#123 Feb 13 15:08:29.224555 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: failed to init interface for address fe80::42a:deff:fe2e:b103%2 Feb 13 15:08:29.224555 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: Listening on routing socket on fd #21 for interface updates Feb 13 15:08:29.208466 ntpd[1928]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:08:29.208561 ntpd[1928]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:08:29.208858 ntpd[1928]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:08:29.208923 ntpd[1928]: Listen normally on 3 eth0 172.31.30.163:123 Feb 13 15:08:29.208990 ntpd[1928]: Listen normally on 4 lo [::1]:123 Feb 13 15:08:29.209063 ntpd[1928]: bind(21) AF_INET6 fe80::42a:deff:fe2e:b103%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:08:29.209100 ntpd[1928]: unable to create socket on eth0 (5) for fe80::42a:deff:fe2e:b103%2#123 Feb 13 15:08:29.209158 ntpd[1928]: failed to init interface for address fe80::42a:deff:fe2e:b103%2 Feb 13 15:08:29.221311 ntpd[1928]: Listening on routing socket on fd #21 for interface updates Feb 13 15:08:29.235399 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 15:08:29.259992 extend-filesystems[1972]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 15:08:29.259992 extend-filesystems[1972]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:08:29.259992 extend-filesystems[1972]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Feb 13 15:08:29.260773 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:08:29.270535 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:08:29.270535 ntpd[1928]: 13 Feb 15:08:29 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:08:29.270626 extend-filesystems[1926]: Resized filesystem in /dev/nvme0n1p9 Feb 13 15:08:29.260848 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:08:29.281413 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:08:29.281871 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:08:29.350328 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:08:29.352980 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:08:29.421236 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1685) Feb 13 15:08:29.464436 bash[2017]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:08:29.477759 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:08:29.490559 systemd[1]: Starting sshkeys.service... Feb 13 15:08:29.505637 systemd-logind[1936]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:08:29.505696 systemd-logind[1936]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 15:08:29.506098 systemd-logind[1936]: New seat seat0. Feb 13 15:08:29.507387 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:08:29.658076 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:08:29.668696 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Feb 13 15:08:29.710029 locksmithd[1971]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:08:29.763231 containerd[1955]: time="2025-02-13T15:08:29.761728020Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:08:30.021315 containerd[1955]: time="2025-02-13T15:08:30.021116542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:30.034484 coreos-metadata[2068]: Feb 13 15:08:30.034 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:08:30.037221 coreos-metadata[2068]: Feb 13 15:08:30.035 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 15:08:30.039329 coreos-metadata[2068]: Feb 13 15:08:30.039 INFO Fetch successful Feb 13 15:08:30.039441 coreos-metadata[2068]: Feb 13 15:08:30.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 15:08:30.041577 coreos-metadata[2068]: Feb 13 15:08:30.041 INFO Fetch successful Feb 13 15:08:30.043066 containerd[1955]: time="2025-02-13T15:08:30.043000678Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:30.045112 unknown[2068]: wrote ssh authorized keys file for user: core Feb 13 15:08:30.049223 containerd[1955]: time="2025-02-13T15:08:30.045699742Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:08:30.049223 containerd[1955]: time="2025-02-13T15:08:30.045761494Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 15:08:30.049223 containerd[1955]: time="2025-02-13T15:08:30.046068394Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:08:30.049223 containerd[1955]: time="2025-02-13T15:08:30.046102582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:30.049223 containerd[1955]: time="2025-02-13T15:08:30.046245886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:30.049223 containerd[1955]: time="2025-02-13T15:08:30.046292746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:30.049223 containerd[1955]: time="2025-02-13T15:08:30.046724014Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:30.049223 containerd[1955]: time="2025-02-13T15:08:30.046768498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:30.049223 containerd[1955]: time="2025-02-13T15:08:30.046819714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:30.049223 containerd[1955]: time="2025-02-13T15:08:30.046849846Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:30.049223 containerd[1955]: time="2025-02-13T15:08:30.047102002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:08:30.058279 containerd[1955]: time="2025-02-13T15:08:30.054975646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:30.058279 containerd[1955]: time="2025-02-13T15:08:30.055343038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:30.058279 containerd[1955]: time="2025-02-13T15:08:30.055377394Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:08:30.058279 containerd[1955]: time="2025-02-13T15:08:30.055812106Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:08:30.058279 containerd[1955]: time="2025-02-13T15:08:30.055944106Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:08:30.059045 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 15:08:30.069940 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:08:30.073253 containerd[1955]: time="2025-02-13T15:08:30.072117082Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:08:30.073253 containerd[1955]: time="2025-02-13T15:08:30.072240082Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:08:30.073253 containerd[1955]: time="2025-02-13T15:08:30.072279958Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:08:30.073253 containerd[1955]: time="2025-02-13T15:08:30.072336634Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 15:08:30.073253 containerd[1955]: time="2025-02-13T15:08:30.072372490Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:08:30.073253 containerd[1955]: time="2025-02-13T15:08:30.072650254Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:08:30.073253 containerd[1955]: time="2025-02-13T15:08:30.073085074Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:08:30.079548 containerd[1955]: time="2025-02-13T15:08:30.079087054Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:08:30.079548 containerd[1955]: time="2025-02-13T15:08:30.079256338Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:08:30.079548 containerd[1955]: time="2025-02-13T15:08:30.079298494Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:08:30.079548 containerd[1955]: time="2025-02-13T15:08:30.079448722Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:08:30.083767 containerd[1955]: time="2025-02-13T15:08:30.082507582Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:08:30.083767 containerd[1955]: time="2025-02-13T15:08:30.082607266Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:08:30.083767 containerd[1955]: time="2025-02-13T15:08:30.082646098Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 13 15:08:30.083767 containerd[1955]: time="2025-02-13T15:08:30.082707634Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:08:30.083767 containerd[1955]: time="2025-02-13T15:08:30.082743046Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:08:30.083767 containerd[1955]: time="2025-02-13T15:08:30.082773622Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:08:30.083767 containerd[1955]: time="2025-02-13T15:08:30.082804366Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:08:30.083767 containerd[1955]: time="2025-02-13T15:08:30.082845670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:08:30.083767 containerd[1955]: time="2025-02-13T15:08:30.082877446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:08:30.083767 containerd[1955]: time="2025-02-13T15:08:30.082937134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:08:30.083767 containerd[1955]: time="2025-02-13T15:08:30.082974550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:08:30.083767 containerd[1955]: time="2025-02-13T15:08:30.083004622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:08:30.083767 containerd[1955]: time="2025-02-13T15:08:30.083034586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:08:30.083767 containerd[1955]: time="2025-02-13T15:08:30.083063758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1
Feb 13 15:08:30.084850 containerd[1955]: time="2025-02-13T15:08:30.083093746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:08:30.084850 containerd[1955]: time="2025-02-13T15:08:30.083124082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:08:30.084850 containerd[1955]: time="2025-02-13T15:08:30.083158594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:08:30.084850 containerd[1955]: time="2025-02-13T15:08:30.083244022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:08:30.084850 containerd[1955]: time="2025-02-13T15:08:30.083278270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:08:30.084850 containerd[1955]: time="2025-02-13T15:08:30.083335222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:08:30.084850 containerd[1955]: time="2025-02-13T15:08:30.083377162Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:08:30.084850 containerd[1955]: time="2025-02-13T15:08:30.083425822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:08:30.084850 containerd[1955]: time="2025-02-13T15:08:30.083459602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:08:30.084850 containerd[1955]: time="2025-02-13T15:08:30.083489614Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:08:30.084850 containerd[1955]: time="2025-02-13T15:08:30.083628094Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:08:30.084850 containerd[1955]: time="2025-02-13T15:08:30.083669218Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:08:30.084850 containerd[1955]: time="2025-02-13T15:08:30.083693494Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:08:30.084779 dbus-daemon[1924]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1964 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 15:08:30.085515 containerd[1955]: time="2025-02-13T15:08:30.083722186Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:08:30.085515 containerd[1955]: time="2025-02-13T15:08:30.083744686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:08:30.085515 containerd[1955]: time="2025-02-13T15:08:30.083784526Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:08:30.085515 containerd[1955]: time="2025-02-13T15:08:30.083810338Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:08:30.085515 containerd[1955]: time="2025-02-13T15:08:30.083848930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:08:30.096790 containerd[1955]: time="2025-02-13T15:08:30.089776570Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:08:30.096790 containerd[1955]: time="2025-02-13T15:08:30.089904166Z" level=info msg="Connect containerd service"
Feb 13 15:08:30.096790 containerd[1955]: time="2025-02-13T15:08:30.090006826Z" level=info msg="using legacy CRI server"
Feb 13 15:08:30.096790 containerd[1955]: time="2025-02-13T15:08:30.090026830Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:08:30.096790 containerd[1955]: time="2025-02-13T15:08:30.090310858Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:08:30.096790 containerd[1955]: time="2025-02-13T15:08:30.093094630Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:08:30.096790 containerd[1955]: time="2025-02-13T15:08:30.093861346Z" level=info msg="Start subscribing containerd event"
Feb 13 15:08:30.096790 containerd[1955]: time="2025-02-13T15:08:30.093950986Z" level=info msg="Start recovering state"
Feb 13 15:08:30.096790 containerd[1955]: time="2025-02-13T15:08:30.094085650Z" level=info msg="Start event monitor"
Feb 13 15:08:30.096790 containerd[1955]: time="2025-02-13T15:08:30.094113154Z" level=info msg="Start snapshots syncer"
Feb 13 15:08:30.096790 containerd[1955]: time="2025-02-13T15:08:30.094137742Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:08:30.096790 containerd[1955]: time="2025-02-13T15:08:30.094158514Z" level=info msg="Start streaming server"
Feb 13 15:08:30.099864 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 15:08:30.114037 update-ssh-keys[2116]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:08:30.114557 containerd[1955]: time="2025-02-13T15:08:30.113713750Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:08:30.114557 containerd[1955]: time="2025-02-13T15:08:30.113881666Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:08:30.114557 containerd[1955]: time="2025-02-13T15:08:30.114048430Z" level=info msg="containerd successfully booted in 0.363327s"
Feb 13 15:08:30.115490 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:08:30.122851 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 15:08:30.133997 systemd[1]: Finished sshkeys.service.
Feb 13 15:08:30.158945 polkitd[2118]: Started polkitd version 121
Feb 13 15:08:30.173520 ntpd[1928]: bind(24) AF_INET6 fe80::42a:deff:fe2e:b103%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:08:30.175460 ntpd[1928]: 13 Feb 15:08:30 ntpd[1928]: bind(24) AF_INET6 fe80::42a:deff:fe2e:b103%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:08:30.175460 ntpd[1928]: 13 Feb 15:08:30 ntpd[1928]: unable to create socket on eth0 (6) for fe80::42a:deff:fe2e:b103%2#123
Feb 13 15:08:30.175460 ntpd[1928]: 13 Feb 15:08:30 ntpd[1928]: failed to init interface for address fe80::42a:deff:fe2e:b103%2
Feb 13 15:08:30.173607 ntpd[1928]: unable to create socket on eth0 (6) for fe80::42a:deff:fe2e:b103%2#123
Feb 13 15:08:30.173638 ntpd[1928]: failed to init interface for address fe80::42a:deff:fe2e:b103%2
Feb 13 15:08:30.182168 polkitd[2118]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 15:08:30.189974 polkitd[2118]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 15:08:30.194489 polkitd[2118]: Finished loading, compiling and executing 2 rules
Feb 13 15:08:30.196406 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 15:08:30.196698 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 15:08:30.199991 polkitd[2118]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 15:08:30.237685 systemd-hostnamed[1964]: Hostname set to (transient)
Feb 13 15:08:30.237720 systemd-resolved[1853]: System hostname changed to 'ip-172-31-30-163'.
Feb 13 15:08:30.412407 systemd-networkd[1850]: eth0: Gained IPv6LL
Feb 13 15:08:30.422081 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:08:30.426579 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:08:30.441004 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 15:08:30.454874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:08:30.469655 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:08:30.593475 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:08:30.603032 amazon-ssm-agent[2130]: Initializing new seelog logger
Feb 13 15:08:30.605218 amazon-ssm-agent[2130]: New Seelog Logger Creation Complete
Feb 13 15:08:30.605218 amazon-ssm-agent[2130]: 2025/02/13 15:08:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:08:30.605218 amazon-ssm-agent[2130]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:08:30.608473 amazon-ssm-agent[2130]: 2025/02/13 15:08:30 processing appconfig overrides
Feb 13 15:08:30.609077 amazon-ssm-agent[2130]: 2025/02/13 15:08:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:08:30.609077 amazon-ssm-agent[2130]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:08:30.610159 amazon-ssm-agent[2130]: 2025/02/13 15:08:30 processing appconfig overrides
Feb 13 15:08:30.610484 amazon-ssm-agent[2130]: 2025/02/13 15:08:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:08:30.610484 amazon-ssm-agent[2130]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:08:30.612213 amazon-ssm-agent[2130]: 2025/02/13 15:08:30 processing appconfig overrides
Feb 13 15:08:30.612213 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO Proxy environment variables:
Feb 13 15:08:30.614984 amazon-ssm-agent[2130]: 2025/02/13 15:08:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:08:30.614984 amazon-ssm-agent[2130]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:08:30.615167 amazon-ssm-agent[2130]: 2025/02/13 15:08:30 processing appconfig overrides
Feb 13 15:08:30.711528 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO https_proxy:
Feb 13 15:08:30.811304 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO http_proxy:
Feb 13 15:08:30.912458 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO no_proxy:
Feb 13 15:08:31.016693 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 15:08:31.046898 tar[1940]: linux-arm64/LICENSE
Feb 13 15:08:31.046898 tar[1940]: linux-arm64/README.md
Feb 13 15:08:31.080340 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 15:08:31.115583 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO Checking if agent identity type EC2 can be assumed
Feb 13 15:08:31.216231 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO Agent will take identity from EC2
Feb 13 15:08:31.313262 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:08:31.413237 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:08:31.514219 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:08:31.570045 sshd_keygen[1952]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:08:31.613344 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 15:08:31.649984 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:08:31.665719 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:08:31.670763 systemd[1]: Started sshd@0-172.31.30.163:22-139.178.68.195:40114.service - OpenSSH per-connection server daemon (139.178.68.195:40114).
Feb 13 15:08:31.700457 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:08:31.702293 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:08:31.715307 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Feb 13 15:08:31.720697 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:08:31.766987 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:08:31.779944 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:08:31.795726 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:08:31.798733 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:08:31.816143 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 15:08:31.916353 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 15:08:31.944361 sshd[2161]: Accepted publickey for core from 139.178.68.195 port 40114 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:08:31.950882 sshd-session[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:08:31.968057 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:08:31.979820 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:08:32.005505 systemd-logind[1936]: New session 1 of user core.
Feb 13 15:08:32.019653 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO [Registrar] Starting registrar module
Feb 13 15:08:32.031087 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:08:32.053136 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:08:32.070978 (systemd)[2172]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:08:32.076738 systemd-logind[1936]: New session c1 of user core.
Feb 13 15:08:32.118299 amazon-ssm-agent[2130]: 2025-02-13 15:08:30 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 15:08:32.222467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:08:32.226240 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:08:32.246798 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:08:32.433315 systemd[2172]: Queued start job for default target default.target.
Feb 13 15:08:32.441887 systemd[2172]: Created slice app.slice - User Application Slice.
Feb 13 15:08:32.441937 systemd[2172]: Reached target paths.target - Paths.
Feb 13 15:08:32.442025 systemd[2172]: Reached target timers.target - Timers.
Feb 13 15:08:32.451622 systemd[2172]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:08:32.489105 systemd[2172]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:08:32.489613 systemd[2172]: Reached target sockets.target - Sockets.
Feb 13 15:08:32.489736 systemd[2172]: Reached target basic.target - Basic System.
Feb 13 15:08:32.489831 systemd[2172]: Reached target default.target - Main User Target.
Feb 13 15:08:32.489891 systemd[2172]: Startup finished in 392ms.
Feb 13 15:08:32.490269 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:08:32.502532 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:08:32.504994 systemd[1]: Startup finished in 1.214s (kernel) + 9.279s (initrd) + 9.486s (userspace) = 19.980s.
Feb 13 15:08:32.534471 amazon-ssm-agent[2130]: 2025-02-13 15:08:32 INFO [EC2Identity] EC2 registration was successful.
Feb 13 15:08:32.576562 amazon-ssm-agent[2130]: 2025-02-13 15:08:32 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 15:08:32.576562 amazon-ssm-agent[2130]: 2025-02-13 15:08:32 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 15:08:32.576562 amazon-ssm-agent[2130]: 2025-02-13 15:08:32 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 15:08:32.637945 amazon-ssm-agent[2130]: 2025-02-13 15:08:32 INFO [CredentialRefresher] Next credential rotation will be in 31.841638978666666 minutes
Feb 13 15:08:32.680896 systemd[1]: Started sshd@1-172.31.30.163:22-139.178.68.195:37402.service - OpenSSH per-connection server daemon (139.178.68.195:37402).
Feb 13 15:08:32.863820 sshd[2197]: Accepted publickey for core from 139.178.68.195 port 37402 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:08:32.866945 sshd-session[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:08:32.878284 systemd-logind[1936]: New session 2 of user core.
Feb 13 15:08:32.886593 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:08:33.014997 sshd[2199]: Connection closed by 139.178.68.195 port 37402
Feb 13 15:08:33.015496 sshd-session[2197]: pam_unix(sshd:session): session closed for user core
Feb 13 15:08:33.025238 systemd-logind[1936]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:08:33.026582 systemd[1]: sshd@1-172.31.30.163:22-139.178.68.195:37402.service: Deactivated successfully.
Feb 13 15:08:33.030761 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:08:33.055451 systemd-logind[1936]: Removed session 2.
Feb 13 15:08:33.063319 systemd[1]: Started sshd@2-172.31.30.163:22-139.178.68.195:37416.service - OpenSSH per-connection server daemon (139.178.68.195:37416).
Feb 13 15:08:33.171623 ntpd[1928]: Listen normally on 7 eth0 [fe80::42a:deff:fe2e:b103%2]:123
Feb 13 15:08:33.172971 ntpd[1928]: 13 Feb 15:08:33 ntpd[1928]: Listen normally on 7 eth0 [fe80::42a:deff:fe2e:b103%2]:123
Feb 13 15:08:33.250988 sshd[2205]: Accepted publickey for core from 139.178.68.195 port 37416 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:08:33.254046 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:08:33.265505 systemd-logind[1936]: New session 3 of user core.
Feb 13 15:08:33.271493 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:08:33.332089 kubelet[2183]: E0213 15:08:33.331994 2183 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:08:33.336871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:08:33.337249 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:08:33.339332 systemd[1]: kubelet.service: Consumed 1.338s CPU time, 242.5M memory peak.
Feb 13 15:08:33.391257 sshd[2209]: Connection closed by 139.178.68.195 port 37416
Feb 13 15:08:33.392035 sshd-session[2205]: pam_unix(sshd:session): session closed for user core
Feb 13 15:08:33.399719 systemd-logind[1936]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:08:33.399899 systemd[1]: sshd@2-172.31.30.163:22-139.178.68.195:37416.service: Deactivated successfully.
Feb 13 15:08:33.403441 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:08:33.406536 systemd-logind[1936]: Removed session 3.
Feb 13 15:08:33.432769 systemd[1]: Started sshd@3-172.31.30.163:22-139.178.68.195:37428.service - OpenSSH per-connection server daemon (139.178.68.195:37428).
Feb 13 15:08:33.604915 amazon-ssm-agent[2130]: 2025-02-13 15:08:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 15:08:33.629538 sshd[2216]: Accepted publickey for core from 139.178.68.195 port 37428 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:08:33.632606 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:08:33.642452 systemd-logind[1936]: New session 4 of user core.
Feb 13 15:08:33.654504 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:08:33.708132 amazon-ssm-agent[2130]: 2025-02-13 15:08:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2219) started
Feb 13 15:08:33.792803 sshd[2223]: Connection closed by 139.178.68.195 port 37428
Feb 13 15:08:33.794687 sshd-session[2216]: pam_unix(sshd:session): session closed for user core
Feb 13 15:08:33.803120 systemd[1]: sshd@3-172.31.30.163:22-139.178.68.195:37428.service: Deactivated successfully.
Feb 13 15:08:33.807869 amazon-ssm-agent[2130]: 2025-02-13 15:08:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 15:08:33.812650 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:08:33.818766 systemd-logind[1936]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:08:33.839840 systemd[1]: Started sshd@4-172.31.30.163:22-139.178.68.195:37430.service - OpenSSH per-connection server daemon (139.178.68.195:37430).
Feb 13 15:08:33.842438 systemd-logind[1936]: Removed session 4.
Feb 13 15:08:34.038700 sshd[2234]: Accepted publickey for core from 139.178.68.195 port 37430 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:08:34.041245 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:08:34.050791 systemd-logind[1936]: New session 5 of user core.
Feb 13 15:08:34.060522 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:08:34.177705 sudo[2238]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:08:34.178383 sudo[2238]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:08:34.202686 sudo[2238]: pam_unix(sudo:session): session closed for user root
Feb 13 15:08:34.226058 sshd[2237]: Connection closed by 139.178.68.195 port 37430
Feb 13 15:08:34.227421 sshd-session[2234]: pam_unix(sshd:session): session closed for user core
Feb 13 15:08:34.235990 systemd[1]: sshd@4-172.31.30.163:22-139.178.68.195:37430.service: Deactivated successfully.
Feb 13 15:08:34.241897 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:08:34.243640 systemd-logind[1936]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:08:34.245578 systemd-logind[1936]: Removed session 5.
Feb 13 15:08:34.274692 systemd[1]: Started sshd@5-172.31.30.163:22-139.178.68.195:37442.service - OpenSSH per-connection server daemon (139.178.68.195:37442).
Feb 13 15:08:34.454232 sshd[2244]: Accepted publickey for core from 139.178.68.195 port 37442 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:08:34.455973 sshd-session[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:08:34.464794 systemd-logind[1936]: New session 6 of user core.
Feb 13 15:08:34.471450 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:08:34.576287 sudo[2248]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:08:34.576930 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:08:34.585799 sudo[2248]: pam_unix(sudo:session): session closed for user root
Feb 13 15:08:34.597474 sudo[2247]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:08:34.598807 sudo[2247]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:08:34.619931 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:08:34.683084 augenrules[2270]: No rules
Feb 13 15:08:34.685651 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:08:34.686117 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:08:34.689641 sudo[2247]: pam_unix(sudo:session): session closed for user root
Feb 13 15:08:34.714117 sshd[2246]: Connection closed by 139.178.68.195 port 37442
Feb 13 15:08:34.713037 sshd-session[2244]: pam_unix(sshd:session): session closed for user core
Feb 13 15:08:34.719643 systemd-logind[1936]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:08:34.720760 systemd[1]: sshd@5-172.31.30.163:22-139.178.68.195:37442.service: Deactivated successfully.
Feb 13 15:08:34.724052 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:08:34.729067 systemd-logind[1936]: Removed session 6.
Feb 13 15:08:34.753895 systemd[1]: Started sshd@6-172.31.30.163:22-139.178.68.195:37452.service - OpenSSH per-connection server daemon (139.178.68.195:37452).
Feb 13 15:08:34.945726 sshd[2279]: Accepted publickey for core from 139.178.68.195 port 37452 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:08:34.948271 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:08:34.959300 systemd-logind[1936]: New session 7 of user core.
Feb 13 15:08:34.965486 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:08:35.069853 sudo[2282]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:08:35.071042 sudo[2282]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:08:35.641713 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:08:35.645508 (dockerd)[2300]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:08:35.996063 dockerd[2300]: time="2025-02-13T15:08:35.995877199Z" level=info msg="Starting up"
Feb 13 15:08:36.113582 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3669541286-merged.mount: Deactivated successfully.
Feb 13 15:08:36.196751 dockerd[2300]: time="2025-02-13T15:08:36.196618744Z" level=info msg="Loading containers: start."
Feb 13 15:08:36.450446 kernel: Initializing XFRM netlink socket
Feb 13 15:08:36.481906 (udev-worker)[2325]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:08:36.586043 systemd-networkd[1850]: docker0: Link UP
Feb 13 15:08:36.623635 dockerd[2300]: time="2025-02-13T15:08:36.623566254Z" level=info msg="Loading containers: done."
Feb 13 15:08:36.652778 dockerd[2300]: time="2025-02-13T15:08:36.652619011Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:08:36.652778 dockerd[2300]: time="2025-02-13T15:08:36.652771999Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Feb 13 15:08:36.653069 dockerd[2300]: time="2025-02-13T15:08:36.652993435Z" level=info msg="Daemon has completed initialization"
Feb 13 15:08:36.705975 dockerd[2300]: time="2025-02-13T15:08:36.705072259Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:08:36.706148 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:08:37.849077 containerd[1955]: time="2025-02-13T15:08:37.849001231Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\""
Feb 13 15:08:38.560266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount449010809.mount: Deactivated successfully.
Feb 13 15:08:41.692965 containerd[1955]: time="2025-02-13T15:08:41.692682224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:41.695099 containerd[1955]: time="2025-02-13T15:08:41.695005618Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865207"
Feb 13 15:08:41.696685 containerd[1955]: time="2025-02-13T15:08:41.696571015Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:41.703600 containerd[1955]: time="2025-02-13T15:08:41.703490929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:41.706237 containerd[1955]: time="2025-02-13T15:08:41.705882401Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 3.856811713s"
Feb 13 15:08:41.706237 containerd[1955]: time="2025-02-13T15:08:41.705964128Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\""
Feb 13 15:08:41.753036 containerd[1955]: time="2025-02-13T15:08:41.752916092Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 13 15:08:43.588138 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:08:43.600592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:08:43.934618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:08:43.944748 (kubelet)[2561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:08:44.043686 kubelet[2561]: E0213 15:08:44.043622 2561 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:08:44.053933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:08:44.054503 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:08:44.056305 systemd[1]: kubelet.service: Consumed 331ms CPU time, 96.6M memory peak.
Feb 13 15:08:44.882491 containerd[1955]: time="2025-02-13T15:08:44.882424141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:44.885228 containerd[1955]: time="2025-02-13T15:08:44.884398065Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898594"
Feb 13 15:08:44.886850 containerd[1955]: time="2025-02-13T15:08:44.886772745Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:44.891767 containerd[1955]: time="2025-02-13T15:08:44.891697494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:44.894323 containerd[1955]: time="2025-02-13T15:08:44.894274860Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 3.141006409s"
Feb 13 15:08:44.894473 containerd[1955]: time="2025-02-13T15:08:44.894443340Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\""
Feb 13 15:08:44.937905 containerd[1955]: time="2025-02-13T15:08:44.937849266Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 13 15:08:47.571290 containerd[1955]: time="2025-02-13T15:08:47.570077822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:47.575584 containerd[1955]: time="2025-02-13T15:08:47.575475493Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164934"
Feb 13 15:08:47.599316 containerd[1955]: time="2025-02-13T15:08:47.599221011Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:47.607989 containerd[1955]: time="2025-02-13T15:08:47.607890658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:47.610478 containerd[1955]: time="2025-02-13T15:08:47.610249446Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 2.672334777s"
Feb 13 15:08:47.610478 containerd[1955]: time="2025-02-13T15:08:47.610313302Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\""
Feb 13 15:08:47.653171 containerd[1955]: time="2025-02-13T15:08:47.653101957Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 15:08:49.109011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556956321.mount: Deactivated successfully.
Feb 13 15:08:49.601830 containerd[1955]: time="2025-02-13T15:08:49.601175479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:49.603257 containerd[1955]: time="2025-02-13T15:08:49.603138621Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 15:08:49.604595 containerd[1955]: time="2025-02-13T15:08:49.604494003Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:49.608632 containerd[1955]: time="2025-02-13T15:08:49.608508778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:49.610600 containerd[1955]: time="2025-02-13T15:08:49.610365821Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.957201243s" Feb 13 15:08:49.610600 containerd[1955]: time="2025-02-13T15:08:49.610432220Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 15:08:49.657778 containerd[1955]: time="2025-02-13T15:08:49.657512065Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:08:50.173857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount811924551.mount: Deactivated successfully. 
Feb 13 15:08:51.383467 containerd[1955]: time="2025-02-13T15:08:51.383365455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:51.385830 containerd[1955]: time="2025-02-13T15:08:51.385733227Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 15:08:51.386810 containerd[1955]: time="2025-02-13T15:08:51.386756759Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:51.396334 containerd[1955]: time="2025-02-13T15:08:51.396184571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:51.399089 containerd[1955]: time="2025-02-13T15:08:51.399010813Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.741231761s" Feb 13 15:08:51.399706 containerd[1955]: time="2025-02-13T15:08:51.399383370Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:08:51.443379 containerd[1955]: time="2025-02-13T15:08:51.442929423Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:08:51.924387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974193250.mount: Deactivated successfully. 
Feb 13 15:08:51.935200 containerd[1955]: time="2025-02-13T15:08:51.935105781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:51.936711 containerd[1955]: time="2025-02-13T15:08:51.936636983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 15:08:51.937699 containerd[1955]: time="2025-02-13T15:08:51.937611699Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:51.942032 containerd[1955]: time="2025-02-13T15:08:51.941940106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:51.944217 containerd[1955]: time="2025-02-13T15:08:51.943998480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 501.010191ms" Feb 13 15:08:51.944217 containerd[1955]: time="2025-02-13T15:08:51.944052297Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:08:51.986143 containerd[1955]: time="2025-02-13T15:08:51.986088209Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:08:52.645216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount620660628.mount: Deactivated successfully. Feb 13 15:08:54.304785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 15:08:54.320568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:08:54.618883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:08:54.634877 (kubelet)[2705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:08:54.715414 kubelet[2705]: E0213 15:08:54.715303 2705 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:08:54.720589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:08:54.721101 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:08:54.722144 systemd[1]: kubelet.service: Consumed 295ms CPU time, 95.1M memory peak. 
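The kubelet failures above (and the later ones at restart counter 3) all trace to the same root cause: `/var/lib/kubelet/config.yaml` does not exist yet, because `kubeadm init`/`kubeadm join` has not run and written it. A minimal sketch of that existence check — the helper name and the `root` parameter are my own, not part of kubelet or kubeadm; only the path comes from the log:

```python
from pathlib import Path

# The kubelet config file that kubeadm writes during `init`/`join`; until it
# exists, kubelet exits with status 1 and systemd keeps scheduling restarts,
# exactly as the journal entries above show.
KUBELET_CONFIG = "var/lib/kubelet/config.yaml"

def kubelet_config_present(root: str = "/") -> bool:
    """Return True once /var/lib/kubelet/config.yaml exists under `root`."""
    return (Path(root) / KUBELET_CONFIG).is_file()
```

On this node the check would return False until kubeadm runs, matching the "open /var/lib/kubelet/config.yaml: no such file or directory" error in the log.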
Feb 13 15:08:57.374675 containerd[1955]: time="2025-02-13T15:08:57.374580623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:57.377229 containerd[1955]: time="2025-02-13T15:08:57.377086853Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Feb 13 15:08:57.379944 containerd[1955]: time="2025-02-13T15:08:57.379778007Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:57.389031 containerd[1955]: time="2025-02-13T15:08:57.388934227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:57.391681 containerd[1955]: time="2025-02-13T15:08:57.391613099Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 5.405361015s" Feb 13 15:08:57.392038 containerd[1955]: time="2025-02-13T15:08:57.391875875Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 15:09:00.245975 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 15:09:04.971824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:09:04.980714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:09:05.325683 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
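Each completed pull above is logged with the image reference, its unpacked size, and the elapsed time. A minimal sketch for extracting those three fields from journal lines of this shape — the regex and helper are my own; the quotes are backslash-escaped exactly as journald renders the containerd messages:

```python
import re

# Matches containerd's "Pulled image" payload as journald renders it,
# with backslash-escaped quotes around image name and size.
PULLED_RE = re.compile(
    r'Pulled image \\"(?P<image>[^"]+)\\"'
    r'.*size \\"(?P<size>\d+)\\"'
    r' in (?P<dur>[\d.]+)(?P<unit>ms|s)'
)

def parse_pull(line):
    """Return (image, size_bytes, seconds) or None for non-pull lines."""
    m = PULLED_RE.search(line)
    if not m:
        return None
    secs = float(m.group("dur"))
    if m.group("unit") == "ms":
        secs /= 1000.0
    return m.group("image"), int(m.group("size")), secs

# The etcd pull from the log above, shortened to the matched fields.
sample = ('Pulled image \\"registry.k8s.io/etcd:3.5.12-0\\" with image id '
          '\\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\\", '
          'size \\"66189079\\" in 5.405361015s')
print(parse_pull(sample))  # → ('registry.k8s.io/etcd:3.5.12-0', 66189079, 5.405361015)
```

The `ms|s` alternation covers both the multi-second pulls and the sub-second pause-image pull ("501.010191ms") seen earlier.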
Feb 13 15:09:05.338408 (kubelet)[2786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:09:05.466605 kubelet[2786]: E0213 15:09:05.466466 2786 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:09:05.472834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:09:05.474100 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:09:05.476437 systemd[1]: kubelet.service: Consumed 331ms CPU time, 96.7M memory peak. Feb 13 15:09:08.232870 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:09:08.233915 systemd[1]: kubelet.service: Consumed 331ms CPU time, 96.7M memory peak. Feb 13 15:09:08.247689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:09:08.286115 systemd[1]: Reload requested from client PID 2800 ('systemctl') (unit session-7.scope)... Feb 13 15:09:08.286266 systemd[1]: Reloading... Feb 13 15:09:08.586361 zram_generator::config[2848]: No configuration found. Feb 13 15:09:08.853757 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:09:09.106620 systemd[1]: Reloading finished in 819 ms. Feb 13 15:09:09.214547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:09:09.226494 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:09:09.228630 systemd[1]: kubelet.service: Deactivated successfully. 
Feb 13 15:09:09.229153 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:09:09.229471 systemd[1]: kubelet.service: Consumed 216ms CPU time, 82M memory peak. Feb 13 15:09:09.238766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:09:09.566118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:09:09.584036 (kubelet)[2910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:09:09.680607 kubelet[2910]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:09:09.680607 kubelet[2910]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:09:09.680607 kubelet[2910]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:09:09.681509 kubelet[2910]: I0213 15:09:09.680757 2910 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:09:11.493284 kubelet[2910]: I0213 15:09:11.492681 2910 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:09:11.493284 kubelet[2910]: I0213 15:09:11.492730 2910 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:09:11.493284 kubelet[2910]: I0213 15:09:11.493073 2910 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:09:11.528694 kubelet[2910]: E0213 15:09:11.528633 2910 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.163:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:11.530626 kubelet[2910]: I0213 15:09:11.530089 2910 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:09:11.550595 kubelet[2910]: I0213 15:09:11.550537 2910 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:09:11.552899 kubelet[2910]: I0213 15:09:11.552810 2910 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:09:11.553197 kubelet[2910]: I0213 15:09:11.552890 2910 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:09:11.553389 kubelet[2910]: I0213 15:09:11.553233 2910 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
15:09:11.553389 kubelet[2910]: I0213 15:09:11.553258 2910 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:09:11.553534 kubelet[2910]: I0213 15:09:11.553506 2910 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:09:11.555085 kubelet[2910]: I0213 15:09:11.555029 2910 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:09:11.555085 kubelet[2910]: I0213 15:09:11.555076 2910 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:09:11.555271 kubelet[2910]: I0213 15:09:11.555154 2910 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:09:11.555271 kubelet[2910]: I0213 15:09:11.555258 2910 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:09:11.557281 kubelet[2910]: W0213 15:09:11.557010 2910 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-163&limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:11.557281 kubelet[2910]: E0213 15:09:11.557108 2910 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-163&limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:11.559222 kubelet[2910]: W0213 15:09:11.557711 2910 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:11.559222 kubelet[2910]: E0213 15:09:11.557801 2910 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.30.163:6443: connect: connection refused Feb 13 15:09:11.559222 kubelet[2910]: I0213 15:09:11.557993 2910 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:09:11.559222 kubelet[2910]: I0213 15:09:11.558359 2910 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:09:11.559222 kubelet[2910]: W0213 15:09:11.558449 2910 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:09:11.561709 kubelet[2910]: I0213 15:09:11.561670 2910 server.go:1264] "Started kubelet" Feb 13 15:09:11.565037 kubelet[2910]: I0213 15:09:11.564993 2910 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:09:11.575844 kubelet[2910]: I0213 15:09:11.575778 2910 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:09:11.577755 kubelet[2910]: I0213 15:09:11.577713 2910 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:09:11.578122 kubelet[2910]: I0213 15:09:11.578075 2910 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:09:11.579942 kubelet[2910]: I0213 15:09:11.579852 2910 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:09:11.580521 kubelet[2910]: I0213 15:09:11.580489 2910 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:09:11.585103 kubelet[2910]: E0213 15:09:11.585035 2910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-163?timeout=10s\": dial tcp 172.31.30.163:6443: connect: connection refused" interval="200ms" Feb 13 15:09:11.585564 kubelet[2910]: E0213 15:09:11.585509 2910 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:09:11.586073 kubelet[2910]: I0213 15:09:11.586033 2910 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:09:11.586446 kubelet[2910]: I0213 15:09:11.586412 2910 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:09:11.588139 kubelet[2910]: E0213 15:09:11.587806 2910 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.163:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.163:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-163.1823cd106f0fed28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-163,UID:ip-172-31-30-163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-163,},FirstTimestamp:2025-02-13 15:09:11.561628968 +0000 UTC m=+1.966219617,LastTimestamp:2025-02-13 15:09:11.561628968 +0000 UTC m=+1.966219617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-163,}" Feb 13 15:09:11.588954 kubelet[2910]: I0213 15:09:11.588908 2910 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:09:11.590952 kubelet[2910]: I0213 15:09:11.590875 2910 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:09:11.591651 kubelet[2910]: W0213 15:09:11.591567 2910 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 
15:09:11.591999 kubelet[2910]: E0213 15:09:11.591663 2910 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:11.593265 kubelet[2910]: I0213 15:09:11.592084 2910 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:09:11.617992 kubelet[2910]: I0213 15:09:11.617929 2910 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:09:11.620506 kubelet[2910]: I0213 15:09:11.620454 2910 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:09:11.620663 kubelet[2910]: I0213 15:09:11.620520 2910 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:09:11.620663 kubelet[2910]: I0213 15:09:11.620550 2910 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:09:11.620663 kubelet[2910]: E0213 15:09:11.620614 2910 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:09:11.623509 kubelet[2910]: W0213 15:09:11.623434 2910 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:11.623733 kubelet[2910]: E0213 15:09:11.623709 2910 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:11.643124 kubelet[2910]: I0213 15:09:11.643086 2910 cpu_manager.go:214] "Starting CPU 
manager" policy="none" Feb 13 15:09:11.643445 kubelet[2910]: I0213 15:09:11.643421 2910 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:09:11.643564 kubelet[2910]: I0213 15:09:11.643547 2910 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:09:11.649395 kubelet[2910]: I0213 15:09:11.649359 2910 policy_none.go:49] "None policy: Start" Feb 13 15:09:11.651246 kubelet[2910]: I0213 15:09:11.650691 2910 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:09:11.651246 kubelet[2910]: I0213 15:09:11.650741 2910 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:09:11.662703 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:09:11.680972 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:09:11.682314 kubelet[2910]: I0213 15:09:11.682249 2910 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-163" Feb 13 15:09:11.683139 kubelet[2910]: E0213 15:09:11.683069 2910 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.163:6443/api/v1/nodes\": dial tcp 172.31.30.163:6443: connect: connection refused" node="ip-172-31-30-163" Feb 13 15:09:11.688986 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:09:11.701906 kubelet[2910]: I0213 15:09:11.701861 2910 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:09:11.702418 kubelet[2910]: I0213 15:09:11.702162 2910 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:09:11.702418 kubelet[2910]: I0213 15:09:11.702383 2910 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:09:11.705756 kubelet[2910]: E0213 15:09:11.705700 2910 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-163\" not found" Feb 13 15:09:11.721372 kubelet[2910]: I0213 15:09:11.721293 2910 topology_manager.go:215] "Topology Admit Handler" podUID="2169666fa585e99b5b54a580a02a3196" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-163" Feb 13 15:09:11.725275 kubelet[2910]: I0213 15:09:11.724462 2910 topology_manager.go:215] "Topology Admit Handler" podUID="a0be49a6ef71d4ab0fccce1d08d43fb3" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-163" Feb 13 15:09:11.728980 kubelet[2910]: I0213 15:09:11.728925 2910 topology_manager.go:215] "Topology Admit Handler" podUID="aa2bb4f897e1b2df4ae78fdf8b67fc36" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-163" Feb 13 15:09:11.743052 systemd[1]: Created slice kubepods-burstable-pod2169666fa585e99b5b54a580a02a3196.slice - libcontainer container kubepods-burstable-pod2169666fa585e99b5b54a580a02a3196.slice. Feb 13 15:09:11.767553 systemd[1]: Created slice kubepods-burstable-poda0be49a6ef71d4ab0fccce1d08d43fb3.slice - libcontainer container kubepods-burstable-poda0be49a6ef71d4ab0fccce1d08d43fb3.slice. Feb 13 15:09:11.785233 systemd[1]: Created slice kubepods-burstable-podaa2bb4f897e1b2df4ae78fdf8b67fc36.slice - libcontainer container kubepods-burstable-podaa2bb4f897e1b2df4ae78fdf8b67fc36.slice. 
Feb 13 15:09:11.787425 kubelet[2910]: E0213 15:09:11.787353 2910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-163?timeout=10s\": dial tcp 172.31.30.163:6443: connect: connection refused" interval="400ms" Feb 13 15:09:11.792141 kubelet[2910]: I0213 15:09:11.792096 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2169666fa585e99b5b54a580a02a3196-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-163\" (UID: \"2169666fa585e99b5b54a580a02a3196\") " pod="kube-system/kube-apiserver-ip-172-31-30-163" Feb 13 15:09:11.792727 kubelet[2910]: I0213 15:09:11.792433 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0be49a6ef71d4ab0fccce1d08d43fb3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-163\" (UID: \"a0be49a6ef71d4ab0fccce1d08d43fb3\") " pod="kube-system/kube-controller-manager-ip-172-31-30-163" Feb 13 15:09:11.792727 kubelet[2910]: I0213 15:09:11.792519 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0be49a6ef71d4ab0fccce1d08d43fb3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-163\" (UID: \"a0be49a6ef71d4ab0fccce1d08d43fb3\") " pod="kube-system/kube-controller-manager-ip-172-31-30-163" Feb 13 15:09:11.792727 kubelet[2910]: I0213 15:09:11.792586 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2169666fa585e99b5b54a580a02a3196-ca-certs\") pod \"kube-apiserver-ip-172-31-30-163\" (UID: \"2169666fa585e99b5b54a580a02a3196\") " pod="kube-system/kube-apiserver-ip-172-31-30-163" Feb 13 
15:09:11.792727 kubelet[2910]: I0213 15:09:11.792651 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0be49a6ef71d4ab0fccce1d08d43fb3-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-163\" (UID: \"a0be49a6ef71d4ab0fccce1d08d43fb3\") " pod="kube-system/kube-controller-manager-ip-172-31-30-163" Feb 13 15:09:11.792727 kubelet[2910]: I0213 15:09:11.792693 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a0be49a6ef71d4ab0fccce1d08d43fb3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-163\" (UID: \"a0be49a6ef71d4ab0fccce1d08d43fb3\") " pod="kube-system/kube-controller-manager-ip-172-31-30-163" Feb 13 15:09:11.793317 kubelet[2910]: I0213 15:09:11.793086 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0be49a6ef71d4ab0fccce1d08d43fb3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-163\" (UID: \"a0be49a6ef71d4ab0fccce1d08d43fb3\") " pod="kube-system/kube-controller-manager-ip-172-31-30-163" Feb 13 15:09:11.793317 kubelet[2910]: I0213 15:09:11.793161 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa2bb4f897e1b2df4ae78fdf8b67fc36-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-163\" (UID: \"aa2bb4f897e1b2df4ae78fdf8b67fc36\") " pod="kube-system/kube-scheduler-ip-172-31-30-163" Feb 13 15:09:11.793317 kubelet[2910]: I0213 15:09:11.793258 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2169666fa585e99b5b54a580a02a3196-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-163\" (UID: 
\"2169666fa585e99b5b54a580a02a3196\") " pod="kube-system/kube-apiserver-ip-172-31-30-163" Feb 13 15:09:11.886603 kubelet[2910]: I0213 15:09:11.886482 2910 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-163" Feb 13 15:09:11.887053 kubelet[2910]: E0213 15:09:11.886993 2910 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.163:6443/api/v1/nodes\": dial tcp 172.31.30.163:6443: connect: connection refused" node="ip-172-31-30-163" Feb 13 15:09:12.063387 containerd[1955]: time="2025-02-13T15:09:12.063168863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-163,Uid:2169666fa585e99b5b54a580a02a3196,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:12.081325 containerd[1955]: time="2025-02-13T15:09:12.081141956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-163,Uid:a0be49a6ef71d4ab0fccce1d08d43fb3,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:12.094737 containerd[1955]: time="2025-02-13T15:09:12.094328375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-163,Uid:aa2bb4f897e1b2df4ae78fdf8b67fc36,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:12.189083 kubelet[2910]: E0213 15:09:12.188915 2910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-163?timeout=10s\": dial tcp 172.31.30.163:6443: connect: connection refused" interval="800ms" Feb 13 15:09:12.291316 kubelet[2910]: I0213 15:09:12.290833 2910 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-163" Feb 13 15:09:12.291591 kubelet[2910]: E0213 15:09:12.291534 2910 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.163:6443/api/v1/nodes\": dial tcp 172.31.30.163:6443: connect: connection refused" node="ip-172-31-30-163" Feb 
13 15:09:12.399250 kubelet[2910]: W0213 15:09:12.398960 2910 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:12.399250 kubelet[2910]: E0213 15:09:12.399064 2910 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:12.525176 kubelet[2910]: W0213 15:09:12.525069 2910 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-163&limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:12.525176 kubelet[2910]: E0213 15:09:12.525175 2910 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-163&limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:12.598146 kubelet[2910]: W0213 15:09:12.597968 2910 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:12.598146 kubelet[2910]: E0213 15:09:12.598094 2910 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:12.680335 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1867564703.mount: Deactivated successfully. Feb 13 15:09:12.698427 containerd[1955]: time="2025-02-13T15:09:12.697132616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:12.706386 containerd[1955]: time="2025-02-13T15:09:12.706101046Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 15:09:12.709757 containerd[1955]: time="2025-02-13T15:09:12.708547174Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:12.711986 containerd[1955]: time="2025-02-13T15:09:12.711850909Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:12.715457 containerd[1955]: time="2025-02-13T15:09:12.715374243Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:12.717938 containerd[1955]: time="2025-02-13T15:09:12.717754512Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:09:12.719983 containerd[1955]: time="2025-02-13T15:09:12.719851231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:09:12.722382 containerd[1955]: time="2025-02-13T15:09:12.722171542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:12.728695 containerd[1955]: time="2025-02-13T15:09:12.728275529Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 646.955606ms" Feb 13 15:09:12.751783 containerd[1955]: time="2025-02-13T15:09:12.751583913Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 657.135345ms" Feb 13 15:09:12.755230 containerd[1955]: time="2025-02-13T15:09:12.754961687Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 691.625436ms" Feb 13 15:09:12.982414 containerd[1955]: time="2025-02-13T15:09:12.982248108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:12.983049 containerd[1955]: time="2025-02-13T15:09:12.982788821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:12.983307 containerd[1955]: time="2025-02-13T15:09:12.983020641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:12.984317 containerd[1955]: time="2025-02-13T15:09:12.984076509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:12.986812 containerd[1955]: time="2025-02-13T15:09:12.986608898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:12.986812 containerd[1955]: time="2025-02-13T15:09:12.986737750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:12.987221 containerd[1955]: time="2025-02-13T15:09:12.986792970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:12.987221 containerd[1955]: time="2025-02-13T15:09:12.986970110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:12.990346 kubelet[2910]: E0213 15:09:12.990048 2910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-163?timeout=10s\": dial tcp 172.31.30.163:6443: connect: connection refused" interval="1.6s" Feb 13 15:09:13.000273 containerd[1955]: time="2025-02-13T15:09:12.998278365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:13.000273 containerd[1955]: time="2025-02-13T15:09:12.998470568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:13.000273 containerd[1955]: time="2025-02-13T15:09:12.998563798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:13.000273 containerd[1955]: time="2025-02-13T15:09:12.998973729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:13.052011 systemd[1]: Started cri-containerd-6222a944be36a885e1751d0518511b2855fe514956f043a09ab9a60edfa36b6f.scope - libcontainer container 6222a944be36a885e1751d0518511b2855fe514956f043a09ab9a60edfa36b6f. Feb 13 15:09:13.069728 systemd[1]: Started cri-containerd-b8e42cd310de3363094862d78ba4ea88b974a9ed5c1f9ea29b9f791efe30a360.scope - libcontainer container b8e42cd310de3363094862d78ba4ea88b974a9ed5c1f9ea29b9f791efe30a360. Feb 13 15:09:13.099736 systemd[1]: Started cri-containerd-a41140ba7d8fbdeb52b6db55247d8510028feb8e292b660abc42686d26223c50.scope - libcontainer container a41140ba7d8fbdeb52b6db55247d8510028feb8e292b660abc42686d26223c50. Feb 13 15:09:13.100472 kubelet[2910]: I0213 15:09:13.100378 2910 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-163" Feb 13 15:09:13.102002 kubelet[2910]: W0213 15:09:13.101928 2910 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:13.103295 kubelet[2910]: E0213 15:09:13.102476 2910 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:13.103295 kubelet[2910]: E0213 15:09:13.102029 2910 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.163:6443/api/v1/nodes\": dial tcp 172.31.30.163:6443: connect: connection refused" node="ip-172-31-30-163" Feb 13 
15:09:13.190263 containerd[1955]: time="2025-02-13T15:09:13.187809880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-163,Uid:2169666fa585e99b5b54a580a02a3196,Namespace:kube-system,Attempt:0,} returns sandbox id \"6222a944be36a885e1751d0518511b2855fe514956f043a09ab9a60edfa36b6f\"" Feb 13 15:09:13.208894 containerd[1955]: time="2025-02-13T15:09:13.208680304Z" level=info msg="CreateContainer within sandbox \"6222a944be36a885e1751d0518511b2855fe514956f043a09ab9a60edfa36b6f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:09:13.223767 containerd[1955]: time="2025-02-13T15:09:13.223610639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-163,Uid:aa2bb4f897e1b2df4ae78fdf8b67fc36,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8e42cd310de3363094862d78ba4ea88b974a9ed5c1f9ea29b9f791efe30a360\"" Feb 13 15:09:13.234228 containerd[1955]: time="2025-02-13T15:09:13.233058864Z" level=info msg="CreateContainer within sandbox \"b8e42cd310de3363094862d78ba4ea88b974a9ed5c1f9ea29b9f791efe30a360\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:09:13.254300 containerd[1955]: time="2025-02-13T15:09:13.254169864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-163,Uid:a0be49a6ef71d4ab0fccce1d08d43fb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a41140ba7d8fbdeb52b6db55247d8510028feb8e292b660abc42686d26223c50\"" Feb 13 15:09:13.262317 containerd[1955]: time="2025-02-13T15:09:13.262113838Z" level=info msg="CreateContainer within sandbox \"a41140ba7d8fbdeb52b6db55247d8510028feb8e292b660abc42686d26223c50\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:09:13.269688 containerd[1955]: time="2025-02-13T15:09:13.269541782Z" level=info msg="CreateContainer within sandbox \"6222a944be36a885e1751d0518511b2855fe514956f043a09ab9a60edfa36b6f\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3e58a9dbe8ea7c783334e6c748b6c3822d3e65487cc92e43cd74e18cad0bb726\"" Feb 13 15:09:13.270661 containerd[1955]: time="2025-02-13T15:09:13.270549386Z" level=info msg="StartContainer for \"3e58a9dbe8ea7c783334e6c748b6c3822d3e65487cc92e43cd74e18cad0bb726\"" Feb 13 15:09:13.279371 containerd[1955]: time="2025-02-13T15:09:13.279131824Z" level=info msg="CreateContainer within sandbox \"b8e42cd310de3363094862d78ba4ea88b974a9ed5c1f9ea29b9f791efe30a360\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6675776be65899db00f26ed75f78bd281b6e440322df634dc5676ab938e1b20d\"" Feb 13 15:09:13.281671 containerd[1955]: time="2025-02-13T15:09:13.280939116Z" level=info msg="StartContainer for \"6675776be65899db00f26ed75f78bd281b6e440322df634dc5676ab938e1b20d\"" Feb 13 15:09:13.311150 containerd[1955]: time="2025-02-13T15:09:13.311066941Z" level=info msg="CreateContainer within sandbox \"a41140ba7d8fbdeb52b6db55247d8510028feb8e292b660abc42686d26223c50\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2b9665bdd3f48bbcdd82551252f1906744a27029dfcc2623af559b655989f5a4\"" Feb 13 15:09:13.312028 containerd[1955]: time="2025-02-13T15:09:13.311971481Z" level=info msg="StartContainer for \"2b9665bdd3f48bbcdd82551252f1906744a27029dfcc2623af559b655989f5a4\"" Feb 13 15:09:13.340582 systemd[1]: Started cri-containerd-3e58a9dbe8ea7c783334e6c748b6c3822d3e65487cc92e43cd74e18cad0bb726.scope - libcontainer container 3e58a9dbe8ea7c783334e6c748b6c3822d3e65487cc92e43cd74e18cad0bb726. Feb 13 15:09:13.368581 systemd[1]: Started cri-containerd-6675776be65899db00f26ed75f78bd281b6e440322df634dc5676ab938e1b20d.scope - libcontainer container 6675776be65899db00f26ed75f78bd281b6e440322df634dc5676ab938e1b20d. 
Feb 13 15:09:13.410560 systemd[1]: Started cri-containerd-2b9665bdd3f48bbcdd82551252f1906744a27029dfcc2623af559b655989f5a4.scope - libcontainer container 2b9665bdd3f48bbcdd82551252f1906744a27029dfcc2623af559b655989f5a4. Feb 13 15:09:13.504681 containerd[1955]: time="2025-02-13T15:09:13.504506713Z" level=info msg="StartContainer for \"3e58a9dbe8ea7c783334e6c748b6c3822d3e65487cc92e43cd74e18cad0bb726\" returns successfully" Feb 13 15:09:13.532347 containerd[1955]: time="2025-02-13T15:09:13.531605501Z" level=info msg="StartContainer for \"6675776be65899db00f26ed75f78bd281b6e440322df634dc5676ab938e1b20d\" returns successfully" Feb 13 15:09:13.573838 containerd[1955]: time="2025-02-13T15:09:13.572447698Z" level=info msg="StartContainer for \"2b9665bdd3f48bbcdd82551252f1906744a27029dfcc2623af559b655989f5a4\" returns successfully" Feb 13 15:09:13.672988 kubelet[2910]: E0213 15:09:13.672821 2910 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.163:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.163:6443: connect: connection refused Feb 13 15:09:14.525406 update_engine[1937]: I20250213 15:09:14.525250 1937 update_attempter.cc:509] Updating boot flags... 
Feb 13 15:09:14.660233 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3197) Feb 13 15:09:14.708682 kubelet[2910]: I0213 15:09:14.708087 2910 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-163" Feb 13 15:09:18.252762 kubelet[2910]: E0213 15:09:18.252688 2910 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-163\" not found" node="ip-172-31-30-163" Feb 13 15:09:18.289294 kubelet[2910]: I0213 15:09:18.289184 2910 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-163" Feb 13 15:09:18.362753 kubelet[2910]: E0213 15:09:18.362352 2910 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-30-163.1823cd106f0fed28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-163,UID:ip-172-31-30-163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-163,},FirstTimestamp:2025-02-13 15:09:11.561628968 +0000 UTC m=+1.966219617,LastTimestamp:2025-02-13 15:09:11.561628968 +0000 UTC m=+1.966219617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-163,}" Feb 13 15:09:18.452230 kubelet[2910]: E0213 15:09:18.452044 2910 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-30-163.1823cd10707c0863 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-163,UID:ip-172-31-30-163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image 
filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-30-163,},FirstTimestamp:2025-02-13 15:09:11.585491043 +0000 UTC m=+1.990081728,LastTimestamp:2025-02-13 15:09:11.585491043 +0000 UTC m=+1.990081728,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-163,}" Feb 13 15:09:18.561355 kubelet[2910]: I0213 15:09:18.560919 2910 apiserver.go:52] "Watching apiserver" Feb 13 15:09:18.589184 kubelet[2910]: I0213 15:09:18.589103 2910 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:09:21.068269 systemd[1]: Reload requested from client PID 3283 ('systemctl') (unit session-7.scope)... Feb 13 15:09:21.068900 systemd[1]: Reloading... Feb 13 15:09:21.326240 zram_generator::config[3331]: No configuration found. Feb 13 15:09:21.595603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 15:09:21.688440 kubelet[2910]: I0213 15:09:21.688334 2910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-163" podStartSLOduration=1.688297939 podStartE2EDuration="1.688297939s" podCreationTimestamp="2025-02-13 15:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:21.687922551 +0000 UTC m=+12.092513200" watchObservedRunningTime="2025-02-13 15:09:21.688297939 +0000 UTC m=+12.092888624" Feb 13 15:09:21.690619 kubelet[2910]: I0213 15:09:21.688512 2910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-163" podStartSLOduration=1.688500302 podStartE2EDuration="1.688500302s" podCreationTimestamp="2025-02-13 15:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:21.670107731 +0000 UTC m=+12.074698416" watchObservedRunningTime="2025-02-13 15:09:21.688500302 +0000 UTC m=+12.093090963" Feb 13 15:09:21.886113 systemd[1]: Reloading finished in 816 ms. Feb 13 15:09:21.952077 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:09:21.970591 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:09:21.971129 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:09:21.971259 systemd[1]: kubelet.service: Consumed 2.883s CPU time, 114.1M memory peak. Feb 13 15:09:21.980775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:09:22.321866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:09:22.338325 (kubelet)[3390]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:09:22.468114 kubelet[3390]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:09:22.468114 kubelet[3390]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:09:22.468114 kubelet[3390]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:09:22.469423 kubelet[3390]: I0213 15:09:22.468264 3390 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:09:22.478835 kubelet[3390]: I0213 15:09:22.478016 3390 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:09:22.478835 kubelet[3390]: I0213 15:09:22.478082 3390 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:09:22.478835 kubelet[3390]: I0213 15:09:22.478612 3390 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:09:22.483326 kubelet[3390]: I0213 15:09:22.483277 3390 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 15:09:22.491049 kubelet[3390]: I0213 15:09:22.491002 3390 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:09:22.493753 sudo[3403]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:09:22.494453 sudo[3403]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:09:22.515717 kubelet[3390]: I0213 15:09:22.515507 3390 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:09:22.517028 kubelet[3390]: I0213 15:09:22.516161 3390 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:09:22.517028 kubelet[3390]: I0213 15:09:22.516252 3390 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"Gr
acePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:09:22.517028 kubelet[3390]: I0213 15:09:22.516532 3390 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:09:22.517028 kubelet[3390]: I0213 15:09:22.516551 3390 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:09:22.517028 kubelet[3390]: I0213 15:09:22.516613 3390 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:09:22.517549 kubelet[3390]: I0213 15:09:22.516812 3390 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:09:22.517549 kubelet[3390]: I0213 15:09:22.516839 3390 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:09:22.517549 kubelet[3390]: I0213 15:09:22.516894 3390 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:09:22.517549 kubelet[3390]: I0213 15:09:22.516931 3390 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:09:22.527012 kubelet[3390]: I0213 15:09:22.526444 3390 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:09:22.527012 kubelet[3390]: I0213 15:09:22.526745 3390 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:09:22.528366 kubelet[3390]: I0213 15:09:22.527520 3390 server.go:1264] "Started kubelet" Feb 13 15:09:22.542238 kubelet[3390]: I0213 15:09:22.540834 3390 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:09:22.551250 kubelet[3390]: I0213 15:09:22.549510 3390 server.go:163] "Starting to listen" address="0.0.0.0" 
port=10250 Feb 13 15:09:22.559164 kubelet[3390]: I0213 15:09:22.558961 3390 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:09:22.565592 kubelet[3390]: I0213 15:09:22.565490 3390 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:09:22.566911 kubelet[3390]: I0213 15:09:22.565879 3390 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:09:22.575348 kubelet[3390]: I0213 15:09:22.575079 3390 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:09:22.582712 kubelet[3390]: I0213 15:09:22.582434 3390 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:09:22.583040 kubelet[3390]: I0213 15:09:22.582804 3390 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:09:22.606793 kubelet[3390]: I0213 15:09:22.606385 3390 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:09:22.610577 kubelet[3390]: I0213 15:09:22.610531 3390 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:09:22.611269 kubelet[3390]: I0213 15:09:22.610764 3390 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:09:22.611269 kubelet[3390]: I0213 15:09:22.610802 3390 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:09:22.611269 kubelet[3390]: E0213 15:09:22.610871 3390 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:09:22.645684 kubelet[3390]: I0213 15:09:22.645614 3390 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:09:22.645846 kubelet[3390]: I0213 15:09:22.645804 3390 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:09:22.658653 kubelet[3390]: E0213 15:09:22.658260 3390 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:09:22.661044 kubelet[3390]: I0213 15:09:22.661009 3390 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:09:22.695356 kubelet[3390]: I0213 15:09:22.695052 3390 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-163" Feb 13 15:09:22.711264 kubelet[3390]: E0213 15:09:22.710981 3390 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:09:22.726096 kubelet[3390]: I0213 15:09:22.723598 3390 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-30-163" Feb 13 15:09:22.726096 kubelet[3390]: I0213 15:09:22.723708 3390 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-163" Feb 13 15:09:22.819403 kubelet[3390]: I0213 15:09:22.818547 3390 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:09:22.819403 kubelet[3390]: I0213 15:09:22.818577 3390 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:09:22.819403 kubelet[3390]: I0213 15:09:22.818612 3390 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:09:22.819403 kubelet[3390]: I0213 15:09:22.818843 3390 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:09:22.819403 kubelet[3390]: I0213 15:09:22.818862 3390 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:09:22.819403 kubelet[3390]: I0213 15:09:22.818897 3390 policy_none.go:49] "None policy: Start" Feb 13 15:09:22.821594 kubelet[3390]: I0213 15:09:22.821465 3390 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:09:22.821594 kubelet[3390]: I0213 15:09:22.821518 3390 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:09:22.821836 kubelet[3390]: I0213 15:09:22.821801 3390 state_mem.go:75] "Updated machine memory state" Feb 13 15:09:22.837263 kubelet[3390]: I0213 15:09:22.834862 3390 
manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:09:22.837263 kubelet[3390]: I0213 15:09:22.835172 3390 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:09:22.837263 kubelet[3390]: I0213 15:09:22.835518 3390 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:09:22.911813 kubelet[3390]: I0213 15:09:22.911726 3390 topology_manager.go:215] "Topology Admit Handler" podUID="aa2bb4f897e1b2df4ae78fdf8b67fc36" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-163" Feb 13 15:09:22.911979 kubelet[3390]: I0213 15:09:22.911947 3390 topology_manager.go:215] "Topology Admit Handler" podUID="2169666fa585e99b5b54a580a02a3196" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-163" Feb 13 15:09:22.912038 kubelet[3390]: I0213 15:09:22.912021 3390 topology_manager.go:215] "Topology Admit Handler" podUID="a0be49a6ef71d4ab0fccce1d08d43fb3" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-163" Feb 13 15:09:22.928441 kubelet[3390]: E0213 15:09:22.928356 3390 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-163\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-163" Feb 13 15:09:22.930886 kubelet[3390]: E0213 15:09:22.930822 3390 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-163\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-163" Feb 13 15:09:22.985044 kubelet[3390]: I0213 15:09:22.984918 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa2bb4f897e1b2df4ae78fdf8b67fc36-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-163\" (UID: \"aa2bb4f897e1b2df4ae78fdf8b67fc36\") " pod="kube-system/kube-scheduler-ip-172-31-30-163" Feb 13 15:09:22.985044 
kubelet[3390]: I0213 15:09:22.984982 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2169666fa585e99b5b54a580a02a3196-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-163\" (UID: \"2169666fa585e99b5b54a580a02a3196\") " pod="kube-system/kube-apiserver-ip-172-31-30-163" Feb 13 15:09:22.985044 kubelet[3390]: I0213 15:09:22.985027 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a0be49a6ef71d4ab0fccce1d08d43fb3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-163\" (UID: \"a0be49a6ef71d4ab0fccce1d08d43fb3\") " pod="kube-system/kube-controller-manager-ip-172-31-30-163" Feb 13 15:09:22.985976 kubelet[3390]: I0213 15:09:22.985089 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0be49a6ef71d4ab0fccce1d08d43fb3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-163\" (UID: \"a0be49a6ef71d4ab0fccce1d08d43fb3\") " pod="kube-system/kube-controller-manager-ip-172-31-30-163" Feb 13 15:09:22.985976 kubelet[3390]: I0213 15:09:22.985127 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2169666fa585e99b5b54a580a02a3196-ca-certs\") pod \"kube-apiserver-ip-172-31-30-163\" (UID: \"2169666fa585e99b5b54a580a02a3196\") " pod="kube-system/kube-apiserver-ip-172-31-30-163" Feb 13 15:09:22.985976 kubelet[3390]: I0213 15:09:22.985162 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2169666fa585e99b5b54a580a02a3196-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-163\" (UID: 
\"2169666fa585e99b5b54a580a02a3196\") " pod="kube-system/kube-apiserver-ip-172-31-30-163" Feb 13 15:09:22.985976 kubelet[3390]: I0213 15:09:22.985235 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0be49a6ef71d4ab0fccce1d08d43fb3-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-163\" (UID: \"a0be49a6ef71d4ab0fccce1d08d43fb3\") " pod="kube-system/kube-controller-manager-ip-172-31-30-163" Feb 13 15:09:22.985976 kubelet[3390]: I0213 15:09:22.985272 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0be49a6ef71d4ab0fccce1d08d43fb3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-163\" (UID: \"a0be49a6ef71d4ab0fccce1d08d43fb3\") " pod="kube-system/kube-controller-manager-ip-172-31-30-163" Feb 13 15:09:22.986496 kubelet[3390]: I0213 15:09:22.985322 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0be49a6ef71d4ab0fccce1d08d43fb3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-163\" (UID: \"a0be49a6ef71d4ab0fccce1d08d43fb3\") " pod="kube-system/kube-controller-manager-ip-172-31-30-163" Feb 13 15:09:23.470412 sudo[3403]: pam_unix(sudo:session): session closed for user root Feb 13 15:09:23.520494 kubelet[3390]: I0213 15:09:23.520323 3390 apiserver.go:52] "Watching apiserver" Feb 13 15:09:23.583731 kubelet[3390]: I0213 15:09:23.583676 3390 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:09:23.754414 kubelet[3390]: E0213 15:09:23.753804 3390 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-163\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-163" Feb 13 15:09:23.805068 kubelet[3390]: I0213 15:09:23.804627 3390 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-163" podStartSLOduration=1.804603791 podStartE2EDuration="1.804603791s" podCreationTimestamp="2025-02-13 15:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:23.784151035 +0000 UTC m=+1.436760990" watchObservedRunningTime="2025-02-13 15:09:23.804603791 +0000 UTC m=+1.457213770" Feb 13 15:09:26.196703 sudo[2282]: pam_unix(sudo:session): session closed for user root Feb 13 15:09:26.221136 sshd[2281]: Connection closed by 139.178.68.195 port 37452 Feb 13 15:09:26.222276 sshd-session[2279]: pam_unix(sshd:session): session closed for user core Feb 13 15:09:26.229084 systemd[1]: sshd@6-172.31.30.163:22-139.178.68.195:37452.service: Deactivated successfully. Feb 13 15:09:26.236497 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:09:26.237319 systemd[1]: session-7.scope: Consumed 14.841s CPU time, 290.5M memory peak. Feb 13 15:09:26.243703 systemd-logind[1936]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:09:26.249369 systemd-logind[1936]: Removed session 7. Feb 13 15:09:35.625345 kubelet[3390]: I0213 15:09:35.624021 3390 topology_manager.go:215] "Topology Admit Handler" podUID="5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92" podNamespace="kube-system" podName="kube-proxy-ll7ls" Feb 13 15:09:35.645288 systemd[1]: Created slice kubepods-besteffort-pod5e27b98a_c7ab_45b4_8d77_b7b80e1e7a92.slice - libcontainer container kubepods-besteffort-pod5e27b98a_c7ab_45b4_8d77_b7b80e1e7a92.slice. Feb 13 15:09:35.653617 kubelet[3390]: I0213 15:09:35.652429 3390 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:09:35.655373 containerd[1955]: time="2025-02-13T15:09:35.654443958Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 15:09:35.655941 kubelet[3390]: I0213 15:09:35.654999 3390 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:09:35.677269 kubelet[3390]: I0213 15:09:35.674649 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92-xtables-lock\") pod \"kube-proxy-ll7ls\" (UID: \"5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92\") " pod="kube-system/kube-proxy-ll7ls" Feb 13 15:09:35.677269 kubelet[3390]: I0213 15:09:35.674714 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92-lib-modules\") pod \"kube-proxy-ll7ls\" (UID: \"5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92\") " pod="kube-system/kube-proxy-ll7ls" Feb 13 15:09:35.677269 kubelet[3390]: I0213 15:09:35.674762 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6jzx\" (UniqueName: \"kubernetes.io/projected/5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92-kube-api-access-m6jzx\") pod \"kube-proxy-ll7ls\" (UID: \"5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92\") " pod="kube-system/kube-proxy-ll7ls" Feb 13 15:09:35.677269 kubelet[3390]: I0213 15:09:35.674862 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92-kube-proxy\") pod \"kube-proxy-ll7ls\" (UID: \"5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92\") " pod="kube-system/kube-proxy-ll7ls" Feb 13 15:09:35.677269 kubelet[3390]: I0213 15:09:35.675574 3390 topology_manager.go:215] "Topology Admit Handler" podUID="b5b173ef-28d3-4508-b5e3-8909b5f882ed" podNamespace="kube-system" podName="cilium-mtnf5" Feb 13 15:09:35.695744 systemd[1]: Created slice 
kubepods-burstable-podb5b173ef_28d3_4508_b5e3_8909b5f882ed.slice - libcontainer container kubepods-burstable-podb5b173ef_28d3_4508_b5e3_8909b5f882ed.slice. Feb 13 15:09:35.775838 kubelet[3390]: I0213 15:09:35.775767 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-host-proc-sys-kernel\") pod \"cilium-mtnf5\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") " pod="kube-system/cilium-mtnf5" Feb 13 15:09:35.776006 kubelet[3390]: I0213 15:09:35.775863 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-host-proc-sys-net\") pod \"cilium-mtnf5\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") " pod="kube-system/cilium-mtnf5" Feb 13 15:09:35.776006 kubelet[3390]: I0213 15:09:35.775905 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5b173ef-28d3-4508-b5e3-8909b5f882ed-hubble-tls\") pod \"cilium-mtnf5\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") " pod="kube-system/cilium-mtnf5" Feb 13 15:09:35.776006 kubelet[3390]: I0213 15:09:35.775941 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4ht8\" (UniqueName: \"kubernetes.io/projected/b5b173ef-28d3-4508-b5e3-8909b5f882ed-kube-api-access-p4ht8\") pod \"cilium-mtnf5\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") " pod="kube-system/cilium-mtnf5" Feb 13 15:09:35.776006 kubelet[3390]: I0213 15:09:35.775986 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-bpf-maps\") pod \"cilium-mtnf5\" (UID: 
\"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") " pod="kube-system/cilium-mtnf5" Feb 13 15:09:35.776273 kubelet[3390]: I0213 15:09:35.776021 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-xtables-lock\") pod \"cilium-mtnf5\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") " pod="kube-system/cilium-mtnf5" Feb 13 15:09:35.776273 kubelet[3390]: I0213 15:09:35.776059 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-etc-cni-netd\") pod \"cilium-mtnf5\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") " pod="kube-system/cilium-mtnf5" Feb 13 15:09:35.776273 kubelet[3390]: I0213 15:09:35.776095 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cilium-run\") pod \"cilium-mtnf5\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") " pod="kube-system/cilium-mtnf5" Feb 13 15:09:35.776273 kubelet[3390]: I0213 15:09:35.776130 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cilium-cgroup\") pod \"cilium-mtnf5\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") " pod="kube-system/cilium-mtnf5" Feb 13 15:09:35.776273 kubelet[3390]: I0213 15:09:35.776180 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5b173ef-28d3-4508-b5e3-8909b5f882ed-clustermesh-secrets\") pod \"cilium-mtnf5\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") " pod="kube-system/cilium-mtnf5" Feb 13 15:09:35.776273 kubelet[3390]: I0213 15:09:35.776249 
3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cilium-config-path\") pod \"cilium-mtnf5\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") " pod="kube-system/cilium-mtnf5" Feb 13 15:09:35.776565 kubelet[3390]: I0213 15:09:35.776346 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cni-path\") pod \"cilium-mtnf5\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") " pod="kube-system/cilium-mtnf5" Feb 13 15:09:35.776565 kubelet[3390]: I0213 15:09:35.776380 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-lib-modules\") pod \"cilium-mtnf5\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") " pod="kube-system/cilium-mtnf5" Feb 13 15:09:35.776565 kubelet[3390]: I0213 15:09:35.776421 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-hostproc\") pod \"cilium-mtnf5\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") " pod="kube-system/cilium-mtnf5" Feb 13 15:09:35.828790 kubelet[3390]: E0213 15:09:35.828266 3390 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 15:09:35.828790 kubelet[3390]: E0213 15:09:35.828342 3390 projected.go:200] Error preparing data for projected volume kube-api-access-m6jzx for pod kube-system/kube-proxy-ll7ls: configmap "kube-root-ca.crt" not found Feb 13 15:09:35.828790 kubelet[3390]: E0213 15:09:35.828439 3390 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92-kube-api-access-m6jzx podName:5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92 nodeName:}" failed. No retries permitted until 2025-02-13 15:09:36.32840605 +0000 UTC m=+13.981015993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m6jzx" (UniqueName: "kubernetes.io/projected/5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92-kube-api-access-m6jzx") pod "kube-proxy-ll7ls" (UID: "5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92") : configmap "kube-root-ca.crt" not found Feb 13 15:09:35.937846 kubelet[3390]: E0213 15:09:35.937788 3390 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 15:09:35.938008 kubelet[3390]: E0213 15:09:35.937841 3390 projected.go:200] Error preparing data for projected volume kube-api-access-p4ht8 for pod kube-system/cilium-mtnf5: configmap "kube-root-ca.crt" not found Feb 13 15:09:35.938008 kubelet[3390]: E0213 15:09:35.937930 3390 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b5b173ef-28d3-4508-b5e3-8909b5f882ed-kube-api-access-p4ht8 podName:b5b173ef-28d3-4508-b5e3-8909b5f882ed nodeName:}" failed. No retries permitted until 2025-02-13 15:09:36.437901699 +0000 UTC m=+14.090511630 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p4ht8" (UniqueName: "kubernetes.io/projected/b5b173ef-28d3-4508-b5e3-8909b5f882ed-kube-api-access-p4ht8") pod "cilium-mtnf5" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed") : configmap "kube-root-ca.crt" not found Feb 13 15:09:36.032597 kubelet[3390]: I0213 15:09:36.032001 3390 topology_manager.go:215] "Topology Admit Handler" podUID="34221128-5d81-4557-9459-90b630feeb49" podNamespace="kube-system" podName="cilium-operator-599987898-2tzzc" Feb 13 15:09:36.052546 systemd[1]: Created slice kubepods-besteffort-pod34221128_5d81_4557_9459_90b630feeb49.slice - libcontainer container kubepods-besteffort-pod34221128_5d81_4557_9459_90b630feeb49.slice. Feb 13 15:09:36.079675 kubelet[3390]: I0213 15:09:36.079414 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46tsv\" (UniqueName: \"kubernetes.io/projected/34221128-5d81-4557-9459-90b630feeb49-kube-api-access-46tsv\") pod \"cilium-operator-599987898-2tzzc\" (UID: \"34221128-5d81-4557-9459-90b630feeb49\") " pod="kube-system/cilium-operator-599987898-2tzzc" Feb 13 15:09:36.079675 kubelet[3390]: I0213 15:09:36.079501 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34221128-5d81-4557-9459-90b630feeb49-cilium-config-path\") pod \"cilium-operator-599987898-2tzzc\" (UID: \"34221128-5d81-4557-9459-90b630feeb49\") " pod="kube-system/cilium-operator-599987898-2tzzc" Feb 13 15:09:36.362726 containerd[1955]: time="2025-02-13T15:09:36.362498320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2tzzc,Uid:34221128-5d81-4557-9459-90b630feeb49,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:36.434670 containerd[1955]: time="2025-02-13T15:09:36.434422824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:36.435154 containerd[1955]: time="2025-02-13T15:09:36.434924161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:36.435154 containerd[1955]: time="2025-02-13T15:09:36.435015424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:36.435884 containerd[1955]: time="2025-02-13T15:09:36.435617942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:36.471653 systemd[1]: Started cri-containerd-59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c.scope - libcontainer container 59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c. Feb 13 15:09:36.549647 containerd[1955]: time="2025-02-13T15:09:36.549551767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2tzzc,Uid:34221128-5d81-4557-9459-90b630feeb49,Namespace:kube-system,Attempt:0,} returns sandbox id \"59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c\"" Feb 13 15:09:36.554967 containerd[1955]: time="2025-02-13T15:09:36.554903514Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:09:36.561304 containerd[1955]: time="2025-02-13T15:09:36.560697252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ll7ls,Uid:5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:36.604358 containerd[1955]: time="2025-02-13T15:09:36.604138956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mtnf5,Uid:b5b173ef-28d3-4508-b5e3-8909b5f882ed,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:36.610578 containerd[1955]: time="2025-02-13T15:09:36.609619986Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:36.610578 containerd[1955]: time="2025-02-13T15:09:36.609741785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:36.610578 containerd[1955]: time="2025-02-13T15:09:36.609779123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:36.610578 containerd[1955]: time="2025-02-13T15:09:36.609978307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:36.658279 systemd[1]: Started cri-containerd-bd6c3459aac878b60c570890d1f7d4dfad8ce6918a8be5459d013730158e4858.scope - libcontainer container bd6c3459aac878b60c570890d1f7d4dfad8ce6918a8be5459d013730158e4858. Feb 13 15:09:36.676502 containerd[1955]: time="2025-02-13T15:09:36.676143064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:36.676502 containerd[1955]: time="2025-02-13T15:09:36.676382980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:36.676502 containerd[1955]: time="2025-02-13T15:09:36.676442878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:36.680251 containerd[1955]: time="2025-02-13T15:09:36.676893553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:36.719682 systemd[1]: Started cri-containerd-781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8.scope - libcontainer container 781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8. Feb 13 15:09:36.749903 containerd[1955]: time="2025-02-13T15:09:36.749424773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ll7ls,Uid:5e27b98a-c7ab-45b4-8d77-b7b80e1e7a92,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd6c3459aac878b60c570890d1f7d4dfad8ce6918a8be5459d013730158e4858\"" Feb 13 15:09:36.760691 containerd[1955]: time="2025-02-13T15:09:36.760555013Z" level=info msg="CreateContainer within sandbox \"bd6c3459aac878b60c570890d1f7d4dfad8ce6918a8be5459d013730158e4858\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:09:36.818866 containerd[1955]: time="2025-02-13T15:09:36.818606225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mtnf5,Uid:b5b173ef-28d3-4508-b5e3-8909b5f882ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\"" Feb 13 15:09:36.827724 containerd[1955]: time="2025-02-13T15:09:36.827542931Z" level=info msg="CreateContainer within sandbox \"bd6c3459aac878b60c570890d1f7d4dfad8ce6918a8be5459d013730158e4858\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6caecdb150f244062c76dec30959168fe2091f527aae4ea50b6f587a4317c924\"" Feb 13 15:09:36.830704 containerd[1955]: time="2025-02-13T15:09:36.829841473Z" level=info msg="StartContainer for \"6caecdb150f244062c76dec30959168fe2091f527aae4ea50b6f587a4317c924\"" Feb 13 15:09:36.879510 systemd[1]: Started cri-containerd-6caecdb150f244062c76dec30959168fe2091f527aae4ea50b6f587a4317c924.scope - libcontainer container 6caecdb150f244062c76dec30959168fe2091f527aae4ea50b6f587a4317c924. 
Feb 13 15:09:36.978011 containerd[1955]: time="2025-02-13T15:09:36.977910234Z" level=info msg="StartContainer for \"6caecdb150f244062c76dec30959168fe2091f527aae4ea50b6f587a4317c924\" returns successfully" Feb 13 15:09:37.814669 kubelet[3390]: I0213 15:09:37.814567 3390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ll7ls" podStartSLOduration=2.814543958 podStartE2EDuration="2.814543958s" podCreationTimestamp="2025-02-13 15:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:37.812736163 +0000 UTC m=+15.465346094" watchObservedRunningTime="2025-02-13 15:09:37.814543958 +0000 UTC m=+15.467153901" Feb 13 15:09:38.126891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878853021.mount: Deactivated successfully. Feb 13 15:09:38.817278 containerd[1955]: time="2025-02-13T15:09:38.816970218Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:09:38.819256 containerd[1955]: time="2025-02-13T15:09:38.819138805Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:09:38.821670 containerd[1955]: time="2025-02-13T15:09:38.821584106Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:09:38.825003 containerd[1955]: time="2025-02-13T15:09:38.824798617Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", 
repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.269600746s" Feb 13 15:09:38.825003 containerd[1955]: time="2025-02-13T15:09:38.824864848Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:09:38.828263 containerd[1955]: time="2025-02-13T15:09:38.827631013Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:09:38.830726 containerd[1955]: time="2025-02-13T15:09:38.830506658Z" level=info msg="CreateContainer within sandbox \"59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:09:38.865862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732276198.mount: Deactivated successfully. Feb 13 15:09:38.874606 containerd[1955]: time="2025-02-13T15:09:38.874469934Z" level=info msg="CreateContainer within sandbox \"59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e\"" Feb 13 15:09:38.875385 containerd[1955]: time="2025-02-13T15:09:38.875181885Z" level=info msg="StartContainer for \"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e\"" Feb 13 15:09:38.923501 systemd[1]: Started cri-containerd-62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e.scope - libcontainer container 62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e. 
Feb 13 15:09:38.982401 containerd[1955]: time="2025-02-13T15:09:38.980537673Z" level=info msg="StartContainer for \"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e\" returns successfully" Feb 13 15:09:39.896214 kubelet[3390]: I0213 15:09:39.895980 3390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-2tzzc" podStartSLOduration=1.623292707 podStartE2EDuration="3.895937795s" podCreationTimestamp="2025-02-13 15:09:36 +0000 UTC" firstStartedPulling="2025-02-13 15:09:36.554094003 +0000 UTC m=+14.206703946" lastFinishedPulling="2025-02-13 15:09:38.826739091 +0000 UTC m=+16.479349034" observedRunningTime="2025-02-13 15:09:39.895573333 +0000 UTC m=+17.548183264" watchObservedRunningTime="2025-02-13 15:09:39.895937795 +0000 UTC m=+17.548548014" Feb 13 15:09:45.824593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2052696551.mount: Deactivated successfully. Feb 13 15:09:48.498504 containerd[1955]: time="2025-02-13T15:09:48.498430699Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:09:48.500272 containerd[1955]: time="2025-02-13T15:09:48.500162476Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 15:09:48.501116 containerd[1955]: time="2025-02-13T15:09:48.501008341Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:09:48.504941 containerd[1955]: time="2025-02-13T15:09:48.504738991Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.677041303s" Feb 13 15:09:48.504941 containerd[1955]: time="2025-02-13T15:09:48.504800772Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 15:09:48.510640 containerd[1955]: time="2025-02-13T15:09:48.510570714Z" level=info msg="CreateContainer within sandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:09:48.535128 containerd[1955]: time="2025-02-13T15:09:48.535043847Z" level=info msg="CreateContainer within sandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae\"" Feb 13 15:09:48.537070 containerd[1955]: time="2025-02-13T15:09:48.536445910Z" level=info msg="StartContainer for \"a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae\"" Feb 13 15:09:48.607532 systemd[1]: Started cri-containerd-a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae.scope - libcontainer container a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae. Feb 13 15:09:48.660379 containerd[1955]: time="2025-02-13T15:09:48.660300080Z" level=info msg="StartContainer for \"a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae\" returns successfully" Feb 13 15:09:48.688880 systemd[1]: cri-containerd-a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae.scope: Deactivated successfully. 
Feb 13 15:09:49.522576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae-rootfs.mount: Deactivated successfully. Feb 13 15:09:49.814300 containerd[1955]: time="2025-02-13T15:09:49.813645966Z" level=info msg="shim disconnected" id=a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae namespace=k8s.io Feb 13 15:09:49.814300 containerd[1955]: time="2025-02-13T15:09:49.813748862Z" level=warning msg="cleaning up after shim disconnected" id=a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae namespace=k8s.io Feb 13 15:09:49.814300 containerd[1955]: time="2025-02-13T15:09:49.813769780Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:09:49.865557 containerd[1955]: time="2025-02-13T15:09:49.864542457Z" level=info msg="CreateContainer within sandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:09:49.899985 containerd[1955]: time="2025-02-13T15:09:49.899050838Z" level=info msg="CreateContainer within sandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f\"" Feb 13 15:09:49.903095 containerd[1955]: time="2025-02-13T15:09:49.902834154Z" level=info msg="StartContainer for \"b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f\"" Feb 13 15:09:49.986593 systemd[1]: Started cri-containerd-b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f.scope - libcontainer container b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f. 
Feb 13 15:09:50.032890 containerd[1955]: time="2025-02-13T15:09:50.032784778Z" level=info msg="StartContainer for \"b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f\" returns successfully" Feb 13 15:09:50.056608 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:09:50.057799 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:09:50.058467 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:09:50.070862 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:09:50.071505 systemd[1]: cri-containerd-b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f.scope: Deactivated successfully. Feb 13 15:09:50.108394 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:09:50.132956 containerd[1955]: time="2025-02-13T15:09:50.132878391Z" level=info msg="shim disconnected" id=b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f namespace=k8s.io Feb 13 15:09:50.133378 containerd[1955]: time="2025-02-13T15:09:50.133332855Z" level=warning msg="cleaning up after shim disconnected" id=b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f namespace=k8s.io Feb 13 15:09:50.133525 containerd[1955]: time="2025-02-13T15:09:50.133496058Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:09:50.522816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f-rootfs.mount: Deactivated successfully. Feb 13 15:09:50.869500 containerd[1955]: time="2025-02-13T15:09:50.869318617Z" level=info msg="CreateContainer within sandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:09:50.901251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573653711.mount: Deactivated successfully. 
Feb 13 15:09:50.907350 containerd[1955]: time="2025-02-13T15:09:50.904561510Z" level=info msg="CreateContainer within sandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8\"" Feb 13 15:09:50.910643 containerd[1955]: time="2025-02-13T15:09:50.909102379Z" level=info msg="StartContainer for \"184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8\"" Feb 13 15:09:50.989583 systemd[1]: Started cri-containerd-184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8.scope - libcontainer container 184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8. Feb 13 15:09:51.051728 containerd[1955]: time="2025-02-13T15:09:51.051662104Z" level=info msg="StartContainer for \"184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8\" returns successfully" Feb 13 15:09:51.063351 systemd[1]: cri-containerd-184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8.scope: Deactivated successfully. Feb 13 15:09:51.111551 containerd[1955]: time="2025-02-13T15:09:51.111280188Z" level=info msg="shim disconnected" id=184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8 namespace=k8s.io Feb 13 15:09:51.111551 containerd[1955]: time="2025-02-13T15:09:51.111358485Z" level=warning msg="cleaning up after shim disconnected" id=184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8 namespace=k8s.io Feb 13 15:09:51.111551 containerd[1955]: time="2025-02-13T15:09:51.111379091Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:09:51.523589 systemd[1]: run-containerd-runc-k8s.io-184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8-runc.I3zdzY.mount: Deactivated successfully. 
Feb 13 15:09:51.523784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8-rootfs.mount: Deactivated successfully. Feb 13 15:09:51.873617 containerd[1955]: time="2025-02-13T15:09:51.873431748Z" level=info msg="CreateContainer within sandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:09:51.905976 containerd[1955]: time="2025-02-13T15:09:51.905797485Z" level=info msg="CreateContainer within sandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd\"" Feb 13 15:09:51.908958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2839932983.mount: Deactivated successfully. Feb 13 15:09:51.915138 containerd[1955]: time="2025-02-13T15:09:51.911687415Z" level=info msg="StartContainer for \"53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd\"" Feb 13 15:09:51.988863 systemd[1]: Started cri-containerd-53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd.scope - libcontainer container 53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd. Feb 13 15:09:52.045269 systemd[1]: cri-containerd-53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd.scope: Deactivated successfully. 
Feb 13 15:09:52.047729 containerd[1955]: time="2025-02-13T15:09:52.047546691Z" level=info msg="StartContainer for \"53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd\" returns successfully" Feb 13 15:09:52.097502 containerd[1955]: time="2025-02-13T15:09:52.097411375Z" level=info msg="shim disconnected" id=53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd namespace=k8s.io Feb 13 15:09:52.097979 containerd[1955]: time="2025-02-13T15:09:52.097512988Z" level=warning msg="cleaning up after shim disconnected" id=53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd namespace=k8s.io Feb 13 15:09:52.097979 containerd[1955]: time="2025-02-13T15:09:52.097538307Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:09:52.523641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd-rootfs.mount: Deactivated successfully. Feb 13 15:09:52.886543 containerd[1955]: time="2025-02-13T15:09:52.886244343Z" level=info msg="CreateContainer within sandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:09:52.927643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839134530.mount: Deactivated successfully. Feb 13 15:09:52.936391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1155524170.mount: Deactivated successfully. 
Feb 13 15:09:52.945524 containerd[1955]: time="2025-02-13T15:09:52.943809115Z" level=info msg="CreateContainer within sandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a\"" Feb 13 15:09:52.950852 containerd[1955]: time="2025-02-13T15:09:52.950387056Z" level=info msg="StartContainer for \"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a\"" Feb 13 15:09:53.046830 systemd[1]: Started cri-containerd-0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a.scope - libcontainer container 0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a. Feb 13 15:09:53.125962 containerd[1955]: time="2025-02-13T15:09:53.125884080Z" level=info msg="StartContainer for \"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a\" returns successfully" Feb 13 15:09:53.281055 kubelet[3390]: I0213 15:09:53.281000 3390 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:09:53.335379 kubelet[3390]: I0213 15:09:53.335302 3390 topology_manager.go:215] "Topology Admit Handler" podUID="54e7024a-7b43-4d4d-bf3c-a748f5b2384b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v2cqn" Feb 13 15:09:53.356631 kubelet[3390]: I0213 15:09:53.356118 3390 topology_manager.go:215] "Topology Admit Handler" podUID="1669f341-77e6-4e5a-b2c3-28247b098bd2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-d5g5p" Feb 13 15:09:53.360877 systemd[1]: Created slice kubepods-burstable-pod54e7024a_7b43_4d4d_bf3c_a748f5b2384b.slice - libcontainer container kubepods-burstable-pod54e7024a_7b43_4d4d_bf3c_a748f5b2384b.slice. Feb 13 15:09:53.383576 systemd[1]: Created slice kubepods-burstable-pod1669f341_77e6_4e5a_b2c3_28247b098bd2.slice - libcontainer container kubepods-burstable-pod1669f341_77e6_4e5a_b2c3_28247b098bd2.slice. 
Feb 13 15:09:53.409271 kubelet[3390]: I0213 15:09:53.408959 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjlwb\" (UniqueName: \"kubernetes.io/projected/1669f341-77e6-4e5a-b2c3-28247b098bd2-kube-api-access-jjlwb\") pod \"coredns-7db6d8ff4d-d5g5p\" (UID: \"1669f341-77e6-4e5a-b2c3-28247b098bd2\") " pod="kube-system/coredns-7db6d8ff4d-d5g5p" Feb 13 15:09:53.409271 kubelet[3390]: I0213 15:09:53.409030 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1669f341-77e6-4e5a-b2c3-28247b098bd2-config-volume\") pod \"coredns-7db6d8ff4d-d5g5p\" (UID: \"1669f341-77e6-4e5a-b2c3-28247b098bd2\") " pod="kube-system/coredns-7db6d8ff4d-d5g5p" Feb 13 15:09:53.409271 kubelet[3390]: I0213 15:09:53.409072 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lxqr\" (UniqueName: \"kubernetes.io/projected/54e7024a-7b43-4d4d-bf3c-a748f5b2384b-kube-api-access-2lxqr\") pod \"coredns-7db6d8ff4d-v2cqn\" (UID: \"54e7024a-7b43-4d4d-bf3c-a748f5b2384b\") " pod="kube-system/coredns-7db6d8ff4d-v2cqn" Feb 13 15:09:53.409271 kubelet[3390]: I0213 15:09:53.409132 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54e7024a-7b43-4d4d-bf3c-a748f5b2384b-config-volume\") pod \"coredns-7db6d8ff4d-v2cqn\" (UID: \"54e7024a-7b43-4d4d-bf3c-a748f5b2384b\") " pod="kube-system/coredns-7db6d8ff4d-v2cqn" Feb 13 15:09:53.672690 containerd[1955]: time="2025-02-13T15:09:53.670862260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v2cqn,Uid:54e7024a-7b43-4d4d-bf3c-a748f5b2384b,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:53.694639 containerd[1955]: time="2025-02-13T15:09:53.694551262Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-d5g5p,Uid:1669f341-77e6-4e5a-b2c3-28247b098bd2,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:56.156957 (udev-worker)[4181]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:09:56.161398 systemd-networkd[1850]: cilium_host: Link UP Feb 13 15:09:56.162107 systemd-networkd[1850]: cilium_net: Link UP Feb 13 15:09:56.165125 systemd-networkd[1850]: cilium_net: Gained carrier Feb 13 15:09:56.166829 (udev-worker)[4219]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:09:56.167819 systemd-networkd[1850]: cilium_host: Gained carrier Feb 13 15:09:56.168067 systemd-networkd[1850]: cilium_net: Gained IPv6LL Feb 13 15:09:56.169628 systemd-networkd[1850]: cilium_host: Gained IPv6LL Feb 13 15:09:56.365762 systemd-networkd[1850]: cilium_vxlan: Link UP Feb 13 15:09:56.365781 systemd-networkd[1850]: cilium_vxlan: Gained carrier Feb 13 15:09:56.874244 kernel: NET: Registered PF_ALG protocol family Feb 13 15:09:58.284512 systemd-networkd[1850]: cilium_vxlan: Gained IPv6LL Feb 13 15:09:58.379034 (udev-worker)[4232]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:09:58.381813 systemd-networkd[1850]: lxc_health: Link UP Feb 13 15:09:58.399578 systemd-networkd[1850]: lxc_health: Gained carrier Feb 13 15:09:58.649230 kubelet[3390]: I0213 15:09:58.648859 3390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mtnf5" podStartSLOduration=11.96394201 podStartE2EDuration="23.64880611s" podCreationTimestamp="2025-02-13 15:09:35 +0000 UTC" firstStartedPulling="2025-02-13 15:09:36.821603202 +0000 UTC m=+14.474213157" lastFinishedPulling="2025-02-13 15:09:48.506467314 +0000 UTC m=+26.159077257" observedRunningTime="2025-02-13 15:09:53.934438003 +0000 UTC m=+31.587049158" watchObservedRunningTime="2025-02-13 15:09:58.64880611 +0000 UTC m=+36.301416053" Feb 13 15:09:58.780789 systemd-networkd[1850]: lxc2291c69d70d4: Link UP Feb 13 15:09:58.789426 kernel: eth0: renamed from tmp79b1e Feb 13 15:09:58.804147 systemd-networkd[1850]: lxc2291c69d70d4: Gained carrier Feb 13 15:09:58.827101 kernel: eth0: renamed from tmpd3b39 Feb 13 15:09:58.841342 systemd-networkd[1850]: lxcc80afd4ad308: Link UP Feb 13 15:09:58.842017 systemd-networkd[1850]: lxcc80afd4ad308: Gained carrier Feb 13 15:09:59.564475 systemd-networkd[1850]: lxc_health: Gained IPv6LL Feb 13 15:10:00.780456 systemd-networkd[1850]: lxc2291c69d70d4: Gained IPv6LL Feb 13 15:10:00.780907 systemd-networkd[1850]: lxcc80afd4ad308: Gained IPv6LL Feb 13 15:10:03.171794 ntpd[1928]: Listen normally on 8 cilium_host 192.168.0.167:123 Feb 13 15:10:03.174078 ntpd[1928]: 13 Feb 15:10:03 ntpd[1928]: Listen normally on 8 cilium_host 192.168.0.167:123 Feb 13 15:10:03.174078 ntpd[1928]: 13 Feb 15:10:03 ntpd[1928]: Listen normally on 9 cilium_net [fe80::43:c2ff:fecd:ad2b%4]:123 Feb 13 15:10:03.174078 ntpd[1928]: 13 Feb 15:10:03 ntpd[1928]: Listen normally on 10 cilium_host [fe80::fcce:4aff:fef7:4a36%5]:123 Feb 13 15:10:03.174078 ntpd[1928]: 13 Feb 15:10:03 ntpd[1928]: Listen normally on 11 cilium_vxlan [fe80::5cc7:12ff:fec9:397%6]:123 Feb 13 
15:10:03.174078 ntpd[1928]: 13 Feb 15:10:03 ntpd[1928]: Listen normally on 12 lxc_health [fe80::f88b:deff:fe91:c61b%8]:123 Feb 13 15:10:03.174078 ntpd[1928]: 13 Feb 15:10:03 ntpd[1928]: Listen normally on 13 lxc2291c69d70d4 [fe80::4074:2aff:feca:62f2%10]:123 Feb 13 15:10:03.174078 ntpd[1928]: 13 Feb 15:10:03 ntpd[1928]: Listen normally on 14 lxcc80afd4ad308 [fe80::c86d:ecff:fe14:936c%12]:123 Feb 13 15:10:03.171943 ntpd[1928]: Listen normally on 9 cilium_net [fe80::43:c2ff:fecd:ad2b%4]:123 Feb 13 15:10:03.172028 ntpd[1928]: Listen normally on 10 cilium_host [fe80::fcce:4aff:fef7:4a36%5]:123 Feb 13 15:10:03.172098 ntpd[1928]: Listen normally on 11 cilium_vxlan [fe80::5cc7:12ff:fec9:397%6]:123 Feb 13 15:10:03.172165 ntpd[1928]: Listen normally on 12 lxc_health [fe80::f88b:deff:fe91:c61b%8]:123 Feb 13 15:10:03.172272 ntpd[1928]: Listen normally on 13 lxc2291c69d70d4 [fe80::4074:2aff:feca:62f2%10]:123 Feb 13 15:10:03.172342 ntpd[1928]: Listen normally on 14 lxcc80afd4ad308 [fe80::c86d:ecff:fe14:936c%12]:123 Feb 13 15:10:06.863872 systemd[1]: Started sshd@7-172.31.30.163:22-139.178.68.195:47322.service - OpenSSH per-connection server daemon (139.178.68.195:47322). Feb 13 15:10:07.064234 sshd[4587]: Accepted publickey for core from 139.178.68.195 port 47322 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:07.068257 sshd-session[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:07.078761 systemd-logind[1936]: New session 8 of user core. Feb 13 15:10:07.087899 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:10:07.421028 sshd[4589]: Connection closed by 139.178.68.195 port 47322 Feb 13 15:10:07.423781 sshd-session[4587]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:07.431879 systemd[1]: sshd@7-172.31.30.163:22-139.178.68.195:47322.service: Deactivated successfully. Feb 13 15:10:07.441706 systemd[1]: session-8.scope: Deactivated successfully. 
Feb 13 15:10:07.451011 systemd-logind[1936]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:10:07.455196 systemd-logind[1936]: Removed session 8. Feb 13 15:10:08.219070 containerd[1955]: time="2025-02-13T15:10:08.218692754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:10:08.225268 containerd[1955]: time="2025-02-13T15:10:08.224223067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:10:08.225268 containerd[1955]: time="2025-02-13T15:10:08.224299529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:08.230383 containerd[1955]: time="2025-02-13T15:10:08.226773915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:08.305573 systemd[1]: Started cri-containerd-d3b3935bc8e2fb1fcf5ae762c9f68d24d0ccadd6a75227e29f19c7710597c7a2.scope - libcontainer container d3b3935bc8e2fb1fcf5ae762c9f68d24d0ccadd6a75227e29f19c7710597c7a2. Feb 13 15:10:08.365011 containerd[1955]: time="2025-02-13T15:10:08.364395794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:10:08.365011 containerd[1955]: time="2025-02-13T15:10:08.364497035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:10:08.365011 containerd[1955]: time="2025-02-13T15:10:08.364533209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:08.365741 containerd[1955]: time="2025-02-13T15:10:08.365385191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:08.446527 systemd[1]: Started cri-containerd-79b1e4eca9ec6c0aa8d11a7deb2d4b38be52ddc39b69b6b729d25ae64994f528.scope - libcontainer container 79b1e4eca9ec6c0aa8d11a7deb2d4b38be52ddc39b69b6b729d25ae64994f528. Feb 13 15:10:08.533284 containerd[1955]: time="2025-02-13T15:10:08.533175111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d5g5p,Uid:1669f341-77e6-4e5a-b2c3-28247b098bd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3b3935bc8e2fb1fcf5ae762c9f68d24d0ccadd6a75227e29f19c7710597c7a2\"" Feb 13 15:10:08.546286 containerd[1955]: time="2025-02-13T15:10:08.546032163Z" level=info msg="CreateContainer within sandbox \"d3b3935bc8e2fb1fcf5ae762c9f68d24d0ccadd6a75227e29f19c7710597c7a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:10:08.582125 containerd[1955]: time="2025-02-13T15:10:08.581698733Z" level=info msg="CreateContainer within sandbox \"d3b3935bc8e2fb1fcf5ae762c9f68d24d0ccadd6a75227e29f19c7710597c7a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8253997089234fd6cfa47fb7b5fed5709a0f2def3b7def3e438aeb41a7d3dae4\"" Feb 13 15:10:08.584259 containerd[1955]: time="2025-02-13T15:10:08.583092975Z" level=info msg="StartContainer for \"8253997089234fd6cfa47fb7b5fed5709a0f2def3b7def3e438aeb41a7d3dae4\"" Feb 13 15:10:08.648581 systemd[1]: Started cri-containerd-8253997089234fd6cfa47fb7b5fed5709a0f2def3b7def3e438aeb41a7d3dae4.scope - libcontainer container 8253997089234fd6cfa47fb7b5fed5709a0f2def3b7def3e438aeb41a7d3dae4. 
Feb 13 15:10:08.680674 containerd[1955]: time="2025-02-13T15:10:08.680139032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v2cqn,Uid:54e7024a-7b43-4d4d-bf3c-a748f5b2384b,Namespace:kube-system,Attempt:0,} returns sandbox id \"79b1e4eca9ec6c0aa8d11a7deb2d4b38be52ddc39b69b6b729d25ae64994f528\"" Feb 13 15:10:08.694228 containerd[1955]: time="2025-02-13T15:10:08.693929878Z" level=info msg="CreateContainer within sandbox \"79b1e4eca9ec6c0aa8d11a7deb2d4b38be52ddc39b69b6b729d25ae64994f528\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:10:08.714880 containerd[1955]: time="2025-02-13T15:10:08.714822862Z" level=info msg="CreateContainer within sandbox \"79b1e4eca9ec6c0aa8d11a7deb2d4b38be52ddc39b69b6b729d25ae64994f528\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c443c65e940e70adcfee75f4d452d05e6248cde918bb9d1f8d9a3f2890a2418\"" Feb 13 15:10:08.721293 containerd[1955]: time="2025-02-13T15:10:08.718426735Z" level=info msg="StartContainer for \"6c443c65e940e70adcfee75f4d452d05e6248cde918bb9d1f8d9a3f2890a2418\"" Feb 13 15:10:08.791256 containerd[1955]: time="2025-02-13T15:10:08.791001626Z" level=info msg="StartContainer for \"8253997089234fd6cfa47fb7b5fed5709a0f2def3b7def3e438aeb41a7d3dae4\" returns successfully" Feb 13 15:10:08.844296 systemd[1]: Started cri-containerd-6c443c65e940e70adcfee75f4d452d05e6248cde918bb9d1f8d9a3f2890a2418.scope - libcontainer container 6c443c65e940e70adcfee75f4d452d05e6248cde918bb9d1f8d9a3f2890a2418. 
Feb 13 15:10:08.941910 containerd[1955]: time="2025-02-13T15:10:08.941721768Z" level=info msg="StartContainer for \"6c443c65e940e70adcfee75f4d452d05e6248cde918bb9d1f8d9a3f2890a2418\" returns successfully" Feb 13 15:10:08.991399 kubelet[3390]: I0213 15:10:08.991321 3390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-d5g5p" podStartSLOduration=32.991298596 podStartE2EDuration="32.991298596s" podCreationTimestamp="2025-02-13 15:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:10:08.989326518 +0000 UTC m=+46.641936485" watchObservedRunningTime="2025-02-13 15:10:08.991298596 +0000 UTC m=+46.643908527" Feb 13 15:10:09.036931 kubelet[3390]: I0213 15:10:09.036822 3390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-v2cqn" podStartSLOduration=33.036798806 podStartE2EDuration="33.036798806s" podCreationTimestamp="2025-02-13 15:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:10:09.031953722 +0000 UTC m=+46.684563665" watchObservedRunningTime="2025-02-13 15:10:09.036798806 +0000 UTC m=+46.689408749" Feb 13 15:10:09.242594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2731783723.mount: Deactivated successfully. Feb 13 15:10:12.462747 systemd[1]: Started sshd@8-172.31.30.163:22-139.178.68.195:47336.service - OpenSSH per-connection server daemon (139.178.68.195:47336). Feb 13 15:10:12.658429 sshd[4777]: Accepted publickey for core from 139.178.68.195 port 47336 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:12.662408 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:12.673257 systemd-logind[1936]: New session 9 of user core. 
Feb 13 15:10:12.682639 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:10:12.946017 sshd[4779]: Connection closed by 139.178.68.195 port 47336 Feb 13 15:10:12.947122 sshd-session[4777]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:12.954389 systemd[1]: sshd@8-172.31.30.163:22-139.178.68.195:47336.service: Deactivated successfully. Feb 13 15:10:12.958094 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:10:12.959820 systemd-logind[1936]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:10:12.962480 systemd-logind[1936]: Removed session 9. Feb 13 15:10:17.987880 systemd[1]: Started sshd@9-172.31.30.163:22-139.178.68.195:40710.service - OpenSSH per-connection server daemon (139.178.68.195:40710). Feb 13 15:10:18.186411 sshd[4794]: Accepted publickey for core from 139.178.68.195 port 40710 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:18.189580 sshd-session[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:18.201615 systemd-logind[1936]: New session 10 of user core. Feb 13 15:10:18.207547 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:10:18.476397 sshd[4796]: Connection closed by 139.178.68.195 port 40710 Feb 13 15:10:18.477686 sshd-session[4794]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:18.484527 systemd-logind[1936]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:10:18.485154 systemd[1]: sshd@9-172.31.30.163:22-139.178.68.195:40710.service: Deactivated successfully. Feb 13 15:10:18.491498 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:10:18.496691 systemd-logind[1936]: Removed session 10. Feb 13 15:10:23.515733 systemd[1]: Started sshd@10-172.31.30.163:22-139.178.68.195:40724.service - OpenSSH per-connection server daemon (139.178.68.195:40724). 
Feb 13 15:10:23.712509 sshd[4811]: Accepted publickey for core from 139.178.68.195 port 40724 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:23.715752 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:23.726332 systemd-logind[1936]: New session 11 of user core. Feb 13 15:10:23.732500 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:10:23.983036 sshd[4813]: Connection closed by 139.178.68.195 port 40724 Feb 13 15:10:23.983997 sshd-session[4811]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:23.992472 systemd[1]: sshd@10-172.31.30.163:22-139.178.68.195:40724.service: Deactivated successfully. Feb 13 15:10:23.997599 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:10:24.002013 systemd-logind[1936]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:10:24.004356 systemd-logind[1936]: Removed session 11. Feb 13 15:10:29.025737 systemd[1]: Started sshd@11-172.31.30.163:22-139.178.68.195:41632.service - OpenSSH per-connection server daemon (139.178.68.195:41632). Feb 13 15:10:29.225286 sshd[4826]: Accepted publickey for core from 139.178.68.195 port 41632 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:29.227760 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:29.237409 systemd-logind[1936]: New session 12 of user core. Feb 13 15:10:29.242492 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:10:29.497831 sshd[4828]: Connection closed by 139.178.68.195 port 41632 Feb 13 15:10:29.496968 sshd-session[4826]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:29.504178 systemd-logind[1936]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:10:29.505229 systemd[1]: sshd@11-172.31.30.163:22-139.178.68.195:41632.service: Deactivated successfully. 
Feb 13 15:10:29.508720 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:10:29.511120 systemd-logind[1936]: Removed session 12. Feb 13 15:10:29.538000 systemd[1]: Started sshd@12-172.31.30.163:22-139.178.68.195:41634.service - OpenSSH per-connection server daemon (139.178.68.195:41634). Feb 13 15:10:29.723239 sshd[4841]: Accepted publickey for core from 139.178.68.195 port 41634 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:29.725886 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:29.735702 systemd-logind[1936]: New session 13 of user core. Feb 13 15:10:29.744470 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:10:30.081683 sshd[4843]: Connection closed by 139.178.68.195 port 41634 Feb 13 15:10:30.083591 sshd-session[4841]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:30.102569 systemd[1]: sshd@12-172.31.30.163:22-139.178.68.195:41634.service: Deactivated successfully. Feb 13 15:10:30.110233 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:10:30.141159 systemd-logind[1936]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:10:30.148776 systemd[1]: Started sshd@13-172.31.30.163:22-139.178.68.195:41640.service - OpenSSH per-connection server daemon (139.178.68.195:41640). Feb 13 15:10:30.151968 systemd-logind[1936]: Removed session 13. Feb 13 15:10:30.351921 sshd[4853]: Accepted publickey for core from 139.178.68.195 port 41640 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:30.354988 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:30.364616 systemd-logind[1936]: New session 14 of user core. Feb 13 15:10:30.370497 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 15:10:30.622727 sshd[4856]: Connection closed by 139.178.68.195 port 41640 Feb 13 15:10:30.622494 sshd-session[4853]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:30.630902 systemd[1]: sshd@13-172.31.30.163:22-139.178.68.195:41640.service: Deactivated successfully. Feb 13 15:10:30.637037 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:10:30.639586 systemd-logind[1936]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:10:30.641972 systemd-logind[1936]: Removed session 14. Feb 13 15:10:35.669296 systemd[1]: Started sshd@14-172.31.30.163:22-139.178.68.195:41652.service - OpenSSH per-connection server daemon (139.178.68.195:41652). Feb 13 15:10:35.858616 sshd[4869]: Accepted publickey for core from 139.178.68.195 port 41652 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:35.861591 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:35.872074 systemd-logind[1936]: New session 15 of user core. Feb 13 15:10:35.879505 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:10:36.138030 sshd[4872]: Connection closed by 139.178.68.195 port 41652 Feb 13 15:10:36.140546 sshd-session[4869]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:36.149171 systemd[1]: sshd@14-172.31.30.163:22-139.178.68.195:41652.service: Deactivated successfully. Feb 13 15:10:36.154553 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:10:36.156680 systemd-logind[1936]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:10:36.160690 systemd-logind[1936]: Removed session 15. Feb 13 15:10:41.184797 systemd[1]: Started sshd@15-172.31.30.163:22-139.178.68.195:34140.service - OpenSSH per-connection server daemon (139.178.68.195:34140). 
Feb 13 15:10:41.379456 sshd[4887]: Accepted publickey for core from 139.178.68.195 port 34140 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:41.382433 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:41.392502 systemd-logind[1936]: New session 16 of user core. Feb 13 15:10:41.397513 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:10:41.655961 sshd[4889]: Connection closed by 139.178.68.195 port 34140 Feb 13 15:10:41.657434 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:41.664512 systemd[1]: sshd@15-172.31.30.163:22-139.178.68.195:34140.service: Deactivated successfully. Feb 13 15:10:41.668017 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:10:41.669905 systemd-logind[1936]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:10:41.671790 systemd-logind[1936]: Removed session 16. Feb 13 15:10:46.694754 systemd[1]: Started sshd@16-172.31.30.163:22-139.178.68.195:60252.service - OpenSSH per-connection server daemon (139.178.68.195:60252). Feb 13 15:10:46.887483 sshd[4902]: Accepted publickey for core from 139.178.68.195 port 60252 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:46.890360 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:46.899985 systemd-logind[1936]: New session 17 of user core. Feb 13 15:10:46.909160 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:10:47.161230 sshd[4904]: Connection closed by 139.178.68.195 port 60252 Feb 13 15:10:47.161556 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:47.168624 systemd[1]: sshd@16-172.31.30.163:22-139.178.68.195:60252.service: Deactivated successfully. Feb 13 15:10:47.173691 systemd[1]: session-17.scope: Deactivated successfully. 
Feb 13 15:10:47.175361 systemd-logind[1936]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:10:47.178588 systemd-logind[1936]: Removed session 17. Feb 13 15:10:47.201900 systemd[1]: Started sshd@17-172.31.30.163:22-139.178.68.195:60258.service - OpenSSH per-connection server daemon (139.178.68.195:60258). Feb 13 15:10:47.395726 sshd[4916]: Accepted publickey for core from 139.178.68.195 port 60258 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:47.398957 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:47.409855 systemd-logind[1936]: New session 18 of user core. Feb 13 15:10:47.417696 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:10:47.725561 sshd[4918]: Connection closed by 139.178.68.195 port 60258 Feb 13 15:10:47.727365 sshd-session[4916]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:47.734861 systemd[1]: sshd@17-172.31.30.163:22-139.178.68.195:60258.service: Deactivated successfully. Feb 13 15:10:47.740758 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:10:47.743577 systemd-logind[1936]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:10:47.745718 systemd-logind[1936]: Removed session 18. Feb 13 15:10:47.767704 systemd[1]: Started sshd@18-172.31.30.163:22-139.178.68.195:60274.service - OpenSSH per-connection server daemon (139.178.68.195:60274). Feb 13 15:10:47.971790 sshd[4927]: Accepted publickey for core from 139.178.68.195 port 60274 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:47.974499 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:47.983292 systemd-logind[1936]: New session 19 of user core. Feb 13 15:10:47.996486 systemd[1]: Started session-19.scope - Session 19 of User core. 
Feb 13 15:10:50.856158 sshd[4929]: Connection closed by 139.178.68.195 port 60274
Feb 13 15:10:50.857917 sshd-session[4927]: pam_unix(sshd:session): session closed for user core
Feb 13 15:10:50.869878 systemd[1]: sshd@18-172.31.30.163:22-139.178.68.195:60274.service: Deactivated successfully.
Feb 13 15:10:50.879752 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:10:50.887238 systemd-logind[1936]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:10:50.919126 systemd[1]: Started sshd@19-172.31.30.163:22-139.178.68.195:60288.service - OpenSSH per-connection server daemon (139.178.68.195:60288).
Feb 13 15:10:50.922220 systemd-logind[1936]: Removed session 19.
Feb 13 15:10:51.119910 sshd[4945]: Accepted publickey for core from 139.178.68.195 port 60288 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:10:51.122868 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:10:51.133079 systemd-logind[1936]: New session 20 of user core.
Feb 13 15:10:51.139474 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:10:51.651697 sshd[4948]: Connection closed by 139.178.68.195 port 60288
Feb 13 15:10:51.652516 sshd-session[4945]: pam_unix(sshd:session): session closed for user core
Feb 13 15:10:51.660482 systemd[1]: sshd@19-172.31.30.163:22-139.178.68.195:60288.service: Deactivated successfully.
Feb 13 15:10:51.664751 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:10:51.666220 systemd-logind[1936]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:10:51.668090 systemd-logind[1936]: Removed session 20.
Feb 13 15:10:51.691734 systemd[1]: Started sshd@20-172.31.30.163:22-139.178.68.195:60302.service - OpenSSH per-connection server daemon (139.178.68.195:60302).
Feb 13 15:10:51.882589 sshd[4958]: Accepted publickey for core from 139.178.68.195 port 60302 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:10:51.885772 sshd-session[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:10:51.895683 systemd-logind[1936]: New session 21 of user core.
Feb 13 15:10:51.904501 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 15:10:52.159210 sshd[4960]: Connection closed by 139.178.68.195 port 60302
Feb 13 15:10:52.158441 sshd-session[4958]: pam_unix(sshd:session): session closed for user core
Feb 13 15:10:52.165616 systemd-logind[1936]: Session 21 logged out. Waiting for processes to exit.
Feb 13 15:10:52.166948 systemd[1]: sshd@20-172.31.30.163:22-139.178.68.195:60302.service: Deactivated successfully.
Feb 13 15:10:52.170297 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 15:10:52.174972 systemd-logind[1936]: Removed session 21.
Feb 13 15:10:57.209147 systemd[1]: Started sshd@21-172.31.30.163:22-139.178.68.195:41150.service - OpenSSH per-connection server daemon (139.178.68.195:41150).
Feb 13 15:10:57.403756 sshd[4971]: Accepted publickey for core from 139.178.68.195 port 41150 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:10:57.406283 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:10:57.415984 systemd-logind[1936]: New session 22 of user core.
Feb 13 15:10:57.422460 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 15:10:57.681495 sshd[4973]: Connection closed by 139.178.68.195 port 41150
Feb 13 15:10:57.683129 sshd-session[4971]: pam_unix(sshd:session): session closed for user core
Feb 13 15:10:57.690454 systemd[1]: sshd@21-172.31.30.163:22-139.178.68.195:41150.service: Deactivated successfully.
Feb 13 15:10:57.695576 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 15:10:57.698430 systemd-logind[1936]: Session 22 logged out. Waiting for processes to exit.
Feb 13 15:10:57.700918 systemd-logind[1936]: Removed session 22.
Feb 13 15:11:02.724692 systemd[1]: Started sshd@22-172.31.30.163:22-139.178.68.195:41152.service - OpenSSH per-connection server daemon (139.178.68.195:41152).
Feb 13 15:11:02.909533 sshd[4988]: Accepted publickey for core from 139.178.68.195 port 41152 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:11:02.913026 sshd-session[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:11:02.924299 systemd-logind[1936]: New session 23 of user core.
Feb 13 15:11:02.931507 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 15:11:03.185480 sshd[4990]: Connection closed by 139.178.68.195 port 41152
Feb 13 15:11:03.186403 sshd-session[4988]: pam_unix(sshd:session): session closed for user core
Feb 13 15:11:03.193898 systemd[1]: sshd@22-172.31.30.163:22-139.178.68.195:41152.service: Deactivated successfully.
Feb 13 15:11:03.201141 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 15:11:03.206159 systemd-logind[1936]: Session 23 logged out. Waiting for processes to exit.
Feb 13 15:11:03.211433 systemd-logind[1936]: Removed session 23.
Feb 13 15:11:08.225919 systemd[1]: Started sshd@23-172.31.30.163:22-139.178.68.195:48862.service - OpenSSH per-connection server daemon (139.178.68.195:48862).
Feb 13 15:11:08.422136 sshd[5003]: Accepted publickey for core from 139.178.68.195 port 48862 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:11:08.425544 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:11:08.435770 systemd-logind[1936]: New session 24 of user core.
Feb 13 15:11:08.445505 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 15:11:08.693456 sshd[5005]: Connection closed by 139.178.68.195 port 48862
Feb 13 15:11:08.693336 sshd-session[5003]: pam_unix(sshd:session): session closed for user core
Feb 13 15:11:08.700120 systemd[1]: sshd@23-172.31.30.163:22-139.178.68.195:48862.service: Deactivated successfully.
Feb 13 15:11:08.703845 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 15:11:08.705879 systemd-logind[1936]: Session 24 logged out. Waiting for processes to exit.
Feb 13 15:11:08.708077 systemd-logind[1936]: Removed session 24.
Feb 13 15:11:13.741800 systemd[1]: Started sshd@24-172.31.30.163:22-139.178.68.195:48864.service - OpenSSH per-connection server daemon (139.178.68.195:48864).
Feb 13 15:11:13.933275 sshd[5017]: Accepted publickey for core from 139.178.68.195 port 48864 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:11:13.936431 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:11:13.946361 systemd-logind[1936]: New session 25 of user core.
Feb 13 15:11:13.953460 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 15:11:14.200684 sshd[5019]: Connection closed by 139.178.68.195 port 48864
Feb 13 15:11:14.201249 sshd-session[5017]: pam_unix(sshd:session): session closed for user core
Feb 13 15:11:14.207905 systemd[1]: sshd@24-172.31.30.163:22-139.178.68.195:48864.service: Deactivated successfully.
Feb 13 15:11:14.212519 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:11:14.215013 systemd-logind[1936]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:11:14.217705 systemd-logind[1936]: Removed session 25.
Feb 13 15:11:14.245752 systemd[1]: Started sshd@25-172.31.30.163:22-139.178.68.195:48874.service - OpenSSH per-connection server daemon (139.178.68.195:48874).
Feb 13 15:11:14.432327 sshd[5031]: Accepted publickey for core from 139.178.68.195 port 48874 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:11:14.434836 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:11:14.442726 systemd-logind[1936]: New session 26 of user core.
Feb 13 15:11:14.451496 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:11:16.388158 containerd[1955]: time="2025-02-13T15:11:16.386567531Z" level=info msg="StopContainer for \"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e\" with timeout 30 (s)"
Feb 13 15:11:16.388158 containerd[1955]: time="2025-02-13T15:11:16.387623591Z" level=info msg="Stop container \"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e\" with signal terminated"
Feb 13 15:11:16.437696 systemd[1]: cri-containerd-62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e.scope: Deactivated successfully.
Feb 13 15:11:16.460163 containerd[1955]: time="2025-02-13T15:11:16.460057007Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:11:16.482665 containerd[1955]: time="2025-02-13T15:11:16.482513207Z" level=info msg="StopContainer for \"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a\" with timeout 2 (s)"
Feb 13 15:11:16.483478 containerd[1955]: time="2025-02-13T15:11:16.483182879Z" level=info msg="Stop container \"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a\" with signal terminated"
Feb 13 15:11:16.506016 systemd-networkd[1850]: lxc_health: Link DOWN
Feb 13 15:11:16.506031 systemd-networkd[1850]: lxc_health: Lost carrier
Feb 13 15:11:16.510523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e-rootfs.mount: Deactivated successfully.
Feb 13 15:11:16.536025 systemd[1]: cri-containerd-0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a.scope: Deactivated successfully.
Feb 13 15:11:16.537262 systemd[1]: cri-containerd-0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a.scope: Consumed 15.815s CPU time, 125M memory peak, 144K read from disk, 12.9M written to disk.
Feb 13 15:11:16.543496 containerd[1955]: time="2025-02-13T15:11:16.543290915Z" level=info msg="shim disconnected" id=62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e namespace=k8s.io
Feb 13 15:11:16.543496 containerd[1955]: time="2025-02-13T15:11:16.543439931Z" level=warning msg="cleaning up after shim disconnected" id=62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e namespace=k8s.io
Feb 13 15:11:16.543919 containerd[1955]: time="2025-02-13T15:11:16.543465395Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:11:16.590058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a-rootfs.mount: Deactivated successfully.
Feb 13 15:11:16.592855 containerd[1955]: time="2025-02-13T15:11:16.592800288Z" level=info msg="StopContainer for \"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e\" returns successfully"
Feb 13 15:11:16.596879 containerd[1955]: time="2025-02-13T15:11:16.596803176Z" level=info msg="StopPodSandbox for \"59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c\""
Feb 13 15:11:16.598996 containerd[1955]: time="2025-02-13T15:11:16.596969148Z" level=info msg="Container to stop \"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:11:16.603819 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c-shm.mount: Deactivated successfully.
Feb 13 15:11:16.608505 containerd[1955]: time="2025-02-13T15:11:16.608384064Z" level=info msg="shim disconnected" id=0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a namespace=k8s.io
Feb 13 15:11:16.608505 containerd[1955]: time="2025-02-13T15:11:16.608478408Z" level=warning msg="cleaning up after shim disconnected" id=0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a namespace=k8s.io
Feb 13 15:11:16.608505 containerd[1955]: time="2025-02-13T15:11:16.608512236Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:11:16.625932 systemd[1]: cri-containerd-59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c.scope: Deactivated successfully.
Feb 13 15:11:16.653992 containerd[1955]: time="2025-02-13T15:11:16.653668308Z" level=info msg="StopContainer for \"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a\" returns successfully"
Feb 13 15:11:16.657420 containerd[1955]: time="2025-02-13T15:11:16.655370916Z" level=info msg="StopPodSandbox for \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\""
Feb 13 15:11:16.657420 containerd[1955]: time="2025-02-13T15:11:16.655442316Z" level=info msg="Container to stop \"b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:11:16.657420 containerd[1955]: time="2025-02-13T15:11:16.655469052Z" level=info msg="Container to stop \"a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:11:16.657420 containerd[1955]: time="2025-02-13T15:11:16.655490208Z" level=info msg="Container to stop \"184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:11:16.657420 containerd[1955]: time="2025-02-13T15:11:16.655511424Z" level=info msg="Container to stop \"53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:11:16.657420 containerd[1955]: time="2025-02-13T15:11:16.655532304Z" level=info msg="Container to stop \"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:11:16.673945 systemd[1]: cri-containerd-781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8.scope: Deactivated successfully.
Feb 13 15:11:16.691459 containerd[1955]: time="2025-02-13T15:11:16.691302696Z" level=info msg="shim disconnected" id=59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c namespace=k8s.io
Feb 13 15:11:16.691459 containerd[1955]: time="2025-02-13T15:11:16.691385040Z" level=warning msg="cleaning up after shim disconnected" id=59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c namespace=k8s.io
Feb 13 15:11:16.692678 containerd[1955]: time="2025-02-13T15:11:16.691407192Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:11:16.726109 containerd[1955]: time="2025-02-13T15:11:16.726031980Z" level=info msg="TearDown network for sandbox \"59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c\" successfully"
Feb 13 15:11:16.726109 containerd[1955]: time="2025-02-13T15:11:16.726086904Z" level=info msg="StopPodSandbox for \"59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c\" returns successfully"
Feb 13 15:11:16.728647 containerd[1955]: time="2025-02-13T15:11:16.728026356Z" level=info msg="shim disconnected" id=781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8 namespace=k8s.io
Feb 13 15:11:16.728647 containerd[1955]: time="2025-02-13T15:11:16.728110404Z" level=warning msg="cleaning up after shim disconnected" id=781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8 namespace=k8s.io
Feb 13 15:11:16.728647 containerd[1955]: time="2025-02-13T15:11:16.728161068Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:11:16.773793 containerd[1955]: time="2025-02-13T15:11:16.773548488Z" level=info msg="TearDown network for sandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" successfully"
Feb 13 15:11:16.773793 containerd[1955]: time="2025-02-13T15:11:16.773640708Z" level=info msg="StopPodSandbox for \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" returns successfully"
Feb 13 15:11:16.777507 kubelet[3390]: I0213 15:11:16.775846 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46tsv\" (UniqueName: \"kubernetes.io/projected/34221128-5d81-4557-9459-90b630feeb49-kube-api-access-46tsv\") pod \"34221128-5d81-4557-9459-90b630feeb49\" (UID: \"34221128-5d81-4557-9459-90b630feeb49\") "
Feb 13 15:11:16.777507 kubelet[3390]: I0213 15:11:16.775932 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34221128-5d81-4557-9459-90b630feeb49-cilium-config-path\") pod \"34221128-5d81-4557-9459-90b630feeb49\" (UID: \"34221128-5d81-4557-9459-90b630feeb49\") "
Feb 13 15:11:16.794287 kubelet[3390]: I0213 15:11:16.793796 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34221128-5d81-4557-9459-90b630feeb49-kube-api-access-46tsv" (OuterVolumeSpecName: "kube-api-access-46tsv") pod "34221128-5d81-4557-9459-90b630feeb49" (UID: "34221128-5d81-4557-9459-90b630feeb49"). InnerVolumeSpecName "kube-api-access-46tsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:11:16.794918 kubelet[3390]: I0213 15:11:16.794873 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34221128-5d81-4557-9459-90b630feeb49-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "34221128-5d81-4557-9459-90b630feeb49" (UID: "34221128-5d81-4557-9459-90b630feeb49"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:11:16.876560 kubelet[3390]: I0213 15:11:16.876472 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cilium-cgroup\") pod \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") "
Feb 13 15:11:16.877483 kubelet[3390]: I0213 15:11:16.877081 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cilium-run\") pod \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") "
Feb 13 15:11:16.877483 kubelet[3390]: I0213 15:11:16.877262 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cni-path\") pod \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") "
Feb 13 15:11:16.877483 kubelet[3390]: I0213 15:11:16.876984 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b5b173ef-28d3-4508-b5e3-8909b5f882ed" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:11:16.877483 kubelet[3390]: I0213 15:11:16.877148 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b5b173ef-28d3-4508-b5e3-8909b5f882ed" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:11:16.877483 kubelet[3390]: I0213 15:11:16.877414 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cni-path" (OuterVolumeSpecName: "cni-path") pod "b5b173ef-28d3-4508-b5e3-8909b5f882ed" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:11:16.878212 kubelet[3390]: I0213 15:11:16.877835 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b5b173ef-28d3-4508-b5e3-8909b5f882ed" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:11:16.878212 kubelet[3390]: I0213 15:11:16.877920 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-host-proc-sys-kernel\") pod \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") "
Feb 13 15:11:16.878212 kubelet[3390]: I0213 15:11:16.877965 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-hostproc\") pod \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") "
Feb 13 15:11:16.878212 kubelet[3390]: I0213 15:11:16.878040 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-etc-cni-netd\") pod \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") "
Feb 13 15:11:16.878212 kubelet[3390]: I0213 15:11:16.878094 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-hostproc" (OuterVolumeSpecName: "hostproc") pod "b5b173ef-28d3-4508-b5e3-8909b5f882ed" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:11:16.878993 kubelet[3390]: I0213 15:11:16.878578 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4ht8\" (UniqueName: \"kubernetes.io/projected/b5b173ef-28d3-4508-b5e3-8909b5f882ed-kube-api-access-p4ht8\") pod \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") "
Feb 13 15:11:16.878993 kubelet[3390]: I0213 15:11:16.878639 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b5b173ef-28d3-4508-b5e3-8909b5f882ed" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:11:16.878993 kubelet[3390]: I0213 15:11:16.878691 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-bpf-maps\") pod \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") "
Feb 13 15:11:16.878993 kubelet[3390]: I0213 15:11:16.878940 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-host-proc-sys-net\") pod \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") "
Feb 13 15:11:16.881226 kubelet[3390]: I0213 15:11:16.879698 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5b173ef-28d3-4508-b5e3-8909b5f882ed-hubble-tls\") pod \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") "
Feb 13 15:11:16.881226 kubelet[3390]: I0213 15:11:16.879761 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-xtables-lock\") pod \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") "
Feb 13 15:11:16.881226 kubelet[3390]: I0213 15:11:16.879806 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cilium-config-path\") pod \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") "
Feb 13 15:11:16.881226 kubelet[3390]: I0213 15:11:16.879846 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5b173ef-28d3-4508-b5e3-8909b5f882ed-clustermesh-secrets\") pod \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") "
Feb 13 15:11:16.881226 kubelet[3390]: I0213 15:11:16.879878 3390 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-lib-modules\") pod \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\" (UID: \"b5b173ef-28d3-4508-b5e3-8909b5f882ed\") "
Feb 13 15:11:16.881226 kubelet[3390]: I0213 15:11:16.879948 3390 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cilium-cgroup\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.881694 kubelet[3390]: I0213 15:11:16.879978 3390 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-46tsv\" (UniqueName: \"kubernetes.io/projected/34221128-5d81-4557-9459-90b630feeb49-kube-api-access-46tsv\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.881694 kubelet[3390]: I0213 15:11:16.880002 3390 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cilium-run\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.881694 kubelet[3390]: I0213 15:11:16.880023 3390 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cni-path\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.881694 kubelet[3390]: I0213 15:11:16.880045 3390 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-host-proc-sys-kernel\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.881694 kubelet[3390]: I0213 15:11:16.880067 3390 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-etc-cni-netd\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.881694 kubelet[3390]: I0213 15:11:16.880088 3390 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-hostproc\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.881694 kubelet[3390]: I0213 15:11:16.880107 3390 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34221128-5d81-4557-9459-90b630feeb49-cilium-config-path\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.882077 kubelet[3390]: I0213 15:11:16.878843 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b5b173ef-28d3-4508-b5e3-8909b5f882ed" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:11:16.882077 kubelet[3390]: I0213 15:11:16.880151 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b5b173ef-28d3-4508-b5e3-8909b5f882ed" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:11:16.882077 kubelet[3390]: I0213 15:11:16.880303 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b5b173ef-28d3-4508-b5e3-8909b5f882ed" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:11:16.883531 kubelet[3390]: I0213 15:11:16.883438 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b5b173ef-28d3-4508-b5e3-8909b5f882ed" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:11:16.897493 kubelet[3390]: I0213 15:11:16.897316 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b173ef-28d3-4508-b5e3-8909b5f882ed-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b5b173ef-28d3-4508-b5e3-8909b5f882ed" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:11:16.900302 kubelet[3390]: I0213 15:11:16.898102 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b173ef-28d3-4508-b5e3-8909b5f882ed-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b5b173ef-28d3-4508-b5e3-8909b5f882ed" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:11:16.900302 kubelet[3390]: I0213 15:11:16.898473 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b173ef-28d3-4508-b5e3-8909b5f882ed-kube-api-access-p4ht8" (OuterVolumeSpecName: "kube-api-access-p4ht8") pod "b5b173ef-28d3-4508-b5e3-8909b5f882ed" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed"). InnerVolumeSpecName "kube-api-access-p4ht8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:11:16.904533 kubelet[3390]: I0213 15:11:16.904374 3390 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b5b173ef-28d3-4508-b5e3-8909b5f882ed" (UID: "b5b173ef-28d3-4508-b5e3-8909b5f882ed"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:11:16.981103 kubelet[3390]: I0213 15:11:16.981039 3390 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5b173ef-28d3-4508-b5e3-8909b5f882ed-clustermesh-secrets\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.981103 kubelet[3390]: I0213 15:11:16.981097 3390 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-lib-modules\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.981365 kubelet[3390]: I0213 15:11:16.981120 3390 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-p4ht8\" (UniqueName: \"kubernetes.io/projected/b5b173ef-28d3-4508-b5e3-8909b5f882ed-kube-api-access-p4ht8\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.981365 kubelet[3390]: I0213 15:11:16.981144 3390 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-bpf-maps\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.981365 kubelet[3390]: I0213 15:11:16.981167 3390 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-host-proc-sys-net\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.981365 kubelet[3390]: I0213 15:11:16.981226 3390 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5b173ef-28d3-4508-b5e3-8909b5f882ed-hubble-tls\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.981365 kubelet[3390]: I0213 15:11:16.981247 3390 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5b173ef-28d3-4508-b5e3-8909b5f882ed-xtables-lock\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:16.981365 kubelet[3390]: I0213 15:11:16.981267 3390 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5b173ef-28d3-4508-b5e3-8909b5f882ed-cilium-config-path\") on node \"ip-172-31-30-163\" DevicePath \"\""
Feb 13 15:11:17.150569 kubelet[3390]: I0213 15:11:17.150516 3390 scope.go:117] "RemoveContainer" containerID="62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e"
Feb 13 15:11:17.162935 containerd[1955]: time="2025-02-13T15:11:17.161769478Z" level=info msg="RemoveContainer for \"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e\""
Feb 13 15:11:17.178639 systemd[1]: Removed slice kubepods-besteffort-pod34221128_5d81_4557_9459_90b630feeb49.slice - libcontainer container kubepods-besteffort-pod34221128_5d81_4557_9459_90b630feeb49.slice.
Feb 13 15:11:17.178974 containerd[1955]: time="2025-02-13T15:11:17.178897162Z" level=info msg="RemoveContainer for \"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e\" returns successfully" Feb 13 15:11:17.182803 kubelet[3390]: I0213 15:11:17.181012 3390 scope.go:117] "RemoveContainer" containerID="62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e" Feb 13 15:11:17.183130 containerd[1955]: time="2025-02-13T15:11:17.181622314Z" level=error msg="ContainerStatus for \"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e\": not found" Feb 13 15:11:17.185237 kubelet[3390]: E0213 15:11:17.184159 3390 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e\": not found" containerID="62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e" Feb 13 15:11:17.187895 kubelet[3390]: I0213 15:11:17.187636 3390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e"} err="failed to get container status \"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e\": rpc error: code = NotFound desc = an error occurred when try to find container \"62ec4cbd070295b889c1ca1f7e14127091b976460b29aac8cf1108ba2e5aa19e\": not found" Feb 13 15:11:17.187895 kubelet[3390]: I0213 15:11:17.187875 3390 scope.go:117] "RemoveContainer" containerID="0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a" Feb 13 15:11:17.195241 containerd[1955]: time="2025-02-13T15:11:17.192646295Z" level=info msg="RemoveContainer for \"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a\"" Feb 13 
15:11:17.202037 systemd[1]: Removed slice kubepods-burstable-podb5b173ef_28d3_4508_b5e3_8909b5f882ed.slice - libcontainer container kubepods-burstable-podb5b173ef_28d3_4508_b5e3_8909b5f882ed.slice. Feb 13 15:11:17.202715 systemd[1]: kubepods-burstable-podb5b173ef_28d3_4508_b5e3_8909b5f882ed.slice: Consumed 15.982s CPU time, 125.5M memory peak, 144K read from disk, 12.9M written to disk. Feb 13 15:11:17.203316 containerd[1955]: time="2025-02-13T15:11:17.203021495Z" level=info msg="RemoveContainer for \"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a\" returns successfully" Feb 13 15:11:17.204261 kubelet[3390]: I0213 15:11:17.203687 3390 scope.go:117] "RemoveContainer" containerID="53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd" Feb 13 15:11:17.212184 containerd[1955]: time="2025-02-13T15:11:17.212084759Z" level=info msg="RemoveContainer for \"53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd\"" Feb 13 15:11:17.219787 containerd[1955]: time="2025-02-13T15:11:17.219124343Z" level=info msg="RemoveContainer for \"53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd\" returns successfully" Feb 13 15:11:17.220295 kubelet[3390]: I0213 15:11:17.220262 3390 scope.go:117] "RemoveContainer" containerID="184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8" Feb 13 15:11:17.223573 containerd[1955]: time="2025-02-13T15:11:17.223487015Z" level=info msg="RemoveContainer for \"184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8\"" Feb 13 15:11:17.230566 containerd[1955]: time="2025-02-13T15:11:17.230504027Z" level=info msg="RemoveContainer for \"184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8\" returns successfully" Feb 13 15:11:17.230985 kubelet[3390]: I0213 15:11:17.230866 3390 scope.go:117] "RemoveContainer" containerID="b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f" Feb 13 15:11:17.233395 containerd[1955]: time="2025-02-13T15:11:17.233319263Z" level=info 
msg="RemoveContainer for \"b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f\"" Feb 13 15:11:17.242510 containerd[1955]: time="2025-02-13T15:11:17.242447567Z" level=info msg="RemoveContainer for \"b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f\" returns successfully" Feb 13 15:11:17.243110 kubelet[3390]: I0213 15:11:17.242846 3390 scope.go:117] "RemoveContainer" containerID="a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae" Feb 13 15:11:17.245053 containerd[1955]: time="2025-02-13T15:11:17.244994711Z" level=info msg="RemoveContainer for \"a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae\"" Feb 13 15:11:17.250907 containerd[1955]: time="2025-02-13T15:11:17.250847255Z" level=info msg="RemoveContainer for \"a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae\" returns successfully" Feb 13 15:11:17.251858 kubelet[3390]: I0213 15:11:17.251782 3390 scope.go:117] "RemoveContainer" containerID="0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a" Feb 13 15:11:17.252415 containerd[1955]: time="2025-02-13T15:11:17.252292499Z" level=error msg="ContainerStatus for \"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a\": not found" Feb 13 15:11:17.252629 kubelet[3390]: E0213 15:11:17.252520 3390 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a\": not found" containerID="0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a" Feb 13 15:11:17.252815 kubelet[3390]: I0213 15:11:17.252674 3390 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a"} err="failed to get container status \"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a\": rpc error: code = NotFound desc = an error occurred when try to find container \"0fe78e2ad0154121fe8bf4de7fac00090f7e40ccbe21c8f109f4c2c72dd76e5a\": not found" Feb 13 15:11:17.252886 kubelet[3390]: I0213 15:11:17.252812 3390 scope.go:117] "RemoveContainer" containerID="53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd" Feb 13 15:11:17.253396 containerd[1955]: time="2025-02-13T15:11:17.253344527Z" level=error msg="ContainerStatus for \"53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd\": not found" Feb 13 15:11:17.253979 kubelet[3390]: E0213 15:11:17.253908 3390 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd\": not found" containerID="53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd" Feb 13 15:11:17.254100 kubelet[3390]: I0213 15:11:17.253978 3390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd"} err="failed to get container status \"53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd\": rpc error: code = NotFound desc = an error occurred when try to find container \"53cd4200eacb0ce10942cce7ce5504b3e806de8bca377b72afa542af5f3ecbcd\": not found" Feb 13 15:11:17.254100 kubelet[3390]: I0213 15:11:17.254019 3390 scope.go:117] "RemoveContainer" containerID="184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8" Feb 13 15:11:17.254603 
containerd[1955]: time="2025-02-13T15:11:17.254532383Z" level=error msg="ContainerStatus for \"184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8\": not found" Feb 13 15:11:17.255081 kubelet[3390]: E0213 15:11:17.254940 3390 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8\": not found" containerID="184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8" Feb 13 15:11:17.255177 kubelet[3390]: I0213 15:11:17.255102 3390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8"} err="failed to get container status \"184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"184d9aee424e8da984548e81545d6789af257c0baa880651f73f95b6e533c0d8\": not found" Feb 13 15:11:17.255177 kubelet[3390]: I0213 15:11:17.255136 3390 scope.go:117] "RemoveContainer" containerID="b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f" Feb 13 15:11:17.255871 containerd[1955]: time="2025-02-13T15:11:17.255750035Z" level=error msg="ContainerStatus for \"b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f\": not found" Feb 13 15:11:17.256247 kubelet[3390]: E0213 15:11:17.256134 3390 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f\": not found" containerID="b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f" Feb 13 15:11:17.256416 kubelet[3390]: I0213 15:11:17.256227 3390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f"} err="failed to get container status \"b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8c4490748e5005d34e7aef925a1771342307f9c8831f4fd6f47dc2e0946899f\": not found" Feb 13 15:11:17.256416 kubelet[3390]: I0213 15:11:17.256288 3390 scope.go:117] "RemoveContainer" containerID="a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae" Feb 13 15:11:17.258733 containerd[1955]: time="2025-02-13T15:11:17.258592103Z" level=error msg="ContainerStatus for \"a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae\": not found" Feb 13 15:11:17.259273 kubelet[3390]: E0213 15:11:17.259222 3390 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae\": not found" containerID="a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae" Feb 13 15:11:17.259384 kubelet[3390]: I0213 15:11:17.259281 3390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae"} err="failed to get container status \"a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"a013fca6f1743e9f444abd7f31b97a218a7fdc05ba8c11c24e1c76c883ad60ae\": not found" Feb 13 15:11:17.411386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8-rootfs.mount: Deactivated successfully. Feb 13 15:11:17.411630 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8-shm.mount: Deactivated successfully. Feb 13 15:11:17.411782 systemd[1]: var-lib-kubelet-pods-b5b173ef\x2d28d3\x2d4508\x2db5e3\x2d8909b5f882ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp4ht8.mount: Deactivated successfully. Feb 13 15:11:17.411925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c-rootfs.mount: Deactivated successfully. Feb 13 15:11:17.412065 systemd[1]: var-lib-kubelet-pods-34221128\x2d5d81\x2d4557\x2d9459\x2d90b630feeb49-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d46tsv.mount: Deactivated successfully. Feb 13 15:11:17.412803 systemd[1]: var-lib-kubelet-pods-b5b173ef\x2d28d3\x2d4508\x2db5e3\x2d8909b5f882ed-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:11:17.413084 systemd[1]: var-lib-kubelet-pods-b5b173ef\x2d28d3\x2d4508\x2db5e3\x2d8909b5f882ed-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:11:17.874752 kubelet[3390]: E0213 15:11:17.874659 3390 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:11:18.288454 sshd[5033]: Connection closed by 139.178.68.195 port 48874 Feb 13 15:11:18.289623 sshd-session[5031]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:18.298168 systemd[1]: sshd@25-172.31.30.163:22-139.178.68.195:48874.service: Deactivated successfully. 
Feb 13 15:11:18.304147 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:11:18.305030 systemd[1]: session-26.scope: Consumed 1.158s CPU time, 21.6M memory peak. Feb 13 15:11:18.307261 systemd-logind[1936]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:11:18.333761 systemd[1]: Started sshd@26-172.31.30.163:22-139.178.68.195:37844.service - OpenSSH per-connection server daemon (139.178.68.195:37844). Feb 13 15:11:18.335375 systemd-logind[1936]: Removed session 26. Feb 13 15:11:18.534465 sshd[5193]: Accepted publickey for core from 139.178.68.195 port 37844 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:11:18.537891 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:18.549590 systemd-logind[1936]: New session 27 of user core. Feb 13 15:11:18.557607 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 15:11:18.616569 kubelet[3390]: I0213 15:11:18.616517 3390 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34221128-5d81-4557-9459-90b630feeb49" path="/var/lib/kubelet/pods/34221128-5d81-4557-9459-90b630feeb49/volumes" Feb 13 15:11:18.617646 kubelet[3390]: I0213 15:11:18.617597 3390 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5b173ef-28d3-4508-b5e3-8909b5f882ed" path="/var/lib/kubelet/pods/b5b173ef-28d3-4508-b5e3-8909b5f882ed/volumes" Feb 13 15:11:19.171642 ntpd[1928]: Deleting interface #12 lxc_health, fe80::f88b:deff:fe91:c61b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs Feb 13 15:11:19.172142 ntpd[1928]: 13 Feb 15:11:19 ntpd[1928]: Deleting interface #12 lxc_health, fe80::f88b:deff:fe91:c61b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs Feb 13 15:11:20.214242 sshd[5196]: Connection closed by 139.178.68.195 port 37844 Feb 13 15:11:20.217546 sshd-session[5193]: pam_unix(sshd:session): session closed for user core Feb 13 
15:11:20.229717 systemd[1]: sshd@26-172.31.30.163:22-139.178.68.195:37844.service: Deactivated successfully. Feb 13 15:11:20.234591 kubelet[3390]: I0213 15:11:20.234507 3390 topology_manager.go:215] "Topology Admit Handler" podUID="f688626a-cce7-420a-90ec-673aa1e20aeb" podNamespace="kube-system" podName="cilium-dqwnp" Feb 13 15:11:20.235131 kubelet[3390]: E0213 15:11:20.234602 3390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34221128-5d81-4557-9459-90b630feeb49" containerName="cilium-operator" Feb 13 15:11:20.235131 kubelet[3390]: E0213 15:11:20.234623 3390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5b173ef-28d3-4508-b5e3-8909b5f882ed" containerName="mount-cgroup" Feb 13 15:11:20.235131 kubelet[3390]: E0213 15:11:20.234639 3390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5b173ef-28d3-4508-b5e3-8909b5f882ed" containerName="apply-sysctl-overwrites" Feb 13 15:11:20.235131 kubelet[3390]: E0213 15:11:20.234653 3390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5b173ef-28d3-4508-b5e3-8909b5f882ed" containerName="mount-bpf-fs" Feb 13 15:11:20.235131 kubelet[3390]: E0213 15:11:20.234672 3390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5b173ef-28d3-4508-b5e3-8909b5f882ed" containerName="clean-cilium-state" Feb 13 15:11:20.235131 kubelet[3390]: E0213 15:11:20.234688 3390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5b173ef-28d3-4508-b5e3-8909b5f882ed" containerName="cilium-agent" Feb 13 15:11:20.235131 kubelet[3390]: I0213 15:11:20.234729 3390 memory_manager.go:354] "RemoveStaleState removing state" podUID="34221128-5d81-4557-9459-90b630feeb49" containerName="cilium-operator" Feb 13 15:11:20.235131 kubelet[3390]: I0213 15:11:20.234745 3390 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b173ef-28d3-4508-b5e3-8909b5f882ed" containerName="cilium-agent" Feb 13 15:11:20.239824 systemd[1]: session-27.scope: Deactivated 
successfully. Feb 13 15:11:20.240818 systemd[1]: session-27.scope: Consumed 1.446s CPU time, 23.6M memory peak. Feb 13 15:11:20.242867 systemd-logind[1936]: Session 27 logged out. Waiting for processes to exit. Feb 13 15:11:20.275779 systemd[1]: Started sshd@27-172.31.30.163:22-139.178.68.195:37850.service - OpenSSH per-connection server daemon (139.178.68.195:37850). Feb 13 15:11:20.279335 systemd-logind[1936]: Removed session 27. Feb 13 15:11:20.301569 systemd[1]: Created slice kubepods-burstable-podf688626a_cce7_420a_90ec_673aa1e20aeb.slice - libcontainer container kubepods-burstable-podf688626a_cce7_420a_90ec_673aa1e20aeb.slice. Feb 13 15:11:20.306366 kubelet[3390]: I0213 15:11:20.305423 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqx84\" (UniqueName: \"kubernetes.io/projected/f688626a-cce7-420a-90ec-673aa1e20aeb-kube-api-access-nqx84\") pod \"cilium-dqwnp\" (UID: \"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.306366 kubelet[3390]: I0213 15:11:20.305496 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f688626a-cce7-420a-90ec-673aa1e20aeb-hostproc\") pod \"cilium-dqwnp\" (UID: \"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.306366 kubelet[3390]: I0213 15:11:20.305537 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f688626a-cce7-420a-90ec-673aa1e20aeb-cni-path\") pod \"cilium-dqwnp\" (UID: \"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.306366 kubelet[3390]: I0213 15:11:20.305573 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/f688626a-cce7-420a-90ec-673aa1e20aeb-xtables-lock\") pod \"cilium-dqwnp\" (UID: \"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.306366 kubelet[3390]: I0213 15:11:20.305609 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f688626a-cce7-420a-90ec-673aa1e20aeb-cilium-run\") pod \"cilium-dqwnp\" (UID: \"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.306366 kubelet[3390]: I0213 15:11:20.305661 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f688626a-cce7-420a-90ec-673aa1e20aeb-cilium-cgroup\") pod \"cilium-dqwnp\" (UID: \"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.306924 kubelet[3390]: I0213 15:11:20.305702 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f688626a-cce7-420a-90ec-673aa1e20aeb-clustermesh-secrets\") pod \"cilium-dqwnp\" (UID: \"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.306924 kubelet[3390]: I0213 15:11:20.305742 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f688626a-cce7-420a-90ec-673aa1e20aeb-hubble-tls\") pod \"cilium-dqwnp\" (UID: \"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.306924 kubelet[3390]: I0213 15:11:20.305810 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f688626a-cce7-420a-90ec-673aa1e20aeb-cilium-config-path\") pod \"cilium-dqwnp\" (UID: 
\"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.306924 kubelet[3390]: I0213 15:11:20.305851 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f688626a-cce7-420a-90ec-673aa1e20aeb-host-proc-sys-kernel\") pod \"cilium-dqwnp\" (UID: \"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.306924 kubelet[3390]: I0213 15:11:20.305895 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f688626a-cce7-420a-90ec-673aa1e20aeb-etc-cni-netd\") pod \"cilium-dqwnp\" (UID: \"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.311898 kubelet[3390]: I0213 15:11:20.305954 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f688626a-cce7-420a-90ec-673aa1e20aeb-cilium-ipsec-secrets\") pod \"cilium-dqwnp\" (UID: \"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.311898 kubelet[3390]: I0213 15:11:20.308448 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f688626a-cce7-420a-90ec-673aa1e20aeb-bpf-maps\") pod \"cilium-dqwnp\" (UID: \"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.311898 kubelet[3390]: I0213 15:11:20.308594 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f688626a-cce7-420a-90ec-673aa1e20aeb-lib-modules\") pod \"cilium-dqwnp\" (UID: \"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.311898 kubelet[3390]: I0213 
15:11:20.308638 3390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f688626a-cce7-420a-90ec-673aa1e20aeb-host-proc-sys-net\") pod \"cilium-dqwnp\" (UID: \"f688626a-cce7-420a-90ec-673aa1e20aeb\") " pod="kube-system/cilium-dqwnp" Feb 13 15:11:20.571486 sshd[5205]: Accepted publickey for core from 139.178.68.195 port 37850 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:11:20.574637 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:20.583556 systemd-logind[1936]: New session 28 of user core. Feb 13 15:11:20.592662 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 15:11:20.622080 containerd[1955]: time="2025-02-13T15:11:20.622017232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dqwnp,Uid:f688626a-cce7-420a-90ec-673aa1e20aeb,Namespace:kube-system,Attempt:0,}" Feb 13 15:11:20.674416 containerd[1955]: time="2025-02-13T15:11:20.673403056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:11:20.674416 containerd[1955]: time="2025-02-13T15:11:20.674115136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:11:20.674416 containerd[1955]: time="2025-02-13T15:11:20.674148400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:20.674991 containerd[1955]: time="2025-02-13T15:11:20.674416516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:20.707514 systemd[1]: Started cri-containerd-76336c951da5628dbe1921db392740e6f53ed81952409bbf445f43cb457e0b43.scope - libcontainer container 76336c951da5628dbe1921db392740e6f53ed81952409bbf445f43cb457e0b43. Feb 13 15:11:20.717621 sshd[5212]: Connection closed by 139.178.68.195 port 37850 Feb 13 15:11:20.718391 sshd-session[5205]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:20.727995 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 15:11:20.733893 systemd[1]: sshd@27-172.31.30.163:22-139.178.68.195:37850.service: Deactivated successfully. Feb 13 15:11:20.740843 systemd-logind[1936]: Session 28 logged out. Waiting for processes to exit. Feb 13 15:11:20.771132 systemd[1]: Started sshd@28-172.31.30.163:22-139.178.68.195:37864.service - OpenSSH per-connection server daemon (139.178.68.195:37864). Feb 13 15:11:20.783531 systemd-logind[1936]: Removed session 28. Feb 13 15:11:20.827255 containerd[1955]: time="2025-02-13T15:11:20.827055581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dqwnp,Uid:f688626a-cce7-420a-90ec-673aa1e20aeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"76336c951da5628dbe1921db392740e6f53ed81952409bbf445f43cb457e0b43\"" Feb 13 15:11:20.837318 containerd[1955]: time="2025-02-13T15:11:20.837252761Z" level=info msg="CreateContainer within sandbox \"76336c951da5628dbe1921db392740e6f53ed81952409bbf445f43cb457e0b43\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:11:20.861340 containerd[1955]: time="2025-02-13T15:11:20.861054413Z" level=info msg="CreateContainer within sandbox \"76336c951da5628dbe1921db392740e6f53ed81952409bbf445f43cb457e0b43\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0cb0c4656371a33216f84835e35b1041a4c47ed8149d3adb39209914b130af0d\"" Feb 13 15:11:20.863448 containerd[1955]: time="2025-02-13T15:11:20.862493105Z" level=info msg="StartContainer 
for \"0cb0c4656371a33216f84835e35b1041a4c47ed8149d3adb39209914b130af0d\"" Feb 13 15:11:20.909533 systemd[1]: Started cri-containerd-0cb0c4656371a33216f84835e35b1041a4c47ed8149d3adb39209914b130af0d.scope - libcontainer container 0cb0c4656371a33216f84835e35b1041a4c47ed8149d3adb39209914b130af0d. Feb 13 15:11:20.964611 containerd[1955]: time="2025-02-13T15:11:20.964554341Z" level=info msg="StartContainer for \"0cb0c4656371a33216f84835e35b1041a4c47ed8149d3adb39209914b130af0d\" returns successfully" Feb 13 15:11:20.976478 systemd[1]: cri-containerd-0cb0c4656371a33216f84835e35b1041a4c47ed8149d3adb39209914b130af0d.scope: Deactivated successfully. Feb 13 15:11:20.984304 sshd[5251]: Accepted publickey for core from 139.178.68.195 port 37864 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:11:20.987901 sshd-session[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:21.001734 systemd-logind[1936]: New session 29 of user core. Feb 13 15:11:21.007776 systemd[1]: Started session-29.scope - Session 29 of User core. 
Feb 13 15:11:21.051175 containerd[1955]: time="2025-02-13T15:11:21.050820086Z" level=info msg="shim disconnected" id=0cb0c4656371a33216f84835e35b1041a4c47ed8149d3adb39209914b130af0d namespace=k8s.io Feb 13 15:11:21.051175 containerd[1955]: time="2025-02-13T15:11:21.050895926Z" level=warning msg="cleaning up after shim disconnected" id=0cb0c4656371a33216f84835e35b1041a4c47ed8149d3adb39209914b130af0d namespace=k8s.io Feb 13 15:11:21.051175 containerd[1955]: time="2025-02-13T15:11:21.050919014Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:21.223777 containerd[1955]: time="2025-02-13T15:11:21.223547211Z" level=info msg="CreateContainer within sandbox \"76336c951da5628dbe1921db392740e6f53ed81952409bbf445f43cb457e0b43\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:11:21.253716 containerd[1955]: time="2025-02-13T15:11:21.253627227Z" level=info msg="CreateContainer within sandbox \"76336c951da5628dbe1921db392740e6f53ed81952409bbf445f43cb457e0b43\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8f8af5ad6619a16320f1204961df967e8447ce1e0b891a5fe11b98d4b3f8df34\"" Feb 13 15:11:21.255038 containerd[1955]: time="2025-02-13T15:11:21.254964435Z" level=info msg="StartContainer for \"8f8af5ad6619a16320f1204961df967e8447ce1e0b891a5fe11b98d4b3f8df34\"" Feb 13 15:11:21.336495 systemd[1]: Started cri-containerd-8f8af5ad6619a16320f1204961df967e8447ce1e0b891a5fe11b98d4b3f8df34.scope - libcontainer container 8f8af5ad6619a16320f1204961df967e8447ce1e0b891a5fe11b98d4b3f8df34. Feb 13 15:11:21.387968 containerd[1955]: time="2025-02-13T15:11:21.387847155Z" level=info msg="StartContainer for \"8f8af5ad6619a16320f1204961df967e8447ce1e0b891a5fe11b98d4b3f8df34\" returns successfully" Feb 13 15:11:21.400128 systemd[1]: cri-containerd-8f8af5ad6619a16320f1204961df967e8447ce1e0b891a5fe11b98d4b3f8df34.scope: Deactivated successfully. 
Feb 13 15:11:21.448420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f8af5ad6619a16320f1204961df967e8447ce1e0b891a5fe11b98d4b3f8df34-rootfs.mount: Deactivated successfully. Feb 13 15:11:21.458004 containerd[1955]: time="2025-02-13T15:11:21.457930936Z" level=info msg="shim disconnected" id=8f8af5ad6619a16320f1204961df967e8447ce1e0b891a5fe11b98d4b3f8df34 namespace=k8s.io Feb 13 15:11:21.458845 containerd[1955]: time="2025-02-13T15:11:21.458327548Z" level=warning msg="cleaning up after shim disconnected" id=8f8af5ad6619a16320f1204961df967e8447ce1e0b891a5fe11b98d4b3f8df34 namespace=k8s.io Feb 13 15:11:21.458845 containerd[1955]: time="2025-02-13T15:11:21.458356432Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:22.228015 containerd[1955]: time="2025-02-13T15:11:22.227937304Z" level=info msg="CreateContainer within sandbox \"76336c951da5628dbe1921db392740e6f53ed81952409bbf445f43cb457e0b43\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:11:22.264603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount789792177.mount: Deactivated successfully. Feb 13 15:11:22.267838 containerd[1955]: time="2025-02-13T15:11:22.266591836Z" level=info msg="CreateContainer within sandbox \"76336c951da5628dbe1921db392740e6f53ed81952409bbf445f43cb457e0b43\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"24b97b3559c32501becdcea8c34f5884cea2fd943ac777e7733d35187ea80599\"" Feb 13 15:11:22.268576 containerd[1955]: time="2025-02-13T15:11:22.268443268Z" level=info msg="StartContainer for \"24b97b3559c32501becdcea8c34f5884cea2fd943ac777e7733d35187ea80599\"" Feb 13 15:11:22.320525 systemd[1]: Started cri-containerd-24b97b3559c32501becdcea8c34f5884cea2fd943ac777e7733d35187ea80599.scope - libcontainer container 24b97b3559c32501becdcea8c34f5884cea2fd943ac777e7733d35187ea80599. 
Feb 13 15:11:22.387266 containerd[1955]: time="2025-02-13T15:11:22.386897476Z" level=info msg="StartContainer for \"24b97b3559c32501becdcea8c34f5884cea2fd943ac777e7733d35187ea80599\" returns successfully"
Feb 13 15:11:22.394061 systemd[1]: cri-containerd-24b97b3559c32501becdcea8c34f5884cea2fd943ac777e7733d35187ea80599.scope: Deactivated successfully.
Feb 13 15:11:22.445866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24b97b3559c32501becdcea8c34f5884cea2fd943ac777e7733d35187ea80599-rootfs.mount: Deactivated successfully.
Feb 13 15:11:22.452595 containerd[1955]: time="2025-02-13T15:11:22.452441645Z" level=info msg="shim disconnected" id=24b97b3559c32501becdcea8c34f5884cea2fd943ac777e7733d35187ea80599 namespace=k8s.io
Feb 13 15:11:22.452595 containerd[1955]: time="2025-02-13T15:11:22.452525873Z" level=warning msg="cleaning up after shim disconnected" id=24b97b3559c32501becdcea8c34f5884cea2fd943ac777e7733d35187ea80599 namespace=k8s.io
Feb 13 15:11:22.452595 containerd[1955]: time="2025-02-13T15:11:22.452560061Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:11:22.643167 containerd[1955]: time="2025-02-13T15:11:22.642524874Z" level=info msg="StopPodSandbox for \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\""
Feb 13 15:11:22.643167 containerd[1955]: time="2025-02-13T15:11:22.642666906Z" level=info msg="TearDown network for sandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" successfully"
Feb 13 15:11:22.643167 containerd[1955]: time="2025-02-13T15:11:22.642689262Z" level=info msg="StopPodSandbox for \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" returns successfully"
Feb 13 15:11:22.644873 containerd[1955]: time="2025-02-13T15:11:22.644334534Z" level=info msg="RemovePodSandbox for \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\""
Feb 13 15:11:22.644873 containerd[1955]: time="2025-02-13T15:11:22.644394234Z" level=info msg="Forcibly stopping sandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\""
Feb 13 15:11:22.644873 containerd[1955]: time="2025-02-13T15:11:22.644721978Z" level=info msg="TearDown network for sandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" successfully"
Feb 13 15:11:22.651238 containerd[1955]: time="2025-02-13T15:11:22.651133398Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:11:22.651464 containerd[1955]: time="2025-02-13T15:11:22.651247410Z" level=info msg="RemovePodSandbox \"781082dc7bdc2730340bc9da771f8cee2880b17b88f2c83f47d34eb6b60919b8\" returns successfully"
Feb 13 15:11:22.653539 containerd[1955]: time="2025-02-13T15:11:22.653178258Z" level=info msg="StopPodSandbox for \"59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c\""
Feb 13 15:11:22.653539 containerd[1955]: time="2025-02-13T15:11:22.653391846Z" level=info msg="TearDown network for sandbox \"59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c\" successfully"
Feb 13 15:11:22.653539 containerd[1955]: time="2025-02-13T15:11:22.653415882Z" level=info msg="StopPodSandbox for \"59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c\" returns successfully"
Feb 13 15:11:22.654523 containerd[1955]: time="2025-02-13T15:11:22.654349806Z" level=info msg="RemovePodSandbox for \"59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c\""
Feb 13 15:11:22.654523 containerd[1955]: time="2025-02-13T15:11:22.654509394Z" level=info msg="Forcibly stopping sandbox \"59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c\""
Feb 13 15:11:22.654772 containerd[1955]: time="2025-02-13T15:11:22.654683538Z" level=info msg="TearDown network for sandbox \"59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c\" successfully"
Feb 13 15:11:22.661429 containerd[1955]: time="2025-02-13T15:11:22.661353378Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:11:22.661602 containerd[1955]: time="2025-02-13T15:11:22.661488522Z" level=info msg="RemovePodSandbox \"59ea4cd15015dd6f3973647cce594b19ffb2f912e48f3fcff9720eee26f71c4c\" returns successfully"
Feb 13 15:11:22.876160 kubelet[3390]: E0213 15:11:22.875960 3390 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:11:23.234067 containerd[1955]: time="2025-02-13T15:11:23.233642921Z" level=info msg="CreateContainer within sandbox \"76336c951da5628dbe1921db392740e6f53ed81952409bbf445f43cb457e0b43\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:11:23.270891 containerd[1955]: time="2025-02-13T15:11:23.270821045Z" level=info msg="CreateContainer within sandbox \"76336c951da5628dbe1921db392740e6f53ed81952409bbf445f43cb457e0b43\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"76c66ab5eca63969ff4e35e77de063d686368db473184798c134fabc66776cc4\""
Feb 13 15:11:23.271954 containerd[1955]: time="2025-02-13T15:11:23.271916945Z" level=info msg="StartContainer for \"76c66ab5eca63969ff4e35e77de063d686368db473184798c134fabc66776cc4\""
Feb 13 15:11:23.336967 systemd[1]: Started cri-containerd-76c66ab5eca63969ff4e35e77de063d686368db473184798c134fabc66776cc4.scope - libcontainer container 76c66ab5eca63969ff4e35e77de063d686368db473184798c134fabc66776cc4.
Feb 13 15:11:23.412968 systemd[1]: cri-containerd-76c66ab5eca63969ff4e35e77de063d686368db473184798c134fabc66776cc4.scope: Deactivated successfully.
Feb 13 15:11:23.416794 containerd[1955]: time="2025-02-13T15:11:23.416674685Z" level=info msg="StartContainer for \"76c66ab5eca63969ff4e35e77de063d686368db473184798c134fabc66776cc4\" returns successfully"
Feb 13 15:11:23.470227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76c66ab5eca63969ff4e35e77de063d686368db473184798c134fabc66776cc4-rootfs.mount: Deactivated successfully.
Feb 13 15:11:23.480829 containerd[1955]: time="2025-02-13T15:11:23.480752862Z" level=info msg="shim disconnected" id=76c66ab5eca63969ff4e35e77de063d686368db473184798c134fabc66776cc4 namespace=k8s.io
Feb 13 15:11:23.481594 containerd[1955]: time="2025-02-13T15:11:23.481083786Z" level=warning msg="cleaning up after shim disconnected" id=76c66ab5eca63969ff4e35e77de063d686368db473184798c134fabc66776cc4 namespace=k8s.io
Feb 13 15:11:23.481594 containerd[1955]: time="2025-02-13T15:11:23.481110642Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:11:24.246959 containerd[1955]: time="2025-02-13T15:11:24.246903738Z" level=info msg="CreateContainer within sandbox \"76336c951da5628dbe1921db392740e6f53ed81952409bbf445f43cb457e0b43\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:11:24.296051 containerd[1955]: time="2025-02-13T15:11:24.295948338Z" level=info msg="CreateContainer within sandbox \"76336c951da5628dbe1921db392740e6f53ed81952409bbf445f43cb457e0b43\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"44e4e06cca215d71f7cbaa40a44411937eaa4b69416f8da386142a1dac065bb8\""
Feb 13 15:11:24.298762 containerd[1955]: time="2025-02-13T15:11:24.297214338Z" level=info msg="StartContainer for \"44e4e06cca215d71f7cbaa40a44411937eaa4b69416f8da386142a1dac065bb8\""
Feb 13 15:11:24.349565 systemd[1]: Started cri-containerd-44e4e06cca215d71f7cbaa40a44411937eaa4b69416f8da386142a1dac065bb8.scope - libcontainer container 44e4e06cca215d71f7cbaa40a44411937eaa4b69416f8da386142a1dac065bb8.
Feb 13 15:11:24.411656 containerd[1955]: time="2025-02-13T15:11:24.411596994Z" level=info msg="StartContainer for \"44e4e06cca215d71f7cbaa40a44411937eaa4b69416f8da386142a1dac065bb8\" returns successfully"
Feb 13 15:11:25.083079 kubelet[3390]: I0213 15:11:25.082990 3390 setters.go:580] "Node became not ready" node="ip-172-31-30-163" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:11:25Z","lastTransitionTime":"2025-02-13T15:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:11:25.355123 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:11:25.533707 systemd[1]: run-containerd-runc-k8s.io-44e4e06cca215d71f7cbaa40a44411937eaa4b69416f8da386142a1dac065bb8-runc.YSoFJL.mount: Deactivated successfully.
Feb 13 15:11:26.613327 kubelet[3390]: E0213 15:11:26.612054 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-d5g5p" podUID="1669f341-77e6-4e5a-b2c3-28247b098bd2"
Feb 13 15:11:29.788130 (udev-worker)[6057]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:11:29.790932 systemd-networkd[1850]: lxc_health: Link UP
Feb 13 15:11:29.807820 (udev-worker)[6058]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:11:29.813066 systemd-networkd[1850]: lxc_health: Gained carrier
Feb 13 15:11:30.662582 kubelet[3390]: I0213 15:11:30.662018 3390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dqwnp" podStartSLOduration=10.661992253 podStartE2EDuration="10.661992253s" podCreationTimestamp="2025-02-13 15:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:11:25.298309363 +0000 UTC m=+122.950919462" watchObservedRunningTime="2025-02-13 15:11:30.661992253 +0000 UTC m=+128.314602232"
Feb 13 15:11:31.532489 systemd-networkd[1850]: lxc_health: Gained IPv6LL
Feb 13 15:11:34.171557 ntpd[1928]: Listen normally on 15 lxc_health [fe80::2888:4aff:fed2:7041%14]:123
Feb 13 15:11:34.172079 ntpd[1928]: 13 Feb 15:11:34 ntpd[1928]: Listen normally on 15 lxc_health [fe80::2888:4aff:fed2:7041%14]:123
Feb 13 15:11:34.773289 systemd[1]: run-containerd-runc-k8s.io-44e4e06cca215d71f7cbaa40a44411937eaa4b69416f8da386142a1dac065bb8-runc.t3UTPv.mount: Deactivated successfully.
Feb 13 15:11:37.142019 kubelet[3390]: E0213 15:11:37.141917 3390 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45312->127.0.0.1:44445: write tcp 127.0.0.1:45312->127.0.0.1:44445: write: broken pipe
Feb 13 15:11:37.191310 sshd[5313]: Connection closed by 139.178.68.195 port 37864
Feb 13 15:11:37.192954 sshd-session[5251]: pam_unix(sshd:session): session closed for user core
Feb 13 15:11:37.202381 systemd[1]: sshd@28-172.31.30.163:22-139.178.68.195:37864.service: Deactivated successfully.
Feb 13 15:11:37.209960 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 15:11:37.214677 systemd-logind[1936]: Session 29 logged out. Waiting for processes to exit.
Feb 13 15:11:37.219541 systemd-logind[1936]: Removed session 29.
Feb 13 15:12:02.595093 systemd[1]: cri-containerd-2b9665bdd3f48bbcdd82551252f1906744a27029dfcc2623af559b655989f5a4.scope: Deactivated successfully.
Feb 13 15:12:02.595700 systemd[1]: cri-containerd-2b9665bdd3f48bbcdd82551252f1906744a27029dfcc2623af559b655989f5a4.scope: Consumed 6.834s CPU time, 58M memory peak.
Feb 13 15:12:02.643664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b9665bdd3f48bbcdd82551252f1906744a27029dfcc2623af559b655989f5a4-rootfs.mount: Deactivated successfully.
Feb 13 15:12:02.650166 containerd[1955]: time="2025-02-13T15:12:02.650053232Z" level=info msg="shim disconnected" id=2b9665bdd3f48bbcdd82551252f1906744a27029dfcc2623af559b655989f5a4 namespace=k8s.io
Feb 13 15:12:02.650845 containerd[1955]: time="2025-02-13T15:12:02.650161268Z" level=warning msg="cleaning up after shim disconnected" id=2b9665bdd3f48bbcdd82551252f1906744a27029dfcc2623af559b655989f5a4 namespace=k8s.io
Feb 13 15:12:02.650845 containerd[1955]: time="2025-02-13T15:12:02.650244644Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:12:03.363308 kubelet[3390]: I0213 15:12:03.363257 3390 scope.go:117] "RemoveContainer" containerID="2b9665bdd3f48bbcdd82551252f1906744a27029dfcc2623af559b655989f5a4"
Feb 13 15:12:03.368389 containerd[1955]: time="2025-02-13T15:12:03.368314508Z" level=info msg="CreateContainer within sandbox \"a41140ba7d8fbdeb52b6db55247d8510028feb8e292b660abc42686d26223c50\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 15:12:03.394369 containerd[1955]: time="2025-02-13T15:12:03.394222364Z" level=info msg="CreateContainer within sandbox \"a41140ba7d8fbdeb52b6db55247d8510028feb8e292b660abc42686d26223c50\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"41816bf7dd3ec13fd752cf17a41516283bd24e1be2a57b27c3b5d2c5a84b9b73\""
Feb 13 15:12:03.395499 containerd[1955]: time="2025-02-13T15:12:03.395057924Z" level=info msg="StartContainer for \"41816bf7dd3ec13fd752cf17a41516283bd24e1be2a57b27c3b5d2c5a84b9b73\""
Feb 13 15:12:03.453525 systemd[1]: Started cri-containerd-41816bf7dd3ec13fd752cf17a41516283bd24e1be2a57b27c3b5d2c5a84b9b73.scope - libcontainer container 41816bf7dd3ec13fd752cf17a41516283bd24e1be2a57b27c3b5d2c5a84b9b73.
Feb 13 15:12:03.527437 containerd[1955]: time="2025-02-13T15:12:03.527324601Z" level=info msg="StartContainer for \"41816bf7dd3ec13fd752cf17a41516283bd24e1be2a57b27c3b5d2c5a84b9b73\" returns successfully"
Feb 13 15:12:05.641049 kubelet[3390]: E0213 15:12:05.640354 3390 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-163?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:12:08.762677 systemd[1]: cri-containerd-6675776be65899db00f26ed75f78bd281b6e440322df634dc5676ab938e1b20d.scope: Deactivated successfully.
Feb 13 15:12:08.763392 systemd[1]: cri-containerd-6675776be65899db00f26ed75f78bd281b6e440322df634dc5676ab938e1b20d.scope: Consumed 3.001s CPU time, 20.8M memory peak.
Feb 13 15:12:08.809376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6675776be65899db00f26ed75f78bd281b6e440322df634dc5676ab938e1b20d-rootfs.mount: Deactivated successfully.
Feb 13 15:12:08.821853 containerd[1955]: time="2025-02-13T15:12:08.821711859Z" level=info msg="shim disconnected" id=6675776be65899db00f26ed75f78bd281b6e440322df634dc5676ab938e1b20d namespace=k8s.io
Feb 13 15:12:08.821853 containerd[1955]: time="2025-02-13T15:12:08.821783103Z" level=warning msg="cleaning up after shim disconnected" id=6675776be65899db00f26ed75f78bd281b6e440322df634dc5676ab938e1b20d namespace=k8s.io
Feb 13 15:12:08.822822 containerd[1955]: time="2025-02-13T15:12:08.821802687Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:12:09.384260 kubelet[3390]: I0213 15:12:09.384019 3390 scope.go:117] "RemoveContainer" containerID="6675776be65899db00f26ed75f78bd281b6e440322df634dc5676ab938e1b20d"
Feb 13 15:12:09.388131 containerd[1955]: time="2025-02-13T15:12:09.387926690Z" level=info msg="CreateContainer within sandbox \"b8e42cd310de3363094862d78ba4ea88b974a9ed5c1f9ea29b9f791efe30a360\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 15:12:09.420251 containerd[1955]: time="2025-02-13T15:12:09.420147866Z" level=info msg="CreateContainer within sandbox \"b8e42cd310de3363094862d78ba4ea88b974a9ed5c1f9ea29b9f791efe30a360\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"44bec13823a08e4b211e067313b02faf72afe97e0bbb980c889ff37bbdac8017\""
Feb 13 15:12:09.421262 containerd[1955]: time="2025-02-13T15:12:09.421079942Z" level=info msg="StartContainer for \"44bec13823a08e4b211e067313b02faf72afe97e0bbb980c889ff37bbdac8017\""
Feb 13 15:12:09.485519 systemd[1]: Started cri-containerd-44bec13823a08e4b211e067313b02faf72afe97e0bbb980c889ff37bbdac8017.scope - libcontainer container 44bec13823a08e4b211e067313b02faf72afe97e0bbb980c889ff37bbdac8017.
Feb 13 15:12:09.553638 containerd[1955]: time="2025-02-13T15:12:09.553227387Z" level=info msg="StartContainer for \"44bec13823a08e4b211e067313b02faf72afe97e0bbb980c889ff37bbdac8017\" returns successfully"
Feb 13 15:12:15.641844 kubelet[3390]: E0213 15:12:15.641611 3390 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-163?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"