Feb 13 15:19:55.162738 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 15:19:55.162783 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:02:42 -00 2025
Feb 13 15:19:55.162807 kernel: KASLR disabled due to lack of seed
Feb 13 15:19:55.162824 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:19:55.162839 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Feb 13 15:19:55.162854 kernel: secureboot: Secure boot disabled
Feb 13 15:19:55.162872 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:19:55.162887 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 15:19:55.162902 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 15:19:55.162917 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:19:55.162937 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 15:19:55.162953 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:19:55.162968 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 15:19:55.162983 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 15:19:55.163001 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 15:19:55.163022 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:19:55.163038 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 15:19:55.163055 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 15:19:55.163070 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 15:19:55.163086 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 15:19:55.163103 kernel: printk: bootconsole [uart0] enabled
Feb 13 15:19:55.163119 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:19:55.163135 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:19:55.163151 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 15:19:55.163167 kernel: Zone ranges:
Feb 13 15:19:55.163183 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 15:19:55.163203 kernel: DMA32 empty
Feb 13 15:19:55.165277 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 15:19:55.165295 kernel: Movable zone start for each node
Feb 13 15:19:55.165311 kernel: Early memory node ranges
Feb 13 15:19:55.165328 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 15:19:55.165344 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 15:19:55.165360 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 15:19:55.165376 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 15:19:55.165392 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 15:19:55.165408 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 15:19:55.165424 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 15:19:55.165440 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 15:19:55.165463 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:19:55.165480 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 15:19:55.165503 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:19:55.165520 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 15:19:55.165537 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:19:55.165559 kernel: psci: Trusted OS migration not required
Feb 13 15:19:55.165576 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:19:55.165593 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:19:55.165610 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:19:55.165627 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:19:55.165644 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:19:55.165661 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:19:55.165678 kernel: CPU features: detected: Spectre-v2
Feb 13 15:19:55.165694 kernel: CPU features: detected: Spectre-v3a
Feb 13 15:19:55.165711 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:19:55.165728 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 15:19:55.165745 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 15:19:55.165766 kernel: alternatives: applying boot alternatives
Feb 13 15:19:55.165785 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:19:55.165804 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:19:55.165821 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:19:55.165838 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:19:55.165855 kernel: Fallback order for Node 0: 0
Feb 13 15:19:55.165871 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 15:19:55.165888 kernel: Policy zone: Normal
Feb 13 15:19:55.165905 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:19:55.165922 kernel: software IO TLB: area num 2.
Feb 13 15:19:55.165943 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 15:19:55.165960 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved)
Feb 13 15:19:55.165978 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:19:55.165994 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:19:55.166012 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:19:55.166030 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:19:55.166047 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:19:55.166064 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:19:55.166081 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:19:55.166098 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:19:55.166115 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:19:55.166136 kernel: GICv3: 96 SPIs implemented
Feb 13 15:19:55.166153 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:19:55.166170 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:19:55.166186 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 15:19:55.166203 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 15:19:55.168546 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 15:19:55.168565 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:19:55.168582 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:19:55.168599 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 15:19:55.168616 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 15:19:55.168633 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 15:19:55.168650 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:19:55.168675 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 15:19:55.168693 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 15:19:55.168710 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 15:19:55.168727 kernel: Console: colour dummy device 80x25
Feb 13 15:19:55.168744 kernel: printk: console [tty1] enabled
Feb 13 15:19:55.168761 kernel: ACPI: Core revision 20230628
Feb 13 15:19:55.168779 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 15:19:55.168796 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:19:55.168814 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:19:55.168831 kernel: landlock: Up and running.
Feb 13 15:19:55.168853 kernel: SELinux: Initializing.
Feb 13 15:19:55.168870 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:19:55.168887 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:19:55.168905 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:19:55.168922 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:19:55.168940 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:19:55.168958 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:19:55.168975 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 15:19:55.168997 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 15:19:55.169015 kernel: Remapping and enabling EFI services.
Feb 13 15:19:55.169032 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:19:55.169049 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:19:55.169066 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 15:19:55.169083 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 15:19:55.169101 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 15:19:55.169118 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:19:55.169135 kernel: SMP: Total of 2 processors activated.
Feb 13 15:19:55.169152 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:19:55.169174 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 15:19:55.169191 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:19:55.169247 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:19:55.169272 kernel: alternatives: applying system-wide alternatives
Feb 13 15:19:55.169290 kernel: devtmpfs: initialized
Feb 13 15:19:55.169309 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:19:55.169327 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:19:55.169344 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:19:55.169363 kernel: SMBIOS 3.0.0 present.
Feb 13 15:19:55.169385 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 15:19:55.169403 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:19:55.169421 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:19:55.169439 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:19:55.169458 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:19:55.169476 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:19:55.169494 kernel: audit: type=2000 audit(0.220:1): state=initialized audit_enabled=0 res=1
Feb 13 15:19:55.169516 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:19:55.169534 kernel: cpuidle: using governor menu
Feb 13 15:19:55.169552 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:19:55.169570 kernel: ASID allocator initialised with 65536 entries
Feb 13 15:19:55.169588 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:19:55.169606 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:19:55.169624 kernel: Modules: 17360 pages in range for non-PLT usage
Feb 13 15:19:55.169642 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 15:19:55.169661 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:19:55.169683 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:19:55.169702 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:19:55.169720 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:19:55.169738 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:19:55.169756 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:19:55.169774 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:19:55.169792 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:19:55.169810 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:19:55.169827 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:19:55.169850 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:19:55.169868 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:19:55.169886 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:19:55.169904 kernel: ACPI: Interpreter enabled
Feb 13 15:19:55.169922 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:19:55.169940 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:19:55.169958 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 15:19:55.172178 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:19:55.172474 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:19:55.172672 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:19:55.172866 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 15:19:55.173064 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 15:19:55.173089 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 15:19:55.173108 kernel: acpiphp: Slot [1] registered
Feb 13 15:19:55.173126 kernel: acpiphp: Slot [2] registered
Feb 13 15:19:55.173144 kernel: acpiphp: Slot [3] registered
Feb 13 15:19:55.173169 kernel: acpiphp: Slot [4] registered
Feb 13 15:19:55.173188 kernel: acpiphp: Slot [5] registered
Feb 13 15:19:55.173223 kernel: acpiphp: Slot [6] registered
Feb 13 15:19:55.173633 kernel: acpiphp: Slot [7] registered
Feb 13 15:19:55.173654 kernel: acpiphp: Slot [8] registered
Feb 13 15:19:55.173673 kernel: acpiphp: Slot [9] registered
Feb 13 15:19:55.173692 kernel: acpiphp: Slot [10] registered
Feb 13 15:19:55.173710 kernel: acpiphp: Slot [11] registered
Feb 13 15:19:55.173728 kernel: acpiphp: Slot [12] registered
Feb 13 15:19:55.173747 kernel: acpiphp: Slot [13] registered
Feb 13 15:19:55.173774 kernel: acpiphp: Slot [14] registered
Feb 13 15:19:55.173792 kernel: acpiphp: Slot [15] registered
Feb 13 15:19:55.173809 kernel: acpiphp: Slot [16] registered
Feb 13 15:19:55.173827 kernel: acpiphp: Slot [17] registered
Feb 13 15:19:55.173845 kernel: acpiphp: Slot [18] registered
Feb 13 15:19:55.173863 kernel: acpiphp: Slot [19] registered
Feb 13 15:19:55.173880 kernel: acpiphp: Slot [20] registered
Feb 13 15:19:55.173898 kernel: acpiphp: Slot [21] registered
Feb 13 15:19:55.173916 kernel: acpiphp: Slot [22] registered
Feb 13 15:19:55.173939 kernel: acpiphp: Slot [23] registered
Feb 13 15:19:55.173957 kernel: acpiphp: Slot [24] registered
Feb 13 15:19:55.173975 kernel: acpiphp: Slot [25] registered
Feb 13 15:19:55.173992 kernel: acpiphp: Slot [26] registered
Feb 13 15:19:55.174010 kernel: acpiphp: Slot [27] registered
Feb 13 15:19:55.174028 kernel: acpiphp: Slot [28] registered
Feb 13 15:19:55.174046 kernel: acpiphp: Slot [29] registered
Feb 13 15:19:55.174063 kernel: acpiphp: Slot [30] registered
Feb 13 15:19:55.174081 kernel: acpiphp: Slot [31] registered
Feb 13 15:19:55.174099 kernel: PCI host bridge to bus 0000:00
Feb 13 15:19:55.175694 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 15:19:55.175917 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:19:55.176104 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:19:55.176358 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 15:19:55.176588 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 15:19:55.176809 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 15:19:55.177024 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 15:19:55.184097 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:19:55.184435 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 15:19:55.184663 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:19:55.184906 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:19:55.185135 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 15:19:55.185416 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 15:19:55.185632 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 15:19:55.185831 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:19:55.186037 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 15:19:55.187534 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 15:19:55.187784 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 15:19:55.187990 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 15:19:55.188200 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 15:19:55.189539 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 15:19:55.189721 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:19:55.189898 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:19:55.189923 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:19:55.189942 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:19:55.189961 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:19:55.189979 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:19:55.189998 kernel: iommu: Default domain type: Translated
Feb 13 15:19:55.190023 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:19:55.190042 kernel: efivars: Registered efivars operations
Feb 13 15:19:55.190060 kernel: vgaarb: loaded
Feb 13 15:19:55.190078 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:19:55.190096 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:19:55.190115 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:19:55.190133 kernel: pnp: PnP ACPI init
Feb 13 15:19:55.190366 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 15:19:55.190401 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:19:55.190420 kernel: NET: Registered PF_INET protocol family
Feb 13 15:19:55.190439 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:19:55.190458 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:19:55.190476 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:19:55.190529 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:19:55.190570 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:19:55.190590 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:19:55.190608 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:19:55.190633 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:19:55.190652 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:19:55.190670 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:19:55.190689 kernel: kvm [1]: HYP mode not available
Feb 13 15:19:55.190708 kernel: Initialise system trusted keyrings
Feb 13 15:19:55.190726 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:19:55.190744 kernel: Key type asymmetric registered
Feb 13 15:19:55.190762 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:19:55.190780 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:19:55.190803 kernel: io scheduler mq-deadline registered
Feb 13 15:19:55.190821 kernel: io scheduler kyber registered
Feb 13 15:19:55.190839 kernel: io scheduler bfq registered
Feb 13 15:19:55.191056 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 15:19:55.191083 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:19:55.191101 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:19:55.191120 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 15:19:55.191138 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 15:19:55.191162 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:19:55.191181 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 15:19:55.193520 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 15:19:55.193566 kernel: printk: console [ttyS0] disabled
Feb 13 15:19:55.193587 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 15:19:55.193605 kernel: printk: console [ttyS0] enabled
Feb 13 15:19:55.193624 kernel: printk: bootconsole [uart0] disabled
Feb 13 15:19:55.193642 kernel: thunder_xcv, ver 1.0
Feb 13 15:19:55.193660 kernel: thunder_bgx, ver 1.0
Feb 13 15:19:55.193678 kernel: nicpf, ver 1.0
Feb 13 15:19:55.193706 kernel: nicvf, ver 1.0
Feb 13 15:19:55.193937 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:19:55.194124 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:19:54 UTC (1739459994)
Feb 13 15:19:55.194150 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:19:55.194183 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 15:19:55.196491 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:19:55.196544 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:19:55.196574 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:19:55.196593 kernel: Segment Routing with IPv6
Feb 13 15:19:55.196611 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:19:55.196629 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:19:55.196648 kernel: Key type dns_resolver registered
Feb 13 15:19:55.196666 kernel: registered taskstats version 1
Feb 13 15:19:55.196684 kernel: Loading compiled-in X.509 certificates
Feb 13 15:19:55.196703 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 62d673f884efd54b6d6ef802a9b879413c8a346e'
Feb 13 15:19:55.196722 kernel: Key type .fscrypt registered
Feb 13 15:19:55.196742 kernel: Key type fscrypt-provisioning registered
Feb 13 15:19:55.196766 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:19:55.196786 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:19:55.196805 kernel: ima: No architecture policies found
Feb 13 15:19:55.196826 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:19:55.196844 kernel: clk: Disabling unused clocks
Feb 13 15:19:55.196863 kernel: Freeing unused kernel memory: 39936K
Feb 13 15:19:55.196882 kernel: Run /init as init process
Feb 13 15:19:55.196901 kernel: with arguments:
Feb 13 15:19:55.196919 kernel: /init
Feb 13 15:19:55.196943 kernel: with environment:
Feb 13 15:19:55.196962 kernel: HOME=/
Feb 13 15:19:55.196981 kernel: TERM=linux
Feb 13 15:19:55.197000 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:19:55.197025 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:19:55.197050 systemd[1]: Detected virtualization amazon.
Feb 13 15:19:55.197071 systemd[1]: Detected architecture arm64.
Feb 13 15:19:55.197098 systemd[1]: Running in initrd.
Feb 13 15:19:55.197119 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:19:55.197139 systemd[1]: Hostname set to .
Feb 13 15:19:55.197159 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:19:55.197179 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:19:55.197199 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:19:55.197252 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:19:55.197275 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:19:55.197303 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:19:55.197324 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:19:55.197344 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:19:55.197368 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:19:55.197389 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:19:55.197409 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:19:55.197429 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:19:55.197454 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:19:55.197475 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:19:55.197494 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:19:55.197514 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:19:55.197534 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:19:55.197554 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:19:55.197574 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:19:55.197594 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:19:55.197614 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:19:55.197638 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:19:55.197658 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:19:55.197678 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:19:55.197698 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:19:55.197718 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:19:55.197738 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:19:55.197758 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:19:55.197777 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:19:55.197801 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:19:55.197821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:19:55.197842 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:19:55.197862 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:19:55.197889 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:19:55.197969 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 15:19:55.198018 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:19:55.198039 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:19:55.198059 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:19:55.198083 kernel: Bridge firewalling registered
Feb 13 15:19:55.198103 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:19:55.198123 systemd-journald[251]: Journal started
Feb 13 15:19:55.198169 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2da8796291ec3651bb39e74883f7cb) is 8.0M, max 75.3M, 67.3M free.
Feb 13 15:19:55.156666 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 15:19:55.202561 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:19:55.192301 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 15:19:55.213904 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:19:55.214328 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:19:55.221367 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:19:55.238499 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:19:55.243472 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:19:55.273272 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:19:55.276748 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:19:55.293891 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:19:55.308297 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:19:55.318499 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:19:55.329265 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:19:55.356270 dracut-cmdline[289]: dracut-dracut-053
Feb 13 15:19:55.362079 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:19:55.398306 systemd-resolved[282]: Positive Trust Anchors:
Feb 13 15:19:55.398334 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:19:55.398394 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:19:55.530245 kernel: SCSI subsystem initialized
Feb 13 15:19:55.537352 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:19:55.550260 kernel: iscsi: registered transport (tcp)
Feb 13 15:19:55.571664 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:19:55.571735 kernel: QLogic iSCSI HBA Driver
Feb 13 15:19:55.640244 kernel: random: crng init done
Feb 13 15:19:55.640557 systemd-resolved[282]: Defaulting to hostname 'linux'.
Feb 13 15:19:55.643995 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:19:55.648024 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:19:55.670122 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:19:55.679533 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:19:55.719924 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:19:55.720010 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:19:55.721595 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:19:55.786252 kernel: raid6: neonx8 gen() 6511 MB/s
Feb 13 15:19:55.803239 kernel: raid6: neonx4 gen() 6498 MB/s
Feb 13 15:19:55.820247 kernel: raid6: neonx2 gen() 5393 MB/s
Feb 13 15:19:55.837238 kernel: raid6: neonx1 gen() 3940 MB/s
Feb 13 15:19:55.854238 kernel: raid6: int64x8 gen() 3596 MB/s
Feb 13 15:19:55.871238 kernel: raid6: int64x4 gen() 3684 MB/s
Feb 13 15:19:55.888247 kernel: raid6: int64x2 gen() 3556 MB/s
Feb 13 15:19:55.905991 kernel: raid6: int64x1 gen() 2764 MB/s
Feb 13 15:19:55.906022 kernel: raid6: using algorithm neonx8 gen() 6511 MB/s
Feb 13 15:19:55.923973 kernel: raid6: .... xor() 4760 MB/s, rmw enabled
Feb 13 15:19:55.924009 kernel: raid6: using neon recovery algorithm
Feb 13 15:19:55.931993 kernel: xor: measuring software checksum speed
Feb 13 15:19:55.932044 kernel: 8regs : 12892 MB/sec
Feb 13 15:19:55.933239 kernel: 32regs : 11688 MB/sec
Feb 13 15:19:55.935141 kernel: arm64_neon : 9049 MB/sec
Feb 13 15:19:55.935174 kernel: xor: using function: 8regs (12892 MB/sec)
Feb 13 15:19:56.018250 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:19:56.036830 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:19:56.046529 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:19:56.086548 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Feb 13 15:19:56.095646 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:19:56.107474 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:19:56.137703 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Feb 13 15:19:56.192285 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:19:56.202543 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:19:56.314679 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:19:56.331261 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:19:56.369028 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:19:56.372817 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:19:56.379502 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:19:56.383018 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:19:56.399519 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:19:56.440560 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:19:56.481280 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:19:56.481342 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 15:19:56.502884 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:19:56.503156 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:19:56.503917 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:45:b3:1f:41:af
Feb 13 15:19:56.506016 (udev-worker)[530]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:19:56.525258 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 15:19:56.528266 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:19:56.531083 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:19:56.531593 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:19:56.538087 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:19:56.540247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:19:56.540507 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:19:56.552395 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:19:56.544493 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:19:56.563418 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:19:56.563583 kernel: GPT:9289727 != 16777215
Feb 13 15:19:56.562187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:19:56.571814 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:19:56.571854 kernel: GPT:9289727 != 16777215
Feb 13 15:19:56.571879 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:19:56.571903 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:19:56.601054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:19:56.611493 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:19:56.657536 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:19:56.663290 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (538)
Feb 13 15:19:56.717254 kernel: BTRFS: device fsid dbbe73f5-49db-4e16-b023-d47ce63b488f devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (520)
Feb 13 15:19:56.777748 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:19:56.795479 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:19:56.825196 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:19:56.841181 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:19:56.843567 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:19:56.854607 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:19:56.867674 disk-uuid[661]: Primary Header is updated.
Feb 13 15:19:56.867674 disk-uuid[661]: Secondary Entries is updated.
Feb 13 15:19:56.867674 disk-uuid[661]: Secondary Header is updated.
Feb 13 15:19:56.878229 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:19:56.899232 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:19:57.896881 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:19:57.896948 disk-uuid[662]: The operation has completed successfully.
Feb 13 15:19:58.062656 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:19:58.062862 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:19:58.122516 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:19:58.139100 sh[920]: Success
Feb 13 15:19:58.163338 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:19:58.278824 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:19:58.290454 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:19:58.305038 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:19:58.330413 kernel: BTRFS info (device dm-0): first mount of filesystem dbbe73f5-49db-4e16-b023-d47ce63b488f
Feb 13 15:19:58.330476 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:19:58.330503 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:19:58.332064 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:19:58.333263 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:19:58.454257 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:19:58.466959 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:19:58.469774 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:19:58.493556 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:19:58.501509 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:19:58.537376 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:19:58.537462 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:19:58.537494 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:19:58.547985 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:19:58.563663 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:19:58.566463 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:19:58.575731 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:19:58.588572 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:19:58.682936 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:19:58.694595 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:19:58.747652 systemd-networkd[1112]: lo: Link UP
Feb 13 15:19:58.748762 systemd-networkd[1112]: lo: Gained carrier
Feb 13 15:19:58.752449 systemd-networkd[1112]: Enumeration completed
Feb 13 15:19:58.754175 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:19:58.756428 systemd[1]: Reached target network.target - Network.
Feb 13 15:19:58.757491 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:19:58.757498 systemd-networkd[1112]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:19:58.771294 systemd-networkd[1112]: eth0: Link UP
Feb 13 15:19:58.771307 systemd-networkd[1112]: eth0: Gained carrier
Feb 13 15:19:58.771326 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:19:58.799304 systemd-networkd[1112]: eth0: DHCPv4 address 172.31.23.231/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:19:58.973005 ignition[1026]: Ignition 2.20.0
Feb 13 15:19:58.973519 ignition[1026]: Stage: fetch-offline
Feb 13 15:19:58.973939 ignition[1026]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:58.973963 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:19:58.974438 ignition[1026]: Ignition finished successfully
Feb 13 15:19:58.983559 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:19:58.999604 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:19:59.023967 ignition[1125]: Ignition 2.20.0
Feb 13 15:19:59.024017 ignition[1125]: Stage: fetch
Feb 13 15:19:59.026119 ignition[1125]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:59.026252 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:19:59.026956 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:19:59.052336 ignition[1125]: PUT result: OK
Feb 13 15:19:59.055584 ignition[1125]: parsed url from cmdline: ""
Feb 13 15:19:59.055600 ignition[1125]: no config URL provided
Feb 13 15:19:59.055614 ignition[1125]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:19:59.055638 ignition[1125]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:19:59.055669 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:19:59.061142 ignition[1125]: PUT result: OK
Feb 13 15:19:59.061236 ignition[1125]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:19:59.064616 ignition[1125]: GET result: OK
Feb 13 15:19:59.064748 ignition[1125]: parsing config with SHA512: 0ece8e49091f10c6dfb4fe7ad6101ada76e1ec9adcbcc2cae14511c0a771ca422317c24c74e2708841b0652bda0020d250d38c2a69dd81286968cc38a47e0498
Feb 13 15:19:59.076103 unknown[1125]: fetched base config from "system"
Feb 13 15:19:59.076378 unknown[1125]: fetched base config from "system"
Feb 13 15:19:59.077663 ignition[1125]: fetch: fetch complete
Feb 13 15:19:59.076392 unknown[1125]: fetched user config from "aws"
Feb 13 15:19:59.077683 ignition[1125]: fetch: fetch passed
Feb 13 15:19:59.077784 ignition[1125]: Ignition finished successfully
Feb 13 15:19:59.088545 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:19:59.104489 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:19:59.126954 ignition[1131]: Ignition 2.20.0
Feb 13 15:19:59.126984 ignition[1131]: Stage: kargs
Feb 13 15:19:59.127771 ignition[1131]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:59.127797 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:19:59.127946 ignition[1131]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:19:59.130040 ignition[1131]: PUT result: OK
Feb 13 15:19:59.140930 ignition[1131]: kargs: kargs passed
Feb 13 15:19:59.141242 ignition[1131]: Ignition finished successfully
Feb 13 15:19:59.147265 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:19:59.166598 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:19:59.189495 ignition[1137]: Ignition 2.20.0
Feb 13 15:19:59.189517 ignition[1137]: Stage: disks
Feb 13 15:19:59.190077 ignition[1137]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:59.190101 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:19:59.190780 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:19:59.197389 ignition[1137]: PUT result: OK
Feb 13 15:19:59.203958 ignition[1137]: disks: disks passed
Feb 13 15:19:59.204285 ignition[1137]: Ignition finished successfully
Feb 13 15:19:59.209924 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:19:59.210695 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:19:59.211049 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:19:59.213514 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:19:59.213813 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:19:59.214424 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:19:59.227645 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:19:59.266815 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:19:59.271045 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:19:59.282515 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:19:59.366233 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 469d244b-00c1-45f4-bce0-c1d88e98a895 r/w with ordered data mode. Quota mode: none.
Feb 13 15:19:59.367619 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:19:59.371130 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:19:59.387460 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:19:59.392426 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:19:59.395913 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:19:59.396009 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:19:59.396060 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:19:59.417710 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:19:59.431483 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:19:59.436678 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1165)
Feb 13 15:19:59.441174 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:19:59.441260 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:19:59.442592 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:19:59.449449 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:19:59.451735 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:19:59.806420 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:19:59.825270 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:19:59.841837 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:19:59.850591 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:20:00.157134 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:20:00.166451 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:20:00.179009 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:20:00.195752 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:20:00.201304 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:20:00.224530 systemd-networkd[1112]: eth0: Gained IPv6LL
Feb 13 15:20:00.228401 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:20:00.252015 ignition[1279]: INFO : Ignition 2.20.0
Feb 13 15:20:00.252015 ignition[1279]: INFO : Stage: mount
Feb 13 15:20:00.255195 ignition[1279]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:00.255195 ignition[1279]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:20:00.259231 ignition[1279]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:20:00.262477 ignition[1279]: INFO : PUT result: OK
Feb 13 15:20:00.266521 ignition[1279]: INFO : mount: mount passed
Feb 13 15:20:00.266521 ignition[1279]: INFO : Ignition finished successfully
Feb 13 15:20:00.272284 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:20:00.281424 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:20:00.375559 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:20:00.406259 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1289)
Feb 13 15:20:00.409746 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:20:00.409790 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:20:00.409817 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:20:00.416611 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:20:00.419167 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:20:00.451272 ignition[1305]: INFO : Ignition 2.20.0
Feb 13 15:20:00.454155 ignition[1305]: INFO : Stage: files
Feb 13 15:20:00.454155 ignition[1305]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:00.454155 ignition[1305]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:20:00.454155 ignition[1305]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:20:00.462993 ignition[1305]: INFO : PUT result: OK
Feb 13 15:20:00.467244 ignition[1305]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:20:00.479095 ignition[1305]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:20:00.479095 ignition[1305]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:20:00.497598 ignition[1305]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:20:00.500483 ignition[1305]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:20:00.503548 unknown[1305]: wrote ssh authorized keys file for user: core
Feb 13 15:20:00.505876 ignition[1305]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:20:00.516964 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:20:00.520592 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:20:00.609913 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:20:00.749412 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:20:00.749412 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:20:00.756553 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:20:01.267839 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:20:01.404444 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:20:01.407882 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:20:01.407882 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:20:01.407882 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:20:01.407882 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:20:01.407882 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:20:01.407882 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:20:01.407882 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:20:01.407882 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:20:01.407882 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:20:01.407882 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:20:01.407882 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:20:01.407882 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:20:01.407882 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:20:01.407882 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Feb 13 15:20:01.706398 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:20:02.046572 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:20:02.046572 ignition[1305]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:20:02.052937 ignition[1305]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:20:02.052937 ignition[1305]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:20:02.052937 ignition[1305]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:20:02.052937 ignition[1305]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:20:02.052937 ignition[1305]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:20:02.052937 ignition[1305]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:20:02.052937 ignition[1305]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:20:02.052937 ignition[1305]: INFO : files: files passed
Feb 13 15:20:02.052937 ignition[1305]: INFO : Ignition finished successfully
Feb 13 15:20:02.066940 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:20:02.093550 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:20:02.100645 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:20:02.109739 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:20:02.113627 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:20:02.128222 initrd-setup-root-after-ignition[1334]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:20:02.131392 initrd-setup-root-after-ignition[1334]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:20:02.137422 initrd-setup-root-after-ignition[1338]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:20:02.140351 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:20:02.143727 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:20:02.162592 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:20:02.207650 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:20:02.210125 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:20:02.215311 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:20:02.217313 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:20:02.219280 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:20:02.232531 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:20:02.265266 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:20:02.275540 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:20:02.317829 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:20:02.318197 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:20:02.326178 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:20:02.330427 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:20:02.332636 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:20:02.334371 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:20:02.334491 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:20:02.336918 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:20:02.338913 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:20:02.340918 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:20:02.345400 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:20:02.349582 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:20:02.350192 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:20:02.350496 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:20:02.351103 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:20:02.351705 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:20:02.352006 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:20:02.354735 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:20:02.354835 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:20:02.355700 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:20:02.355963 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:20:02.381864 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:20:02.392704 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:20:02.395235 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:20:02.395334 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:20:02.397745 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:20:02.397832 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:20:02.401719 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:20:02.401805 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:20:02.424511 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:20:02.427637 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:20:02.427748 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:20:02.440845 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:20:02.442624 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:20:02.442749 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:20:02.445101 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:20:02.445218 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:20:02.483607 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:20:02.489183 ignition[1359]: INFO : Ignition 2.20.0
Feb 13 15:20:02.492673 ignition[1359]: INFO : Stage: umount
Feb 13 15:20:02.495614 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:02.495614 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:20:02.495614 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:20:02.501920 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:20:02.502175 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:20:02.511485 ignition[1359]: INFO : PUT result: OK
Feb 13 15:20:02.517037 ignition[1359]: INFO : umount: umount passed
Feb 13 15:20:02.518835 ignition[1359]: INFO : Ignition finished successfully
Feb 13 15:20:02.521989 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:20:02.524257 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:20:02.530193 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:20:02.530390 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:20:02.533998 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:20:02.534686 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:20:02.537670 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:20:02.537752 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:20:02.539785 systemd[1]: Stopped target network.target - Network.
Feb 13 15:20:02.541430 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:20:02.541513 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:20:02.543782 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:20:02.545646 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:20:02.561748 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:20:02.564034 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:20:02.565716 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:20:02.567532 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:20:02.567607 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:20:02.569457 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:20:02.569524 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:20:02.571416 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:20:02.571496 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:20:02.573375 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:20:02.573450 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:20:02.575433 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:20:02.575511 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:20:02.578103 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:20:02.581982 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:20:02.593725 systemd-networkd[1112]: eth0: DHCPv6 lease lost
Feb 13 15:20:02.596308 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:20:02.596566 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:20:02.603941 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:20:02.604315 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:20:02.608964 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:20:02.609051 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:20:02.625627 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:20:02.643111 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:20:02.643259 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:20:02.651684 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:20:02.651775 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:20:02.659538 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:20:02.659622 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:20:02.662330 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:20:02.662406 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:20:02.664628 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:20:02.697187 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:20:02.700439 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:20:02.707347 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:20:02.708365 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:20:02.712027 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:20:02.712110 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:20:02.716629 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:20:02.716695 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:20:02.718799 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:20:02.718882 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:20:02.732130 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:20:02.732273 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:20:02.735961 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:20:02.736044 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:20:02.752767 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:20:02.757945 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:20:02.758061 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:20:02.764029 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:20:02.764137 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:20:02.775961 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:20:02.776137 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:20:02.780899 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:20:02.796931 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:20:02.812721 systemd[1]: Switching root.
Feb 13 15:20:02.850127 systemd-journald[251]: Journal stopped
Feb 13 15:20:05.301088 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:20:05.301643 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:20:05.301688 kernel: SELinux: policy capability open_perms=1
Feb 13 15:20:05.301725 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:20:05.301755 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:20:05.301784 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:20:05.301821 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:20:05.301850 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:20:05.301879 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:20:05.301908 kernel: audit: type=1403 audit(1739460003.395:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:20:05.301937 systemd[1]: Successfully loaded SELinux policy in 49.124ms.
Feb 13 15:20:05.301983 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.795ms.
Feb 13 15:20:05.302020 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:20:05.302053 systemd[1]: Detected virtualization amazon.
Feb 13 15:20:05.302084 systemd[1]: Detected architecture arm64.
Feb 13 15:20:05.302115 systemd[1]: Detected first boot.
Feb 13 15:20:05.302146 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:20:05.302177 zram_generator::config[1400]: No configuration found.
Feb 13 15:20:05.302251 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:20:05.307801 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:20:05.307845 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:20:05.307892 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:20:05.307927 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:20:05.307957 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:20:05.307987 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:20:05.308017 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:20:05.308049 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:20:05.308082 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:20:05.308119 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:20:05.308169 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:20:05.309043 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:20:05.310004 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:20:05.310183 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:20:05.313981 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:20:05.314031 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:20:05.314067 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:20:05.314100 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:20:05.314137 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:20:05.314166 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:20:05.314195 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:20:05.315342 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:20:05.315386 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:20:05.315420 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:20:05.315451 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:20:05.315483 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:20:05.315520 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:20:05.315549 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:20:05.315579 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:20:05.315608 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:20:05.315636 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:20:05.315668 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:20:05.315697 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:20:05.315727 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:20:05.315759 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:20:05.315793 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:20:05.315824 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:20:05.315854 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:20:05.315884 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:20:05.315914 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:20:05.315945 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:20:05.315974 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:20:05.316003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:20:05.316034 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:20:05.316070 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:20:05.316099 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:20:05.316128 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:20:05.316178 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:20:05.319299 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:20:05.319358 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:20:05.319389 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:20:05.319417 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:20:05.319454 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:20:05.319488 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:20:05.319520 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:20:05.319548 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:20:05.319576 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:20:05.319605 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:20:05.319633 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:20:05.319664 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:20:05.319694 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:20:05.319727 systemd[1]: Stopped verity-setup.service.
Feb 13 15:20:05.319756 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:20:05.319785 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:20:05.319815 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:20:05.319846 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:20:05.319876 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:20:05.319908 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:20:05.319939 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:20:05.319968 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:20:05.319999 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:20:05.320027 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:20:05.320055 kernel: loop: module loaded
Feb 13 15:20:05.320085 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:20:05.320114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:20:05.320172 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:20:05.324397 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:20:05.324466 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:20:05.324499 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:20:05.324528 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:20:05.324559 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:20:05.324589 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:20:05.324627 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:20:05.324701 systemd-journald[1485]: Collecting audit messages is disabled.
Feb 13 15:20:05.324759 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:20:05.324789 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:20:05.324817 kernel: fuse: init (API version 7.39)
Feb 13 15:20:05.324849 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:20:05.324880 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:20:05.324912 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:20:05.324939 kernel: ACPI: bus type drm_connector registered
Feb 13 15:20:05.324965 systemd-journald[1485]: Journal started
Feb 13 15:20:05.325019 systemd-journald[1485]: Runtime Journal (/run/log/journal/ec2da8796291ec3651bb39e74883f7cb) is 8.0M, max 75.3M, 67.3M free.
Feb 13 15:20:04.687780 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:20:05.333328 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:20:04.739449 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 15:20:04.740271 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:20:05.346065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:20:05.346155 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:20:05.354046 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:20:05.365091 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:20:05.367675 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:20:05.375680 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:20:05.398289 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:20:05.398387 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:20:05.405715 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:20:05.410727 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:20:05.412289 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:20:05.414973 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:20:05.416926 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:20:05.420680 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:20:05.424932 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:20:05.467553 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:20:05.476347 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:20:05.488242 kernel: loop0: detected capacity change from 0 to 116784
Feb 13 15:20:05.499024 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:20:05.512468 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:20:05.524507 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:20:05.532463 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:20:05.545579 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:20:05.603340 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:20:05.607283 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:20:05.625603 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:20:05.625794 systemd-journald[1485]: Time spent on flushing to /var/log/journal/ec2da8796291ec3651bb39e74883f7cb is 67.231ms for 918 entries.
Feb 13 15:20:05.625794 systemd-journald[1485]: System Journal (/var/log/journal/ec2da8796291ec3651bb39e74883f7cb) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:20:05.709454 systemd-journald[1485]: Received client request to flush runtime journal.
Feb 13 15:20:05.709527 kernel: loop1: detected capacity change from 0 to 194512
Feb 13 15:20:05.647314 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:20:05.659573 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:20:05.675844 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:20:05.692963 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:20:05.716982 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:20:05.723309 udevadm[1544]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:20:05.773958 systemd-tmpfiles[1546]: ACLs are not supported, ignoring.
Feb 13 15:20:05.773996 systemd-tmpfiles[1546]: ACLs are not supported, ignoring.
Feb 13 15:20:05.785474 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:20:05.930251 kernel: loop2: detected capacity change from 0 to 53784
Feb 13 15:20:06.004244 kernel: loop3: detected capacity change from 0 to 113552
Feb 13 15:20:06.128244 kernel: loop4: detected capacity change from 0 to 116784
Feb 13 15:20:06.144272 kernel: loop5: detected capacity change from 0 to 194512
Feb 13 15:20:06.176067 kernel: loop6: detected capacity change from 0 to 53784
Feb 13 15:20:06.202259 kernel: loop7: detected capacity change from 0 to 113552
Feb 13 15:20:06.216237 (sd-merge)[1554]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 15:20:06.221422 (sd-merge)[1554]: Merged extensions into '/usr'.
Feb 13 15:20:06.231203 systemd[1]: Reloading requested from client PID 1510 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:20:06.231410 systemd[1]: Reloading...
Feb 13 15:20:06.413246 zram_generator::config[1580]: No configuration found.
Feb 13 15:20:06.744845 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:20:06.857249 systemd[1]: Reloading finished in 624 ms.
Feb 13 15:20:06.892065 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:20:06.895331 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:20:06.909541 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:20:06.922624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:20:06.931578 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:20:06.963523 systemd[1]: Reloading requested from client PID 1632 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:20:06.963551 systemd[1]: Reloading...
Feb 13 15:20:06.977304 systemd-tmpfiles[1633]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:20:06.977847 systemd-tmpfiles[1633]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:20:06.981727 systemd-tmpfiles[1633]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:20:06.986018 systemd-tmpfiles[1633]: ACLs are not supported, ignoring.
Feb 13 15:20:06.986360 systemd-tmpfiles[1633]: ACLs are not supported, ignoring.
Feb 13 15:20:07.002656 systemd-tmpfiles[1633]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:20:07.002675 systemd-tmpfiles[1633]: Skipping /boot
Feb 13 15:20:07.074252 ldconfig[1506]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:20:07.075191 systemd-tmpfiles[1633]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:20:07.075239 systemd-tmpfiles[1633]: Skipping /boot
Feb 13 15:20:07.080236 zram_generator::config[1659]: No configuration found.
Feb 13 15:20:07.110466 systemd-udevd[1634]: Using default interface naming scheme 'v255'.
Feb 13 15:20:07.445406 (udev-worker)[1726]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:20:07.451758 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
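The loopN capacity changes and the "(sd-merge)" lines above are systemd-sysext activating the extension images staged earlier: each .raw image under /etc/extensions (including the kubernetes.raw link Ignition wrote) is attached as a loop device and its /usr tree is overlaid onto the host's /usr, which is why a systemd reload follows. For an image to be accepted it must carry an extension-release file whose identification fields match the host's os-release. A sketch of the expected layout, with names chosen to match the kubernetes image above (the exact field values inside real sysext-bakery images are an assumption here):

    /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw
    # inside the image, roughly:
    usr/bin/kubelet
    usr/lib/extension-release.d/extension-release.kubernetes
        # must match the host os-release, or use ID=_any
        ID=flatcar
        SYSEXT_LEVEL=1.0

Once the system is up, `systemd-sysext list` shows the discovered images and `systemd-sysext refresh` re-merges them after a change.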
Feb 13 15:20:07.605600 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:20:07.607033 systemd[1]: Reloading finished in 642 ms.
Feb 13 15:20:07.620247 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1730)
Feb 13 15:20:07.652962 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:20:07.655961 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:20:07.675570 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:20:07.729025 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:20:07.735839 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:20:07.740749 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:20:07.745792 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:20:07.757791 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:20:07.763961 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:20:07.767751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:20:07.772768 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:20:07.781728 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:20:07.790749 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:20:07.814165 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:20:07.823789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:20:07.824232 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:20:07.891037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:20:07.891368 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:20:07.895441 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:20:07.895730 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:20:07.903322 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:20:07.921050 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:20:07.925402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:20:07.927480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:20:07.973225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:20:07.979712 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:20:07.982896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:20:07.983158 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:20:07.983443 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:20:07.983717 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:20:08.022242 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:20:08.045065 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:20:08.062266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:20:08.068915 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:20:08.072415 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:20:08.091286 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:20:08.091592 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:20:08.096396 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:20:08.115813 augenrules[1867]: No rules
Feb 13 15:20:08.122435 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:20:08.122913 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:20:08.156325 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:20:08.157106 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:20:08.190689 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:20:08.204595 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:20:08.204749 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:20:08.218305 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:20:08.266230 lvm[1879]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:20:08.305353 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:20:08.338280 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:20:08.344146 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:20:08.359808 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:20:08.366491 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:20:08.377981 systemd-networkd[1789]: lo: Link UP
Feb 13 15:20:08.378006 systemd-networkd[1789]: lo: Gained carrier
Feb 13 15:20:08.381128 systemd-networkd[1789]: Enumeration completed
Feb 13 15:20:08.381345 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:20:08.387696 systemd-networkd[1789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:20:08.387719 systemd-networkd[1789]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:20:08.388747 lvm[1893]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:20:08.392937 systemd-networkd[1789]: eth0: Link UP
Feb 13 15:20:08.393337 systemd-networkd[1789]: eth0: Gained carrier
Feb 13 15:20:08.393383 systemd-networkd[1789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:20:08.395617 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:20:08.408508 systemd-networkd[1789]: eth0: DHCPv4 address 172.31.23.231/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:20:08.433450 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:20:08.437238 systemd-resolved[1793]: Positive Trust Anchors:
Feb 13 15:20:08.437739 systemd-resolved[1793]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:20:08.437889 systemd-resolved[1793]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:20:08.447584 systemd-resolved[1793]: Defaulting to hostname 'linux'.
Feb 13 15:20:08.450904 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:20:08.453419 systemd[1]: Reached target network.target - Network.
Feb 13 15:20:08.455357 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:20:08.457837 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:20:08.459953 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:20:08.462276 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:20:08.464847 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:20:08.466984 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:20:08.469289 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:20:08.471541 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:20:08.471593 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:20:08.473842 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:20:08.477162 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:20:08.481889 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:20:08.491535 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:20:08.494730 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:20:08.497124 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:20:08.498972 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:20:08.501126 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
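eth0 is configured here by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network rather than by anything instance-specific, which is why networkd warns that the match is "based on potentially unpredictable interface name" (the kernel command line sets net.ifnames=0, so the interface stays eth0). The shipped unit is not reproduced in the log; an illustrative minimal .network unit with equivalent DHCPv4 behavior, not the actual file, would look like:

    [Match]
    # illustrative; the shipped zz-default unit matches more broadly
    Name=eth*

    [Network]
    DHCP=yes

    [DHCPv4]
    UseMTU=true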
Feb 13 15:20:08.501175 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:20:08.509593 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:20:08.518803 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 15:20:08.529566 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:20:08.534178 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:20:08.544647 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:20:08.548298 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:20:08.558795 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:20:08.564512 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 15:20:08.571840 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:20:08.587253 jq[1902]: false
Feb 13 15:20:08.585423 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 15:20:08.591569 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:20:08.597779 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:20:08.611652 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:20:08.614540 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:20:08.615530 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:20:08.630250 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:20:08.637440 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:20:08.650025 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:20:08.650471 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:20:08.690185 dbus-daemon[1901]: [system] SELinux support is enabled
Feb 13 15:20:08.692764 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:20:08.700960 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:20:08.701025 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:20:08.716494 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:20:08.716548 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:20:08.720394 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:20:08.723755 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:20:08.742032 dbus-daemon[1901]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1789 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 15:20:08.773895 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 15:20:08.784261 extend-filesystems[1903]: Found loop4
Feb 13 15:20:08.784261 extend-filesystems[1903]: Found loop5
Feb 13 15:20:08.784261 extend-filesystems[1903]: Found loop6
Feb 13 15:20:08.784261 extend-filesystems[1903]: Found loop7
Feb 13 15:20:08.784261 extend-filesystems[1903]: Found nvme0n1
Feb 13 15:20:08.784261 extend-filesystems[1903]: Found nvme0n1p1
Feb 13 15:20:08.784261 extend-filesystems[1903]: Found nvme0n1p2
Feb 13 15:20:08.784261 extend-filesystems[1903]: Found nvme0n1p3
Feb 13 15:20:08.784261 extend-filesystems[1903]: Found usr
Feb 13 15:20:08.784261 extend-filesystems[1903]: Found nvme0n1p4
Feb 13 15:20:08.784261 extend-filesystems[1903]: Found nvme0n1p6
Feb 13 15:20:08.784261 extend-filesystems[1903]: Found nvme0n1p7
Feb 13 15:20:08.784261 extend-filesystems[1903]: Found nvme0n1p9
Feb 13 15:20:08.784261 extend-filesystems[1903]: Checking size of /dev/nvme0n1p9
Feb 13 15:20:08.874609 jq[1914]: true
Feb 13 15:20:08.874872 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:31:02 UTC 2025 (1): Starting
Feb 13 15:20:08.874872 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:20:08.874872 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: ----------------------------------------------------
Feb 13 15:20:08.874872 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:20:08.874872 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:20:08.874872 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: corporation. Support and training for ntp-4 are
Feb 13 15:20:08.874872 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: available at https://www.nwtime.org/support
Feb 13 15:20:08.874872 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: ----------------------------------------------------
Feb 13 15:20:08.874872 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: proto: precision = 0.096 usec (-23)
Feb 13 15:20:08.896726 update_engine[1912]: I20250213 15:20:08.807632 1912 main.cc:92] Flatcar Update Engine starting
Feb 13 15:20:08.896726 update_engine[1912]: I20250213 15:20:08.824481 1912 update_check_scheduler.cc:74] Next update check in 5m37s
Feb 13 15:20:08.822542 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:20:08.828377 ntpd[1905]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:31:02 UTC 2025 (1): Starting
Feb 13 15:20:08.897594 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: basedate set to 2025-02-01
Feb 13 15:20:08.897594 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:20:08.837546 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:20:08.828426 ntpd[1905]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:20:08.913566 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:20:08.913566 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:20:08.913566 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:20:08.913566 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: Listen normally on 3 eth0 172.31.23.231:123
Feb 13 15:20:08.913566 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: Listen normally on 4 lo [::1]:123
Feb 13 15:20:08.913566 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: bind(21) AF_INET6 fe80::445:b3ff:fe1f:41af%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:20:08.913566 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: unable to create socket on eth0 (5) for fe80::445:b3ff:fe1f:41af%2#123
Feb 13 15:20:08.913566 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: failed to init interface for address fe80::445:b3ff:fe1f:41af%2
Feb 13 15:20:08.913566 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:20:08.902461 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:20:08.828446 ntpd[1905]: ----------------------------------------------------
Feb 13 15:20:08.915521 tar[1921]: linux-arm64/helm
Feb 13 15:20:08.902683 (ntainerd)[1935]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:20:08.828465 ntpd[1905]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:20:08.902868 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:20:08.828483 ntpd[1905]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:20:08.927577 jq[1932]: true
Feb 13 15:20:08.828500 ntpd[1905]: corporation. Support and training for ntp-4 are
Feb 13 15:20:08.828518 ntpd[1905]: available at https://www.nwtime.org/support
Feb 13 15:20:08.828536 ntpd[1905]: ----------------------------------------------------
Feb 13 15:20:08.940923 extend-filesystems[1903]: Resized partition /dev/nvme0n1p9
Feb 13 15:20:08.867944 ntpd[1905]: proto: precision = 0.096 usec (-23)
Feb 13 15:20:08.952264 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 15:20:08.952345 extend-filesystems[1960]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:20:08.884144 ntpd[1905]: basedate set to 2025-02-01
Feb 13 15:20:08.954527 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:20:08.954527 ntpd[1905]: 13 Feb 15:20:08 ntpd[1905]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:20:08.884180 ntpd[1905]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:20:08.909171 ntpd[1905]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:20:08.909287 ntpd[1905]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:20:08.909541 ntpd[1905]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:20:08.909603 ntpd[1905]: Listen normally on 3 eth0 172.31.23.231:123
Feb 13 15:20:08.909670 ntpd[1905]: Listen normally on 4 lo [::1]:123
Feb 13 15:20:08.909743 ntpd[1905]: bind(21) AF_INET6 fe80::445:b3ff:fe1f:41af%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:20:08.909780 ntpd[1905]: unable to create socket on eth0 (5) for fe80::445:b3ff:fe1f:41af%2#123
Feb 13 15:20:08.909808 ntpd[1905]: failed to init interface for address fe80::445:b3ff:fe1f:41af%2
Feb 13 15:20:08.909859 ntpd[1905]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:20:08.950282 ntpd[1905]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:20:08.950331 ntpd[1905]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:20:09.076480 systemd-logind[1911]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 15:20:09.076524 systemd-logind[1911]: Watching system buttons on /dev/input/event1 (Sleep Button)
Feb 13 15:20:09.087540 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 15:20:09.089323 systemd-logind[1911]: New seat seat0.
Feb 13 15:20:09.096672 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:20:09.105623 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 15:20:09.108910 extend-filesystems[1960]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 15:20:09.108910 extend-filesystems[1960]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:20:09.108910 extend-filesystems[1960]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 15:20:09.126456 extend-filesystems[1903]: Resized filesystem in /dev/nvme0n1p9
Feb 13 15:20:09.118097 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:20:09.137579 bash[1969]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:20:09.120572 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:20:09.135607 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:20:09.177981 coreos-metadata[1900]: Feb 13 15:20:09.174 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 15:20:09.177981 coreos-metadata[1900]: Feb 13 15:20:09.175 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 15:20:09.177981 coreos-metadata[1900]: Feb 13 15:20:09.176 INFO Fetch successful
Feb 13 15:20:09.177981 coreos-metadata[1900]: Feb 13 15:20:09.176 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 15:20:09.177981 coreos-metadata[1900]: Feb 13 15:20:09.177 INFO Fetch successful
Feb 13 15:20:09.177981 coreos-metadata[1900]: Feb 13 15:20:09.177 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.184 INFO Fetch successful
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.184 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.193 INFO Fetch successful
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.193 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.193 INFO Fetch failed with 404: resource not found
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.193 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.193 INFO Fetch successful
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.193 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.193 INFO Fetch successful
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.193 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.194 INFO Fetch successful
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.194 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.197 INFO Fetch successful
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.197 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 15:20:09.247160 coreos-metadata[1900]: Feb 13 15:20:09.198 INFO Fetch successful
Feb 13 15:20:09.226031 systemd[1]: Starting sshkeys.service...
Feb 13 15:20:09.302192 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 15:20:09.318195 locksmithd[1940]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:20:09.348525 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 15:20:09.366590 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1726)
Feb 13 15:20:09.394675 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:20:09.426869 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 15:20:09.430892 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
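The coreos-metadata agent above is walking the EC2 instance metadata service using the IMDSv2 pattern: one PUT to /latest/api/token buys a short-lived session token, and every subsequent GET presents that token as a header; the 404 on the ipv6 key is simply an attribute this instance does not have. A minimal Python sketch of the same flow, with the API version path taken from the log (the TTL value is an arbitrary choice, not something the log records):

    import urllib.request

    IMDS = "http://169.254.169.254"

    # IMDSv2: acquire a session token with PUT before any metadata GET.
    req = urllib.request.Request(
        IMDS + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    def metadata(path: str) -> str:
        """GET one metadata key, presenting the session token."""
        r = urllib.request.Request(
            IMDS + "/2021-01-03/meta-data/" + path,  # API version seen in the log
            headers={"X-aws-ec2-metadata-token": token},
        )
        return urllib.request.urlopen(r, timeout=2).read().decode()

    print(metadata("instance-id"))
    print(metadata("local-ipv4"))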
Feb 13 15:20:09.449104 dbus-daemon[1901]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 15:20:09.449425 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 15:20:09.454509 dbus-daemon[1901]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1929 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 15:20:09.486314 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 15:20:09.526847 containerd[1935]: time="2025-02-13T15:20:09.526715554Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:20:09.530282 polkitd[2028]: Started polkitd version 121
Feb 13 15:20:09.551700 polkitd[2028]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 15:20:09.551841 polkitd[2028]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 15:20:09.554632 polkitd[2028]: Finished loading, compiling and executing 2 rules
Feb 13 15:20:09.564516 dbus-daemon[1901]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 15:20:09.565342 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 15:20:09.565192 polkitd[2028]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 15:20:09.626677 systemd-hostnamed[1929]: Hostname set to <ip-172-31-23-231> (transient)
Feb 13 15:20:09.626839 systemd-resolved[1793]: System hostname changed to 'ip-172-31-23-231'.
Feb 13 15:20:09.688406 coreos-metadata[1992]: Feb 13 15:20:09.685 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 15:20:09.697292 coreos-metadata[1992]: Feb 13 15:20:09.697 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Feb 13 15:20:09.698147 coreos-metadata[1992]: Feb 13 15:20:09.698 INFO Fetch successful
Feb 13 15:20:09.698272 coreos-metadata[1992]: Feb 13 15:20:09.698 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 15:20:09.699249 coreos-metadata[1992]: Feb 13 15:20:09.698 INFO Fetch successful
Feb 13 15:20:09.704995 unknown[1992]: wrote ssh authorized keys file for user: core
Feb 13 15:20:09.730440 containerd[1935]: time="2025-02-13T15:20:09.730370879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:20:09.739246 containerd[1935]: time="2025-02-13T15:20:09.737012447Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:20:09.739246 containerd[1935]: time="2025-02-13T15:20:09.737074019Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:20:09.739246 containerd[1935]: time="2025-02-13T15:20:09.737108603Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:20:09.739246 containerd[1935]: time="2025-02-13T15:20:09.737422487Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:20:09.739246 containerd[1935]: time="2025-02-13T15:20:09.737456171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:20:09.739246 containerd[1935]: time="2025-02-13T15:20:09.737579555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:20:09.739246 containerd[1935]: time="2025-02-13T15:20:09.737606543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:20:09.739246 containerd[1935]: time="2025-02-13T15:20:09.737878295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:20:09.739246 containerd[1935]: time="2025-02-13T15:20:09.737912015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:20:09.739246 containerd[1935]: time="2025-02-13T15:20:09.737942015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:20:09.739246 containerd[1935]: time="2025-02-13T15:20:09.737965775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:20:09.739751 containerd[1935]: time="2025-02-13T15:20:09.738116663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:20:09.741893 containerd[1935]: time="2025-02-13T15:20:09.741843227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:20:09.742840 containerd[1935]: time="2025-02-13T15:20:09.742797251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:20:09.745811 containerd[1935]: time="2025-02-13T15:20:09.745256879Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:20:09.745811 containerd[1935]: time="2025-02-13T15:20:09.745503779Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:20:09.745811 containerd[1935]: time="2025-02-13T15:20:09.745599731Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:20:09.760255 containerd[1935]: time="2025-02-13T15:20:09.758061647Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:20:09.760255 containerd[1935]: time="2025-02-13T15:20:09.758167943Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:20:09.760255 containerd[1935]: time="2025-02-13T15:20:09.758203451Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:20:09.760255 containerd[1935]: time="2025-02-13T15:20:09.758269667Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:20:09.760255 containerd[1935]: time="2025-02-13T15:20:09.758310647Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:20:09.760255 containerd[1935]: time="2025-02-13T15:20:09.758570207Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:20:09.760255 containerd[1935]: time="2025-02-13T15:20:09.758975915Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:20:09.760255 containerd[1935]: time="2025-02-13T15:20:09.759181223Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:20:09.760255 containerd[1935]: time="2025-02-13T15:20:09.759246743Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:20:09.760255 containerd[1935]: time="2025-02-13T15:20:09.759282863Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:20:09.760255 containerd[1935]: time="2025-02-13T15:20:09.759318311Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:20:09.760255 containerd[1935]: time="2025-02-13T15:20:09.759348647Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:20:09.760255 containerd[1935]: time="2025-02-13T15:20:09.759377543Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:20:09.760255 containerd[1935]: time="2025-02-13T15:20:09.759409295Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:20:09.760875 containerd[1935]: time="2025-02-13T15:20:09.759440507Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:20:09.760875 containerd[1935]: time="2025-02-13T15:20:09.759470555Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:20:09.760875 containerd[1935]: time="2025-02-13T15:20:09.759502283Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:20:09.760875 containerd[1935]: time="2025-02-13T15:20:09.759531035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:20:09.760875 containerd[1935]: time="2025-02-13T15:20:09.759569915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:20:09.760875 containerd[1935]: time="2025-02-13T15:20:09.759600059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:20:09.760875 containerd[1935]: time="2025-02-13T15:20:09.759636647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:20:09.760875 containerd[1935]: time="2025-02-13T15:20:09.759666371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:20:09.760875 containerd[1935]: time="2025-02-13T15:20:09.759695807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:20:09.760875 containerd[1935]: time="2025-02-13T15:20:09.759726971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..."
type=io.containerd.grpc.v1 Feb 13 15:20:09.760875 containerd[1935]: time="2025-02-13T15:20:09.759754835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:20:09.760875 containerd[1935]: time="2025-02-13T15:20:09.759783647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:20:09.760875 containerd[1935]: time="2025-02-13T15:20:09.759813047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:20:09.760875 containerd[1935]: time="2025-02-13T15:20:09.759845735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:20:09.763545 containerd[1935]: time="2025-02-13T15:20:09.759872711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:20:09.763545 containerd[1935]: time="2025-02-13T15:20:09.759902387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:20:09.763545 containerd[1935]: time="2025-02-13T15:20:09.759931043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:20:09.763545 containerd[1935]: time="2025-02-13T15:20:09.759962315Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:20:09.763545 containerd[1935]: time="2025-02-13T15:20:09.760004399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:20:09.763545 containerd[1935]: time="2025-02-13T15:20:09.760044215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:20:09.763545 containerd[1935]: time="2025-02-13T15:20:09.760071167Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:20:09.763545 containerd[1935]: time="2025-02-13T15:20:09.762297371Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:20:09.763545 containerd[1935]: time="2025-02-13T15:20:09.762471371Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:20:09.763545 containerd[1935]: time="2025-02-13T15:20:09.762501167Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:20:09.763545 containerd[1935]: time="2025-02-13T15:20:09.762531191Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:20:09.763545 containerd[1935]: time="2025-02-13T15:20:09.762554351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:20:09.763545 containerd[1935]: time="2025-02-13T15:20:09.762586991Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:20:09.763545 containerd[1935]: time="2025-02-13T15:20:09.762610823Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:20:09.764100 containerd[1935]: time="2025-02-13T15:20:09.762634907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:20:09.764174 containerd[1935]: time="2025-02-13T15:20:09.763134551Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:20:09.764174 containerd[1935]: time="2025-02-13T15:20:09.764067695Z" level=info msg="Connect containerd service" Feb 13 15:20:09.764174 containerd[1935]: time="2025-02-13T15:20:09.764155391Z" level=info msg="using legacy CRI server" Feb 13 15:20:09.764174 containerd[1935]: time="2025-02-13T15:20:09.764177543Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:20:09.765594 containerd[1935]: time="2025-02-13T15:20:09.764951759Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:20:09.768729 containerd[1935]: time="2025-02-13T15:20:09.768650711Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:20:09.772838 
containerd[1935]: time="2025-02-13T15:20:09.771101267Z" level=info msg="Start subscribing containerd event" Feb 13 15:20:09.772838 containerd[1935]: time="2025-02-13T15:20:09.771193919Z" level=info msg="Start recovering state" Feb 13 15:20:09.772838 containerd[1935]: time="2025-02-13T15:20:09.772740947Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:20:09.773135 containerd[1935]: time="2025-02-13T15:20:09.772860131Z" level=info msg="Start event monitor" Feb 13 15:20:09.773135 containerd[1935]: time="2025-02-13T15:20:09.772889555Z" level=info msg="Start snapshots syncer" Feb 13 15:20:09.773135 containerd[1935]: time="2025-02-13T15:20:09.772918379Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:20:09.773135 containerd[1935]: time="2025-02-13T15:20:09.772937555Z" level=info msg="Start streaming server" Feb 13 15:20:09.774347 containerd[1935]: time="2025-02-13T15:20:09.774278663Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:20:09.774537 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:20:09.780931 containerd[1935]: time="2025-02-13T15:20:09.775613255Z" level=info msg="containerd successfully booted in 0.255527s" Feb 13 15:20:09.790567 update-ssh-keys[2071]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:20:09.792680 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:20:09.800847 systemd[1]: Finished sshkeys.service. Feb 13 15:20:09.830772 ntpd[1905]: bind(24) AF_INET6 fe80::445:b3ff:fe1f:41af%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:20:09.832672 ntpd[1905]: 13 Feb 15:20:09 ntpd[1905]: bind(24) AF_INET6 fe80::445:b3ff:fe1f:41af%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:20:09.832672 ntpd[1905]: 13 Feb 15:20:09 ntpd[1905]: unable to create socket on eth0 (6) for fe80::445:b3ff:fe1f:41af%2#123 Feb 13 15:20:09.832672 ntpd[1905]: 13 Feb 15:20:09 ntpd[1905]: failed to init interface for address fe80::445:b3ff:fe1f:41af%2 Feb 13 15:20:09.830839 ntpd[1905]: unable to create socket on eth0 (6) for fe80::445:b3ff:fe1f:41af%2#123 Feb 13 15:20:09.830867 ntpd[1905]: failed to init interface for address fe80::445:b3ff:fe1f:41af%2 Feb 13 15:20:10.209401 systemd-networkd[1789]: eth0: Gained IPv6LL Feb 13 15:20:10.215510 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:20:10.221029 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:20:10.238742 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 15:20:10.249649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:10.255533 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:20:10.363311 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:20:10.393248 amazon-ssm-agent[2106]: Initializing new seelog logger Feb 13 15:20:10.393764 amazon-ssm-agent[2106]: New Seelog Logger Creation Complete Feb 13 15:20:10.393764 amazon-ssm-agent[2106]: 2025/02/13 15:20:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:10.393764 amazon-ssm-agent[2106]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:10.394185 amazon-ssm-agent[2106]: 2025/02/13 15:20:10 processing appconfig overrides Feb 13 15:20:10.397258 amazon-ssm-agent[2106]: 2025/02/13 15:20:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
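The ntpd failures above clear up once systemd-networkd reports "Gained IPv6LL": an IPv6 link-local address cannot be bound while it is still missing or tentative on the interface, and binding it at all requires a scope id. A small sketch of the same bind (needs root for port 123; address and interface taken from the log):

```python
# Sketch: binding the IPv6 link-local address ntpd tried above. The 4-tuple
# sockaddr needs a scope id (the interface index); the bind raises
# "Cannot assign requested address" until the kernel has the address up.
import socket

ADDR, PORT, IFACE = "fe80::445:b3ff:fe1f:41af", 123, "eth0"  # from the log

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
try:
    sock.bind((ADDR, PORT, 0, socket.if_nametoindex(IFACE)))
    print("bound")                   # succeeds after "Gained IPv6LL"
except OSError as exc:
    print("bind failed:", exc)       # EADDRNOTAVAIL before the address exists
finally:
    sock.close()
```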
Feb 13 15:20:10.397258 amazon-ssm-agent[2106]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:10.397258 amazon-ssm-agent[2106]: 2025/02/13 15:20:10 processing appconfig overrides Feb 13 15:20:10.397258 amazon-ssm-agent[2106]: 2025/02/13 15:20:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:10.397258 amazon-ssm-agent[2106]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:10.397258 amazon-ssm-agent[2106]: 2025/02/13 15:20:10 processing appconfig overrides Feb 13 15:20:10.399369 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO Proxy environment variables: Feb 13 15:20:10.402714 amazon-ssm-agent[2106]: 2025/02/13 15:20:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:10.402714 amazon-ssm-agent[2106]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:10.402882 amazon-ssm-agent[2106]: 2025/02/13 15:20:10 processing appconfig overrides Feb 13 15:20:10.465298 tar[1921]: linux-arm64/LICENSE Feb 13 15:20:10.465820 tar[1921]: linux-arm64/README.md Feb 13 15:20:10.501312 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:20:10.506861 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO https_proxy: Feb 13 15:20:10.606615 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO http_proxy: Feb 13 15:20:10.705673 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO no_proxy: Feb 13 15:20:10.804404 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO Checking if agent identity type OnPrem can be assumed Feb 13 15:20:10.902548 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO Checking if agent identity type EC2 can be assumed Feb 13 15:20:10.969291 sshd_keygen[1925]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:20:11.001785 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO Agent will take identity from EC2 Feb 13 15:20:11.027076 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:20:11.044819 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:20:11.054595 systemd[1]: Started sshd@0-172.31.23.231:22-147.75.109.163:50486.service - OpenSSH per-connection server daemon (147.75.109.163:50486). Feb 13 15:20:11.080508 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:20:11.082738 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:20:11.097892 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:20:11.102261 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:20:11.141334 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:20:11.155769 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:20:11.168776 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:20:11.172693 systemd[1]: Reached target getty.target - Login Prompts. 
Feb 13 15:20:11.202327 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:20:11.301567 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:20:11.320286 sshd[2136]: Accepted publickey for core from 147.75.109.163 port 50486 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:11.326697 sshd-session[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:11.349936 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:20:11.361786 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:20:11.377522 systemd-logind[1911]: New session 1 of user core. Feb 13 15:20:11.402627 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 15:20:11.405736 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:20:11.423964 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:20:11.447184 (systemd)[2148]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:20:11.502346 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 15:20:11.603310 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 15:20:11.680388 systemd[2148]: Queued start job for default target default.target. Feb 13 15:20:11.704252 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 15:20:11.703250 systemd[2148]: Created slice app.slice - User Application Slice. Feb 13 15:20:11.703302 systemd[2148]: Reached target paths.target - Paths. Feb 13 15:20:11.703333 systemd[2148]: Reached target timers.target - Timers. Feb 13 15:20:11.707433 systemd[2148]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:20:11.735486 systemd[2148]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:20:11.735605 systemd[2148]: Reached target sockets.target - Sockets. Feb 13 15:20:11.735637 systemd[2148]: Reached target basic.target - Basic System. Feb 13 15:20:11.735718 systemd[2148]: Reached target default.target - Main User Target. Feb 13 15:20:11.735777 systemd[2148]: Startup finished in 275ms. Feb 13 15:20:11.735945 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:20:11.747531 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:20:11.803984 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO [Registrar] Starting registrar module Feb 13 15:20:11.863498 amazon-ssm-agent[2106]: 2025-02-13 15:20:10 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 15:20:11.863498 amazon-ssm-agent[2106]: 2025-02-13 15:20:11 INFO [EC2Identity] EC2 registration was successful. 
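The "SHA256:R36z..." value in the Accepted publickey line is OpenSSH's key fingerprint: the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing padding stripped. A sketch of the computation against the authorized_keys file the metadata agent wrote earlier:

```python
# Sketch: derive the SHA256:... fingerprint sshd logs for a public key.
import base64
import hashlib

def ssh_fingerprint(authorized_keys_line: str) -> str:
    # Line format: "<key-type> <base64-blob> [comment]"
    blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

with open("/home/core/.ssh/authorized_keys") as f:  # written by update-ssh-keys above
    for line in f:
        if line.strip() and not line.startswith("#"):
            print(ssh_fingerprint(line))
```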
Feb 13 15:20:11.863498 amazon-ssm-agent[2106]: 2025-02-13 15:20:11 INFO [CredentialRefresher] credentialRefresher has started Feb 13 15:20:11.863498 amazon-ssm-agent[2106]: 2025-02-13 15:20:11 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 15:20:11.863498 amazon-ssm-agent[2106]: 2025-02-13 15:20:11 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 15:20:11.904394 amazon-ssm-agent[2106]: 2025-02-13 15:20:11 INFO [CredentialRefresher] Next credential rotation will be in 31.0499911271 minutes Feb 13 15:20:11.905694 systemd[1]: Started sshd@1-172.31.23.231:22-147.75.109.163:47964.service - OpenSSH per-connection server daemon (147.75.109.163:47964). Feb 13 15:20:12.107474 sshd[2159]: Accepted publickey for core from 147.75.109.163 port 47964 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:12.110009 sshd-session[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:12.118492 systemd-logind[1911]: New session 2 of user core. Feb 13 15:20:12.129098 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:20:12.258702 sshd[2161]: Connection closed by 147.75.109.163 port 47964 Feb 13 15:20:12.257991 sshd-session[2159]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:12.262784 systemd[1]: sshd@1-172.31.23.231:22-147.75.109.163:47964.service: Deactivated successfully. Feb 13 15:20:12.265550 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:20:12.268344 systemd-logind[1911]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:20:12.270078 systemd-logind[1911]: Removed session 2. Feb 13 15:20:12.296032 systemd[1]: Started sshd@2-172.31.23.231:22-147.75.109.163:47966.service - OpenSSH per-connection server daemon (147.75.109.163:47966). Feb 13 15:20:12.484571 sshd[2166]: Accepted publickey for core from 147.75.109.163 port 47966 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:12.489452 sshd-session[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:12.496926 systemd-logind[1911]: New session 3 of user core. Feb 13 15:20:12.506474 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:20:12.634461 sshd[2168]: Connection closed by 147.75.109.163 port 47966 Feb 13 15:20:12.635344 sshd-session[2166]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:12.641545 systemd[1]: sshd@2-172.31.23.231:22-147.75.109.163:47966.service: Deactivated successfully. Feb 13 15:20:12.646174 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:20:12.647871 systemd-logind[1911]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:20:12.649917 systemd-logind[1911]: Removed session 3. 
Feb 13 15:20:12.829130 ntpd[1905]: Listen normally on 7 eth0 [fe80::445:b3ff:fe1f:41af%2]:123 Feb 13 15:20:12.829707 ntpd[1905]: 13 Feb 15:20:12 ntpd[1905]: Listen normally on 7 eth0 [fe80::445:b3ff:fe1f:41af%2]:123 Feb 13 15:20:12.889831 amazon-ssm-agent[2106]: 2025-02-13 15:20:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 15:20:12.991139 amazon-ssm-agent[2106]: 2025-02-13 15:20:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2173) started Feb 13 15:20:13.091957 amazon-ssm-agent[2106]: 2025-02-13 15:20:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 15:20:13.305727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:13.308966 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:20:13.314371 systemd[1]: Startup finished in 1.067s (kernel) + 8.615s (initrd) + 9.966s (userspace) = 19.649s. Feb 13 15:20:13.318929 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:20:13.337855 agetty[2143]: failed to open credentials directory Feb 13 15:20:13.337916 agetty[2145]: failed to open credentials directory Feb 13 15:20:14.830904 kubelet[2188]: E0213 15:20:14.830771 2188 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:20:14.835941 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:20:14.836353 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:20:14.838347 systemd[1]: kubelet.service: Consumed 1.308s CPU time. Feb 13 15:20:15.438264 systemd-resolved[1793]: Clock change detected. Flushing caches. Feb 13 15:20:22.287806 systemd[1]: Started sshd@3-172.31.23.231:22-147.75.109.163:47974.service - OpenSSH per-connection server daemon (147.75.109.163:47974). Feb 13 15:20:22.462882 sshd[2201]: Accepted publickey for core from 147.75.109.163 port 47974 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:22.465349 sshd-session[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:22.473522 systemd-logind[1911]: New session 4 of user core. Feb 13 15:20:22.483621 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:20:22.606511 sshd[2203]: Connection closed by 147.75.109.163 port 47974 Feb 13 15:20:22.607618 sshd-session[2201]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:22.614207 systemd[1]: sshd@3-172.31.23.231:22-147.75.109.163:47974.service: Deactivated successfully. Feb 13 15:20:22.618039 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:20:22.620406 systemd-logind[1911]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:20:22.621962 systemd-logind[1911]: Removed session 4. Feb 13 15:20:22.653773 systemd[1]: Started sshd@4-172.31.23.231:22-147.75.109.163:47980.service - OpenSSH per-connection server daemon (147.75.109.163:47980). 
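The "Startup finished" line above breaks boot time into kernel, initrd, and userspace phases. The printed total is computed before per-phase rounding, so summing the rounded figures can be off in the last digit:

```python
# Quick check of systemd's boot-time summary above; each phase is printed
# rounded to the millisecond, so the sum need not match the printed total exactly.
kernel, initrd, userspace = 1.067, 8.615, 9.966   # figures from the log
print(f"{kernel + initrd + userspace:.3f}s")      # 19.648s vs the reported 19.649s
```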
Feb 13 15:20:22.830177 sshd[2208]: Accepted publickey for core from 147.75.109.163 port 47980 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:22.832539 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:22.840179 systemd-logind[1911]: New session 5 of user core. Feb 13 15:20:22.848535 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:20:22.965732 sshd[2210]: Connection closed by 147.75.109.163 port 47980 Feb 13 15:20:22.966548 sshd-session[2208]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:22.972696 systemd[1]: sshd@4-172.31.23.231:22-147.75.109.163:47980.service: Deactivated successfully. Feb 13 15:20:22.976093 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:20:22.977397 systemd-logind[1911]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:20:22.979179 systemd-logind[1911]: Removed session 5. Feb 13 15:20:23.004834 systemd[1]: Started sshd@5-172.31.23.231:22-147.75.109.163:47986.service - OpenSSH per-connection server daemon (147.75.109.163:47986). Feb 13 15:20:23.185565 sshd[2215]: Accepted publickey for core from 147.75.109.163 port 47986 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:23.187952 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:23.195269 systemd-logind[1911]: New session 6 of user core. Feb 13 15:20:23.206551 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:20:23.330644 sshd[2217]: Connection closed by 147.75.109.163 port 47986 Feb 13 15:20:23.331133 sshd-session[2215]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:23.338404 systemd[1]: sshd@5-172.31.23.231:22-147.75.109.163:47986.service: Deactivated successfully. Feb 13 15:20:23.341740 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:20:23.342983 systemd-logind[1911]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:20:23.344995 systemd-logind[1911]: Removed session 6. Feb 13 15:20:23.371801 systemd[1]: Started sshd@6-172.31.23.231:22-147.75.109.163:47990.service - OpenSSH per-connection server daemon (147.75.109.163:47990). Feb 13 15:20:23.546484 sshd[2222]: Accepted publickey for core from 147.75.109.163 port 47990 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:23.548843 sshd-session[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:23.555808 systemd-logind[1911]: New session 7 of user core. Feb 13 15:20:23.565562 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:20:23.682431 sudo[2225]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:20:23.683037 sudo[2225]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:20:23.702478 sudo[2225]: pam_unix(sudo:session): session closed for user root Feb 13 15:20:23.724952 sshd[2224]: Connection closed by 147.75.109.163 port 47990 Feb 13 15:20:23.726107 sshd-session[2222]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:23.733270 systemd[1]: sshd@6-172.31.23.231:22-147.75.109.163:47990.service: Deactivated successfully. Feb 13 15:20:23.736617 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:20:23.738050 systemd-logind[1911]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:20:23.739729 systemd-logind[1911]: Removed session 7. 
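The first sudo above runs `setenforce 1`, switching SELinux to enforcing. The resulting mode is readable from the selinuxfs mount (a one-liner sketch, assuming selinuxfs at its usual /sys/fs/selinux):

```python
# Sketch: check the SELinux enforcing bit that `setenforce 1` above sets.
with open("/sys/fs/selinux/enforce") as f:
    print("enforcing" if f.read().strip() == "1" else "permissive")
```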
Feb 13 15:20:23.760235 systemd[1]: Started sshd@7-172.31.23.231:22-147.75.109.163:47996.service - OpenSSH per-connection server daemon (147.75.109.163:47996). Feb 13 15:20:23.952412 sshd[2230]: Accepted publickey for core from 147.75.109.163 port 47996 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:23.954841 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:23.962844 systemd-logind[1911]: New session 8 of user core. Feb 13 15:20:23.972541 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:20:24.075138 sudo[2234]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:20:24.075789 sudo[2234]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:20:24.081485 sudo[2234]: pam_unix(sudo:session): session closed for user root Feb 13 15:20:24.091234 sudo[2233]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:20:24.091878 sudo[2233]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:20:24.115906 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:20:24.161729 augenrules[2256]: No rules Feb 13 15:20:24.163946 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:20:24.164606 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:20:24.167043 sudo[2233]: pam_unix(sudo:session): session closed for user root Feb 13 15:20:24.189895 sshd[2232]: Connection closed by 147.75.109.163 port 47996 Feb 13 15:20:24.190382 sshd-session[2230]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:24.197591 systemd-logind[1911]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:20:24.198713 systemd[1]: sshd@7-172.31.23.231:22-147.75.109.163:47996.service: Deactivated successfully. Feb 13 15:20:24.201952 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:20:24.204527 systemd-logind[1911]: Removed session 8. Feb 13 15:20:24.229838 systemd[1]: Started sshd@8-172.31.23.231:22-147.75.109.163:48000.service - OpenSSH per-connection server daemon (147.75.109.163:48000). Feb 13 15:20:24.403165 sshd[2264]: Accepted publickey for core from 147.75.109.163 port 48000 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:24.405539 sshd-session[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:24.414665 systemd-logind[1911]: New session 9 of user core. Feb 13 15:20:24.420576 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:20:24.497626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:20:24.507668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:24.524220 sudo[2268]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:20:24.524926 sudo[2268]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:20:24.948444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
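The "No rules" from augenrules follows directly from the sudo commands above: augenrules assembles /etc/audit/audit.rules from the sorted *.rules fragments in /etc/audit/rules.d/, and both fragments were just removed. A rough sketch of that assembly step:

```python
# Rough sketch of what augenrules assembles: the sorted concatenation of
# /etc/audit/rules.d/*.rules. After the rm above the directory is empty,
# which is what produces the "No rules" message.
import glob

rules = []
for path in sorted(glob.glob("/etc/audit/rules.d/*.rules")):
    with open(path) as f:
        rules.extend(line for line in f if line.strip() and not line.startswith("#"))
print("".join(rules) or "No rules")
```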
Feb 13 15:20:24.962864 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:20:25.069901 kubelet[2292]: E0213 15:20:25.069696 2292 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:20:25.078491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:20:25.078845 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:20:25.116794 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:20:25.119772 (dockerd)[2302]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:20:25.452373 dockerd[2302]: time="2025-02-13T15:20:25.452265257Z" level=info msg="Starting up" Feb 13 15:20:25.562708 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3618831345-merged.mount: Deactivated successfully. Feb 13 15:20:25.581982 systemd[1]: var-lib-docker-metacopy\x2dcheck719178668-merged.mount: Deactivated successfully. Feb 13 15:20:25.597419 dockerd[2302]: time="2025-02-13T15:20:25.596984970Z" level=info msg="Loading containers: start." Feb 13 15:20:25.833684 kernel: Initializing XFRM netlink socket Feb 13 15:20:25.864788 (udev-worker)[2326]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:20:25.954014 systemd-networkd[1789]: docker0: Link UP Feb 13 15:20:25.998588 dockerd[2302]: time="2025-02-13T15:20:25.998518856Z" level=info msg="Loading containers: done." Feb 13 15:20:26.022867 dockerd[2302]: time="2025-02-13T15:20:26.022790140Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:20:26.023105 dockerd[2302]: time="2025-02-13T15:20:26.022934716Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:20:26.023164 dockerd[2302]: time="2025-02-13T15:20:26.023148148Z" level=info msg="Daemon has completed initialization" Feb 13 15:20:26.076629 dockerd[2302]: time="2025-02-13T15:20:26.076466212Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:20:26.076749 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:20:27.590108 containerd[1935]: time="2025-02-13T15:20:27.590011712Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:20:28.166243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495147759.mount: Deactivated successfully. 
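"API listen on /run/docker.sock" means dockerd is now serving its HTTP API over a unix socket rather than TCP. A sketch of a raw version query against that socket (requires root or docker-group membership; the response should echo the version string logged above):

```python
# Sketch: a raw HTTP request to the Docker Engine API on /run/docker.sock.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path: str):
        super().__init__("localhost")   # host is only used for the Host header
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
print(conn.getresponse().read().decode())  # JSON; includes "Version":"27.3.1" per the log
```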
Feb 13 15:20:29.661574 containerd[1935]: time="2025-02-13T15:20:29.661411726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:29.663532 containerd[1935]: time="2025-02-13T15:20:29.663467230Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205861" Feb 13 15:20:29.665370 containerd[1935]: time="2025-02-13T15:20:29.664000222Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:29.674794 containerd[1935]: time="2025-02-13T15:20:29.674736238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:29.677132 containerd[1935]: time="2025-02-13T15:20:29.677082574Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 2.086980778s" Feb 13 15:20:29.677326 containerd[1935]: time="2025-02-13T15:20:29.677277598Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\"" Feb 13 15:20:29.720082 containerd[1935]: time="2025-02-13T15:20:29.719354866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:20:31.413868 containerd[1935]: time="2025-02-13T15:20:31.412045427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:31.414483 containerd[1935]: time="2025-02-13T15:20:31.413890127Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383091" Feb 13 15:20:31.415273 containerd[1935]: time="2025-02-13T15:20:31.415142543Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:31.420858 containerd[1935]: time="2025-02-13T15:20:31.420776615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:31.423351 containerd[1935]: time="2025-02-13T15:20:31.423075095Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 1.702697961s" Feb 13 15:20:31.423351 containerd[1935]: time="2025-02-13T15:20:31.423133535Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\"" Feb 13 
15:20:31.465963 containerd[1935]: time="2025-02-13T15:20:31.465892775Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 15:20:32.553992 containerd[1935]: time="2025-02-13T15:20:32.553543789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:32.555597 containerd[1935]: time="2025-02-13T15:20:32.555512113Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15766980" Feb 13 15:20:32.556197 containerd[1935]: time="2025-02-13T15:20:32.556130209Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:32.564304 containerd[1935]: time="2025-02-13T15:20:32.564224617Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 1.098266634s" Feb 13 15:20:32.564465 containerd[1935]: time="2025-02-13T15:20:32.564290125Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\"" Feb 13 15:20:32.564465 containerd[1935]: time="2025-02-13T15:20:32.564330001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:32.602897 containerd[1935]: time="2025-02-13T15:20:32.602594701Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:20:33.831030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4010462342.mount: Deactivated successfully. 
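The pull messages report a size in bytes and a wall-clock duration, so the effective registry throughput can be backed out directly (figures copied from the log lines above):

```python
# Effective pull rates for the image pulls containerd reported above
# (logged size in bytes over logged duration): roughly 15-18 MB/s each.
pulls = {
    "kube-apiserver:v1.29.14":          (32202661, 2.086980778),
    "kube-controller-manager:v1.29.14": (30786820, 1.702697961),
    "kube-scheduler:v1.29.14":          (17170727, 1.098266634),
}
for name, (size, secs) in pulls.items():
    print(f"{name}: {size / secs / 1e6:.1f} MB/s")
```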
Feb 13 15:20:34.300990 containerd[1935]: time="2025-02-13T15:20:34.300909505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:34.302426 containerd[1935]: time="2025-02-13T15:20:34.302344681Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273375" Feb 13 15:20:34.304116 containerd[1935]: time="2025-02-13T15:20:34.304042897Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:34.307681 containerd[1935]: time="2025-02-13T15:20:34.307604929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:34.309131 containerd[1935]: time="2025-02-13T15:20:34.308932237Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.706278784s" Feb 13 15:20:34.309131 containerd[1935]: time="2025-02-13T15:20:34.308981281Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\"" Feb 13 15:20:34.349086 containerd[1935]: time="2025-02-13T15:20:34.349009021Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:20:34.912142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount841849430.mount: Deactivated successfully. Feb 13 15:20:35.247595 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:20:35.255786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:35.578653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:35.580585 (kubelet)[2630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:20:35.679017 kubelet[2630]: E0213 15:20:35.678933 2630 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:20:35.683585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:20:35.683883 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
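Each kubelet start above dies the same way: the unit points kubelet at /var/lib/kubelet/config.yaml, which does not exist yet. On a kubeadm-provisioned node that file is typically written only by `kubeadm init`/`kubeadm join`, so systemd keeps rescheduling the service (the climbing restart counter above) until then. The failing check amounts to:

```python
# Sketch of the startup failure: kubelet exits status 1 while its config
# file is missing, which is what drives the systemd restart loop above.
import os
import sys

CFG = "/var/lib/kubelet/config.yaml"
if not os.path.exists(CFG):
    # sys.exit with a string prints it to stderr and exits 1, as in the log
    sys.exit(f"failed to load Kubelet config file {CFG}: no such file or directory")
print("config present; kubelet would continue startup")
```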
Feb 13 15:20:36.104639 containerd[1935]: time="2025-02-13T15:20:36.104553386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:36.107903 containerd[1935]: time="2025-02-13T15:20:36.107832422Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 15:20:36.115347 containerd[1935]: time="2025-02-13T15:20:36.115030610Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:36.124103 containerd[1935]: time="2025-02-13T15:20:36.122280398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:36.124828 containerd[1935]: time="2025-02-13T15:20:36.124776626Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.775688501s" Feb 13 15:20:36.124975 containerd[1935]: time="2025-02-13T15:20:36.124945442Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:20:36.174275 containerd[1935]: time="2025-02-13T15:20:36.174214034Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:20:36.680589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204035708.mount: Deactivated successfully. 
Feb 13 15:20:36.695924 containerd[1935]: time="2025-02-13T15:20:36.694414157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:36.698629 containerd[1935]: time="2025-02-13T15:20:36.698570117Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 15:20:36.701419 containerd[1935]: time="2025-02-13T15:20:36.701376233Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:36.705775 containerd[1935]: time="2025-02-13T15:20:36.705724937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:36.707334 containerd[1935]: time="2025-02-13T15:20:36.707268389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 532.694559ms" Feb 13 15:20:36.707476 containerd[1935]: time="2025-02-13T15:20:36.707342213Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:20:36.748838 containerd[1935]: time="2025-02-13T15:20:36.748792877Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:20:37.284579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1963496171.mount: Deactivated successfully. Feb 13 15:20:39.270999 containerd[1935]: time="2025-02-13T15:20:39.270922338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:39.275692 containerd[1935]: time="2025-02-13T15:20:39.274651890Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Feb 13 15:20:39.275511 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 15:20:39.282975 containerd[1935]: time="2025-02-13T15:20:39.281642346Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:39.295147 containerd[1935]: time="2025-02-13T15:20:39.295041354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:39.297663 containerd[1935]: time="2025-02-13T15:20:39.297437766Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.548387129s" Feb 13 15:20:39.297663 containerd[1935]: time="2025-02-13T15:20:39.297514338Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Feb 13 15:20:45.699362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:20:45.708815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:45.733922 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:20:45.734167 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:20:45.734762 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:45.743826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:45.783423 systemd[1]: Reloading requested from client PID 2778 ('systemctl') (unit session-9.scope)... Feb 13 15:20:45.783445 systemd[1]: Reloading... Feb 13 15:20:46.007344 zram_generator::config[2819]: No configuration found. Feb 13 15:20:46.251754 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:20:46.421272 systemd[1]: Reloading finished in 637 ms. Feb 13 15:20:46.508655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:46.515625 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:46.521281 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:20:46.521731 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:46.529070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:46.804984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:46.820104 (kubelet)[2883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:20:46.901384 kubelet[2883]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:20:46.901384 kubelet[2883]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 15:20:46.901384 kubelet[2883]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:20:46.901384 kubelet[2883]: I0213 15:20:46.901079 2883 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:20:48.871042 kubelet[2883]: I0213 15:20:48.870981 2883 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:20:48.871042 kubelet[2883]: I0213 15:20:48.871031 2883 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:20:48.871779 kubelet[2883]: I0213 15:20:48.871515 2883 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:20:48.899235 kubelet[2883]: E0213 15:20:48.899191 2883 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.231:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:48.899675 kubelet[2883]: I0213 15:20:48.899490 2883 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:20:48.917559 kubelet[2883]: I0213 15:20:48.917456 2883 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:20:48.917913 kubelet[2883]: I0213 15:20:48.917882 2883 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:20:48.918268 kubelet[2883]: I0213 15:20:48.918236 2883 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:20:48.918475 kubelet[2883]: I0213 15:20:48.918283 2883 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:20:48.918475 kubelet[2883]: I0213 15:20:48.918305 2883 container_manager_linux.go:301] "Creating device plugin manager" Feb 
13 15:20:48.918589 kubelet[2883]: I0213 15:20:48.918523 2883 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:20:48.923096 kubelet[2883]: I0213 15:20:48.923059 2883 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:20:48.923229 kubelet[2883]: I0213 15:20:48.923107 2883 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:20:48.923229 kubelet[2883]: I0213 15:20:48.923154 2883 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:20:48.923229 kubelet[2883]: I0213 15:20:48.923188 2883 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:20:48.926330 kubelet[2883]: W0213 15:20:48.926229 2883 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.231:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:48.926458 kubelet[2883]: E0213 15:20:48.926367 2883 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.231:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:48.927163 kubelet[2883]: W0213 15:20:48.927108 2883 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-231&limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:48.927279 kubelet[2883]: E0213 15:20:48.927170 2883 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-231&limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:48.927386 kubelet[2883]: I0213 15:20:48.927350 2883 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:20:48.927898 kubelet[2883]: I0213 15:20:48.927854 2883 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:20:48.927984 kubelet[2883]: W0213 15:20:48.927962 2883 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
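The --volume-plugin-dir and --pod-infra-container-image deprecation warnings above (and the Flexvolume directory the kubelet just recreated) all point at settings that belong in the kubelet's config file rather than on its command line. A minimal sketch, assuming the v1beta1 config types that ship with kubelet v1.29, of emitting the equivalent KubeletConfiguration YAML; the containerd socket path is an assumed default, the other values come from this log:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		StaticPodPath: "/etc/kubernetes/manifests", // from the log
		CgroupDriver:  "systemd",                   // from the nodeConfig dump above
		// Assumed containerd default socket, not taken from this log:
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		// Flexvolume dir the kubelet recreated above:
		VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```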
Feb 13 15:20:48.929580 kubelet[2883]: I0213 15:20:48.929504 2883 server.go:1256] "Started kubelet" Feb 13 15:20:48.937608 kubelet[2883]: E0213 15:20:48.937508 2883 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.231:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.231:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-231.1823cdb2cd6cabb2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-231,UID:ip-172-31-23-231,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-231,},FirstTimestamp:2025-02-13 15:20:48.929467314 +0000 UTC m=+2.102058300,LastTimestamp:2025-02-13 15:20:48.929467314 +0000 UTC m=+2.102058300,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-231,}" Feb 13 15:20:48.940417 kubelet[2883]: I0213 15:20:48.940375 2883 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:20:48.944991 kubelet[2883]: I0213 15:20:48.944535 2883 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:20:48.944991 kubelet[2883]: I0213 15:20:48.940397 2883 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:20:48.946125 kubelet[2883]: I0213 15:20:48.946056 2883 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:20:48.947903 kubelet[2883]: I0213 15:20:48.947843 2883 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:20:48.948048 kubelet[2883]: I0213 15:20:48.947967 2883 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:20:48.948048 kubelet[2883]: I0213 15:20:48.940467 2883 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:20:48.948425 kubelet[2883]: I0213 15:20:48.948287 2883 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:20:48.949018 kubelet[2883]: W0213 15:20:48.948932 2883 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.23.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:48.949018 kubelet[2883]: E0213 15:20:48.949021 2883 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:48.950029 kubelet[2883]: E0213 15:20:48.949519 2883 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-23-231\" not found" Feb 13 15:20:48.950029 kubelet[2883]: E0213 15:20:48.949991 2883 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-231?timeout=10s\": dial tcp 172.31.23.231:6443: connect: connection refused" interval="200ms" Feb 13 15:20:48.950250 kubelet[2883]: E0213 15:20:48.950145 2883 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:20:48.951544 kubelet[2883]: I0213 15:20:48.951485 2883 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:20:48.955485 kubelet[2883]: I0213 15:20:48.953791 2883 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:20:48.955485 kubelet[2883]: I0213 15:20:48.953831 2883 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:20:48.973181 kubelet[2883]: I0213 15:20:48.973143 2883 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:20:48.975864 kubelet[2883]: I0213 15:20:48.975831 2883 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:20:48.976141 kubelet[2883]: I0213 15:20:48.976120 2883 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:20:48.976249 kubelet[2883]: I0213 15:20:48.976230 2883 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:20:48.976463 kubelet[2883]: E0213 15:20:48.976443 2883 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:20:48.985662 kubelet[2883]: W0213 15:20:48.985589 2883 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:48.988730 kubelet[2883]: E0213 15:20:48.988685 2883 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:49.001799 kubelet[2883]: I0213 15:20:49.001767 2883 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:20:49.002007 kubelet[2883]: I0213 15:20:49.001985 2883 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:20:49.002185 kubelet[2883]: I0213 15:20:49.002165 2883 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:20:49.006835 kubelet[2883]: I0213 15:20:49.006802 2883 policy_none.go:49] "None policy: Start" Feb 13 15:20:49.008054 kubelet[2883]: I0213 15:20:49.008016 2883 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:20:49.008387 kubelet[2883]: I0213 15:20:49.008366 2883 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:20:49.021758 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:20:49.039002 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:20:49.045457 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
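Every "connection refused" in this stretch shares one root cause: kube-apiserver is itself a static pod on this node and has not started yet, so the kubelet's CSR bootstrap, event posts, lease updates, and informer LIST calls against https://172.31.23.231:6443 all fail and retry. A stdlib-only sketch of that dial-and-retry pattern; the timeout and backoff values are illustrative assumptions, not kubelet defaults:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "172.31.23.231:6443" // apiserver endpoint seen in the log
	backoff := 200 * time.Millisecond
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable:", addr)
			return
		}
		fmt.Printf("dial %s: %v; retrying in %s\n", addr, err, backoff)
		time.Sleep(backoff)
		if backoff < 5*time.Second { // cap the doubling, mirroring bounded backoff
			backoff *= 2
		}
	}
}
```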
Feb 13 15:20:49.053955 kubelet[2883]: I0213 15:20:49.052811 2883 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-231" Feb 13 15:20:49.053955 kubelet[2883]: I0213 15:20:49.053229 2883 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:20:49.053955 kubelet[2883]: E0213 15:20:49.053361 2883 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.231:6443/api/v1/nodes\": dial tcp 172.31.23.231:6443: connect: connection refused" node="ip-172-31-23-231" Feb 13 15:20:49.053955 kubelet[2883]: I0213 15:20:49.053671 2883 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:20:49.060467 kubelet[2883]: E0213 15:20:49.060434 2883 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-231\" not found" Feb 13 15:20:49.077149 kubelet[2883]: I0213 15:20:49.077101 2883 topology_manager.go:215] "Topology Admit Handler" podUID="ec3d619997448c5ec7d29785d133e0de" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-231" Feb 13 15:20:49.079777 kubelet[2883]: I0213 15:20:49.079606 2883 topology_manager.go:215] "Topology Admit Handler" podUID="5a1d09f9ca28fd39e982a6c15b9442fe" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-231" Feb 13 15:20:49.081893 kubelet[2883]: I0213 15:20:49.081857 2883 topology_manager.go:215] "Topology Admit Handler" podUID="c928533400c6344b648282ba56f2baa7" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-231" Feb 13 15:20:49.096059 systemd[1]: Created slice kubepods-burstable-podec3d619997448c5ec7d29785d133e0de.slice - libcontainer container kubepods-burstable-podec3d619997448c5ec7d29785d133e0de.slice. Feb 13 15:20:49.121826 systemd[1]: Created slice kubepods-burstable-pod5a1d09f9ca28fd39e982a6c15b9442fe.slice - libcontainer container kubepods-burstable-pod5a1d09f9ca28fd39e982a6c15b9442fe.slice. Feb 13 15:20:49.135265 systemd[1]: Created slice kubepods-burstable-podc928533400c6344b648282ba56f2baa7.slice - libcontainer container kubepods-burstable-podc928533400c6344b648282ba56f2baa7.slice. 
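The three "Topology Admit Handler" entries are the control-plane static pods the kubelet read from /etc/kubernetes/manifests (the static pod path it logged earlier); systemd then gives each one its own kubepods-burstable-pod<uid>.slice cgroup, matching the Burstable QoS class under the cgroupDriver=systemd setup above. A sketch, assuming YAML manifests in that directory, that decodes the same files with the corev1 types:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Static pod path taken from the kubelet log above.
	files, err := filepath.Glob("/etc/kubernetes/manifests/*.yaml")
	if err != nil {
		panic(err)
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue
		}
		var pod corev1.Pod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			fmt.Printf("%s: not a pod manifest: %v\n", f, err)
			continue
		}
		fmt.Printf("%s -> %s/%s (%d containers)\n",
			f, pod.Namespace, pod.Name, len(pod.Spec.Containers))
	}
}
```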
Feb 13 15:20:49.149055 kubelet[2883]: I0213 15:20:49.148931 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c928533400c6344b648282ba56f2baa7-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-231\" (UID: \"c928533400c6344b648282ba56f2baa7\") " pod="kube-system/kube-scheduler-ip-172-31-23-231" Feb 13 15:20:49.149055 kubelet[2883]: I0213 15:20:49.149002 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec3d619997448c5ec7d29785d133e0de-ca-certs\") pod \"kube-apiserver-ip-172-31-23-231\" (UID: \"ec3d619997448c5ec7d29785d133e0de\") " pod="kube-system/kube-apiserver-ip-172-31-23-231" Feb 13 15:20:49.149055 kubelet[2883]: I0213 15:20:49.149057 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec3d619997448c5ec7d29785d133e0de-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-231\" (UID: \"ec3d619997448c5ec7d29785d133e0de\") " pod="kube-system/kube-apiserver-ip-172-31-23-231" Feb 13 15:20:49.149360 kubelet[2883]: I0213 15:20:49.149104 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a1d09f9ca28fd39e982a6c15b9442fe-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-231\" (UID: \"5a1d09f9ca28fd39e982a6c15b9442fe\") " pod="kube-system/kube-controller-manager-ip-172-31-23-231" Feb 13 15:20:49.149360 kubelet[2883]: I0213 15:20:49.149151 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec3d619997448c5ec7d29785d133e0de-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-231\" (UID: \"ec3d619997448c5ec7d29785d133e0de\") " pod="kube-system/kube-apiserver-ip-172-31-23-231" Feb 13 15:20:49.149360 kubelet[2883]: I0213 15:20:49.149193 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a1d09f9ca28fd39e982a6c15b9442fe-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-231\" (UID: \"5a1d09f9ca28fd39e982a6c15b9442fe\") " pod="kube-system/kube-controller-manager-ip-172-31-23-231" Feb 13 15:20:49.149360 kubelet[2883]: I0213 15:20:49.149238 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a1d09f9ca28fd39e982a6c15b9442fe-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-231\" (UID: \"5a1d09f9ca28fd39e982a6c15b9442fe\") " pod="kube-system/kube-controller-manager-ip-172-31-23-231" Feb 13 15:20:49.149360 kubelet[2883]: I0213 15:20:49.149292 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a1d09f9ca28fd39e982a6c15b9442fe-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-231\" (UID: \"5a1d09f9ca28fd39e982a6c15b9442fe\") " pod="kube-system/kube-controller-manager-ip-172-31-23-231" Feb 13 15:20:49.149629 kubelet[2883]: I0213 15:20:49.149388 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/5a1d09f9ca28fd39e982a6c15b9442fe-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-231\" (UID: \"5a1d09f9ca28fd39e982a6c15b9442fe\") " pod="kube-system/kube-controller-manager-ip-172-31-23-231" Feb 13 15:20:49.151018 kubelet[2883]: E0213 15:20:49.150967 2883 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-231?timeout=10s\": dial tcp 172.31.23.231:6443: connect: connection refused" interval="400ms" Feb 13 15:20:49.255504 kubelet[2883]: I0213 15:20:49.255460 2883 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-231" Feb 13 15:20:49.256026 kubelet[2883]: E0213 15:20:49.255921 2883 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.231:6443/api/v1/nodes\": dial tcp 172.31.23.231:6443: connect: connection refused" node="ip-172-31-23-231" Feb 13 15:20:49.417053 containerd[1935]: time="2025-02-13T15:20:49.416820808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-231,Uid:ec3d619997448c5ec7d29785d133e0de,Namespace:kube-system,Attempt:0,}" Feb 13 15:20:49.430970 containerd[1935]: time="2025-02-13T15:20:49.430871128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-231,Uid:5a1d09f9ca28fd39e982a6c15b9442fe,Namespace:kube-system,Attempt:0,}" Feb 13 15:20:49.440796 containerd[1935]: time="2025-02-13T15:20:49.440714872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-231,Uid:c928533400c6344b648282ba56f2baa7,Namespace:kube-system,Attempt:0,}" Feb 13 15:20:49.552093 kubelet[2883]: E0213 15:20:49.552038 2883 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-231?timeout=10s\": dial tcp 172.31.23.231:6443: connect: connection refused" interval="800ms" Feb 13 15:20:49.658882 kubelet[2883]: I0213 15:20:49.658823 2883 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-231" Feb 13 15:20:49.659355 kubelet[2883]: E0213 15:20:49.659297 2883 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.231:6443/api/v1/nodes\": dial tcp 172.31.23.231:6443: connect: connection refused" node="ip-172-31-23-231" Feb 13 15:20:49.878425 kubelet[2883]: W0213 15:20:49.878235 2883 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.23.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:49.878425 kubelet[2883]: E0213 15:20:49.878367 2883 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:49.927648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071462454.mount: Deactivated successfully. 
Feb 13 15:20:49.943297 containerd[1935]: time="2025-02-13T15:20:49.941681275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:20:49.946575 containerd[1935]: time="2025-02-13T15:20:49.946489039Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 15:20:49.953373 containerd[1935]: time="2025-02-13T15:20:49.952961839Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:20:49.956196 containerd[1935]: time="2025-02-13T15:20:49.956126191Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:20:49.959423 containerd[1935]: time="2025-02-13T15:20:49.959305087Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:20:49.962117 containerd[1935]: time="2025-02-13T15:20:49.962046679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:20:49.964036 containerd[1935]: time="2025-02-13T15:20:49.963967903Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.016487ms" Feb 13 15:20:49.966017 containerd[1935]: time="2025-02-13T15:20:49.965804623Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:20:49.966798 containerd[1935]: time="2025-02-13T15:20:49.966736279Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:20:49.975986 containerd[1935]: time="2025-02-13T15:20:49.975931375Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.081323ms" Feb 13 15:20:49.978758 containerd[1935]: time="2025-02-13T15:20:49.978689407Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.713123ms" Feb 13 15:20:50.163464 containerd[1935]: time="2025-02-13T15:20:50.161863144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:20:50.163464 containerd[1935]: time="2025-02-13T15:20:50.161962744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:20:50.163464 containerd[1935]: time="2025-02-13T15:20:50.161987356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:20:50.163464 containerd[1935]: time="2025-02-13T15:20:50.162116176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:20:50.164713 containerd[1935]: time="2025-02-13T15:20:50.163639588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:20:50.164713 containerd[1935]: time="2025-02-13T15:20:50.163746220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:20:50.164713 containerd[1935]: time="2025-02-13T15:20:50.163782424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:20:50.164713 containerd[1935]: time="2025-02-13T15:20:50.163927240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:20:50.170451 kubelet[2883]: W0213 15:20:50.170366 2883 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:50.170451 kubelet[2883]: E0213 15:20:50.170459 2883 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:50.179656 containerd[1935]: time="2025-02-13T15:20:50.178254880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:20:50.179923 containerd[1935]: time="2025-02-13T15:20:50.179834788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:20:50.180261 containerd[1935]: time="2025-02-13T15:20:50.180160600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:20:50.182355 containerd[1935]: time="2025-02-13T15:20:50.180871324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:20:50.193595 kubelet[2883]: W0213 15:20:50.193499 2883 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.231:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:50.193595 kubelet[2883]: E0213 15:20:50.193597 2883 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.231:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:50.222638 systemd[1]: Started cri-containerd-28db2dca9e6d31421ff93369224b86f5c0ce421b3d02c979243362b480de0e36.scope - libcontainer container 28db2dca9e6d31421ff93369224b86f5c0ce421b3d02c979243362b480de0e36. Feb 13 15:20:50.234522 systemd[1]: Started cri-containerd-512a5d15e4660760ce33d8d6a7c452039d81686f3e971c037dc86d27c1cd7aec.scope - libcontainer container 512a5d15e4660760ce33d8d6a7c452039d81686f3e971c037dc86d27c1cd7aec. Feb 13 15:20:50.238555 systemd[1]: Started cri-containerd-5e0167a3d60e9030c857defe3431d4d5132b41160a8c268e8b15c97eec54fb05.scope - libcontainer container 5e0167a3d60e9030c857defe3431d4d5132b41160a8c268e8b15c97eec54fb05. Feb 13 15:20:50.337491 containerd[1935]: time="2025-02-13T15:20:50.337420349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-231,Uid:c928533400c6344b648282ba56f2baa7,Namespace:kube-system,Attempt:0,} returns sandbox id \"28db2dca9e6d31421ff93369224b86f5c0ce421b3d02c979243362b480de0e36\"" Feb 13 15:20:50.353294 kubelet[2883]: E0213 15:20:50.353133 2883 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-231?timeout=10s\": dial tcp 172.31.23.231:6443: connect: connection refused" interval="1.6s" Feb 13 15:20:50.355549 containerd[1935]: time="2025-02-13T15:20:50.355016705Z" level=info msg="CreateContainer within sandbox \"28db2dca9e6d31421ff93369224b86f5c0ce421b3d02c979243362b480de0e36\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:20:50.364633 containerd[1935]: time="2025-02-13T15:20:50.364581149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-231,Uid:5a1d09f9ca28fd39e982a6c15b9442fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e0167a3d60e9030c857defe3431d4d5132b41160a8c268e8b15c97eec54fb05\"" Feb 13 15:20:50.373303 containerd[1935]: time="2025-02-13T15:20:50.373049429Z" level=info msg="CreateContainer within sandbox \"5e0167a3d60e9030c857defe3431d4d5132b41160a8c268e8b15c97eec54fb05\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:20:50.378692 containerd[1935]: time="2025-02-13T15:20:50.378528701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-231,Uid:ec3d619997448c5ec7d29785d133e0de,Namespace:kube-system,Attempt:0,} returns sandbox id \"512a5d15e4660760ce33d8d6a7c452039d81686f3e971c037dc86d27c1cd7aec\"" Feb 13 15:20:50.384851 containerd[1935]: time="2025-02-13T15:20:50.384786977Z" level=info msg="CreateContainer within sandbox \"512a5d15e4660760ce33d8d6a7c452039d81686f3e971c037dc86d27c1cd7aec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:20:50.411052 containerd[1935]: time="2025-02-13T15:20:50.410988017Z" level=info 
msg="CreateContainer within sandbox \"28db2dca9e6d31421ff93369224b86f5c0ce421b3d02c979243362b480de0e36\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"914cd8801c0ddb7f17be361ee497a15cf958a8f9df6ac0615be0609eeeca7b22\"" Feb 13 15:20:50.412397 containerd[1935]: time="2025-02-13T15:20:50.411954341Z" level=info msg="StartContainer for \"914cd8801c0ddb7f17be361ee497a15cf958a8f9df6ac0615be0609eeeca7b22\"" Feb 13 15:20:50.437907 containerd[1935]: time="2025-02-13T15:20:50.437126657Z" level=info msg="CreateContainer within sandbox \"5e0167a3d60e9030c857defe3431d4d5132b41160a8c268e8b15c97eec54fb05\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9101c48ec4fde2b1aaf75080acd7dbc685d4622147743d00bcc1f955bdeb5d8a\"" Feb 13 15:20:50.440357 containerd[1935]: time="2025-02-13T15:20:50.438951821Z" level=info msg="StartContainer for \"9101c48ec4fde2b1aaf75080acd7dbc685d4622147743d00bcc1f955bdeb5d8a\"" Feb 13 15:20:50.441125 containerd[1935]: time="2025-02-13T15:20:50.441066725Z" level=info msg="CreateContainer within sandbox \"512a5d15e4660760ce33d8d6a7c452039d81686f3e971c037dc86d27c1cd7aec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6daed0b518f6c2d041835889bfa4bd785e045cb84de384eba1febed572155db8\"" Feb 13 15:20:50.442499 containerd[1935]: time="2025-02-13T15:20:50.442432829Z" level=info msg="StartContainer for \"6daed0b518f6c2d041835889bfa4bd785e045cb84de384eba1febed572155db8\"" Feb 13 15:20:50.458980 kubelet[2883]: W0213 15:20:50.458809 2883 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-231&limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:50.458980 kubelet[2883]: E0213 15:20:50.458927 2883 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-231&limit=500&resourceVersion=0": dial tcp 172.31.23.231:6443: connect: connection refused Feb 13 15:20:50.463883 kubelet[2883]: I0213 15:20:50.463766 2883 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-231" Feb 13 15:20:50.467552 kubelet[2883]: E0213 15:20:50.467500 2883 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.231:6443/api/v1/nodes\": dial tcp 172.31.23.231:6443: connect: connection refused" node="ip-172-31-23-231" Feb 13 15:20:50.471114 systemd[1]: Started cri-containerd-914cd8801c0ddb7f17be361ee497a15cf958a8f9df6ac0615be0609eeeca7b22.scope - libcontainer container 914cd8801c0ddb7f17be361ee497a15cf958a8f9df6ac0615be0609eeeca7b22. Feb 13 15:20:50.528806 systemd[1]: Started cri-containerd-9101c48ec4fde2b1aaf75080acd7dbc685d4622147743d00bcc1f955bdeb5d8a.scope - libcontainer container 9101c48ec4fde2b1aaf75080acd7dbc685d4622147743d00bcc1f955bdeb5d8a. Feb 13 15:20:50.544016 systemd[1]: Started cri-containerd-6daed0b518f6c2d041835889bfa4bd785e045cb84de384eba1febed572155db8.scope - libcontainer container 6daed0b518f6c2d041835889bfa4bd785e045cb84de384eba1febed572155db8. 
Feb 13 15:20:50.594191 containerd[1935]: time="2025-02-13T15:20:50.593876742Z" level=info msg="StartContainer for \"914cd8801c0ddb7f17be361ee497a15cf958a8f9df6ac0615be0609eeeca7b22\" returns successfully" Feb 13 15:20:50.671150 containerd[1935]: time="2025-02-13T15:20:50.670162818Z" level=info msg="StartContainer for \"6daed0b518f6c2d041835889bfa4bd785e045cb84de384eba1febed572155db8\" returns successfully" Feb 13 15:20:50.677453 containerd[1935]: time="2025-02-13T15:20:50.677381707Z" level=info msg="StartContainer for \"9101c48ec4fde2b1aaf75080acd7dbc685d4622147743d00bcc1f955bdeb5d8a\" returns successfully" Feb 13 15:20:52.071989 kubelet[2883]: I0213 15:20:52.071939 2883 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-231" Feb 13 15:20:53.278343 update_engine[1912]: I20250213 15:20:53.276355 1912 update_attempter.cc:509] Updating boot flags... Feb 13 15:20:53.402351 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3175) Feb 13 15:20:53.820354 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3180) Feb 13 15:20:55.039928 kubelet[2883]: I0213 15:20:55.039839 2883 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-231" Feb 13 15:20:55.177742 kubelet[2883]: E0213 15:20:55.177675 2883 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Feb 13 15:20:55.933881 kubelet[2883]: I0213 15:20:55.933819 2883 apiserver.go:52] "Watching apiserver" Feb 13 15:20:55.948912 kubelet[2883]: I0213 15:20:55.948860 2883 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:20:57.552817 systemd[1]: Reloading requested from client PID 3344 ('systemctl') (unit session-9.scope)... Feb 13 15:20:57.552862 systemd[1]: Reloading... Feb 13 15:20:57.741378 zram_generator::config[3387]: No configuration found. Feb 13 15:20:57.964622 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:20:58.180752 systemd[1]: Reloading finished in 627 ms. Feb 13 15:20:58.271864 kubelet[2883]: I0213 15:20:58.271705 2883 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:20:58.272027 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:58.288196 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:20:58.288648 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:58.288727 systemd[1]: kubelet.service: Consumed 2.825s CPU time, 112.2M memory peak, 0B memory swap peak. Feb 13 15:20:58.299880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:58.605638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:58.616922 (kubelet)[3444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:20:58.717355 kubelet[3444]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
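With the apiserver container running, registration finally sticks ("Successfully registered node" at 15:20:55, after roughly six seconds of refused connections). A client-go sketch that reads the resulting Node object back; the kubeconfig path follows kubeadm conventions and is an assumption about this host:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the log itself only shows the client cert store.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"ip-172-31-23-231", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(node.Name, node.Status.NodeInfo.KubeletVersion) // v1.29.2 per the log
}
```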
Feb 13 15:20:58.717355 kubelet[3444]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:20:58.717355 kubelet[3444]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:20:58.717355 kubelet[3444]: I0213 15:20:58.716396 3444 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:20:58.732034 kubelet[3444]: I0213 15:20:58.731968 3444 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:20:58.732034 kubelet[3444]: I0213 15:20:58.732020 3444 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:20:58.732694 kubelet[3444]: I0213 15:20:58.732649 3444 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:20:58.737326 kubelet[3444]: I0213 15:20:58.737016 3444 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:20:58.742430 kubelet[3444]: I0213 15:20:58.742137 3444 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:20:58.753105 kubelet[3444]: I0213 15:20:58.753067 3444 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:20:58.754391 kubelet[3444]: I0213 15:20:58.753818 3444 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:20:58.754391 kubelet[3444]: I0213 15:20:58.754134 3444 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:20:58.754391 kubelet[3444]: I0213 15:20:58.754171 3444 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:20:58.754391 kubelet[3444]: I0213 15:20:58.754191 3444 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:20:58.754391 
kubelet[3444]: I0213 15:20:58.754235 3444 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:20:58.754992 kubelet[3444]: I0213 15:20:58.754962 3444 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:20:58.755120 kubelet[3444]: I0213 15:20:58.755101 3444 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:20:58.755263 kubelet[3444]: I0213 15:20:58.755242 3444 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:20:58.755418 kubelet[3444]: I0213 15:20:58.755398 3444 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:20:58.758014 sudo[3458]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:20:58.760841 sudo[3458]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:20:58.765494 kubelet[3444]: I0213 15:20:58.765444 3444 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:20:58.765799 kubelet[3444]: I0213 15:20:58.765763 3444 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:20:58.766490 kubelet[3444]: I0213 15:20:58.766447 3444 server.go:1256] "Started kubelet" Feb 13 15:20:58.773120 kubelet[3444]: I0213 15:20:58.773083 3444 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:20:58.775971 kubelet[3444]: I0213 15:20:58.775744 3444 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:20:58.776588 kubelet[3444]: I0213 15:20:58.776346 3444 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:20:58.776588 kubelet[3444]: I0213 15:20:58.776460 3444 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:20:58.788678 kubelet[3444]: I0213 15:20:58.788637 3444 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:20:58.799898 kubelet[3444]: I0213 15:20:58.799855 3444 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:20:58.805353 kubelet[3444]: I0213 15:20:58.805066 3444 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:20:58.805496 kubelet[3444]: I0213 15:20:58.805417 3444 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:20:58.838365 kubelet[3444]: E0213 15:20:58.837930 3444 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:20:58.841230 kubelet[3444]: I0213 15:20:58.840787 3444 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:20:58.841230 kubelet[3444]: I0213 15:20:58.840981 3444 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:20:58.879437 kubelet[3444]: I0213 15:20:58.878543 3444 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:20:58.924375 kubelet[3444]: I0213 15:20:58.924284 3444 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:20:58.928266 kubelet[3444]: I0213 15:20:58.928186 3444 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:20:58.928494 kubelet[3444]: I0213 15:20:58.928474 3444 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:20:58.928619 kubelet[3444]: I0213 15:20:58.928599 3444 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:20:58.929117 kubelet[3444]: E0213 15:20:58.928770 3444 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:20:58.935267 kubelet[3444]: I0213 15:20:58.935230 3444 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-231" Feb 13 15:20:58.960683 kubelet[3444]: I0213 15:20:58.960633 3444 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-231" Feb 13 15:20:58.965543 kubelet[3444]: I0213 15:20:58.965263 3444 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-231" Feb 13 15:20:59.029080 kubelet[3444]: E0213 15:20:59.028999 3444 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:20:59.085935 kubelet[3444]: I0213 15:20:59.085858 3444 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:20:59.086484 kubelet[3444]: I0213 15:20:59.086180 3444 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:20:59.086866 kubelet[3444]: I0213 15:20:59.086628 3444 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:20:59.087220 kubelet[3444]: I0213 15:20:59.087076 3444 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:20:59.087220 kubelet[3444]: I0213 15:20:59.087143 3444 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:20:59.087220 kubelet[3444]: I0213 15:20:59.087161 3444 policy_none.go:49] "None policy: Start" Feb 13 15:20:59.090379 kubelet[3444]: I0213 15:20:59.089163 3444 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:20:59.090379 kubelet[3444]: I0213 15:20:59.089212 3444 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:20:59.090379 kubelet[3444]: I0213 15:20:59.089522 3444 state_mem.go:75] "Updated machine memory state" Feb 13 15:20:59.102235 kubelet[3444]: I0213 15:20:59.101949 3444 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:20:59.106915 kubelet[3444]: I0213 15:20:59.106879 3444 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:20:59.229525 kubelet[3444]: I0213 15:20:59.229288 3444 topology_manager.go:215] "Topology Admit Handler" podUID="5a1d09f9ca28fd39e982a6c15b9442fe" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-231" Feb 13 15:20:59.229926 kubelet[3444]: I0213 15:20:59.229889 3444 topology_manager.go:215] "Topology Admit Handler" podUID="c928533400c6344b648282ba56f2baa7" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-231" Feb 13 15:20:59.231979 kubelet[3444]: I0213 15:20:59.231942 3444 topology_manager.go:215] "Topology Admit Handler" podUID="ec3d619997448c5ec7d29785d133e0de" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-231" Feb 13 15:20:59.241797 kubelet[3444]: E0213 15:20:59.241629 3444 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-231\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-231" Feb 13 15:20:59.244226 kubelet[3444]: E0213 15:20:59.243735 3444 kubelet.go:1921] "Failed creating a mirror pod for" 
err="pods \"kube-scheduler-ip-172-31-23-231\" already exists" pod="kube-system/kube-scheduler-ip-172-31-23-231" Feb 13 15:20:59.312233 kubelet[3444]: I0213 15:20:59.311203 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a1d09f9ca28fd39e982a6c15b9442fe-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-231\" (UID: \"5a1d09f9ca28fd39e982a6c15b9442fe\") " pod="kube-system/kube-controller-manager-ip-172-31-23-231" Feb 13 15:20:59.312233 kubelet[3444]: I0213 15:20:59.311288 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c928533400c6344b648282ba56f2baa7-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-231\" (UID: \"c928533400c6344b648282ba56f2baa7\") " pod="kube-system/kube-scheduler-ip-172-31-23-231" Feb 13 15:20:59.312233 kubelet[3444]: I0213 15:20:59.311359 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec3d619997448c5ec7d29785d133e0de-ca-certs\") pod \"kube-apiserver-ip-172-31-23-231\" (UID: \"ec3d619997448c5ec7d29785d133e0de\") " pod="kube-system/kube-apiserver-ip-172-31-23-231" Feb 13 15:20:59.312233 kubelet[3444]: I0213 15:20:59.311408 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec3d619997448c5ec7d29785d133e0de-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-231\" (UID: \"ec3d619997448c5ec7d29785d133e0de\") " pod="kube-system/kube-apiserver-ip-172-31-23-231" Feb 13 15:20:59.312233 kubelet[3444]: I0213 15:20:59.311452 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a1d09f9ca28fd39e982a6c15b9442fe-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-231\" (UID: \"5a1d09f9ca28fd39e982a6c15b9442fe\") " pod="kube-system/kube-controller-manager-ip-172-31-23-231" Feb 13 15:20:59.312621 kubelet[3444]: I0213 15:20:59.311494 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a1d09f9ca28fd39e982a6c15b9442fe-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-231\" (UID: \"5a1d09f9ca28fd39e982a6c15b9442fe\") " pod="kube-system/kube-controller-manager-ip-172-31-23-231" Feb 13 15:20:59.312621 kubelet[3444]: I0213 15:20:59.311537 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a1d09f9ca28fd39e982a6c15b9442fe-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-231\" (UID: \"5a1d09f9ca28fd39e982a6c15b9442fe\") " pod="kube-system/kube-controller-manager-ip-172-31-23-231" Feb 13 15:20:59.312621 kubelet[3444]: I0213 15:20:59.311583 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a1d09f9ca28fd39e982a6c15b9442fe-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-231\" (UID: \"5a1d09f9ca28fd39e982a6c15b9442fe\") " pod="kube-system/kube-controller-manager-ip-172-31-23-231" Feb 13 15:20:59.312621 kubelet[3444]: I0213 15:20:59.311627 3444 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec3d619997448c5ec7d29785d133e0de-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-231\" (UID: \"ec3d619997448c5ec7d29785d133e0de\") " pod="kube-system/kube-apiserver-ip-172-31-23-231" Feb 13 15:20:59.657927 sudo[3458]: pam_unix(sudo:session): session closed for user root Feb 13 15:20:59.759490 kubelet[3444]: I0213 15:20:59.757924 3444 apiserver.go:52] "Watching apiserver" Feb 13 15:20:59.808286 kubelet[3444]: I0213 15:20:59.805547 3444 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:20:59.878465 kubelet[3444]: I0213 15:20:59.878418 3444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-231" podStartSLOduration=3.8783533 podStartE2EDuration="3.8783533s" podCreationTimestamp="2025-02-13 15:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:20:59.857528632 +0000 UTC m=+1.226474035" watchObservedRunningTime="2025-02-13 15:20:59.8783533 +0000 UTC m=+1.247298715" Feb 13 15:20:59.906460 kubelet[3444]: I0213 15:20:59.906415 3444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-231" podStartSLOduration=0.906350848 podStartE2EDuration="906.350848ms" podCreationTimestamp="2025-02-13 15:20:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:20:59.878862724 +0000 UTC m=+1.247808127" watchObservedRunningTime="2025-02-13 15:20:59.906350848 +0000 UTC m=+1.275296251" Feb 13 15:20:59.908416 kubelet[3444]: I0213 15:20:59.908006 3444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-231" podStartSLOduration=2.907951708 podStartE2EDuration="2.907951708s" podCreationTimestamp="2025-02-13 15:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:20:59.907688896 +0000 UTC m=+1.276634299" watchObservedRunningTime="2025-02-13 15:20:59.907951708 +0000 UTC m=+1.276897099" Feb 13 15:21:00.026646 kubelet[3444]: E0213 15:21:00.026592 3444 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-231\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-231" Feb 13 15:21:02.857700 sudo[2268]: pam_unix(sudo:session): session closed for user root Feb 13 15:21:02.880262 sshd[2266]: Connection closed by 147.75.109.163 port 48000 Feb 13 15:21:02.880042 sshd-session[2264]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:02.886146 systemd-logind[1911]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:21:02.887724 systemd[1]: sshd@8-172.31.23.231:22-147.75.109.163:48000.service: Deactivated successfully. Feb 13 15:21:02.895657 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:21:02.896015 systemd[1]: session-9.scope: Consumed 10.823s CPU time, 187.7M memory peak, 0B memory swap peak. Feb 13 15:21:02.900187 systemd-logind[1911]: Removed session 9. 
Feb 13 15:21:13.171368 kubelet[3444]: I0213 15:21:13.171127 3444 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:21:13.171968 containerd[1935]: time="2025-02-13T15:21:13.171717854Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:21:13.172591 kubelet[3444]: I0213 15:21:13.172125 3444 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:21:14.135172 kubelet[3444]: I0213 15:21:14.132639 3444 topology_manager.go:215] "Topology Admit Handler" podUID="63803529-a5bc-43d3-bc43-bbe64ea022d8" podNamespace="kube-system" podName="kube-proxy-r2fnm" Feb 13 15:21:14.154205 systemd[1]: Created slice kubepods-besteffort-pod63803529_a5bc_43d3_bc43_bbe64ea022d8.slice - libcontainer container kubepods-besteffort-pod63803529_a5bc_43d3_bc43_bbe64ea022d8.slice. Feb 13 15:21:14.157617 kubelet[3444]: I0213 15:21:14.155533 3444 topology_manager.go:215] "Topology Admit Handler" podUID="637934ce-7b58-4703-be9c-0f058175c2fe" podNamespace="kube-system" podName="cilium-9ctc5" Feb 13 15:21:14.185916 systemd[1]: Created slice kubepods-burstable-pod637934ce_7b58_4703_be9c_0f058175c2fe.slice - libcontainer container kubepods-burstable-pod637934ce_7b58_4703_be9c_0f058175c2fe.slice. Feb 13 15:21:14.199438 kubelet[3444]: I0213 15:21:14.199379 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/637934ce-7b58-4703-be9c-0f058175c2fe-cilium-config-path\") pod \"cilium-9ctc5\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") " pod="kube-system/cilium-9ctc5" Feb 13 15:21:14.199963 kubelet[3444]: I0213 15:21:14.199486 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/63803529-a5bc-43d3-bc43-bbe64ea022d8-kube-proxy\") pod \"kube-proxy-r2fnm\" (UID: \"63803529-a5bc-43d3-bc43-bbe64ea022d8\") " pod="kube-system/kube-proxy-r2fnm" Feb 13 15:21:14.199963 kubelet[3444]: I0213 15:21:14.199563 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63803529-a5bc-43d3-bc43-bbe64ea022d8-xtables-lock\") pod \"kube-proxy-r2fnm\" (UID: \"63803529-a5bc-43d3-bc43-bbe64ea022d8\") " pod="kube-system/kube-proxy-r2fnm" Feb 13 15:21:14.199963 kubelet[3444]: I0213 15:21:14.199633 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-etc-cni-netd\") pod \"cilium-9ctc5\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") " pod="kube-system/cilium-9ctc5" Feb 13 15:21:14.199963 kubelet[3444]: I0213 15:21:14.199682 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-lib-modules\") pod \"cilium-9ctc5\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") " pod="kube-system/cilium-9ctc5" Feb 13 15:21:14.199963 kubelet[3444]: I0213 15:21:14.199754 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-xtables-lock\") pod \"cilium-9ctc5\" (UID: 
\"637934ce-7b58-4703-be9c-0f058175c2fe\") " pod="kube-system/cilium-9ctc5" Feb 13 15:21:14.199963 kubelet[3444]: I0213 15:21:14.199832 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-cni-path\") pod \"cilium-9ctc5\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") " pod="kube-system/cilium-9ctc5" Feb 13 15:21:14.200289 kubelet[3444]: I0213 15:21:14.199904 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-cilium-run\") pod \"cilium-9ctc5\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") " pod="kube-system/cilium-9ctc5" Feb 13 15:21:14.200289 kubelet[3444]: I0213 15:21:14.199976 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/637934ce-7b58-4703-be9c-0f058175c2fe-clustermesh-secrets\") pod \"cilium-9ctc5\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") " pod="kube-system/cilium-9ctc5" Feb 13 15:21:14.200289 kubelet[3444]: I0213 15:21:14.200027 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zs2r\" (UniqueName: \"kubernetes.io/projected/637934ce-7b58-4703-be9c-0f058175c2fe-kube-api-access-4zs2r\") pod \"cilium-9ctc5\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") " pod="kube-system/cilium-9ctc5" Feb 13 15:21:14.200289 kubelet[3444]: I0213 15:21:14.200098 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-bpf-maps\") pod \"cilium-9ctc5\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") " pod="kube-system/cilium-9ctc5" Feb 13 15:21:14.200289 kubelet[3444]: I0213 15:21:14.200289 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/637934ce-7b58-4703-be9c-0f058175c2fe-hubble-tls\") pod \"cilium-9ctc5\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") " pod="kube-system/cilium-9ctc5" Feb 13 15:21:14.200584 kubelet[3444]: I0213 15:21:14.200503 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-host-proc-sys-net\") pod \"cilium-9ctc5\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") " pod="kube-system/cilium-9ctc5" Feb 13 15:21:14.201679 kubelet[3444]: I0213 15:21:14.201441 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-host-proc-sys-kernel\") pod \"cilium-9ctc5\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") " pod="kube-system/cilium-9ctc5" Feb 13 15:21:14.201679 kubelet[3444]: I0213 15:21:14.201564 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63803529-a5bc-43d3-bc43-bbe64ea022d8-lib-modules\") pod \"kube-proxy-r2fnm\" (UID: \"63803529-a5bc-43d3-bc43-bbe64ea022d8\") " pod="kube-system/kube-proxy-r2fnm" Feb 13 15:21:14.201679 kubelet[3444]: I0213 15:21:14.201623 3444 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-cilium-cgroup\") pod \"cilium-9ctc5\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") " pod="kube-system/cilium-9ctc5" Feb 13 15:21:14.202071 kubelet[3444]: I0213 15:21:14.201720 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbdx9\" (UniqueName: \"kubernetes.io/projected/63803529-a5bc-43d3-bc43-bbe64ea022d8-kube-api-access-mbdx9\") pod \"kube-proxy-r2fnm\" (UID: \"63803529-a5bc-43d3-bc43-bbe64ea022d8\") " pod="kube-system/kube-proxy-r2fnm" Feb 13 15:21:14.202071 kubelet[3444]: I0213 15:21:14.201788 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-hostproc\") pod \"cilium-9ctc5\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") " pod="kube-system/cilium-9ctc5" Feb 13 15:21:14.249305 kubelet[3444]: I0213 15:21:14.248518 3444 topology_manager.go:215] "Topology Admit Handler" podUID="00204c24-9b96-45a4-aea2-32228cf759a2" podNamespace="kube-system" podName="cilium-operator-5cc964979-d6pv6" Feb 13 15:21:14.266036 systemd[1]: Created slice kubepods-besteffort-pod00204c24_9b96_45a4_aea2_32228cf759a2.slice - libcontainer container kubepods-besteffort-pod00204c24_9b96_45a4_aea2_32228cf759a2.slice. Feb 13 15:21:14.304747 kubelet[3444]: I0213 15:21:14.304689 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrts9\" (UniqueName: \"kubernetes.io/projected/00204c24-9b96-45a4-aea2-32228cf759a2-kube-api-access-hrts9\") pod \"cilium-operator-5cc964979-d6pv6\" (UID: \"00204c24-9b96-45a4-aea2-32228cf759a2\") " pod="kube-system/cilium-operator-5cc964979-d6pv6" Feb 13 15:21:14.305040 kubelet[3444]: I0213 15:21:14.305016 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00204c24-9b96-45a4-aea2-32228cf759a2-cilium-config-path\") pod \"cilium-operator-5cc964979-d6pv6\" (UID: \"00204c24-9b96-45a4-aea2-32228cf759a2\") " pod="kube-system/cilium-operator-5cc964979-d6pv6" Feb 13 15:21:14.475339 containerd[1935]: time="2025-02-13T15:21:14.474559877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r2fnm,Uid:63803529-a5bc-43d3-bc43-bbe64ea022d8,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:14.495870 containerd[1935]: time="2025-02-13T15:21:14.495538013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ctc5,Uid:637934ce-7b58-4703-be9c-0f058175c2fe,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:14.533349 containerd[1935]: time="2025-02-13T15:21:14.532927817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:14.533349 containerd[1935]: time="2025-02-13T15:21:14.533014193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:14.533349 containerd[1935]: time="2025-02-13T15:21:14.533039249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:14.533349 containerd[1935]: time="2025-02-13T15:21:14.533202317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:14.571252 containerd[1935]: time="2025-02-13T15:21:14.569722865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:14.571645 containerd[1935]: time="2025-02-13T15:21:14.571428929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:14.571645 containerd[1935]: time="2025-02-13T15:21:14.571478021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:14.571799 systemd[1]: Started cri-containerd-e1df8f10b68b6d2ba3a120f3299b92bbf5163050468b1d8377251f0183505e89.scope - libcontainer container e1df8f10b68b6d2ba3a120f3299b92bbf5163050468b1d8377251f0183505e89. Feb 13 15:21:14.572128 containerd[1935]: time="2025-02-13T15:21:14.571922693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:14.574609 containerd[1935]: time="2025-02-13T15:21:14.574552061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-d6pv6,Uid:00204c24-9b96-45a4-aea2-32228cf759a2,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:14.618772 systemd[1]: Started cri-containerd-817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706.scope - libcontainer container 817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706. Feb 13 15:21:14.662567 containerd[1935]: time="2025-02-13T15:21:14.662451882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r2fnm,Uid:63803529-a5bc-43d3-bc43-bbe64ea022d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1df8f10b68b6d2ba3a120f3299b92bbf5163050468b1d8377251f0183505e89\"" Feb 13 15:21:14.677332 containerd[1935]: time="2025-02-13T15:21:14.677263386Z" level=info msg="CreateContainer within sandbox \"e1df8f10b68b6d2ba3a120f3299b92bbf5163050468b1d8377251f0183505e89\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:21:14.689359 containerd[1935]: time="2025-02-13T15:21:14.686454834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:14.689359 containerd[1935]: time="2025-02-13T15:21:14.687732222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:14.689359 containerd[1935]: time="2025-02-13T15:21:14.687761298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:14.689359 containerd[1935]: time="2025-02-13T15:21:14.687903966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:14.696243 containerd[1935]: time="2025-02-13T15:21:14.696182370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ctc5,Uid:637934ce-7b58-4703-be9c-0f058175c2fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706\"" Feb 13 15:21:14.699739 containerd[1935]: time="2025-02-13T15:21:14.699573906Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:21:14.728692 systemd[1]: Started cri-containerd-7101f4e53c4ab4c4cd36b2fb254df564c7a2d8ea8713595faa4f468b7e952a9d.scope - libcontainer container 7101f4e53c4ab4c4cd36b2fb254df564c7a2d8ea8713595faa4f468b7e952a9d. Feb 13 15:21:14.737116 containerd[1935]: time="2025-02-13T15:21:14.737034762Z" level=info msg="CreateContainer within sandbox \"e1df8f10b68b6d2ba3a120f3299b92bbf5163050468b1d8377251f0183505e89\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"02dc4e3856e1b7ee59122f4625ac3e37927e76ad2674c5563dc8902b40934211\"" Feb 13 15:21:14.740268 containerd[1935]: time="2025-02-13T15:21:14.740125626Z" level=info msg="StartContainer for \"02dc4e3856e1b7ee59122f4625ac3e37927e76ad2674c5563dc8902b40934211\"" Feb 13 15:21:14.800697 systemd[1]: Started cri-containerd-02dc4e3856e1b7ee59122f4625ac3e37927e76ad2674c5563dc8902b40934211.scope - libcontainer container 02dc4e3856e1b7ee59122f4625ac3e37927e76ad2674c5563dc8902b40934211. Feb 13 15:21:14.830882 containerd[1935]: time="2025-02-13T15:21:14.830677387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-d6pv6,Uid:00204c24-9b96-45a4-aea2-32228cf759a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7101f4e53c4ab4c4cd36b2fb254df564c7a2d8ea8713595faa4f468b7e952a9d\"" Feb 13 15:21:14.880226 containerd[1935]: time="2025-02-13T15:21:14.880107763Z" level=info msg="StartContainer for \"02dc4e3856e1b7ee59122f4625ac3e37927e76ad2674c5563dc8902b40934211\" returns successfully" Feb 13 15:21:21.493486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3224929832.mount: Deactivated successfully. 
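The kube-proxy entries above trace the standard CRI call sequence between the kubelet and containerd: RunPodSandbox returns a sandbox id, CreateContainer places a container inside it, and StartContainer launches it. A minimal Go sketch of that sequence against the CRI gRPC API follows; the socket path and the kube-proxy image reference are assumptions for illustration, while the pod metadata comes from the log itself.

    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed containerd CRI endpoint; the kubelet's actual endpoint may differ.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx := context.Background()

    	// 1. RunPodSandbox: corresponds to the "RunPodSandbox for &PodSandboxMetadata{...}" entries.
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "kube-proxy-r2fnm",
    				Namespace: "kube-system",
    				Uid:       "63803529-a5bc-43d3-bc43-bbe64ea022d8",
    			},
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// 2. CreateContainer inside the returned sandbox id ("e1df8f10..." above).
    	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
    			// Placeholder reference; the log does not name the kube-proxy image.
    			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.29.1"},
    		},
    		SandboxConfig: &runtimeapi.PodSandboxConfig{},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// 3. StartContainer: yields the "StartContainer ... returns successfully" entries.
    	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
    		ContainerId: created.ContainerId,
    	}); err != nil {
    		log.Fatal(err)
    	}
    }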
Feb 13 15:21:23.957966 containerd[1935]: time="2025-02-13T15:21:23.957905368Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:23.960941 containerd[1935]: time="2025-02-13T15:21:23.960274636Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 15:21:23.960941 containerd[1935]: time="2025-02-13T15:21:23.960798148Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:23.970668 containerd[1935]: time="2025-02-13T15:21:23.970583656Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.270938734s" Feb 13 15:21:23.970668 containerd[1935]: time="2025-02-13T15:21:23.970663600Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 15:21:23.973374 containerd[1935]: time="2025-02-13T15:21:23.973250032Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:21:23.975966 containerd[1935]: time="2025-02-13T15:21:23.975675604Z" level=info msg="CreateContainer within sandbox \"817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:21:23.998660 containerd[1935]: time="2025-02-13T15:21:23.998591812Z" level=info msg="CreateContainer within sandbox \"817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189\"" Feb 13 15:21:23.999850 containerd[1935]: time="2025-02-13T15:21:23.999750496Z" level=info msg="StartContainer for \"38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189\"" Feb 13 15:21:24.055630 systemd[1]: Started cri-containerd-38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189.scope - libcontainer container 38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189. Feb 13 15:21:24.111899 containerd[1935]: time="2025-02-13T15:21:24.111844753Z" level=info msg="StartContainer for \"38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189\" returns successfully" Feb 13 15:21:24.135758 systemd[1]: cri-containerd-38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189.scope: Deactivated successfully. Feb 13 15:21:24.988658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189-rootfs.mount: Deactivated successfully. 
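The pull above spans the PullImage request at 15:21:14.699 to the Pulled event at 15:21:23.970, which is where the reported 9.270938734s comes from; pulling by digest is also why the result carries an empty repo tag. A rough equivalent through the containerd Go client, as a sketch (socket path assumed; CRI-managed images live under the k8s.io namespace):

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// Same namespace the CRI plugin uses for kubelet-managed images.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Pull by digest, exactly as logged above; the digest pins the content.
    	ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
    	start := time.Now()
    	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("pulled %s in %s\n", img.Name(), time.Since(start))
    }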
Feb 13 15:21:25.124965 kubelet[3444]: I0213 15:21:25.124889 3444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-r2fnm" podStartSLOduration=11.124830194 podStartE2EDuration="11.124830194s" podCreationTimestamp="2025-02-13 15:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:15.074928856 +0000 UTC m=+16.443874247" watchObservedRunningTime="2025-02-13 15:21:25.124830194 +0000 UTC m=+26.493775597" Feb 13 15:21:25.225057 containerd[1935]: time="2025-02-13T15:21:25.224922290Z" level=info msg="shim disconnected" id=38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189 namespace=k8s.io Feb 13 15:21:25.225057 containerd[1935]: time="2025-02-13T15:21:25.224999774Z" level=warning msg="cleaning up after shim disconnected" id=38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189 namespace=k8s.io Feb 13 15:21:25.225057 containerd[1935]: time="2025-02-13T15:21:25.225020306Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:21:25.874075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2052631423.mount: Deactivated successfully. Feb 13 15:21:26.114777 containerd[1935]: time="2025-02-13T15:21:26.113113587Z" level=info msg="CreateContainer within sandbox \"817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:21:26.150005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3433701603.mount: Deactivated successfully. Feb 13 15:21:26.161680 containerd[1935]: time="2025-02-13T15:21:26.161543787Z" level=info msg="CreateContainer within sandbox \"817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035\"" Feb 13 15:21:26.164751 containerd[1935]: time="2025-02-13T15:21:26.164674095Z" level=info msg="StartContainer for \"1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035\"" Feb 13 15:21:26.241982 systemd[1]: Started cri-containerd-1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035.scope - libcontainer container 1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035. Feb 13 15:21:26.324004 containerd[1935]: time="2025-02-13T15:21:26.323582020Z" level=info msg="StartContainer for \"1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035\" returns successfully" Feb 13 15:21:26.361543 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:21:26.363049 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:21:26.363469 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:21:26.373096 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:21:26.375870 systemd[1]: cri-containerd-1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035.scope: Deactivated successfully. Feb 13 15:21:26.424455 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:21:26.443803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035-rootfs.mount: Deactivated successfully. 
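The podStartSLOduration=11.124830194 reported above is plain timestamp arithmetic: watchObservedRunningTime (15:21:25.124830194) minus podCreationTimestamp (15:21:14). A quick check of that subtraction, with both values copied from the entry:

    package main

    import (
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	// Layout matching the kubelet's printed timestamps.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, err := time.Parse(layout, "2025-02-13 15:21:14 +0000 UTC")
    	if err != nil {
    		log.Fatal(err)
    	}
    	observed, err := time.Parse(layout, "2025-02-13 15:21:25.124830194 +0000 UTC")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(observed.Sub(created)) // 11.124830194s, as logged
    }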
Feb 13 15:21:26.472797 containerd[1935]: time="2025-02-13T15:21:26.472711396Z" level=info msg="shim disconnected" id=1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035 namespace=k8s.io Feb 13 15:21:26.473893 containerd[1935]: time="2025-02-13T15:21:26.473621860Z" level=warning msg="cleaning up after shim disconnected" id=1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035 namespace=k8s.io Feb 13 15:21:26.473893 containerd[1935]: time="2025-02-13T15:21:26.473657152Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:21:27.115407 containerd[1935]: time="2025-02-13T15:21:27.115302820Z" level=info msg="CreateContainer within sandbox \"817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:21:27.155259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3849508855.mount: Deactivated successfully. Feb 13 15:21:27.160971 containerd[1935]: time="2025-02-13T15:21:27.160720768Z" level=info msg="CreateContainer within sandbox \"817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045\"" Feb 13 15:21:27.161575 containerd[1935]: time="2025-02-13T15:21:27.161511952Z" level=info msg="StartContainer for \"ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045\"" Feb 13 15:21:27.220681 systemd[1]: Started cri-containerd-ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045.scope - libcontainer container ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045. Feb 13 15:21:27.277951 containerd[1935]: time="2025-02-13T15:21:27.277110292Z" level=info msg="StartContainer for \"ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045\" returns successfully" Feb 13 15:21:27.280410 systemd[1]: cri-containerd-ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045.scope: Deactivated successfully. Feb 13 15:21:27.321231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045-rootfs.mount: Deactivated successfully. 
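The mount-bpf-fs step started above exists to make /sys/fs/bpf a BPF filesystem so the agent's pinned maps outlive container restarts. Reduced to its effect, it is a single mount call; this is a hedged sketch of that effect, not Cilium's actual implementation:

    package main

    import (
    	"log"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	// Real code first checks whether /sys/fs/bpf is already a bpf mount
    	// (statfs, comparing f_type against unix.BPF_FS_MAGIC) to stay idempotent.
    	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
    		log.Fatalf("mounting bpffs on /sys/fs/bpf: %v", err)
    	}
    }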
Feb 13 15:21:27.377747 containerd[1935]: time="2025-02-13T15:21:27.377543513Z" level=info msg="shim disconnected" id=ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045 namespace=k8s.io Feb 13 15:21:27.377747 containerd[1935]: time="2025-02-13T15:21:27.377651957Z" level=warning msg="cleaning up after shim disconnected" id=ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045 namespace=k8s.io Feb 13 15:21:27.377747 containerd[1935]: time="2025-02-13T15:21:27.377672909Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:21:28.122236 containerd[1935]: time="2025-02-13T15:21:28.121608449Z" level=info msg="CreateContainer within sandbox \"817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:21:28.158970 containerd[1935]: time="2025-02-13T15:21:28.158642165Z" level=info msg="CreateContainer within sandbox \"817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454\"" Feb 13 15:21:28.159639 containerd[1935]: time="2025-02-13T15:21:28.159561221Z" level=info msg="StartContainer for \"8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454\"" Feb 13 15:21:28.214619 systemd[1]: Started cri-containerd-8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454.scope - libcontainer container 8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454. Feb 13 15:21:28.261416 systemd[1]: cri-containerd-8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454.scope: Deactivated successfully. Feb 13 15:21:28.266933 containerd[1935]: time="2025-02-13T15:21:28.266737373Z" level=info msg="StartContainer for \"8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454\" returns successfully" Feb 13 15:21:28.303130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454-rootfs.mount: Deactivated successfully. 
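Each of mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state runs to completion and is torn down (scope deactivated, shim disconnected, rootfs unmounted) before the next CreateContainer: they are init containers, which the kubelet executes strictly in order before the long-running cilium-agent container that follows. A skeletal pod spec in the Kubernetes Go types, reduced to the names and image seen in the log, just to show the ordering:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	img := "quay.io/cilium/cilium:v1.12.5" // image logged above; other fields omitted
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "cilium-9ctc5", Namespace: "kube-system"},
    		Spec: corev1.PodSpec{
    			// Executed sequentially, each to completion, in exactly this order.
    			InitContainers: []corev1.Container{
    				{Name: "mount-cgroup", Image: img},
    				{Name: "apply-sysctl-overwrites", Image: img},
    				{Name: "mount-bpf-fs", Image: img},
    				{Name: "clean-cilium-state", Image: img},
    			},
    			// Started only after every init container has exited successfully.
    			Containers: []corev1.Container{{Name: "cilium-agent", Image: img}},
    		},
    	}
    	for _, c := range pod.Spec.InitContainers {
    		fmt.Println("init:", c.Name)
    	}
    }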
Feb 13 15:21:28.308017 containerd[1935]: time="2025-02-13T15:21:28.307931969Z" level=info msg="shim disconnected" id=8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454 namespace=k8s.io Feb 13 15:21:28.308017 containerd[1935]: time="2025-02-13T15:21:28.308008049Z" level=warning msg="cleaning up after shim disconnected" id=8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454 namespace=k8s.io Feb 13 15:21:28.308017 containerd[1935]: time="2025-02-13T15:21:28.308028905Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:21:29.129735 containerd[1935]: time="2025-02-13T15:21:29.129650874Z" level=info msg="CreateContainer within sandbox \"817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:21:29.172619 containerd[1935]: time="2025-02-13T15:21:29.172449594Z" level=info msg="CreateContainer within sandbox \"817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6\"" Feb 13 15:21:29.175688 containerd[1935]: time="2025-02-13T15:21:29.175617006Z" level=info msg="StartContainer for \"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6\"" Feb 13 15:21:29.237916 systemd[1]: Started cri-containerd-1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6.scope - libcontainer container 1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6. Feb 13 15:21:29.287877 containerd[1935]: time="2025-02-13T15:21:29.287763426Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:29.292724 containerd[1935]: time="2025-02-13T15:21:29.291753750Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:21:29.293444 containerd[1935]: time="2025-02-13T15:21:29.293392266Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:29.313752 containerd[1935]: time="2025-02-13T15:21:29.313675434Z" level=info msg="StartContainer for \"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6\" returns successfully" Feb 13 15:21:29.314864 containerd[1935]: time="2025-02-13T15:21:29.314301906Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.340278522s" Feb 13 15:21:29.315361 containerd[1935]: time="2025-02-13T15:21:29.315259914Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:21:29.323279 containerd[1935]: time="2025-02-13T15:21:29.322683402Z" level=info msg="CreateContainer within sandbox 
\"7101f4e53c4ab4c4cd36b2fb254df564c7a2d8ea8713595faa4f468b7e952a9d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:21:29.384121 containerd[1935]: time="2025-02-13T15:21:29.383616175Z" level=info msg="CreateContainer within sandbox \"7101f4e53c4ab4c4cd36b2fb254df564c7a2d8ea8713595faa4f468b7e952a9d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8\"" Feb 13 15:21:29.386540 containerd[1935]: time="2025-02-13T15:21:29.385599775Z" level=info msg="StartContainer for \"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8\"" Feb 13 15:21:29.469547 systemd[1]: Started cri-containerd-f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8.scope - libcontainer container f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8. Feb 13 15:21:29.515685 kubelet[3444]: I0213 15:21:29.515462 3444 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:21:29.564873 kubelet[3444]: I0213 15:21:29.564819 3444 topology_manager.go:215] "Topology Admit Handler" podUID="032605a5-8fc4-46a8-a56a-e1e5c0ff201a" podNamespace="kube-system" podName="coredns-76f75df574-vgcpp" Feb 13 15:21:29.570390 containerd[1935]: time="2025-02-13T15:21:29.569737592Z" level=info msg="StartContainer for \"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8\" returns successfully" Feb 13 15:21:29.591080 systemd[1]: Created slice kubepods-burstable-pod032605a5_8fc4_46a8_a56a_e1e5c0ff201a.slice - libcontainer container kubepods-burstable-pod032605a5_8fc4_46a8_a56a_e1e5c0ff201a.slice. Feb 13 15:21:29.592956 kubelet[3444]: I0213 15:21:29.592643 3444 topology_manager.go:215] "Topology Admit Handler" podUID="9000b7a3-c2a9-408f-9ae7-931706efec09" podNamespace="kube-system" podName="coredns-76f75df574-f5wfh" Feb 13 15:21:29.592956 kubelet[3444]: W0213 15:21:29.592835 3444 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-23-231" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-231' and this object Feb 13 15:21:29.592956 kubelet[3444]: E0213 15:21:29.592890 3444 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-23-231" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-231' and this object Feb 13 15:21:29.622821 kubelet[3444]: I0213 15:21:29.620798 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/032605a5-8fc4-46a8-a56a-e1e5c0ff201a-config-volume\") pod \"coredns-76f75df574-vgcpp\" (UID: \"032605a5-8fc4-46a8-a56a-e1e5c0ff201a\") " pod="kube-system/coredns-76f75df574-vgcpp" Feb 13 15:21:29.622821 kubelet[3444]: I0213 15:21:29.620878 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9000b7a3-c2a9-408f-9ae7-931706efec09-config-volume\") pod \"coredns-76f75df574-f5wfh\" (UID: \"9000b7a3-c2a9-408f-9ae7-931706efec09\") " pod="kube-system/coredns-76f75df574-f5wfh" Feb 13 15:21:29.622821 kubelet[3444]: I0213 15:21:29.620932 3444 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxv2x\" (UniqueName: \"kubernetes.io/projected/032605a5-8fc4-46a8-a56a-e1e5c0ff201a-kube-api-access-vxv2x\") pod \"coredns-76f75df574-vgcpp\" (UID: \"032605a5-8fc4-46a8-a56a-e1e5c0ff201a\") " pod="kube-system/coredns-76f75df574-vgcpp" Feb 13 15:21:29.622821 kubelet[3444]: I0213 15:21:29.620981 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zp9j\" (UniqueName: \"kubernetes.io/projected/9000b7a3-c2a9-408f-9ae7-931706efec09-kube-api-access-9zp9j\") pod \"coredns-76f75df574-f5wfh\" (UID: \"9000b7a3-c2a9-408f-9ae7-931706efec09\") " pod="kube-system/coredns-76f75df574-f5wfh" Feb 13 15:21:29.622229 systemd[1]: Created slice kubepods-burstable-pod9000b7a3_c2a9_408f_9ae7_931706efec09.slice - libcontainer container kubepods-burstable-pod9000b7a3_c2a9_408f_9ae7_931706efec09.slice. Feb 13 15:21:30.177277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount850186172.mount: Deactivated successfully. Feb 13 15:21:30.319379 kubelet[3444]: I0213 15:21:30.318370 3444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-d6pv6" podStartSLOduration=1.835312592 podStartE2EDuration="16.318263371s" podCreationTimestamp="2025-02-13 15:21:14 +0000 UTC" firstStartedPulling="2025-02-13 15:21:14.832983907 +0000 UTC m=+16.201929310" lastFinishedPulling="2025-02-13 15:21:29.315934698 +0000 UTC m=+30.684880089" observedRunningTime="2025-02-13 15:21:30.219532351 +0000 UTC m=+31.588477766" watchObservedRunningTime="2025-02-13 15:21:30.318263371 +0000 UTC m=+31.687208786" Feb 13 15:21:30.508244 containerd[1935]: time="2025-02-13T15:21:30.508181420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vgcpp,Uid:032605a5-8fc4-46a8-a56a-e1e5c0ff201a,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:30.537740 containerd[1935]: time="2025-02-13T15:21:30.537675093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f5wfh,Uid:9000b7a3-c2a9-408f-9ae7-931706efec09,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:33.480664 systemd-networkd[1789]: cilium_host: Link UP Feb 13 15:21:33.480949 systemd-networkd[1789]: cilium_net: Link UP Feb 13 15:21:33.481249 systemd-networkd[1789]: cilium_net: Gained carrier Feb 13 15:21:33.482869 (udev-worker)[4219]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:21:33.483208 (udev-worker)[4216]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:21:33.487574 systemd-networkd[1789]: cilium_host: Gained carrier Feb 13 15:21:33.651985 (udev-worker)[4280]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:21:33.663559 systemd-networkd[1789]: cilium_vxlan: Link UP Feb 13 15:21:33.663580 systemd-networkd[1789]: cilium_vxlan: Gained carrier Feb 13 15:21:34.105577 systemd-networkd[1789]: cilium_net: Gained IPv6LL Feb 13 15:21:34.153430 kernel: NET: Registered PF_ALG protocol family Feb 13 15:21:34.489483 systemd-networkd[1789]: cilium_host: Gained IPv6LL Feb 13 15:21:35.450519 systemd-networkd[1789]: cilium_vxlan: Gained IPv6LL Feb 13 15:21:35.460895 (udev-worker)[4218]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:21:35.462864 systemd-networkd[1789]: lxc_health: Link UP Feb 13 15:21:35.476755 systemd-networkd[1789]: lxc_health: Gained carrier Feb 13 15:21:36.170286 systemd-networkd[1789]: lxcbc2841a8dda2: Link UP Feb 13 15:21:36.178443 kernel: eth0: renamed from tmp2fa58 Feb 13 15:21:36.186838 systemd-networkd[1789]: lxcbc2841a8dda2: Gained carrier Feb 13 15:21:36.209899 systemd-networkd[1789]: lxc94956181c2de: Link UP Feb 13 15:21:36.219416 kernel: eth0: renamed from tmp26c04 Feb 13 15:21:36.228950 systemd-networkd[1789]: lxc94956181c2de: Gained carrier Feb 13 15:21:36.542696 kubelet[3444]: I0213 15:21:36.542633 3444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9ctc5" podStartSLOduration=13.269853304 podStartE2EDuration="22.542565566s" podCreationTimestamp="2025-02-13 15:21:14 +0000 UTC" firstStartedPulling="2025-02-13 15:21:14.698578686 +0000 UTC m=+16.067524077" lastFinishedPulling="2025-02-13 15:21:23.97129096 +0000 UTC m=+25.340236339" observedRunningTime="2025-02-13 15:21:30.318811171 +0000 UTC m=+31.687756562" watchObservedRunningTime="2025-02-13 15:21:36.542565566 +0000 UTC m=+37.911510981" Feb 13 15:21:37.049648 systemd-networkd[1789]: lxc_health: Gained IPv6LL Feb 13 15:21:38.138375 systemd-networkd[1789]: lxcbc2841a8dda2: Gained IPv6LL Feb 13 15:21:38.142001 systemd-networkd[1789]: lxc94956181c2de: Gained IPv6LL Feb 13 15:21:40.438362 ntpd[1905]: Listen normally on 8 cilium_host 192.168.0.247:123 Feb 13 15:21:40.438511 ntpd[1905]: Listen normally on 9 cilium_net [fe80::94a7:deff:fe54:d442%4]:123 Feb 13 15:21:40.438594 ntpd[1905]: Listen normally on 10 cilium_host [fe80::2020:dbff:feac:4d90%5]:123 Feb 13 15:21:40.438661 ntpd[1905]: Listen normally on 11 cilium_vxlan [fe80::e8cc:88ff:fe90:f16d%6]:123 Feb 13 15:21:40.438746 ntpd[1905]: Listen normally on 12 lxc_health [fe80::8056:28ff:fecc:17d0%8]:123 Feb 13 15:21:40.438817 ntpd[1905]: Listen normally on 13 lxcbc2841a8dda2 [fe80::86f:e8ff:fef9:9c%10]:123 Feb 13 15:21:40.438886 ntpd[1905]: Listen normally on 14 lxc94956181c2de [fe80::5f:70ff:fedc:3616%12]:123 Feb 13 15:21:44.476201 containerd[1935]: time="2025-02-13T15:21:44.475717846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:44.476201 containerd[1935]: time="2025-02-13T15:21:44.475847422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:44.476201 containerd[1935]: time="2025-02-13T15:21:44.475971850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:44.481394 containerd[1935]: time="2025-02-13T15:21:44.478235746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:44.514411 containerd[1935]: time="2025-02-13T15:21:44.513406066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:44.514411 containerd[1935]: time="2025-02-13T15:21:44.513522874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:44.514411 containerd[1935]: time="2025-02-13T15:21:44.513560854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:44.517184 containerd[1935]: time="2025-02-13T15:21:44.516871570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:44.566640 systemd[1]: Started cri-containerd-2fa580177fa12f1a79f7fc0a59512606ee9056ef54ad4a406e93a25b970977e8.scope - libcontainer container 2fa580177fa12f1a79f7fc0a59512606ee9056ef54ad4a406e93a25b970977e8. Feb 13 15:21:44.606746 systemd[1]: Started cri-containerd-26c040d972db4b440ab7f02ef14c776ed3482a3e472f9689fb38a6596c215bff.scope - libcontainer container 26c040d972db4b440ab7f02ef14c776ed3482a3e472f9689fb38a6596c215bff. Feb 13 15:21:44.746224 containerd[1935]: time="2025-02-13T15:21:44.745793183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vgcpp,Uid:032605a5-8fc4-46a8-a56a-e1e5c0ff201a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fa580177fa12f1a79f7fc0a59512606ee9056ef54ad4a406e93a25b970977e8\"" Feb 13 15:21:44.751205 containerd[1935]: time="2025-02-13T15:21:44.750322943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f5wfh,Uid:9000b7a3-c2a9-408f-9ae7-931706efec09,Namespace:kube-system,Attempt:0,} returns sandbox id \"26c040d972db4b440ab7f02ef14c776ed3482a3e472f9689fb38a6596c215bff\"" Feb 13 15:21:44.764059 containerd[1935]: time="2025-02-13T15:21:44.763715519Z" level=info msg="CreateContainer within sandbox \"2fa580177fa12f1a79f7fc0a59512606ee9056ef54ad4a406e93a25b970977e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:21:44.765994 containerd[1935]: time="2025-02-13T15:21:44.765781919Z" level=info msg="CreateContainer within sandbox \"26c040d972db4b440ab7f02ef14c776ed3482a3e472f9689fb38a6596c215bff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:21:44.833921 containerd[1935]: time="2025-02-13T15:21:44.833652396Z" level=info msg="CreateContainer within sandbox \"26c040d972db4b440ab7f02ef14c776ed3482a3e472f9689fb38a6596c215bff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4fdd50b5360b346ef335cd80eb414523959effc93c68207304be4f49064995b4\"" Feb 13 15:21:44.836833 containerd[1935]: time="2025-02-13T15:21:44.836170884Z" level=info msg="StartContainer for \"4fdd50b5360b346ef335cd80eb414523959effc93c68207304be4f49064995b4\"" Feb 13 15:21:44.848601 containerd[1935]: time="2025-02-13T15:21:44.848271540Z" level=info msg="CreateContainer within sandbox 
\"2fa580177fa12f1a79f7fc0a59512606ee9056ef54ad4a406e93a25b970977e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"672e510251e1710ddd78d5e64e68064b7d1fe6195a00fa4228682b259e6b3a04\"" Feb 13 15:21:44.850973 containerd[1935]: time="2025-02-13T15:21:44.850760700Z" level=info msg="StartContainer for \"672e510251e1710ddd78d5e64e68064b7d1fe6195a00fa4228682b259e6b3a04\"" Feb 13 15:21:44.907650 systemd[1]: Started cri-containerd-4fdd50b5360b346ef335cd80eb414523959effc93c68207304be4f49064995b4.scope - libcontainer container 4fdd50b5360b346ef335cd80eb414523959effc93c68207304be4f49064995b4. Feb 13 15:21:44.923089 systemd[1]: Started cri-containerd-672e510251e1710ddd78d5e64e68064b7d1fe6195a00fa4228682b259e6b3a04.scope - libcontainer container 672e510251e1710ddd78d5e64e68064b7d1fe6195a00fa4228682b259e6b3a04. Feb 13 15:21:45.002131 containerd[1935]: time="2025-02-13T15:21:45.001906868Z" level=info msg="StartContainer for \"672e510251e1710ddd78d5e64e68064b7d1fe6195a00fa4228682b259e6b3a04\" returns successfully" Feb 13 15:21:45.002131 containerd[1935]: time="2025-02-13T15:21:45.001909664Z" level=info msg="StartContainer for \"4fdd50b5360b346ef335cd80eb414523959effc93c68207304be4f49064995b4\" returns successfully" Feb 13 15:21:45.211002 kubelet[3444]: I0213 15:21:45.210901 3444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-f5wfh" podStartSLOduration=31.210837561 podStartE2EDuration="31.210837561s" podCreationTimestamp="2025-02-13 15:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:45.209213289 +0000 UTC m=+46.578158704" watchObservedRunningTime="2025-02-13 15:21:45.210837561 +0000 UTC m=+46.579782964" Feb 13 15:21:45.232861 kubelet[3444]: I0213 15:21:45.232784 3444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vgcpp" podStartSLOduration=31.232727398 podStartE2EDuration="31.232727398s" podCreationTimestamp="2025-02-13 15:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:45.229926154 +0000 UTC m=+46.598871569" watchObservedRunningTime="2025-02-13 15:21:45.232727398 +0000 UTC m=+46.601672789" Feb 13 15:21:49.683851 systemd[1]: Started sshd@9-172.31.23.231:22-147.75.109.163:52628.service - OpenSSH per-connection server daemon (147.75.109.163:52628). Feb 13 15:21:49.871062 sshd[4815]: Accepted publickey for core from 147.75.109.163 port 52628 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:21:49.875182 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:49.883449 systemd-logind[1911]: New session 10 of user core. Feb 13 15:21:49.891574 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:21:50.157824 sshd[4817]: Connection closed by 147.75.109.163 port 52628 Feb 13 15:21:50.158846 sshd-session[4815]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:50.165293 systemd[1]: sshd@9-172.31.23.231:22-147.75.109.163:52628.service: Deactivated successfully. Feb 13 15:21:50.165788 systemd-logind[1911]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:21:50.170067 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:21:50.173877 systemd-logind[1911]: Removed session 10. 
Feb 13 15:21:55.199820 systemd[1]: Started sshd@10-172.31.23.231:22-147.75.109.163:52636.service - OpenSSH per-connection server daemon (147.75.109.163:52636). Feb 13 15:21:55.395070 sshd[4830]: Accepted publickey for core from 147.75.109.163 port 52636 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:21:55.397621 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:55.405965 systemd-logind[1911]: New session 11 of user core. Feb 13 15:21:55.414571 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:21:55.658958 sshd[4832]: Connection closed by 147.75.109.163 port 52636 Feb 13 15:21:55.657867 sshd-session[4830]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:55.663050 systemd-logind[1911]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:21:55.664594 systemd[1]: sshd@10-172.31.23.231:22-147.75.109.163:52636.service: Deactivated successfully. Feb 13 15:21:55.668426 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:21:55.673995 systemd-logind[1911]: Removed session 11. Feb 13 15:22:00.697844 systemd[1]: Started sshd@11-172.31.23.231:22-147.75.109.163:47664.service - OpenSSH per-connection server daemon (147.75.109.163:47664). Feb 13 15:22:00.884565 sshd[4846]: Accepted publickey for core from 147.75.109.163 port 47664 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:00.887507 sshd-session[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:00.894744 systemd-logind[1911]: New session 12 of user core. Feb 13 15:22:00.901590 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:22:01.150692 sshd[4848]: Connection closed by 147.75.109.163 port 47664 Feb 13 15:22:01.151626 sshd-session[4846]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:01.158378 systemd[1]: sshd@11-172.31.23.231:22-147.75.109.163:47664.service: Deactivated successfully. Feb 13 15:22:01.162564 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:22:01.164458 systemd-logind[1911]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:22:01.166879 systemd-logind[1911]: Removed session 12. Feb 13 15:22:06.190830 systemd[1]: Started sshd@12-172.31.23.231:22-147.75.109.163:47670.service - OpenSSH per-connection server daemon (147.75.109.163:47670). Feb 13 15:22:06.378936 sshd[4860]: Accepted publickey for core from 147.75.109.163 port 47670 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:06.381476 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:06.390243 systemd-logind[1911]: New session 13 of user core. Feb 13 15:22:06.395576 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:22:06.639406 sshd[4862]: Connection closed by 147.75.109.163 port 47670 Feb 13 15:22:06.640381 sshd-session[4860]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:06.646934 systemd[1]: sshd@12-172.31.23.231:22-147.75.109.163:47670.service: Deactivated successfully. Feb 13 15:22:06.651589 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:22:06.653008 systemd-logind[1911]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:22:06.655000 systemd-logind[1911]: Removed session 13. 
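From here on the log repeats one shape: systemd starts a per-connection sshd@N service, a publickey for core is accepted, pam_unix opens a numbered session, and minutes later the session closes and the service deactivates. A throwaway sketch that pairs those open and close events when fed journal text like the above (the regexes are assumptions about the exact wording):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    var (
    	openRe  = regexp.MustCompile(`New session (\d+) of user (\w+)`)
    	closeRe = regexp.MustCompile(`Removed session (\d+)\.`)
    )

    func main() {
    	open := map[string]string{} // session id -> user
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines run long
    	for sc.Scan() {
    		if m := openRe.FindStringSubmatch(sc.Text()); m != nil {
    			open[m[1]] = m[2]
    		} else if m := closeRe.FindStringSubmatch(sc.Text()); m != nil {
    			fmt.Printf("session %s (user %s) closed\n", m[1], open[m[1]])
    			delete(open, m[1])
    		}
    	}
    }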
Feb 13 15:22:11.677831 systemd[1]: Started sshd@13-172.31.23.231:22-147.75.109.163:38820.service - OpenSSH per-connection server daemon (147.75.109.163:38820). Feb 13 15:22:11.860478 sshd[4873]: Accepted publickey for core from 147.75.109.163 port 38820 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:11.862921 sshd-session[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:11.871797 systemd-logind[1911]: New session 14 of user core. Feb 13 15:22:11.879623 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:22:12.122837 sshd[4875]: Connection closed by 147.75.109.163 port 38820 Feb 13 15:22:12.123718 sshd-session[4873]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:12.130128 systemd[1]: sshd@13-172.31.23.231:22-147.75.109.163:38820.service: Deactivated successfully. Feb 13 15:22:12.135482 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:22:12.137155 systemd-logind[1911]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:22:12.139079 systemd-logind[1911]: Removed session 14. Feb 13 15:22:12.165841 systemd[1]: Started sshd@14-172.31.23.231:22-147.75.109.163:38832.service - OpenSSH per-connection server daemon (147.75.109.163:38832). Feb 13 15:22:12.355353 sshd[4887]: Accepted publickey for core from 147.75.109.163 port 38832 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:12.357413 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:12.367643 systemd-logind[1911]: New session 15 of user core. Feb 13 15:22:12.374651 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:22:12.692434 sshd[4889]: Connection closed by 147.75.109.163 port 38832 Feb 13 15:22:12.693181 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:12.702919 systemd[1]: sshd@14-172.31.23.231:22-147.75.109.163:38832.service: Deactivated successfully. Feb 13 15:22:12.703193 systemd-logind[1911]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:22:12.708909 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:22:12.724455 systemd-logind[1911]: Removed session 15. Feb 13 15:22:12.736159 systemd[1]: Started sshd@15-172.31.23.231:22-147.75.109.163:38834.service - OpenSSH per-connection server daemon (147.75.109.163:38834). Feb 13 15:22:12.917059 sshd[4897]: Accepted publickey for core from 147.75.109.163 port 38834 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:12.919734 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:12.928420 systemd-logind[1911]: New session 16 of user core. Feb 13 15:22:12.935830 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:22:13.178873 sshd[4899]: Connection closed by 147.75.109.163 port 38834 Feb 13 15:22:13.179723 sshd-session[4897]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:13.185834 systemd[1]: sshd@15-172.31.23.231:22-147.75.109.163:38834.service: Deactivated successfully. Feb 13 15:22:13.186506 systemd-logind[1911]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:22:13.191837 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:22:13.195299 systemd-logind[1911]: Removed session 16. 
Feb 13 15:22:18.223786 systemd[1]: Started sshd@16-172.31.23.231:22-147.75.109.163:38846.service - OpenSSH per-connection server daemon (147.75.109.163:38846). Feb 13 15:22:18.403085 sshd[4914]: Accepted publickey for core from 147.75.109.163 port 38846 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:18.405545 sshd-session[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:18.413729 systemd-logind[1911]: New session 17 of user core. Feb 13 15:22:18.418590 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:22:18.664410 sshd[4916]: Connection closed by 147.75.109.163 port 38846 Feb 13 15:22:18.665277 sshd-session[4914]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:18.670750 systemd[1]: sshd@16-172.31.23.231:22-147.75.109.163:38846.service: Deactivated successfully. Feb 13 15:22:18.675103 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:22:18.679585 systemd-logind[1911]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:22:18.681227 systemd-logind[1911]: Removed session 17. Feb 13 15:22:23.703836 systemd[1]: Started sshd@17-172.31.23.231:22-147.75.109.163:59936.service - OpenSSH per-connection server daemon (147.75.109.163:59936). Feb 13 15:22:23.886413 sshd[4928]: Accepted publickey for core from 147.75.109.163 port 59936 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:23.889625 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:23.898575 systemd-logind[1911]: New session 18 of user core. Feb 13 15:22:23.906597 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:22:24.154284 sshd[4930]: Connection closed by 147.75.109.163 port 59936 Feb 13 15:22:24.155431 sshd-session[4928]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:24.161745 systemd[1]: sshd@17-172.31.23.231:22-147.75.109.163:59936.service: Deactivated successfully. Feb 13 15:22:24.162464 systemd-logind[1911]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:22:24.166018 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:22:24.171282 systemd-logind[1911]: Removed session 18. Feb 13 15:22:29.196836 systemd[1]: Started sshd@18-172.31.23.231:22-147.75.109.163:59942.service - OpenSSH per-connection server daemon (147.75.109.163:59942). Feb 13 15:22:29.389513 sshd[4941]: Accepted publickey for core from 147.75.109.163 port 59942 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:29.392658 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:29.400845 systemd-logind[1911]: New session 19 of user core. Feb 13 15:22:29.410610 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:22:29.682171 sshd[4943]: Connection closed by 147.75.109.163 port 59942 Feb 13 15:22:29.683045 sshd-session[4941]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:29.688744 systemd-logind[1911]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:22:29.690144 systemd[1]: sshd@18-172.31.23.231:22-147.75.109.163:59942.service: Deactivated successfully. Feb 13 15:22:29.694253 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:22:29.698066 systemd-logind[1911]: Removed session 19. 
Feb 13 15:22:34.722102 systemd[1]: Started sshd@19-172.31.23.231:22-147.75.109.163:47328.service - OpenSSH per-connection server daemon (147.75.109.163:47328). Feb 13 15:22:34.904797 sshd[4954]: Accepted publickey for core from 147.75.109.163 port 47328 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:34.907249 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:34.914709 systemd-logind[1911]: New session 20 of user core. Feb 13 15:22:34.922568 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:22:35.168730 sshd[4956]: Connection closed by 147.75.109.163 port 47328 Feb 13 15:22:35.169946 sshd-session[4954]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:35.175026 systemd-logind[1911]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:22:35.175427 systemd[1]: sshd@19-172.31.23.231:22-147.75.109.163:47328.service: Deactivated successfully. Feb 13 15:22:35.180824 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:22:35.185116 systemd-logind[1911]: Removed session 20. Feb 13 15:22:35.208844 systemd[1]: Started sshd@20-172.31.23.231:22-147.75.109.163:47342.service - OpenSSH per-connection server daemon (147.75.109.163:47342). Feb 13 15:22:35.403702 sshd[4967]: Accepted publickey for core from 147.75.109.163 port 47342 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:35.406243 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:35.413750 systemd-logind[1911]: New session 21 of user core. Feb 13 15:22:35.424587 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:22:35.721680 sshd[4969]: Connection closed by 147.75.109.163 port 47342 Feb 13 15:22:35.722180 sshd-session[4967]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:35.728942 systemd[1]: sshd@20-172.31.23.231:22-147.75.109.163:47342.service: Deactivated successfully. Feb 13 15:22:35.732566 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:22:35.735238 systemd-logind[1911]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:22:35.737201 systemd-logind[1911]: Removed session 21. Feb 13 15:22:35.763839 systemd[1]: Started sshd@21-172.31.23.231:22-147.75.109.163:47352.service - OpenSSH per-connection server daemon (147.75.109.163:47352). Feb 13 15:22:35.942389 sshd[4978]: Accepted publickey for core from 147.75.109.163 port 47352 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:35.944910 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:35.954588 systemd-logind[1911]: New session 22 of user core. Feb 13 15:22:35.959594 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:22:38.583474 sshd[4980]: Connection closed by 147.75.109.163 port 47352 Feb 13 15:22:38.584455 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:38.592610 systemd[1]: sshd@21-172.31.23.231:22-147.75.109.163:47352.service: Deactivated successfully. Feb 13 15:22:38.598178 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:22:38.604442 systemd-logind[1911]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:22:38.628907 systemd[1]: Started sshd@22-172.31.23.231:22-147.75.109.163:47356.service - OpenSSH per-connection server daemon (147.75.109.163:47356). 
Feb 13 15:22:38.631016 systemd-logind[1911]: Removed session 22. Feb 13 15:22:38.816635 sshd[4997]: Accepted publickey for core from 147.75.109.163 port 47356 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:38.819116 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:38.828653 systemd-logind[1911]: New session 23 of user core. Feb 13 15:22:38.835567 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:22:39.327374 sshd[4999]: Connection closed by 147.75.109.163 port 47356 Feb 13 15:22:39.330098 sshd-session[4997]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:39.337727 systemd[1]: sshd@22-172.31.23.231:22-147.75.109.163:47356.service: Deactivated successfully. Feb 13 15:22:39.342197 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:22:39.344378 systemd-logind[1911]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:22:39.346884 systemd-logind[1911]: Removed session 23. Feb 13 15:22:39.367858 systemd[1]: Started sshd@23-172.31.23.231:22-147.75.109.163:37028.service - OpenSSH per-connection server daemon (147.75.109.163:37028). Feb 13 15:22:39.561964 sshd[5007]: Accepted publickey for core from 147.75.109.163 port 37028 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:39.564690 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:39.575641 systemd-logind[1911]: New session 24 of user core. Feb 13 15:22:39.584591 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:22:39.828523 sshd[5009]: Connection closed by 147.75.109.163 port 37028 Feb 13 15:22:39.828941 sshd-session[5007]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:39.836696 systemd[1]: sshd@23-172.31.23.231:22-147.75.109.163:37028.service: Deactivated successfully. Feb 13 15:22:39.840953 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:22:39.842184 systemd-logind[1911]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:22:39.844072 systemd-logind[1911]: Removed session 24. Feb 13 15:22:44.866820 systemd[1]: Started sshd@24-172.31.23.231:22-147.75.109.163:37042.service - OpenSSH per-connection server daemon (147.75.109.163:37042). Feb 13 15:22:45.058125 sshd[5022]: Accepted publickey for core from 147.75.109.163 port 37042 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:45.060679 sshd-session[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:45.068943 systemd-logind[1911]: New session 25 of user core. Feb 13 15:22:45.077615 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:22:45.316200 sshd[5026]: Connection closed by 147.75.109.163 port 37042 Feb 13 15:22:45.316058 sshd-session[5022]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:45.321203 systemd[1]: sshd@24-172.31.23.231:22-147.75.109.163:37042.service: Deactivated successfully. Feb 13 15:22:45.325861 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:22:45.329375 systemd-logind[1911]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:22:45.331622 systemd-logind[1911]: Removed session 25. Feb 13 15:22:50.355828 systemd[1]: Started sshd@25-172.31.23.231:22-147.75.109.163:49634.service - OpenSSH per-connection server daemon (147.75.109.163:49634). 
Feb 13 15:22:50.550718 sshd[5041]: Accepted publickey for core from 147.75.109.163 port 49634 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ
Feb 13 15:22:50.553251 sshd-session[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:22:50.561435 systemd-logind[1911]: New session 26 of user core.
Feb 13 15:22:50.572595 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:22:50.812611 sshd[5043]: Connection closed by 147.75.109.163 port 49634
Feb 13 15:22:50.813580 sshd-session[5041]: pam_unix(sshd:session): session closed for user core
Feb 13 15:22:50.820066 systemd[1]: sshd@25-172.31.23.231:22-147.75.109.163:49634.service: Deactivated successfully.
Feb 13 15:22:50.823198 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:22:50.825159 systemd-logind[1911]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:22:50.827627 systemd-logind[1911]: Removed session 26.
Feb 13 15:22:55.853849 systemd[1]: Started sshd@26-172.31.23.231:22-147.75.109.163:49650.service - OpenSSH per-connection server daemon (147.75.109.163:49650).
Feb 13 15:22:56.041939 sshd[5053]: Accepted publickey for core from 147.75.109.163 port 49650 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ
Feb 13 15:22:56.044491 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:22:56.057230 systemd-logind[1911]: New session 27 of user core.
Feb 13 15:22:56.063635 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 15:22:56.315546 sshd[5055]: Connection closed by 147.75.109.163 port 49650
Feb 13 15:22:56.316427 sshd-session[5053]: pam_unix(sshd:session): session closed for user core
Feb 13 15:22:56.322813 systemd[1]: sshd@26-172.31.23.231:22-147.75.109.163:49650.service: Deactivated successfully.
Feb 13 15:22:56.327153 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 15:22:56.329081 systemd-logind[1911]: Session 27 logged out. Waiting for processes to exit.
Feb 13 15:22:56.331299 systemd-logind[1911]: Removed session 27.
Feb 13 15:23:01.356919 systemd[1]: Started sshd@27-172.31.23.231:22-147.75.109.163:46984.service - OpenSSH per-connection server daemon (147.75.109.163:46984).
Feb 13 15:23:01.549595 sshd[5069]: Accepted publickey for core from 147.75.109.163 port 46984 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ
Feb 13 15:23:01.552541 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:23:01.562582 systemd-logind[1911]: New session 28 of user core.
Feb 13 15:23:01.567628 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 15:23:01.832226 sshd[5071]: Connection closed by 147.75.109.163 port 46984
Feb 13 15:23:01.834741 sshd-session[5069]: pam_unix(sshd:session): session closed for user core
Feb 13 15:23:01.841702 systemd[1]: sshd@27-172.31.23.231:22-147.75.109.163:46984.service: Deactivated successfully.
Feb 13 15:23:01.847970 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 15:23:01.850255 systemd-logind[1911]: Session 28 logged out. Waiting for processes to exit.
Feb 13 15:23:01.869955 systemd-logind[1911]: Removed session 28.
Feb 13 15:23:01.876930 systemd[1]: Started sshd@28-172.31.23.231:22-147.75.109.163:46998.service - OpenSSH per-connection server daemon (147.75.109.163:46998).
Feb 13 15:23:02.068095 sshd[5082]: Accepted publickey for core from 147.75.109.163 port 46998 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ
Feb 13 15:23:02.070551 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:23:02.082386 systemd-logind[1911]: New session 29 of user core.
Feb 13 15:23:02.087579 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 15:23:05.263703 containerd[1935]: time="2025-02-13T15:23:05.263348211Z" level=info msg="StopContainer for \"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8\" with timeout 30 (s)"
Feb 13 15:23:05.265808 containerd[1935]: time="2025-02-13T15:23:05.265552119Z" level=info msg="Stop container \"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8\" with signal terminated"
Feb 13 15:23:05.292034 systemd[1]: cri-containerd-f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8.scope: Deactivated successfully.
Feb 13 15:23:05.308931 containerd[1935]: time="2025-02-13T15:23:05.308817723Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:23:05.323598 containerd[1935]: time="2025-02-13T15:23:05.323547651Z" level=info msg="StopContainer for \"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6\" with timeout 2 (s)"
Feb 13 15:23:05.324612 containerd[1935]: time="2025-02-13T15:23:05.324568971Z" level=info msg="Stop container \"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6\" with signal terminated"
Feb 13 15:23:05.341087 systemd-networkd[1789]: lxc_health: Link DOWN
Feb 13 15:23:05.341108 systemd-networkd[1789]: lxc_health: Lost carrier
Feb 13 15:23:05.360896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8-rootfs.mount: Deactivated successfully.
Feb 13 15:23:05.378459 containerd[1935]: time="2025-02-13T15:23:05.378297184Z" level=info msg="shim disconnected" id=f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8 namespace=k8s.io
Feb 13 15:23:05.378459 containerd[1935]: time="2025-02-13T15:23:05.378393772Z" level=warning msg="cleaning up after shim disconnected" id=f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8 namespace=k8s.io
Feb 13 15:23:05.378459 containerd[1935]: time="2025-02-13T15:23:05.378415168Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:23:05.380867 systemd[1]: cri-containerd-1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6.scope: Deactivated successfully.
Feb 13 15:23:05.381948 systemd[1]: cri-containerd-1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6.scope: Consumed 14.286s CPU time.
Feb 13 15:23:05.421302 containerd[1935]: time="2025-02-13T15:23:05.421233868Z" level=info msg="StopContainer for \"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8\" returns successfully"
Feb 13 15:23:05.422364 containerd[1935]: time="2025-02-13T15:23:05.422215228Z" level=info msg="StopPodSandbox for \"7101f4e53c4ab4c4cd36b2fb254df564c7a2d8ea8713595faa4f468b7e952a9d\""
Feb 13 15:23:05.422595 containerd[1935]: time="2025-02-13T15:23:05.422297212Z" level=info msg="Container to stop \"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:23:05.426844 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7101f4e53c4ab4c4cd36b2fb254df564c7a2d8ea8713595faa4f468b7e952a9d-shm.mount: Deactivated successfully.
Feb 13 15:23:05.436981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6-rootfs.mount: Deactivated successfully.
Feb 13 15:23:05.444017 systemd[1]: cri-containerd-7101f4e53c4ab4c4cd36b2fb254df564c7a2d8ea8713595faa4f468b7e952a9d.scope: Deactivated successfully.
Feb 13 15:23:05.450542 containerd[1935]: time="2025-02-13T15:23:05.450466816Z" level=info msg="shim disconnected" id=1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6 namespace=k8s.io
Feb 13 15:23:05.451041 containerd[1935]: time="2025-02-13T15:23:05.450890536Z" level=warning msg="cleaning up after shim disconnected" id=1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6 namespace=k8s.io
Feb 13 15:23:05.451041 containerd[1935]: time="2025-02-13T15:23:05.450922780Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:23:05.483394 containerd[1935]: time="2025-02-13T15:23:05.483246172Z" level=info msg="StopContainer for \"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6\" returns successfully"
Feb 13 15:23:05.484738 containerd[1935]: time="2025-02-13T15:23:05.484398196Z" level=info msg="StopPodSandbox for \"817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706\""
Feb 13 15:23:05.484738 containerd[1935]: time="2025-02-13T15:23:05.484529212Z" level=info msg="Container to stop \"38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:23:05.484738 containerd[1935]: time="2025-02-13T15:23:05.484559404Z" level=info msg="Container to stop \"1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:23:05.484738 containerd[1935]: time="2025-02-13T15:23:05.484581640Z" level=info msg="Container to stop \"8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:23:05.484738 containerd[1935]: time="2025-02-13T15:23:05.484605472Z" level=info msg="Container to stop \"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:23:05.484738 containerd[1935]: time="2025-02-13T15:23:05.484627480Z" level=info msg="Container to stop \"ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:23:05.491535 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706-shm.mount: Deactivated successfully.
Feb 13 15:23:05.497750 systemd[1]: cri-containerd-817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706.scope: Deactivated successfully.
Feb 13 15:23:05.526005 containerd[1935]: time="2025-02-13T15:23:05.525558976Z" level=info msg="shim disconnected" id=7101f4e53c4ab4c4cd36b2fb254df564c7a2d8ea8713595faa4f468b7e952a9d namespace=k8s.io
Feb 13 15:23:05.526005 containerd[1935]: time="2025-02-13T15:23:05.525642928Z" level=warning msg="cleaning up after shim disconnected" id=7101f4e53c4ab4c4cd36b2fb254df564c7a2d8ea8713595faa4f468b7e952a9d namespace=k8s.io
Feb 13 15:23:05.526005 containerd[1935]: time="2025-02-13T15:23:05.525662908Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:23:05.556420 containerd[1935]: time="2025-02-13T15:23:05.556037284Z" level=info msg="shim disconnected" id=817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706 namespace=k8s.io
Feb 13 15:23:05.556420 containerd[1935]: time="2025-02-13T15:23:05.556126216Z" level=warning msg="cleaning up after shim disconnected" id=817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706 namespace=k8s.io
Feb 13 15:23:05.556420 containerd[1935]: time="2025-02-13T15:23:05.556148500Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:23:05.559667 containerd[1935]: time="2025-02-13T15:23:05.558909256Z" level=info msg="TearDown network for sandbox \"7101f4e53c4ab4c4cd36b2fb254df564c7a2d8ea8713595faa4f468b7e952a9d\" successfully"
Feb 13 15:23:05.559667 containerd[1935]: time="2025-02-13T15:23:05.558994312Z" level=info msg="StopPodSandbox for \"7101f4e53c4ab4c4cd36b2fb254df564c7a2d8ea8713595faa4f468b7e952a9d\" returns successfully"
Feb 13 15:23:05.593068 containerd[1935]: time="2025-02-13T15:23:05.593015237Z" level=info msg="TearDown network for sandbox \"817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706\" successfully"
Feb 13 15:23:05.594014 containerd[1935]: time="2025-02-13T15:23:05.593563805Z" level=info msg="StopPodSandbox for \"817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706\" returns successfully"
Feb 13 15:23:05.611468 kubelet[3444]: I0213 15:23:05.611300 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00204c24-9b96-45a4-aea2-32228cf759a2-cilium-config-path\") pod \"00204c24-9b96-45a4-aea2-32228cf759a2\" (UID: \"00204c24-9b96-45a4-aea2-32228cf759a2\") "
Feb 13 15:23:05.611468 kubelet[3444]: I0213 15:23:05.611394 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrts9\" (UniqueName: \"kubernetes.io/projected/00204c24-9b96-45a4-aea2-32228cf759a2-kube-api-access-hrts9\") pod \"00204c24-9b96-45a4-aea2-32228cf759a2\" (UID: \"00204c24-9b96-45a4-aea2-32228cf759a2\") "
Feb 13 15:23:05.624738 kubelet[3444]: I0213 15:23:05.622933 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00204c24-9b96-45a4-aea2-32228cf759a2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "00204c24-9b96-45a4-aea2-32228cf759a2" (UID: "00204c24-9b96-45a4-aea2-32228cf759a2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:23:05.629332 kubelet[3444]: I0213 15:23:05.629232 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00204c24-9b96-45a4-aea2-32228cf759a2-kube-api-access-hrts9" (OuterVolumeSpecName: "kube-api-access-hrts9") pod "00204c24-9b96-45a4-aea2-32228cf759a2" (UID: "00204c24-9b96-45a4-aea2-32228cf759a2"). InnerVolumeSpecName "kube-api-access-hrts9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:23:05.712537 kubelet[3444]: I0213 15:23:05.712495 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/637934ce-7b58-4703-be9c-0f058175c2fe-clustermesh-secrets\") pod \"637934ce-7b58-4703-be9c-0f058175c2fe\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") "
Feb 13 15:23:05.712793 kubelet[3444]: I0213 15:23:05.712769 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/637934ce-7b58-4703-be9c-0f058175c2fe-cilium-config-path\") pod \"637934ce-7b58-4703-be9c-0f058175c2fe\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") "
Feb 13 15:23:05.712956 kubelet[3444]: I0213 15:23:05.712931 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-cni-path\") pod \"637934ce-7b58-4703-be9c-0f058175c2fe\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") "
Feb 13 15:23:05.713099 kubelet[3444]: I0213 15:23:05.713079 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zs2r\" (UniqueName: \"kubernetes.io/projected/637934ce-7b58-4703-be9c-0f058175c2fe-kube-api-access-4zs2r\") pod \"637934ce-7b58-4703-be9c-0f058175c2fe\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") "
Feb 13 15:23:05.713235 kubelet[3444]: I0213 15:23:05.713216 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-xtables-lock\") pod \"637934ce-7b58-4703-be9c-0f058175c2fe\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") "
Feb 13 15:23:05.715350 kubelet[3444]: I0213 15:23:05.713365 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-hostproc\") pod \"637934ce-7b58-4703-be9c-0f058175c2fe\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") "
Feb 13 15:23:05.715350 kubelet[3444]: I0213 15:23:05.713417 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-lib-modules\") pod \"637934ce-7b58-4703-be9c-0f058175c2fe\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") "
Feb 13 15:23:05.715350 kubelet[3444]: I0213 15:23:05.713461 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-host-proc-sys-net\") pod \"637934ce-7b58-4703-be9c-0f058175c2fe\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") "
Feb 13 15:23:05.715350 kubelet[3444]: I0213 15:23:05.713501 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-cilium-cgroup\") pod \"637934ce-7b58-4703-be9c-0f058175c2fe\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") "
Feb 13 15:23:05.715350 kubelet[3444]: I0213 15:23:05.713541 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-cilium-run\") pod \"637934ce-7b58-4703-be9c-0f058175c2fe\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") "
Feb 13 15:23:05.715350 kubelet[3444]: I0213 15:23:05.713578 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-bpf-maps\") pod \"637934ce-7b58-4703-be9c-0f058175c2fe\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") "
Feb 13 15:23:05.715743 kubelet[3444]: I0213 15:23:05.713624 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/637934ce-7b58-4703-be9c-0f058175c2fe-hubble-tls\") pod \"637934ce-7b58-4703-be9c-0f058175c2fe\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") "
Feb 13 15:23:05.715743 kubelet[3444]: I0213 15:23:05.713665 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-host-proc-sys-kernel\") pod \"637934ce-7b58-4703-be9c-0f058175c2fe\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") "
Feb 13 15:23:05.715743 kubelet[3444]: I0213 15:23:05.713706 3444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-etc-cni-netd\") pod \"637934ce-7b58-4703-be9c-0f058175c2fe\" (UID: \"637934ce-7b58-4703-be9c-0f058175c2fe\") "
Feb 13 15:23:05.715743 kubelet[3444]: I0213 15:23:05.713775 3444 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00204c24-9b96-45a4-aea2-32228cf759a2-cilium-config-path\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.715743 kubelet[3444]: I0213 15:23:05.713803 3444 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hrts9\" (UniqueName: \"kubernetes.io/projected/00204c24-9b96-45a4-aea2-32228cf759a2-kube-api-access-hrts9\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.715743 kubelet[3444]: I0213 15:23:05.713846 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "637934ce-7b58-4703-be9c-0f058175c2fe" (UID: "637934ce-7b58-4703-be9c-0f058175c2fe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:23:05.716098 kubelet[3444]: I0213 15:23:05.714581 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "637934ce-7b58-4703-be9c-0f058175c2fe" (UID: "637934ce-7b58-4703-be9c-0f058175c2fe"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:23:05.716098 kubelet[3444]: I0213 15:23:05.714663 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-cni-path" (OuterVolumeSpecName: "cni-path") pod "637934ce-7b58-4703-be9c-0f058175c2fe" (UID: "637934ce-7b58-4703-be9c-0f058175c2fe"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:23:05.716098 kubelet[3444]: I0213 15:23:05.715455 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "637934ce-7b58-4703-be9c-0f058175c2fe" (UID: "637934ce-7b58-4703-be9c-0f058175c2fe"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:23:05.716098 kubelet[3444]: I0213 15:23:05.715539 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "637934ce-7b58-4703-be9c-0f058175c2fe" (UID: "637934ce-7b58-4703-be9c-0f058175c2fe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:23:05.716098 kubelet[3444]: I0213 15:23:05.715605 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "637934ce-7b58-4703-be9c-0f058175c2fe" (UID: "637934ce-7b58-4703-be9c-0f058175c2fe"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:23:05.716497 kubelet[3444]: I0213 15:23:05.716460 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "637934ce-7b58-4703-be9c-0f058175c2fe" (UID: "637934ce-7b58-4703-be9c-0f058175c2fe"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:23:05.716606 kubelet[3444]: I0213 15:23:05.716458 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "637934ce-7b58-4703-be9c-0f058175c2fe" (UID: "637934ce-7b58-4703-be9c-0f058175c2fe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:23:05.716711 kubelet[3444]: I0213 15:23:05.716554 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-hostproc" (OuterVolumeSpecName: "hostproc") pod "637934ce-7b58-4703-be9c-0f058175c2fe" (UID: "637934ce-7b58-4703-be9c-0f058175c2fe"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:23:05.716813 kubelet[3444]: I0213 15:23:05.716581 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "637934ce-7b58-4703-be9c-0f058175c2fe" (UID: "637934ce-7b58-4703-be9c-0f058175c2fe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:23:05.723256 kubelet[3444]: I0213 15:23:05.723185 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/637934ce-7b58-4703-be9c-0f058175c2fe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "637934ce-7b58-4703-be9c-0f058175c2fe" (UID: "637934ce-7b58-4703-be9c-0f058175c2fe"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:23:05.725730 kubelet[3444]: I0213 15:23:05.725678 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/637934ce-7b58-4703-be9c-0f058175c2fe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "637934ce-7b58-4703-be9c-0f058175c2fe" (UID: "637934ce-7b58-4703-be9c-0f058175c2fe"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:23:05.726393 kubelet[3444]: I0213 15:23:05.726352 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/637934ce-7b58-4703-be9c-0f058175c2fe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "637934ce-7b58-4703-be9c-0f058175c2fe" (UID: "637934ce-7b58-4703-be9c-0f058175c2fe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:23:05.726732 kubelet[3444]: I0213 15:23:05.726694 3444 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/637934ce-7b58-4703-be9c-0f058175c2fe-kube-api-access-4zs2r" (OuterVolumeSpecName: "kube-api-access-4zs2r") pod "637934ce-7b58-4703-be9c-0f058175c2fe" (UID: "637934ce-7b58-4703-be9c-0f058175c2fe"). InnerVolumeSpecName "kube-api-access-4zs2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:23:05.814410 kubelet[3444]: I0213 15:23:05.814227 3444 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-bpf-maps\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.814410 kubelet[3444]: I0213 15:23:05.814300 3444 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/637934ce-7b58-4703-be9c-0f058175c2fe-hubble-tls\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.814410 kubelet[3444]: I0213 15:23:05.814354 3444 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-host-proc-sys-kernel\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.814410 kubelet[3444]: I0213 15:23:05.814384 3444 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-etc-cni-netd\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.816598 kubelet[3444]: I0213 15:23:05.816205 3444 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-cilium-run\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.816598 kubelet[3444]: I0213 15:23:05.816282 3444 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/637934ce-7b58-4703-be9c-0f058175c2fe-clustermesh-secrets\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.816598 kubelet[3444]: I0213 15:23:05.816338 3444 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/637934ce-7b58-4703-be9c-0f058175c2fe-cilium-config-path\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.816598 kubelet[3444]: I0213 15:23:05.816367 3444 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-cni-path\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.816598 kubelet[3444]: I0213 15:23:05.816392 3444 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4zs2r\" (UniqueName: \"kubernetes.io/projected/637934ce-7b58-4703-be9c-0f058175c2fe-kube-api-access-4zs2r\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.816598 kubelet[3444]: I0213 15:23:05.816441 3444 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-xtables-lock\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.816598 kubelet[3444]: I0213 15:23:05.816471 3444 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-hostproc\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.816598 kubelet[3444]: I0213 15:23:05.816522 3444 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-host-proc-sys-net\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.817078 kubelet[3444]: I0213 15:23:05.816550 3444 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-cilium-cgroup\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:05.817078 kubelet[3444]: I0213 15:23:05.816597 3444 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/637934ce-7b58-4703-be9c-0f058175c2fe-lib-modules\") on node \"ip-172-31-23-231\" DevicePath \"\""
Feb 13 15:23:06.271261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7101f4e53c4ab4c4cd36b2fb254df564c7a2d8ea8713595faa4f468b7e952a9d-rootfs.mount: Deactivated successfully.
Feb 13 15:23:06.271724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-817a649904ec9da9867c58b58eb78b4cdc5e41b18baf6a49b9a5a892169a0706-rootfs.mount: Deactivated successfully.
Feb 13 15:23:06.272048 systemd[1]: var-lib-kubelet-pods-00204c24\x2d9b96\x2d45a4\x2daea2\x2d32228cf759a2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhrts9.mount: Deactivated successfully.
Feb 13 15:23:06.272189 systemd[1]: var-lib-kubelet-pods-637934ce\x2d7b58\x2d4703\x2dbe9c\x2d0f058175c2fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4zs2r.mount: Deactivated successfully.
Feb 13 15:23:06.272343 systemd[1]: var-lib-kubelet-pods-637934ce\x2d7b58\x2d4703\x2dbe9c\x2d0f058175c2fe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:23:06.272812 systemd[1]: var-lib-kubelet-pods-637934ce\x2d7b58\x2d4703\x2dbe9c\x2d0f058175c2fe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 15:23:06.402489 kubelet[3444]: I0213 15:23:06.402446 3444 scope.go:117] "RemoveContainer" containerID="f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8"
Feb 13 15:23:06.407038 containerd[1935]: time="2025-02-13T15:23:06.406798121Z" level=info msg="RemoveContainer for \"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8\""
Feb 13 15:23:06.414808 containerd[1935]: time="2025-02-13T15:23:06.413921201Z" level=info msg="RemoveContainer for \"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8\" returns successfully"
Feb 13 15:23:06.415063 kubelet[3444]: I0213 15:23:06.414615 3444 scope.go:117] "RemoveContainer" containerID="f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8"
Feb 13 15:23:06.415579 containerd[1935]: time="2025-02-13T15:23:06.415459037Z" level=error msg="ContainerStatus for \"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8\": not found"
Feb 13 15:23:06.416101 kubelet[3444]: E0213 15:23:06.415738 3444 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8\": not found" containerID="f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8"
Feb 13 15:23:06.416101 kubelet[3444]: I0213 15:23:06.415892 3444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8"} err="failed to get container status \"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1a92dd3d97b7a959fc12858527f2251d9f547f8e4c5d573b0fe24db03b894d8\": not found"
Feb 13 15:23:06.416842 kubelet[3444]: I0213 15:23:06.416649 3444 scope.go:117] "RemoveContainer" containerID="1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6"
Feb 13 15:23:06.424040 containerd[1935]: time="2025-02-13T15:23:06.422991869Z" level=info msg="RemoveContainer for \"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6\""
Feb 13 15:23:06.425033 systemd[1]: Removed slice kubepods-besteffort-pod00204c24_9b96_45a4_aea2_32228cf759a2.slice - libcontainer container kubepods-besteffort-pod00204c24_9b96_45a4_aea2_32228cf759a2.slice.
Feb 13 15:23:06.434174 containerd[1935]: time="2025-02-13T15:23:06.434108405Z" level=info msg="RemoveContainer for \"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6\" returns successfully"
Feb 13 15:23:06.434724 kubelet[3444]: I0213 15:23:06.434460 3444 scope.go:117] "RemoveContainer" containerID="8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454"
Feb 13 15:23:06.439660 containerd[1935]: time="2025-02-13T15:23:06.439226357Z" level=info msg="RemoveContainer for \"8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454\""
Feb 13 15:23:06.442247 systemd[1]: Removed slice kubepods-burstable-pod637934ce_7b58_4703_be9c_0f058175c2fe.slice - libcontainer container kubepods-burstable-pod637934ce_7b58_4703_be9c_0f058175c2fe.slice.
Feb 13 15:23:06.442511 systemd[1]: kubepods-burstable-pod637934ce_7b58_4703_be9c_0f058175c2fe.slice: Consumed 14.436s CPU time.
Feb 13 15:23:06.451729 containerd[1935]: time="2025-02-13T15:23:06.450970169Z" level=info msg="RemoveContainer for \"8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454\" returns successfully"
Feb 13 15:23:06.452275 kubelet[3444]: I0213 15:23:06.452221 3444 scope.go:117] "RemoveContainer" containerID="ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045"
Feb 13 15:23:06.456772 containerd[1935]: time="2025-02-13T15:23:06.456227777Z" level=info msg="RemoveContainer for \"ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045\""
Feb 13 15:23:06.464061 containerd[1935]: time="2025-02-13T15:23:06.463813241Z" level=info msg="RemoveContainer for \"ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045\" returns successfully"
Feb 13 15:23:06.464355 kubelet[3444]: I0213 15:23:06.464292 3444 scope.go:117] "RemoveContainer" containerID="1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035"
Feb 13 15:23:06.467101 containerd[1935]: time="2025-02-13T15:23:06.466608341Z" level=info msg="RemoveContainer for \"1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035\""
Feb 13 15:23:06.472926 containerd[1935]: time="2025-02-13T15:23:06.472875065Z" level=info msg="RemoveContainer for \"1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035\" returns successfully"
Feb 13 15:23:06.474915 kubelet[3444]: I0213 15:23:06.473489 3444 scope.go:117] "RemoveContainer" containerID="38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189"
Feb 13 15:23:06.479629 containerd[1935]: time="2025-02-13T15:23:06.479537537Z" level=info msg="RemoveContainer for \"38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189\""
Feb 13 15:23:06.486395 containerd[1935]: time="2025-02-13T15:23:06.485951153Z" level=info msg="RemoveContainer for \"38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189\" returns successfully"
Feb 13 15:23:06.487340 kubelet[3444]: I0213 15:23:06.486702 3444 scope.go:117] "RemoveContainer" containerID="1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6"
Feb 13 15:23:06.487494 containerd[1935]: time="2025-02-13T15:23:06.487283801Z" level=error msg="ContainerStatus for \"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6\": not found"
Feb 13 15:23:06.487913 kubelet[3444]: E0213 15:23:06.487756 3444 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6\": not found" containerID="1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6"
Feb 13 15:23:06.488277 kubelet[3444]: I0213 15:23:06.488135 3444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6"} err="failed to get container status \"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6\": rpc error: code = NotFound desc = an error occurred when try to find container \"1b2668df0ee66df3f978d782093cf973d6e997f363651f0f6662cd3e6303acf6\": not found"
Feb 13 15:23:06.488588 kubelet[3444]: I0213 15:23:06.488476 3444 scope.go:117] "RemoveContainer" containerID="8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454"
Feb 13 15:23:06.490565 containerd[1935]: time="2025-02-13T15:23:06.489886973Z" level=error msg="ContainerStatus for \"8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454\": not found"
Feb 13 15:23:06.490718 kubelet[3444]: E0213 15:23:06.490189 3444 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454\": not found" containerID="8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454"
Feb 13 15:23:06.490718 kubelet[3444]: I0213 15:23:06.490247 3444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454"} err="failed to get container status \"8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454\": rpc error: code = NotFound desc = an error occurred when try to find container \"8287486f6ae802225f5e4afc96fa382bb9ab23f1e25e16dc388492a419f35454\": not found"
Feb 13 15:23:06.490718 kubelet[3444]: I0213 15:23:06.490271 3444 scope.go:117] "RemoveContainer" containerID="ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045"
Feb 13 15:23:06.490987 containerd[1935]: time="2025-02-13T15:23:06.490640561Z" level=error msg="ContainerStatus for \"ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045\": not found"
Feb 13 15:23:06.491057 kubelet[3444]: E0213 15:23:06.490876 3444 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045\": not found" containerID="ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045"
Feb 13 15:23:06.491057 kubelet[3444]: I0213 15:23:06.490927 3444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045"} err="failed to get container status \"ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea2b802e3eee8d4bb8c83f18dc8ca2d4b46ede31184c9673bb2f29d4ae197045\": not found"
Feb 13 15:23:06.491057 kubelet[3444]: I0213 15:23:06.490950 3444 scope.go:117] "RemoveContainer" containerID="1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035"
Feb 13 15:23:06.491924 containerd[1935]: time="2025-02-13T15:23:06.491347229Z" level=error msg="ContainerStatus for \"1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035\": not found"
Feb 13 15:23:06.492030 kubelet[3444]: E0213 15:23:06.491607 3444 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035\": not found" containerID="1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035"
Feb 13 15:23:06.492030 kubelet[3444]: I0213 15:23:06.491659 3444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035"} err="failed to get container status \"1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035\": rpc error: code = NotFound desc = an error occurred when try to find container \"1841135702e24ceef0ce0f333ec8ae49cbbb6f00023c8188effaaab60f488035\": not found"
Feb 13 15:23:06.492030 kubelet[3444]: I0213 15:23:06.491681 3444 scope.go:117] "RemoveContainer" containerID="38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189"
Feb 13 15:23:06.492211 containerd[1935]: time="2025-02-13T15:23:06.492031097Z" level=error msg="ContainerStatus for \"38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189\": not found"
Feb 13 15:23:06.493192 kubelet[3444]: E0213 15:23:06.493003 3444 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189\": not found" containerID="38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189"
Feb 13 15:23:06.493192 kubelet[3444]: I0213 15:23:06.493067 3444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189"} err="failed to get container status \"38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189\": rpc error: code = NotFound desc = an error occurred when try to find container \"38026da6314d153307be581075259786731f9d0951c152f5f2181edf6a2e0189\": not found"
Feb 13 15:23:06.934460 kubelet[3444]: I0213 15:23:06.934399 3444 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="00204c24-9b96-45a4-aea2-32228cf759a2" path="/var/lib/kubelet/pods/00204c24-9b96-45a4-aea2-32228cf759a2/volumes"
Feb 13 15:23:06.935499 kubelet[3444]: I0213 15:23:06.935464 3444 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="637934ce-7b58-4703-be9c-0f058175c2fe" path="/var/lib/kubelet/pods/637934ce-7b58-4703-be9c-0f058175c2fe/volumes"
Feb 13 15:23:07.186210 sshd[5084]: Connection closed by 147.75.109.163 port 46998
Feb 13 15:23:07.187265 sshd-session[5082]: pam_unix(sshd:session): session closed for user core
Feb 13 15:23:07.192715 systemd[1]: sshd@28-172.31.23.231:22-147.75.109.163:46998.service: Deactivated successfully.
Feb 13 15:23:07.197085 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 15:23:07.197940 systemd[1]: session-29.scope: Consumed 2.422s CPU time.
Feb 13 15:23:07.201160 systemd-logind[1911]: Session 29 logged out. Waiting for processes to exit.
Feb 13 15:23:07.204159 systemd-logind[1911]: Removed session 29.
Feb 13 15:23:07.231814 systemd[1]: Started sshd@29-172.31.23.231:22-147.75.109.163:47014.service - OpenSSH per-connection server daemon (147.75.109.163:47014).
Feb 13 15:23:07.415612 sshd[5245]: Accepted publickey for core from 147.75.109.163 port 47014 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ
Feb 13 15:23:07.418724 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:23:07.429035 systemd-logind[1911]: New session 30 of user core.
Feb 13 15:23:07.438256 ntpd[1905]: Deleting interface #12 lxc_health, fe80::8056:28ff:fecc:17d0%8#123, interface stats: received=0, sent=0, dropped=0, active_time=87 secs
Feb 13 15:23:07.438832 ntpd[1905]: 13 Feb 15:23:07 ntpd[1905]: Deleting interface #12 lxc_health, fe80::8056:28ff:fecc:17d0%8#123, interface stats: received=0, sent=0, dropped=0, active_time=87 secs
Feb 13 15:23:07.438647 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 15:23:09.144536 kubelet[3444]: E0213 15:23:09.144396 3444 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:23:09.929770 kubelet[3444]: E0213 15:23:09.929705 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-vgcpp" podUID="032605a5-8fc4-46a8-a56a-e1e5c0ff201a"
Feb 13 15:23:09.935375 kubelet[3444]: I0213 15:23:09.935272 3444 topology_manager.go:215] "Topology Admit Handler" podUID="372edc8d-f258-4bee-8de5-1a00794c6ce5" podNamespace="kube-system" podName="cilium-rm7m4"
Feb 13 15:23:09.935539 kubelet[3444]: E0213 15:23:09.935393 3444 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="637934ce-7b58-4703-be9c-0f058175c2fe" containerName="mount-cgroup"
Feb 13 15:23:09.935539 kubelet[3444]: E0213 15:23:09.935418 3444 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="637934ce-7b58-4703-be9c-0f058175c2fe" containerName="apply-sysctl-overwrites"
Feb 13 15:23:09.935539 kubelet[3444]: E0213 15:23:09.935436 3444 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="637934ce-7b58-4703-be9c-0f058175c2fe" containerName="clean-cilium-state"
Feb 13 15:23:09.935539 kubelet[3444]: E0213 15:23:09.935456 3444 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="637934ce-7b58-4703-be9c-0f058175c2fe" containerName="cilium-agent"
Feb 13 15:23:09.935539 kubelet[3444]: E0213 15:23:09.935473 3444 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="00204c24-9b96-45a4-aea2-32228cf759a2" containerName="cilium-operator"
Feb 13 15:23:09.935539 kubelet[3444]: E0213 15:23:09.935490 3444 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="637934ce-7b58-4703-be9c-0f058175c2fe" containerName="mount-bpf-fs"
Feb 13 15:23:09.935539 kubelet[3444]: I0213 15:23:09.935535 3444 memory_manager.go:354] "RemoveStaleState removing state" podUID="637934ce-7b58-4703-be9c-0f058175c2fe" containerName="cilium-agent"
Feb 13 15:23:09.935940 kubelet[3444]: I0213 15:23:09.935554 3444 memory_manager.go:354] "RemoveStaleState removing state" podUID="00204c24-9b96-45a4-aea2-32228cf759a2" containerName="cilium-operator"
Feb 13 15:23:09.938964 sshd[5247]: Connection closed by 147.75.109.163 port 47014
Feb 13 15:23:09.941180 sshd-session[5245]: pam_unix(sshd:session): session closed for user core
Feb 13 15:23:09.954542 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 15:23:09.956757 systemd[1]: session-30.scope: Consumed 2.298s CPU time.
Feb 13 15:23:09.958582 systemd[1]: sshd@29-172.31.23.231:22-147.75.109.163:47014.service: Deactivated successfully.
Feb 13 15:23:09.988940 systemd-logind[1911]: Session 30 logged out. Waiting for processes to exit.
Feb 13 15:23:09.990878 systemd[1]: Created slice kubepods-burstable-pod372edc8d_f258_4bee_8de5_1a00794c6ce5.slice - libcontainer container kubepods-burstable-pod372edc8d_f258_4bee_8de5_1a00794c6ce5.slice.
Feb 13 15:23:09.996768 kubelet[3444]: W0213 15:23:09.996479 3444 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-23-231" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-231' and this object
Feb 13 15:23:09.996768 kubelet[3444]: E0213 15:23:09.996535 3444 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-23-231" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-231' and this object
Feb 13 15:23:09.996768 kubelet[3444]: W0213 15:23:09.996618 3444 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-23-231" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-231' and this object
Feb 13 15:23:09.996768 kubelet[3444]: E0213 15:23:09.996642 3444 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-23-231" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-231' and this object
Feb 13 15:23:09.996768 kubelet[3444]: W0213 15:23:09.996707 3444 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-23-231" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-231' and this object
Feb 13 15:23:09.998548 kubelet[3444]: E0213 15:23:09.996731 3444 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-23-231" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-231' and this object
Feb 13 15:23:09.998643 kubelet[3444]: W0213 15:23:09.998563 3444 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-23-231" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-231' and this object
Feb 13 15:23:09.998643 kubelet[3444]: E0213 15:23:09.998611 3444 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-23-231" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-231' and this object
Feb 13 15:23:10.003898 systemd[1]: Started sshd@30-172.31.23.231:22-147.75.109.163:35490.service - OpenSSH per-connection server daemon (147.75.109.163:35490).
Feb 13 15:23:10.013869 systemd-logind[1911]: Removed session 30.
Feb 13 15:23:10.043379 kubelet[3444]: I0213 15:23:10.041501 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/372edc8d-f258-4bee-8de5-1a00794c6ce5-etc-cni-netd\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.043379 kubelet[3444]: I0213 15:23:10.041599 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/372edc8d-f258-4bee-8de5-1a00794c6ce5-clustermesh-secrets\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.043379 kubelet[3444]: I0213 15:23:10.041649 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/372edc8d-f258-4bee-8de5-1a00794c6ce5-cilium-run\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.043379 kubelet[3444]: I0213 15:23:10.041695 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/372edc8d-f258-4bee-8de5-1a00794c6ce5-cilium-ipsec-secrets\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.043379 kubelet[3444]: I0213 15:23:10.041741 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/372edc8d-f258-4bee-8de5-1a00794c6ce5-cilium-cgroup\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.043379 kubelet[3444]: I0213 15:23:10.041782 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/372edc8d-f258-4bee-8de5-1a00794c6ce5-lib-modules\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.043831 kubelet[3444]: I0213 15:23:10.041831 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/372edc8d-f258-4bee-8de5-1a00794c6ce5-host-proc-sys-net\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.043831 kubelet[3444]: I0213 15:23:10.041878 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/372edc8d-f258-4bee-8de5-1a00794c6ce5-xtables-lock\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.043831 kubelet[3444]: I0213 15:23:10.041920 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/372edc8d-f258-4bee-8de5-1a00794c6ce5-cilium-config-path\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.043831 kubelet[3444]: I0213 15:23:10.041997 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/372edc8d-f258-4bee-8de5-1a00794c6ce5-host-proc-sys-kernel\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.043831 kubelet[3444]: I0213 15:23:10.042040 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/372edc8d-f258-4bee-8de5-1a00794c6ce5-bpf-maps\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.043831 kubelet[3444]: I0213 15:23:10.042089 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/372edc8d-f258-4bee-8de5-1a00794c6ce5-cni-path\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.044108 kubelet[3444]: I0213 15:23:10.042133 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/372edc8d-f258-4bee-8de5-1a00794c6ce5-hostproc\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.044108 kubelet[3444]: I0213 15:23:10.042176 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/372edc8d-f258-4bee-8de5-1a00794c6ce5-hubble-tls\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.044108 kubelet[3444]: I0213 15:23:10.042221 3444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6mwm\" (UniqueName: \"kubernetes.io/projected/372edc8d-f258-4bee-8de5-1a00794c6ce5-kube-api-access-n6mwm\") pod \"cilium-rm7m4\" (UID: \"372edc8d-f258-4bee-8de5-1a00794c6ce5\") " pod="kube-system/cilium-rm7m4"
Feb 13 15:23:10.220501 sshd[5257]: Accepted publickey for core from 147.75.109.163 port 35490 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ
Feb 13 15:23:10.223475 sshd-session[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:23:10.231525 systemd-logind[1911]: New session 31 of user core.
Feb 13 15:23:10.235561 systemd[1]: Started session-31.scope - Session 31 of User core.
Feb 13 15:23:10.354855 sshd[5260]: Connection closed by 147.75.109.163 port 35490
Feb 13 15:23:10.356260 sshd-session[5257]: pam_unix(sshd:session): session closed for user core
Feb 13 15:23:10.363516 systemd[1]: sshd@30-172.31.23.231:22-147.75.109.163:35490.service: Deactivated successfully.
Feb 13 15:23:10.367991 systemd[1]: session-31.scope: Deactivated successfully.
Feb 13 15:23:10.369742 systemd-logind[1911]: Session 31 logged out. Waiting for processes to exit.
Feb 13 15:23:10.371927 systemd-logind[1911]: Removed session 31.
Feb 13 15:23:10.393935 systemd[1]: Started sshd@31-172.31.23.231:22-147.75.109.163:35498.service - OpenSSH per-connection server daemon (147.75.109.163:35498).
Feb 13 15:23:10.578440 sshd[5266]: Accepted publickey for core from 147.75.109.163 port 35498 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ
Feb 13 15:23:10.581104 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:23:10.590530 systemd-logind[1911]: New session 32 of user core.
Feb 13 15:23:10.594574 systemd[1]: Started session-32.scope - Session 32 of User core.
Feb 13 15:23:11.144237 kubelet[3444]: E0213 15:23:11.144172 3444 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Feb 13 15:23:11.144832 kubelet[3444]: E0213 15:23:11.144330 3444 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/372edc8d-f258-4bee-8de5-1a00794c6ce5-cilium-config-path podName:372edc8d-f258-4bee-8de5-1a00794c6ce5 nodeName:}" failed. No retries permitted until 2025-02-13 15:23:11.6442714 +0000 UTC m=+133.013216791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/372edc8d-f258-4bee-8de5-1a00794c6ce5-cilium-config-path") pod "cilium-rm7m4" (UID: "372edc8d-f258-4bee-8de5-1a00794c6ce5") : failed to sync configmap cache: timed out waiting for the condition
Feb 13 15:23:11.145538 kubelet[3444]: E0213 15:23:11.145365 3444 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Feb 13 15:23:11.145538 kubelet[3444]: E0213 15:23:11.145496 3444 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/372edc8d-f258-4bee-8de5-1a00794c6ce5-clustermesh-secrets podName:372edc8d-f258-4bee-8de5-1a00794c6ce5 nodeName:}" failed. No retries permitted until 2025-02-13 15:23:11.645464356 +0000 UTC m=+133.014409747 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/372edc8d-f258-4bee-8de5-1a00794c6ce5-clustermesh-secrets") pod "cilium-rm7m4" (UID: "372edc8d-f258-4bee-8de5-1a00794c6ce5") : failed to sync secret cache: timed out waiting for the condition
Feb 13 15:23:11.813745 containerd[1935]: time="2025-02-13T15:23:11.813621912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rm7m4,Uid:372edc8d-f258-4bee-8de5-1a00794c6ce5,Namespace:kube-system,Attempt:0,}"
Feb 13 15:23:11.857637 containerd[1935]: time="2025-02-13T15:23:11.857147832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:23:11.857637 containerd[1935]: time="2025-02-13T15:23:11.857251584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:23:11.857637 containerd[1935]: time="2025-02-13T15:23:11.857287908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:23:11.857637 containerd[1935]: time="2025-02-13T15:23:11.857466288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:23:11.894626 systemd[1]: Started cri-containerd-dcd806ace3695259a9b86c64e99d4350bffceac233d5f492739ddcce549f402d.scope - libcontainer container dcd806ace3695259a9b86c64e99d4350bffceac233d5f492739ddcce549f402d.
Feb 13 15:23:11.929234 kubelet[3444]: E0213 15:23:11.929187 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-vgcpp" podUID="032605a5-8fc4-46a8-a56a-e1e5c0ff201a"
Feb 13 15:23:11.935420 containerd[1935]: time="2025-02-13T15:23:11.935256564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rm7m4,Uid:372edc8d-f258-4bee-8de5-1a00794c6ce5,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcd806ace3695259a9b86c64e99d4350bffceac233d5f492739ddcce549f402d\""
Feb 13 15:23:11.943828 containerd[1935]: time="2025-02-13T15:23:11.943751532Z" level=info msg="CreateContainer within sandbox \"dcd806ace3695259a9b86c64e99d4350bffceac233d5f492739ddcce549f402d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:23:11.970717 containerd[1935]: time="2025-02-13T15:23:11.970541412Z" level=info msg="CreateContainer within sandbox \"dcd806ace3695259a9b86c64e99d4350bffceac233d5f492739ddcce549f402d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5477dbacf364fa5f9a6c3a578c5a1aa0b7aee403a797e4f335365686a3755e80\""
Feb 13 15:23:11.971541 containerd[1935]: time="2025-02-13T15:23:11.971245596Z" level=info msg="StartContainer for \"5477dbacf364fa5f9a6c3a578c5a1aa0b7aee403a797e4f335365686a3755e80\""
Feb 13 15:23:12.014641 systemd[1]: Started cri-containerd-5477dbacf364fa5f9a6c3a578c5a1aa0b7aee403a797e4f335365686a3755e80.scope - libcontainer container 5477dbacf364fa5f9a6c3a578c5a1aa0b7aee403a797e4f335365686a3755e80.
Feb 13 15:23:12.062826 containerd[1935]: time="2025-02-13T15:23:12.062621061Z" level=info msg="StartContainer for \"5477dbacf364fa5f9a6c3a578c5a1aa0b7aee403a797e4f335365686a3755e80\" returns successfully"
Feb 13 15:23:12.077828 systemd[1]: cri-containerd-5477dbacf364fa5f9a6c3a578c5a1aa0b7aee403a797e4f335365686a3755e80.scope: Deactivated successfully.
Feb 13 15:23:12.135326 containerd[1935]: time="2025-02-13T15:23:12.135210597Z" level=info msg="shim disconnected" id=5477dbacf364fa5f9a6c3a578c5a1aa0b7aee403a797e4f335365686a3755e80 namespace=k8s.io
Feb 13 15:23:12.135326 containerd[1935]: time="2025-02-13T15:23:12.135297909Z" level=warning msg="cleaning up after shim disconnected" id=5477dbacf364fa5f9a6c3a578c5a1aa0b7aee403a797e4f335365686a3755e80 namespace=k8s.io
Feb 13 15:23:12.137017 containerd[1935]: time="2025-02-13T15:23:12.135344685Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:23:12.446649 containerd[1935]: time="2025-02-13T15:23:12.446249123Z" level=info msg="CreateContainer within sandbox \"dcd806ace3695259a9b86c64e99d4350bffceac233d5f492739ddcce549f402d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:23:12.472424 containerd[1935]: time="2025-02-13T15:23:12.472195727Z" level=info msg="CreateContainer within sandbox \"dcd806ace3695259a9b86c64e99d4350bffceac233d5f492739ddcce549f402d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"94347ba0c4d4d048889987f1687da5833a583c669d8a4b78859217418f2b129f\""
Feb 13 15:23:12.474380 containerd[1935]: time="2025-02-13T15:23:12.472893275Z" level=info msg="StartContainer for \"94347ba0c4d4d048889987f1687da5833a583c669d8a4b78859217418f2b129f\""
Feb 13 15:23:12.514635 systemd[1]: Started cri-containerd-94347ba0c4d4d048889987f1687da5833a583c669d8a4b78859217418f2b129f.scope - libcontainer container 94347ba0c4d4d048889987f1687da5833a583c669d8a4b78859217418f2b129f.
Feb 13 15:23:12.560378 kubelet[3444]: I0213 15:23:12.560292 3444 setters.go:568] "Node became not ready" node="ip-172-31-23-231" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:23:12Z","lastTransitionTime":"2025-02-13T15:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:23:12.571387 containerd[1935]: time="2025-02-13T15:23:12.570406895Z" level=info msg="StartContainer for \"94347ba0c4d4d048889987f1687da5833a583c669d8a4b78859217418f2b129f\" returns successfully"
Feb 13 15:23:12.584803 systemd[1]: cri-containerd-94347ba0c4d4d048889987f1687da5833a583c669d8a4b78859217418f2b129f.scope: Deactivated successfully.
Feb 13 15:23:12.643685 containerd[1935]: time="2025-02-13T15:23:12.643489848Z" level=info msg="shim disconnected" id=94347ba0c4d4d048889987f1687da5833a583c669d8a4b78859217418f2b129f namespace=k8s.io
Feb 13 15:23:12.644027 containerd[1935]: time="2025-02-13T15:23:12.643670952Z" level=warning msg="cleaning up after shim disconnected" id=94347ba0c4d4d048889987f1687da5833a583c669d8a4b78859217418f2b129f namespace=k8s.io
Feb 13 15:23:12.644027 containerd[1935]: time="2025-02-13T15:23:12.643716756Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:23:12.666085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1556056591.mount: Deactivated successfully.
Feb 13 15:23:13.450830 containerd[1935]: time="2025-02-13T15:23:13.450723360Z" level=info msg="CreateContainer within sandbox \"dcd806ace3695259a9b86c64e99d4350bffceac233d5f492739ddcce549f402d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:23:13.492559 containerd[1935]: time="2025-02-13T15:23:13.492483228Z" level=info msg="CreateContainer within sandbox \"dcd806ace3695259a9b86c64e99d4350bffceac233d5f492739ddcce549f402d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e5ec091e0c202c6220df283f0d0e349fe96aca9aca452a0c3e99b596a52b0e9f\""
Feb 13 15:23:13.495196 containerd[1935]: time="2025-02-13T15:23:13.493208868Z" level=info msg="StartContainer for \"e5ec091e0c202c6220df283f0d0e349fe96aca9aca452a0c3e99b596a52b0e9f\""
Feb 13 15:23:13.550636 systemd[1]: Started cri-containerd-e5ec091e0c202c6220df283f0d0e349fe96aca9aca452a0c3e99b596a52b0e9f.scope - libcontainer container e5ec091e0c202c6220df283f0d0e349fe96aca9aca452a0c3e99b596a52b0e9f.
Feb 13 15:23:13.621726 containerd[1935]: time="2025-02-13T15:23:13.621642493Z" level=info msg="StartContainer for \"e5ec091e0c202c6220df283f0d0e349fe96aca9aca452a0c3e99b596a52b0e9f\" returns successfully"
Feb 13 15:23:13.625971 systemd[1]: cri-containerd-e5ec091e0c202c6220df283f0d0e349fe96aca9aca452a0c3e99b596a52b0e9f.scope: Deactivated successfully.
Feb 13 15:23:13.671295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5ec091e0c202c6220df283f0d0e349fe96aca9aca452a0c3e99b596a52b0e9f-rootfs.mount: Deactivated successfully.
Feb 13 15:23:13.679631 containerd[1935]: time="2025-02-13T15:23:13.679545349Z" level=info msg="shim disconnected" id=e5ec091e0c202c6220df283f0d0e349fe96aca9aca452a0c3e99b596a52b0e9f namespace=k8s.io
Feb 13 15:23:13.679631 containerd[1935]: time="2025-02-13T15:23:13.679621849Z" level=warning msg="cleaning up after shim disconnected" id=e5ec091e0c202c6220df283f0d0e349fe96aca9aca452a0c3e99b596a52b0e9f namespace=k8s.io
Feb 13 15:23:13.680128 containerd[1935]: time="2025-02-13T15:23:13.679643401Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:23:13.930063 kubelet[3444]: E0213 15:23:13.930003 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-vgcpp" podUID="032605a5-8fc4-46a8-a56a-e1e5c0ff201a"
Feb 13 15:23:14.146263 kubelet[3444]: E0213 15:23:14.146209 3444 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:23:14.461452 containerd[1935]: time="2025-02-13T15:23:14.461111797Z" level=info msg="CreateContainer within sandbox \"dcd806ace3695259a9b86c64e99d4350bffceac233d5f492739ddcce549f402d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:23:14.496788 containerd[1935]: time="2025-02-13T15:23:14.496719853Z" level=info msg="CreateContainer within sandbox \"dcd806ace3695259a9b86c64e99d4350bffceac233d5f492739ddcce549f402d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"be9852c0831e8e25514e1cd8af6fca093c7224da071973b572ef5bdb0b9a75ef\""
Feb 13 15:23:14.497858 containerd[1935]: time="2025-02-13T15:23:14.497683393Z" level=info msg="StartContainer for \"be9852c0831e8e25514e1cd8af6fca093c7224da071973b572ef5bdb0b9a75ef\""
Feb 13 15:23:14.556636 systemd[1]: Started cri-containerd-be9852c0831e8e25514e1cd8af6fca093c7224da071973b572ef5bdb0b9a75ef.scope - libcontainer container be9852c0831e8e25514e1cd8af6fca093c7224da071973b572ef5bdb0b9a75ef.
Feb 13 15:23:14.602596 systemd[1]: cri-containerd-be9852c0831e8e25514e1cd8af6fca093c7224da071973b572ef5bdb0b9a75ef.scope: Deactivated successfully.
Feb 13 15:23:14.607239 containerd[1935]: time="2025-02-13T15:23:14.606513973Z" level=info msg="StartContainer for \"be9852c0831e8e25514e1cd8af6fca093c7224da071973b572ef5bdb0b9a75ef\" returns successfully"
Feb 13 15:23:14.660332 containerd[1935]: time="2025-02-13T15:23:14.660179150Z" level=info msg="shim disconnected" id=be9852c0831e8e25514e1cd8af6fca093c7224da071973b572ef5bdb0b9a75ef namespace=k8s.io
Feb 13 15:23:14.660994 containerd[1935]: time="2025-02-13T15:23:14.660721070Z" level=warning msg="cleaning up after shim disconnected" id=be9852c0831e8e25514e1cd8af6fca093c7224da071973b572ef5bdb0b9a75ef namespace=k8s.io
Feb 13 15:23:14.660994 containerd[1935]: time="2025-02-13T15:23:14.660756410Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:23:14.671264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be9852c0831e8e25514e1cd8af6fca093c7224da071973b572ef5bdb0b9a75ef-rootfs.mount: Deactivated successfully.
Feb 13 15:23:15.466767 containerd[1935]: time="2025-02-13T15:23:15.465942350Z" level=info msg="CreateContainer within sandbox \"dcd806ace3695259a9b86c64e99d4350bffceac233d5f492739ddcce549f402d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:23:15.503687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552039061.mount: Deactivated successfully.
Feb 13 15:23:15.505067 containerd[1935]: time="2025-02-13T15:23:15.504841046Z" level=info msg="CreateContainer within sandbox \"dcd806ace3695259a9b86c64e99d4350bffceac233d5f492739ddcce549f402d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bf7549b0179d29227cc491abc516e3993a79a7b9379215e9e1aed7ca57a8daeb\""
Feb 13 15:23:15.508960 containerd[1935]: time="2025-02-13T15:23:15.508765094Z" level=info msg="StartContainer for \"bf7549b0179d29227cc491abc516e3993a79a7b9379215e9e1aed7ca57a8daeb\""
Feb 13 15:23:15.564661 systemd[1]: Started cri-containerd-bf7549b0179d29227cc491abc516e3993a79a7b9379215e9e1aed7ca57a8daeb.scope - libcontainer container bf7549b0179d29227cc491abc516e3993a79a7b9379215e9e1aed7ca57a8daeb.
Feb 13 15:23:15.625680 containerd[1935]: time="2025-02-13T15:23:15.624467126Z" level=info msg="StartContainer for \"bf7549b0179d29227cc491abc516e3993a79a7b9379215e9e1aed7ca57a8daeb\" returns successfully"
Feb 13 15:23:15.929711 kubelet[3444]: E0213 15:23:15.929384 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-f5wfh" podUID="9000b7a3-c2a9-408f-9ae7-931706efec09"
Feb 13 15:23:15.931071 kubelet[3444]: E0213 15:23:15.930865 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-vgcpp" podUID="032605a5-8fc4-46a8-a56a-e1e5c0ff201a"
Feb 13 15:23:16.442352 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:23:16.504920 kubelet[3444]: I0213 15:23:16.502977 3444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rm7m4" podStartSLOduration=7.502919859 podStartE2EDuration="7.502919859s" podCreationTimestamp="2025-02-13 15:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:23:16.502800903 +0000 UTC m=+137.871746306" watchObservedRunningTime="2025-02-13 15:23:16.502919859 +0000 UTC m=+137.871865286"
Feb 13 15:23:17.929861 kubelet[3444]: E0213 15:23:17.929720 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-vgcpp" podUID="032605a5-8fc4-46a8-a56a-e1e5c0ff201a"
Feb 13 15:23:17.929861 kubelet[3444]: E0213 15:23:17.929783 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-f5wfh" podUID="9000b7a3-c2a9-408f-9ae7-931706efec09"
Feb 13 15:23:19.412170 systemd[1]: run-containerd-runc-k8s.io-bf7549b0179d29227cc491abc516e3993a79a7b9379215e9e1aed7ca57a8daeb-runc.n4w8Yn.mount: Deactivated successfully.
Feb 13 15:23:20.652434 systemd-networkd[1789]: lxc_health: Link UP
Feb 13 15:23:20.666893 (udev-worker)[6111]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:23:20.678809 systemd-networkd[1789]: lxc_health: Gained carrier
Feb 13 15:23:21.999027 kubelet[3444]: E0213 15:23:21.998975 3444 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44194->127.0.0.1:35165: write tcp 127.0.0.1:44194->127.0.0.1:35165: write: broken pipe
Feb 13 15:23:22.586148 systemd-networkd[1789]: lxc_health: Gained IPv6LL
Feb 13 15:23:25.438375 ntpd[1905]: Listen normally on 15 lxc_health [fe80::e2:daff:fee9:21d0%14]:123
Feb 13 15:23:25.438995 ntpd[1905]: 13 Feb 15:23:25 ntpd[1905]: Listen normally on 15 lxc_health [fe80::e2:daff:fee9:21d0%14]:123
Feb 13 15:23:28.846007 systemd[1]: run-containerd-runc-k8s.io-bf7549b0179d29227cc491abc516e3993a79a7b9379215e9e1aed7ca57a8daeb-runc.E4j2sT.mount: Deactivated successfully.
Feb 13 15:23:29.028371 sshd[5268]: Connection closed by 147.75.109.163 port 35498
Feb 13 15:23:29.028882 sshd-session[5266]: pam_unix(sshd:session): session closed for user core
Feb 13 15:23:29.037050 systemd[1]: sshd@31-172.31.23.231:22-147.75.109.163:35498.service: Deactivated successfully.
Feb 13 15:23:29.044059 systemd[1]: session-32.scope: Deactivated successfully.
Feb 13 15:23:29.049645 systemd-logind[1911]: Session 32 logged out. Waiting for processes to exit.
Feb 13 15:23:29.054505 systemd-logind[1911]: Removed session 32.