Jan 13 21:10:30.186871 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 13 21:10:30.186917 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025
Jan 13 21:10:30.186942 kernel: KASLR disabled due to lack of seed
Jan 13 21:10:30.186959 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:10:30.186975 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Jan 13 21:10:30.186991 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:10:30.187008 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 13 21:10:30.187024 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 13 21:10:30.187040 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 21:10:30.187055 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 13 21:10:30.187076 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 21:10:30.187091 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 13 21:10:30.187107 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 13 21:10:30.187123 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 13 21:10:30.187141 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 21:10:30.187162 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 13 21:10:30.187180 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 13 21:10:30.187196 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 13 21:10:30.187212 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 13 21:10:30.187229 kernel: printk: bootconsole [uart0] enabled
Jan 13 21:10:30.189331 kernel: NUMA: Failed to initialise from firmware
Jan 13 21:10:30.189369 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 21:10:30.189388 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 13 21:10:30.189405 kernel: Zone ranges:
Jan 13 21:10:30.189423 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 21:10:30.189439 kernel: DMA32 empty
Jan 13 21:10:30.189467 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 13 21:10:30.189485 kernel: Movable zone start for each node
Jan 13 21:10:30.189502 kernel: Early memory node ranges
Jan 13 21:10:30.189520 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 13 21:10:30.189537 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 13 21:10:30.189555 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 13 21:10:30.189573 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 13 21:10:30.189590 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 13 21:10:30.189607 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 13 21:10:30.189625 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 13 21:10:30.189643 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 13 21:10:30.189660 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 21:10:30.189683 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 13 21:10:30.189700 kernel: psci: probing for conduit method from ACPI.
Jan 13 21:10:30.189733 kernel: psci: PSCIv1.0 detected in firmware.
Jan 13 21:10:30.189751 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 21:10:30.189769 kernel: psci: Trusted OS migration not required
Jan 13 21:10:30.189793 kernel: psci: SMC Calling Convention v1.1
Jan 13 21:10:30.189811 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 21:10:30.189829 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 21:10:30.189847 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 13 21:10:30.189865 kernel: Detected PIPT I-cache on CPU0
Jan 13 21:10:30.189883 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 21:10:30.189900 kernel: CPU features: detected: Spectre-v2
Jan 13 21:10:30.189918 kernel: CPU features: detected: Spectre-v3a
Jan 13 21:10:30.189936 kernel: CPU features: detected: Spectre-BHB
Jan 13 21:10:30.189953 kernel: CPU features: detected: ARM erratum 1742098
Jan 13 21:10:30.189971 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 13 21:10:30.189994 kernel: alternatives: applying boot alternatives
Jan 13 21:10:30.190015 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:10:30.190036 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:10:30.190055 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:10:30.190073 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:10:30.190090 kernel: Fallback order for Node 0: 0
Jan 13 21:10:30.190109 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 13 21:10:30.190127 kernel: Policy zone: Normal
Jan 13 21:10:30.190145 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:10:30.190163 kernel: software IO TLB: area num 2.
Jan 13 21:10:30.190183 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 13 21:10:30.190209 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Jan 13 21:10:30.190228 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:10:30.190291 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:10:30.190317 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:10:30.190336 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:10:30.190354 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:10:30.190373 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:10:30.190391 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:10:30.190409 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:10:30.190426 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 21:10:30.190444 kernel: GICv3: 96 SPIs implemented
Jan 13 21:10:30.190469 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 21:10:30.190487 kernel: Root IRQ handler: gic_handle_irq
Jan 13 21:10:30.190505 kernel: GICv3: GICv3 features: 16 PPIs
Jan 13 21:10:30.190523 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 13 21:10:30.190540 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 13 21:10:30.190558 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 21:10:30.190576 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 21:10:30.190593 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 13 21:10:30.190611 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 13 21:10:30.190628 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 13 21:10:30.190646 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:10:30.190663 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 13 21:10:30.190687 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 13 21:10:30.190704 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 13 21:10:30.190722 kernel: Console: colour dummy device 80x25
Jan 13 21:10:30.190740 kernel: printk: console [tty1] enabled
Jan 13 21:10:30.190758 kernel: ACPI: Core revision 20230628
Jan 13 21:10:30.190776 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 13 21:10:30.190795 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:10:30.190813 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:10:30.190831 kernel: landlock: Up and running.
Jan 13 21:10:30.190853 kernel: SELinux: Initializing.
Jan 13 21:10:30.190871 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:10:30.190889 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:10:30.190907 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:10:30.190925 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:10:30.190943 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:10:30.190962 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:10:30.190980 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 13 21:10:30.190997 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 13 21:10:30.191020 kernel: Remapping and enabling EFI services.
Jan 13 21:10:30.191038 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:10:30.191055 kernel: Detected PIPT I-cache on CPU1
Jan 13 21:10:30.191073 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 13 21:10:30.191091 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 13 21:10:30.191109 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 13 21:10:30.191127 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:10:30.191145 kernel: SMP: Total of 2 processors activated.
Jan 13 21:10:30.191162 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 21:10:30.191185 kernel: CPU features: detected: 32-bit EL1 Support
Jan 13 21:10:30.191203 kernel: CPU features: detected: CRC32 instructions
Jan 13 21:10:30.191221 kernel: CPU: All CPU(s) started at EL1
Jan 13 21:10:30.193456 kernel: alternatives: applying system-wide alternatives
Jan 13 21:10:30.193495 kernel: devtmpfs: initialized
Jan 13 21:10:30.193515 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:10:30.193534 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:10:30.193553 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:10:30.193572 kernel: SMBIOS 3.0.0 present.
Jan 13 21:10:30.193591 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 13 21:10:30.193616 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:10:30.193635 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 21:10:30.193654 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 21:10:30.193673 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 21:10:30.193692 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:10:30.193711 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1
Jan 13 21:10:30.193730 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:10:30.193754 kernel: cpuidle: using governor menu
Jan 13 21:10:30.193773 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 21:10:30.193792 kernel: ASID allocator initialised with 65536 entries
Jan 13 21:10:30.193811 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:10:30.193830 kernel: Serial: AMBA PL011 UART driver
Jan 13 21:10:30.193849 kernel: Modules: 17520 pages in range for non-PLT usage
Jan 13 21:10:30.193869 kernel: Modules: 509040 pages in range for PLT usage
Jan 13 21:10:30.193890 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:10:30.193910 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:10:30.193935 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 21:10:30.193956 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 21:10:30.193974 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:10:30.193993 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:10:30.194012 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 21:10:30.194031 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 21:10:30.194050 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:10:30.194069 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:10:30.194088 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:10:30.194111 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:10:30.194131 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:10:30.194149 kernel: ACPI: Interpreter enabled
Jan 13 21:10:30.194168 kernel: ACPI: Using GIC for interrupt routing
Jan 13 21:10:30.194186 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 21:10:30.194205 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 13 21:10:30.194564 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:10:30.194777 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 21:10:30.194981 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 21:10:30.195181 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 13 21:10:30.197828 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 13 21:10:30.197873 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 13 21:10:30.197893 kernel: acpiphp: Slot [1] registered
Jan 13 21:10:30.197912 kernel: acpiphp: Slot [2] registered
Jan 13 21:10:30.197931 kernel: acpiphp: Slot [3] registered
Jan 13 21:10:30.197950 kernel: acpiphp: Slot [4] registered
Jan 13 21:10:30.197982 kernel: acpiphp: Slot [5] registered
Jan 13 21:10:30.198001 kernel: acpiphp: Slot [6] registered
Jan 13 21:10:30.198019 kernel: acpiphp: Slot [7] registered
Jan 13 21:10:30.198038 kernel: acpiphp: Slot [8] registered
Jan 13 21:10:30.198056 kernel: acpiphp: Slot [9] registered
Jan 13 21:10:30.198075 kernel: acpiphp: Slot [10] registered
Jan 13 21:10:30.198093 kernel: acpiphp: Slot [11] registered
Jan 13 21:10:30.198112 kernel: acpiphp: Slot [12] registered
Jan 13 21:10:30.198130 kernel: acpiphp: Slot [13] registered
Jan 13 21:10:30.198149 kernel: acpiphp: Slot [14] registered
Jan 13 21:10:30.198172 kernel: acpiphp: Slot [15] registered
Jan 13 21:10:30.198191 kernel: acpiphp: Slot [16] registered
Jan 13 21:10:30.198209 kernel: acpiphp: Slot [17] registered
Jan 13 21:10:30.198227 kernel: acpiphp: Slot [18] registered
Jan 13 21:10:30.198330 kernel: acpiphp: Slot [19] registered
Jan 13 21:10:30.198352 kernel: acpiphp: Slot [20] registered
Jan 13 21:10:30.198371 kernel: acpiphp: Slot [21] registered
Jan 13 21:10:30.198390 kernel: acpiphp: Slot [22] registered
Jan 13 21:10:30.198408 kernel: acpiphp: Slot [23] registered
Jan 13 21:10:30.198433 kernel: acpiphp: Slot [24] registered
Jan 13 21:10:30.198453 kernel: acpiphp: Slot [25] registered
Jan 13 21:10:30.198471 kernel: acpiphp: Slot [26] registered
Jan 13 21:10:30.198489 kernel: acpiphp: Slot [27] registered
Jan 13 21:10:30.198508 kernel: acpiphp: Slot [28] registered
Jan 13 21:10:30.198527 kernel: acpiphp: Slot [29] registered
Jan 13 21:10:30.198545 kernel: acpiphp: Slot [30] registered
Jan 13 21:10:30.198564 kernel: acpiphp: Slot [31] registered
Jan 13 21:10:30.198583 kernel: PCI host bridge to bus 0000:00
Jan 13 21:10:30.198814 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 13 21:10:30.199008 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 21:10:30.199188 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 13 21:10:30.201536 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 13 21:10:30.201813 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 13 21:10:30.202043 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 13 21:10:30.202287 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 13 21:10:30.202527 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 21:10:30.202740 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 13 21:10:30.202945 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 21:10:30.203170 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 21:10:30.204022 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 13 21:10:30.204234 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 13 21:10:30.204506 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 13 21:10:30.204712 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 21:10:30.204918 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 13 21:10:30.205123 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 13 21:10:30.205438 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 13 21:10:30.205650 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 13 21:10:30.205862 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 13 21:10:30.206059 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 13 21:10:30.208347 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 21:10:30.208602 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 13 21:10:30.208628 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 21:10:30.208649 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 21:10:30.208668 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 21:10:30.208687 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 21:10:30.208706 kernel: iommu: Default domain type: Translated
Jan 13 21:10:30.208725 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 21:10:30.208754 kernel: efivars: Registered efivars operations
Jan 13 21:10:30.208773 kernel: vgaarb: loaded
Jan 13 21:10:30.208792 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 21:10:30.208810 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:10:30.208829 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:10:30.208847 kernel: pnp: PnP ACPI init
Jan 13 21:10:30.209066 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 13 21:10:30.209095 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 21:10:30.209119 kernel: NET: Registered PF_INET protocol family
Jan 13 21:10:30.209139 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:10:30.209158 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:10:30.209177 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:10:30.209196 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:10:30.209215 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:10:30.209234 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:10:30.209913 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:10:30.209935 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:10:30.209961 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:10:30.209980 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:10:30.209998 kernel: kvm [1]: HYP mode not available
Jan 13 21:10:30.210017 kernel: Initialise system trusted keyrings
Jan 13 21:10:30.210036 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:10:30.210055 kernel: Key type asymmetric registered
Jan 13 21:10:30.210073 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:10:30.210091 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 21:10:30.210110 kernel: io scheduler mq-deadline registered
Jan 13 21:10:30.210135 kernel: io scheduler kyber registered
Jan 13 21:10:30.210154 kernel: io scheduler bfq registered
Jan 13 21:10:30.210407 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 13 21:10:30.210437 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 21:10:30.210456 kernel: ACPI: button: Power Button [PWRB]
Jan 13 21:10:30.210475 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 13 21:10:30.210494 kernel: ACPI: button: Sleep Button [SLPB]
Jan 13 21:10:30.210513 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:10:30.210539 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 21:10:30.210745 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 13 21:10:30.210772 kernel: printk: console [ttyS0] disabled
Jan 13 21:10:30.210792 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 13 21:10:30.210812 kernel: printk: console [ttyS0] enabled
Jan 13 21:10:30.210830 kernel: printk: bootconsole [uart0] disabled
Jan 13 21:10:30.210849 kernel: thunder_xcv, ver 1.0
Jan 13 21:10:30.210867 kernel: thunder_bgx, ver 1.0
Jan 13 21:10:30.210886 kernel: nicpf, ver 1.0
Jan 13 21:10:30.210909 kernel: nicvf, ver 1.0
Jan 13 21:10:30.211121 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 21:10:30.211450 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T21:10:29 UTC (1736802629)
Jan 13 21:10:30.211479 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 21:10:30.211498 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 13 21:10:30.211517 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 21:10:30.211536 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 21:10:30.211554 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:10:30.211581 kernel: Segment Routing with IPv6
Jan 13 21:10:30.211600 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:10:30.211619 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:10:30.211638 kernel: Key type dns_resolver registered
Jan 13 21:10:30.211656 kernel: registered taskstats version 1
Jan 13 21:10:30.211675 kernel: Loading compiled-in X.509 certificates
Jan 13 21:10:30.211693 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638'
Jan 13 21:10:30.211712 kernel: Key type .fscrypt registered
Jan 13 21:10:30.211731 kernel: Key type fscrypt-provisioning registered
Jan 13 21:10:30.211754 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:10:30.211772 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:10:30.211791 kernel: ima: No architecture policies found
Jan 13 21:10:30.211809 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 21:10:30.211828 kernel: clk: Disabling unused clocks
Jan 13 21:10:30.211846 kernel: Freeing unused kernel memory: 39360K
Jan 13 21:10:30.211865 kernel: Run /init as init process
Jan 13 21:10:30.211883 kernel: with arguments:
Jan 13 21:10:30.211901 kernel: /init
Jan 13 21:10:30.211919 kernel: with environment:
Jan 13 21:10:30.211943 kernel: HOME=/
Jan 13 21:10:30.211961 kernel: TERM=linux
Jan 13 21:10:30.211980 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:10:30.212002 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:10:30.212026 systemd[1]: Detected virtualization amazon.
Jan 13 21:10:30.212046 systemd[1]: Detected architecture arm64.
Jan 13 21:10:30.212066 systemd[1]: Running in initrd.
Jan 13 21:10:30.212091 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:10:30.212111 systemd[1]: Hostname set to <localhost>.
Jan 13 21:10:30.212132 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:10:30.212152 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:10:30.212173 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:10:30.212193 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:10:30.212215 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:10:30.212236 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:10:30.212284 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:10:30.212307 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:10:30.212332 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:10:30.212353 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:10:30.212374 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:10:30.212394 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:10:30.212415 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:10:30.212441 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:10:30.212461 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:10:30.212482 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:10:30.212502 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:10:30.212523 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:10:30.212544 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:10:30.212564 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:10:30.212585 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:10:30.212605 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:10:30.212631 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:10:30.212651 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:10:30.212672 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:10:30.212692 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:10:30.212713 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:10:30.212733 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:10:30.212754 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:10:30.212774 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:10:30.212800 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:10:30.212821 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:10:30.212842 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:10:30.212899 systemd-journald[250]: Collecting audit messages is disabled.
Jan 13 21:10:30.212948 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:10:30.212972 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:10:30.212992 systemd-journald[250]: Journal started
Jan 13 21:10:30.213033 systemd-journald[250]: Runtime Journal (/run/log/journal/ec249c27c844ed1fb741acb06f83ca06) is 8.0M, max 75.3M, 67.3M free.
Jan 13 21:10:30.191325 systemd-modules-load[251]: Inserted module 'overlay'
Jan 13 21:10:30.232923 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:10:30.232991 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:10:30.233020 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:10:30.235426 kernel: Bridge firewalling registered
Jan 13 21:10:30.235542 systemd-modules-load[251]: Inserted module 'br_netfilter'
Jan 13 21:10:30.240333 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:10:30.243214 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:10:30.264743 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:10:30.272574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:10:30.279459 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:10:30.281426 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:10:30.319002 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:10:30.341381 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:10:30.346970 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:10:30.352531 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:10:30.373637 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:10:30.380573 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:10:30.398387 dracut-cmdline[286]: dracut-dracut-053
Jan 13 21:10:30.406311 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:10:30.472646 systemd-resolved[288]: Positive Trust Anchors:
Jan 13 21:10:30.472688 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:10:30.472751 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:10:30.563266 kernel: SCSI subsystem initialized
Jan 13 21:10:30.570282 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:10:30.583289 kernel: iscsi: registered transport (tcp)
Jan 13 21:10:30.605506 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:10:30.605595 kernel: QLogic iSCSI HBA Driver
Jan 13 21:10:30.689288 kernel: random: crng init done
Jan 13 21:10:30.689579 systemd-resolved[288]: Defaulting to hostname 'linux'.
Jan 13 21:10:30.692971 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:10:30.697074 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:10:30.719004 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:10:30.730013 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:10:30.773293 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:10:30.773383 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:10:30.776292 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:10:30.845288 kernel: raid6: neonx8 gen() 6738 MB/s
Jan 13 21:10:30.862282 kernel: raid6: neonx4 gen() 6553 MB/s
Jan 13 21:10:30.879275 kernel: raid6: neonx2 gen() 5459 MB/s
Jan 13 21:10:30.896275 kernel: raid6: neonx1 gen() 3953 MB/s
Jan 13 21:10:30.913279 kernel: raid6: int64x8 gen() 3827 MB/s
Jan 13 21:10:30.930274 kernel: raid6: int64x4 gen() 3716 MB/s
Jan 13 21:10:30.947275 kernel: raid6: int64x2 gen() 3610 MB/s
Jan 13 21:10:30.965029 kernel: raid6: int64x1 gen() 2770 MB/s
Jan 13 21:10:30.965072 kernel: raid6: using algorithm neonx8 gen() 6738 MB/s
Jan 13 21:10:30.983014 kernel: raid6: .... xor() 4839 MB/s, rmw enabled
Jan 13 21:10:30.983064 kernel: raid6: using neon recovery algorithm
Jan 13 21:10:30.991613 kernel: xor: measuring software checksum speed
Jan 13 21:10:30.991679 kernel: 8regs : 10986 MB/sec
Jan 13 21:10:30.992704 kernel: 32regs : 11948 MB/sec
Jan 13 21:10:30.993877 kernel: arm64_neon : 9520 MB/sec
Jan 13 21:10:30.993910 kernel: xor: using function: 32regs (11948 MB/sec)
Jan 13 21:10:31.079295 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:10:31.098669 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:10:31.108592 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:10:31.151733 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jan 13 21:10:31.160530 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:10:31.175503 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:10:31.214635 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Jan 13 21:10:31.270702 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:10:31.280736 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:10:31.405958 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:10:31.418571 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:10:31.469983 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:10:31.475218 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:10:31.501510 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:10:31.504406 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:10:31.530630 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:10:31.563498 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:10:31.617099 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 21:10:31.617265 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 13 21:10:31.651040 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 21:10:31.651424 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 21:10:31.651716 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 13 21:10:31.651751 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 21:10:31.652045 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 21:10:31.652285 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:f2:b1:49:40:c9
Jan 13 21:10:31.630992 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:10:31.631223 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:10:31.635584 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:10:31.637695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:10:31.637968 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:10:31.640314 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:10:31.650764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:10:31.674564 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:10:31.674627 kernel: GPT:9289727 != 16777215
Jan 13 21:10:31.674653 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:10:31.675399 kernel: GPT:9289727 != 16777215
Jan 13 21:10:31.676342 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:10:31.677259 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:10:31.680901 (udev-worker)[528]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:10:31.708532 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:10:31.718734 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:10:31.772768 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (515)
Jan 13 21:10:31.781364 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:10:31.817775 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (523)
Jan 13 21:10:31.895088 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 21:10:31.916980 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 21:10:31.917177 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 21:10:31.962685 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 21:10:31.978550 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 21:10:31.992577 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:10:32.004201 disk-uuid[660]: Primary Header is updated.
Jan 13 21:10:32.004201 disk-uuid[660]: Secondary Entries is updated.
Jan 13 21:10:32.004201 disk-uuid[660]: Secondary Header is updated.
Jan 13 21:10:32.016292 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:10:32.025287 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:10:33.030297 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:10:33.033343 disk-uuid[661]: The operation has completed successfully.
Jan 13 21:10:33.226341 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:10:33.226574 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:10:33.281581 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:10:33.289470 sh[919]: Success
Jan 13 21:10:33.316321 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 21:10:33.478745 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:10:33.491505 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:10:33.493845 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:10:33.531119 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234
Jan 13 21:10:33.531184 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:10:33.533000 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:10:33.534359 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:10:33.534418 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:10:33.559278 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 21:10:33.562483 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:10:33.566444 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:10:33.574603 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:10:33.584760 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:10:33.622716 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:10:33.622865 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:10:33.624706 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:10:33.631346 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:10:33.652791 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:10:33.657689 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:10:33.670105 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:10:33.696495 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:10:33.798454 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:10:33.808590 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:10:33.881590 systemd-networkd[1112]: lo: Link UP
Jan 13 21:10:33.881625 systemd-networkd[1112]: lo: Gained carrier
Jan 13 21:10:33.885597 systemd-networkd[1112]: Enumeration completed
Jan 13 21:10:33.886454 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:10:33.886461 systemd-networkd[1112]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:10:33.888202 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:10:33.892031 systemd-networkd[1112]: eth0: Link UP
Jan 13 21:10:33.892039 systemd-networkd[1112]: eth0: Gained carrier
Jan 13 21:10:33.892059 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:10:33.924604 systemd-networkd[1112]: eth0: DHCPv4 address 172.31.22.69/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 21:10:33.947056 systemd[1]: Reached target network.target - Network.
Jan 13 21:10:33.948023 ignition[1036]: Ignition 2.19.0
Jan 13 21:10:33.953329 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:10:33.948037 ignition[1036]: Stage: fetch-offline
Jan 13 21:10:33.948649 ignition[1036]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:10:33.948672 ignition[1036]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:10:33.949644 ignition[1036]: Ignition finished successfully
Jan 13 21:10:33.979131 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:10:34.010955 ignition[1121]: Ignition 2.19.0
Jan 13 21:10:34.011856 ignition[1121]: Stage: fetch
Jan 13 21:10:34.012707 ignition[1121]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:10:34.012732 ignition[1121]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:10:34.012881 ignition[1121]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:10:34.025809 ignition[1121]: PUT result: OK
Jan 13 21:10:34.031785 ignition[1121]: parsed url from cmdline: ""
Jan 13 21:10:34.031932 ignition[1121]: no config URL provided
Jan 13 21:10:34.032126 ignition[1121]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:10:34.032153 ignition[1121]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:10:34.032212 ignition[1121]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:10:34.035956 ignition[1121]: PUT result: OK
Jan 13 21:10:34.036031 ignition[1121]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 21:10:34.043389 ignition[1121]: GET result: OK
Jan 13 21:10:34.043542 ignition[1121]: parsing config with SHA512: 325d387ce2d3a0bff6465afa2e21d94d1788c316275ed543c5539c27785a708b4ceb4d78667c51416198e77912c0c5bb5065d3ad376c259406c4363e557cf16a
Jan 13 21:10:34.052580 unknown[1121]: fetched base config from "system"
Jan 13 21:10:34.052611 unknown[1121]: fetched base config from "system"
Jan 13 21:10:34.052626 unknown[1121]: fetched user config from "aws"
Jan 13 21:10:34.055992 ignition[1121]: fetch: fetch complete
Jan 13 21:10:34.056005 ignition[1121]: fetch: fetch passed
Jan 13 21:10:34.061865 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:10:34.056109 ignition[1121]: Ignition finished successfully
Jan 13 21:10:34.074917 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:10:34.100864 ignition[1129]: Ignition 2.19.0
Jan 13 21:10:34.100897 ignition[1129]: Stage: kargs
Jan 13 21:10:34.101870 ignition[1129]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:10:34.101895 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:10:34.102054 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:10:34.103862 ignition[1129]: PUT result: OK
Jan 13 21:10:34.113395 ignition[1129]: kargs: kargs passed
Jan 13 21:10:34.113685 ignition[1129]: Ignition finished successfully
Jan 13 21:10:34.119051 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:10:34.129554 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:10:34.157754 ignition[1135]: Ignition 2.19.0
Jan 13 21:10:34.157783 ignition[1135]: Stage: disks
Jan 13 21:10:34.158633 ignition[1135]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:10:34.158659 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:10:34.158806 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:10:34.160420 ignition[1135]: PUT result: OK
Jan 13 21:10:34.170306 ignition[1135]: disks: disks passed
Jan 13 21:10:34.170586 ignition[1135]: Ignition finished successfully
Jan 13 21:10:34.176323 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:10:34.178942 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:10:34.182420 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:10:34.186600 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:10:34.190487 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:10:34.193951 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:10:34.217880 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:10:34.264597 systemd-fsck[1143]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:10:34.270297 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:10:34.280462 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:10:34.378289 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none.
Jan 13 21:10:34.379813 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:10:34.383388 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:10:34.405443 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:10:34.411512 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:10:34.415531 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:10:34.415700 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:10:34.415750 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:10:34.435310 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1162)
Jan 13 21:10:34.438950 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:10:34.438996 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:10:34.439023 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:10:34.447927 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:10:34.455287 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:10:34.459878 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:10:34.465095 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:10:34.585228 initrd-setup-root[1186]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:10:34.595548 initrd-setup-root[1193]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:10:34.605054 initrd-setup-root[1200]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:10:34.615325 initrd-setup-root[1207]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:10:34.771677 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:10:34.791962 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:10:34.796872 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:10:34.813856 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:10:34.816022 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:10:34.867528 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:10:34.872222 ignition[1274]: INFO : Ignition 2.19.0
Jan 13 21:10:34.872222 ignition[1274]: INFO : Stage: mount
Jan 13 21:10:34.875693 ignition[1274]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:10:34.875693 ignition[1274]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:10:34.875693 ignition[1274]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:10:34.882662 ignition[1274]: INFO : PUT result: OK
Jan 13 21:10:34.886736 ignition[1274]: INFO : mount: mount passed
Jan 13 21:10:34.888727 ignition[1274]: INFO : Ignition finished successfully
Jan 13 21:10:34.892723 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:10:34.904424 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:10:34.941648 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:10:34.962409 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1286)
Jan 13 21:10:34.962472 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:10:34.965567 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:10:34.965602 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:10:34.971277 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:10:34.975033 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:10:35.011174 ignition[1303]: INFO : Ignition 2.19.0
Jan 13 21:10:35.011174 ignition[1303]: INFO : Stage: files
Jan 13 21:10:35.015303 ignition[1303]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:10:35.015303 ignition[1303]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:10:35.015303 ignition[1303]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:10:35.021454 ignition[1303]: INFO : PUT result: OK
Jan 13 21:10:35.026082 ignition[1303]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:10:35.029995 ignition[1303]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:10:35.029995 ignition[1303]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:10:35.037792 ignition[1303]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:10:35.040920 ignition[1303]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:10:35.043764 unknown[1303]: wrote ssh authorized keys file for user: core
Jan 13 21:10:35.047606 ignition[1303]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:10:35.051176 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 21:10:35.051176 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 21:10:35.075702 systemd-networkd[1112]: eth0: Gained IPv6LL
Jan 13 21:10:35.171651 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:10:35.337347 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 21:10:35.337347 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:10:35.344494 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 13 21:10:35.881918 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 21:10:36.121362 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:10:36.121362 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:10:36.121362 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:10:36.121362 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:10:36.121362 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:10:36.138592 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:10:36.138592 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:10:36.138592 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:10:36.138592 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:10:36.138592 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:10:36.138592 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:10:36.138592 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 21:10:36.138592 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 21:10:36.138592 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 21:10:36.138592 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 13 21:10:36.590357 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 21:10:37.357854 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 21:10:37.357854 ignition[1303]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 21:10:37.369626 ignition[1303]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:10:37.369626 ignition[1303]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:10:37.369626 ignition[1303]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 21:10:37.369626 ignition[1303]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:10:37.369626 ignition[1303]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:10:37.369626 ignition[1303]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:10:37.369626 ignition[1303]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:10:37.369626 ignition[1303]: INFO : files: files passed
Jan 13 21:10:37.369626 ignition[1303]: INFO : Ignition finished successfully
Jan 13 21:10:37.365311 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:10:37.387111 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:10:37.419631 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:10:37.422715 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:10:37.422903 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:10:37.448108 initrd-setup-root-after-ignition[1331]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:10:37.448108 initrd-setup-root-after-ignition[1331]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:10:37.455765 initrd-setup-root-after-ignition[1335]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:10:37.459120 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:10:37.463130 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:10:37.479467 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:10:37.539345 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:10:37.539770 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:10:37.546435 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:10:37.548670 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:10:37.552369 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:10:37.573819 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:10:37.600175 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:10:37.611701 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:10:37.644111 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:10:37.645203 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:10:37.647644 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:10:37.648007 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:10:37.648361 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:10:37.649737 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:10:37.650055 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:10:37.650541 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:10:37.651378 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:10:37.651960 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:10:37.652569 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:10:37.653508 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:10:37.654414 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:10:37.654993 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:10:37.655859 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:10:37.656376 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:10:37.656666 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:10:37.657854 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:10:37.658547 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:10:37.659020 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:10:37.674384 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:10:37.678834 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:10:37.679081 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:10:37.688551 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:10:37.688833 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:10:37.694265 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:10:37.694503 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:10:37.728809 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:10:37.755612 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:10:37.759724 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:10:37.760180 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:10:37.768119 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:10:37.769071 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:10:37.788842 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:10:37.793650 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:10:37.815287 ignition[1355]: INFO : Ignition 2.19.0
Jan 13 21:10:37.815287 ignition[1355]: INFO : Stage: umount
Jan 13 21:10:37.822347 ignition[1355]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:10:37.822347 ignition[1355]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:10:37.822347 ignition[1355]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:10:37.822347 ignition[1355]: INFO : PUT result: OK
Jan 13 21:10:37.819855 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:10:37.834937 ignition[1355]: INFO : umount: umount passed
Jan 13 21:10:37.834937 ignition[1355]: INFO : Ignition finished successfully
Jan 13 21:10:37.841419 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:10:37.841835 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:10:37.847606 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:10:37.847916 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:10:37.852028 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:10:37.852122 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:10:37.858526 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:10:37.858624 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:10:37.860637 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 21:10:37.860736 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 21:10:37.862691 systemd[1]: Stopped target network.target - Network.
Jan 13 21:10:37.865060 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:10:37.865179 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:10:37.879016 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:10:37.880641 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:10:37.882287 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:10:37.884527 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:10:37.886233 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:10:37.895818 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:10:37.895908 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:10:37.897979 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:10:37.898054 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:10:37.901103 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:10:37.901208 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:10:37.903837 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:10:37.903924 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:10:37.906824 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:10:37.906940 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:10:37.909369 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:10:37.912085 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:10:37.932290 systemd-networkd[1112]: eth0: DHCPv6 lease lost
Jan 13 21:10:37.936999 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:10:37.938599 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:10:37.942940 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:10:37.945738 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:10:37.952122 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:10:37.952283 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:10:37.973708 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:10:37.978021 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:10:37.978139 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:10:37.982173 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:10:37.982364 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:10:37.992212 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:10:37.992342 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:10:37.994459 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:10:37.994544 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:10:37.997526 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:10:38.027971 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:10:38.028267 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:10:38.035553 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:10:38.035756 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:10:38.039567 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:10:38.039648 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:10:38.041937 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:10:38.044472 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:10:38.051952 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:10:38.052100 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:10:38.055917 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:10:38.056022 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:10:38.076677 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:10:38.081596 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:10:38.081720 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:10:38.088000 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 21:10:38.088109 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:10:38.093523 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:10:38.093624 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:10:38.098420 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:10:38.098526 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:10:38.099391 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:10:38.100090 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:10:38.109105 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:10:38.109435 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:10:38.118029 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:10:38.142182 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:10:38.160195 systemd[1]: Switching root.
Jan 13 21:10:38.197820 systemd-journald[250]: Journal stopped
Jan 13 21:10:40.122659 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:10:40.122785 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:10:40.122839 kernel: SELinux: policy capability open_perms=1
Jan 13 21:10:40.122879 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:10:40.122910 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:10:40.122949 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:10:40.122981 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:10:40.123015 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:10:40.123046 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:10:40.123077 kernel: audit: type=1403 audit(1736802638.565:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:10:40.123118 systemd[1]: Successfully loaded SELinux policy in 50.317ms.
Jan 13 21:10:40.123165 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.350ms.
Jan 13 21:10:40.123202 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:10:40.123236 systemd[1]: Detected virtualization amazon.
Jan 13 21:10:40.124381 systemd[1]: Detected architecture arm64.
Jan 13 21:10:40.124415 systemd[1]: Detected first boot.
Jan 13 21:10:40.124459 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:10:40.124494 zram_generator::config[1397]: No configuration found.
Jan 13 21:10:40.124548 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:10:40.124582 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:10:40.124616 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:10:40.124647 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:10:40.124681 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:10:40.124713 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:10:40.124750 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:10:40.124785 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:10:40.124819 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:10:40.124853 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:10:40.124887 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:10:40.124921 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:10:40.124954 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:10:40.124985 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:10:40.125019 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:10:40.125055 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:10:40.125090 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:10:40.125124 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:10:40.125154 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:10:40.125187 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:10:40.125221 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:10:40.125292 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:10:40.125352 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:10:40.125392 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:10:40.125426 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:10:40.125460 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:10:40.125490 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:10:40.125522 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:10:40.125552 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:10:40.125585 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:10:40.125617 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:10:40.125649 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:10:40.125685 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:10:40.125717 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:10:40.125752 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:10:40.125783 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:10:40.125812 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:10:40.125844 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:10:40.125874 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:10:40.125905 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:10:40.125936 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:10:40.125971 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:10:40.126005 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:10:40.126039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:10:40.126069 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:10:40.126102 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:10:40.126137 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:10:40.126167 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:10:40.126219 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:10:40.126956 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:10:40.126998 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:10:40.127031 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:10:40.127064 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:10:40.127094 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:10:40.127126 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:10:40.127161 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:10:40.127230 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:10:40.127327 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:10:40.127367 kernel: fuse: init (API version 7.39)
Jan 13 21:10:40.127398 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:10:40.127431 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:10:40.129379 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:10:40.129421 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:10:40.129452 systemd[1]: Stopped verity-setup.service.
Jan 13 21:10:40.129481 kernel: loop: module loaded
Jan 13 21:10:40.129511 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:10:40.129544 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:10:40.129581 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:10:40.129612 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:10:40.129641 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:10:40.129674 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:10:40.129708 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:10:40.129739 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:10:40.129770 kernel: ACPI: bus type drm_connector registered
Jan 13 21:10:40.129800 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:10:40.129833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:10:40.129862 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:10:40.129892 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:10:40.129923 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:10:40.129953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:10:40.130034 systemd-journald[1481]: Collecting audit messages is disabled.
Jan 13 21:10:40.130089 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:10:40.130119 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:10:40.130153 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:10:40.130190 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:10:40.130219 systemd-journald[1481]: Journal started
Jan 13 21:10:40.130338 systemd-journald[1481]: Runtime Journal (/run/log/journal/ec249c27c844ed1fb741acb06f83ca06) is 8.0M, max 75.3M, 67.3M free.
Jan 13 21:10:39.555566 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:10:39.582686 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 13 21:10:39.583503 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:10:40.135192 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:10:40.142322 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:10:40.144298 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:10:40.148114 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:10:40.152334 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:10:40.186465 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:10:40.200474 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:10:40.206457 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:10:40.208639 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:10:40.208706 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:10:40.216499 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:10:40.232515 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:10:40.249113 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:10:40.251837 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:10:40.263540 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:10:40.271319 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:10:40.274433 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:10:40.280596 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:10:40.282784 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:10:40.287561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:10:40.306776 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:10:40.315202 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:10:40.323713 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:10:40.329592 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:10:40.332776 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:10:40.335999 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:10:40.391442 systemd-journald[1481]: Time spent on flushing to /var/log/journal/ec249c27c844ed1fb741acb06f83ca06 is 208.448ms for 911 entries.
Jan 13 21:10:40.391442 systemd-journald[1481]: System Journal (/var/log/journal/ec249c27c844ed1fb741acb06f83ca06) is 8.0M, max 195.6M, 187.6M free.
Jan 13 21:10:40.611729 systemd-journald[1481]: Received client request to flush runtime journal.
Jan 13 21:10:40.612203 kernel: loop0: detected capacity change from 0 to 52536
Jan 13 21:10:40.612296 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:10:40.417980 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:10:40.420747 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:10:40.433711 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:10:40.500705 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:10:40.579543 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:10:40.593870 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:10:40.607294 systemd-tmpfiles[1526]: ACLs are not supported, ignoring.
Jan 13 21:10:40.607495 systemd-tmpfiles[1526]: ACLs are not supported, ignoring.
Jan 13 21:10:40.619328 kernel: loop1: detected capacity change from 0 to 114328
Jan 13 21:10:40.620316 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:10:40.633890 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:10:40.640725 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:10:40.653233 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:10:40.659352 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:10:40.693326 kernel: loop2: detected capacity change from 0 to 194512
Jan 13 21:10:40.690581 udevadm[1540]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 21:10:40.777413 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:10:40.793365 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:10:40.868956 systemd-tmpfiles[1550]: ACLs are not supported, ignoring.
Jan 13 21:10:40.870109 systemd-tmpfiles[1550]: ACLs are not supported, ignoring.
Jan 13 21:10:40.883509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:10:40.973312 kernel: loop3: detected capacity change from 0 to 114432
Jan 13 21:10:41.039305 kernel: loop4: detected capacity change from 0 to 52536
Jan 13 21:10:41.067306 kernel: loop5: detected capacity change from 0 to 114328
Jan 13 21:10:41.097454 kernel: loop6: detected capacity change from 0 to 194512
Jan 13 21:10:41.135517 kernel: loop7: detected capacity change from 0 to 114432
Jan 13 21:10:41.155488 (sd-merge)[1556]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 13 21:10:41.159225 (sd-merge)[1556]: Merged extensions into '/usr'.
Jan 13 21:10:41.174561 systemd[1]: Reloading requested from client PID 1525 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:10:41.174594 systemd[1]: Reloading...
Jan 13 21:10:41.299476 ldconfig[1520]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:10:41.368275 zram_generator::config[1580]: No configuration found.
Jan 13 21:10:41.682357 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:10:41.793921 systemd[1]: Reloading finished in 617 ms.
Jan 13 21:10:41.834333 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:10:41.837056 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:10:41.840376 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:10:41.862516 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:10:41.866384 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:10:41.872625 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:10:41.892475 systemd[1]: Reloading requested from client PID 1636 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:10:41.892497 systemd[1]: Reloading...
Jan 13 21:10:41.931966 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:10:41.932786 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
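[Editor's note: the (sd-merge) lines above show systemd-sysext overlaying the four extension images onto /usr (the loop0-loop7 capacity changes are those images being attached). For a merge to be accepted, each image must carry an extension-release file compatible with the host's os-release. The following is a simplified sketch of that match rule, not systemd's actual implementation: real matching also honors ARCHITECTURE= and other fields this sketch ignores.]

```python
def parse_env_file(path):
    """Parse KEY=VALUE lines as found in os-release / extension-release files."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, val = line.partition("=")
                values[key] = val.strip('"')
    return values

def sysext_compatible(host_os_release, extension_release):
    """Simplified systemd-sysext compatibility check: ID must agree (or be
    "_any"), and SYSEXT_LEVEL (falling back to VERSION_ID) must match."""
    host = parse_env_file(host_os_release)
    ext = parse_env_file(extension_release)
    if ext.get("ID") not in ("_any", host.get("ID")):
        return False
    if "SYSEXT_LEVEL" in ext:
        return ext["SYSEXT_LEVEL"] == host.get("SYSEXT_LEVEL", host.get("VERSION_ID"))
    return ext.get("VERSION_ID") == host.get("VERSION_ID")

# e.g. sysext_compatible("/etc/os-release",
#     "/usr/lib/extension-release.d/extension-release.kubernetes")
```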
Jan 13 21:10:41.938130 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:10:41.940852 systemd-tmpfiles[1637]: ACLs are not supported, ignoring.
Jan 13 21:10:41.941027 systemd-tmpfiles[1637]: ACLs are not supported, ignoring.
Jan 13 21:10:41.956316 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:10:41.956349 systemd-tmpfiles[1637]: Skipping /boot
Jan 13 21:10:41.984890 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:10:41.984920 systemd-tmpfiles[1637]: Skipping /boot
Jan 13 21:10:42.024537 systemd-udevd[1638]: Using default interface naming scheme 'v255'.
Jan 13 21:10:42.113418 zram_generator::config[1666]: No configuration found.
Jan 13 21:10:42.295278 (udev-worker)[1675]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:10:42.428286 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1679)
Jan 13 21:10:42.533121 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:10:42.703501 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 21:10:42.704034 systemd[1]: Reloading finished in 810 ms.
Jan 13 21:10:42.744262 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:10:42.749319 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:10:42.879664 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 21:10:42.886323 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:10:42.900711 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:10:42.909597 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:10:42.912756 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:10:42.916631 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:10:42.925744 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:10:42.933599 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:10:42.939956 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:10:42.949608 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:10:42.952663 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:10:42.957765 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:10:42.967718 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:10:42.970753 lvm[1834]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:10:42.977719 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:10:42.985048 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:10:42.987157 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:10:42.994571 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:10:43.003553 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:10:43.008299 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:10:43.048625 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:10:43.059793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:10:43.060667 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:10:43.083021 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:10:43.085890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:10:43.090081 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:10:43.096418 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:10:43.112729 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:10:43.126006 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:10:43.129390 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:10:43.133406 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:10:43.135387 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:10:43.143498 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:10:43.156486 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:10:43.161118 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:10:43.174624 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:10:43.184013 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:10:43.198591 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:10:43.228090 lvm[1868]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:10:43.236695 augenrules[1873]: No rules
Jan 13 21:10:43.242424 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:10:43.277912 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:10:43.310391 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:10:43.313360 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:10:43.334189 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:10:43.343540 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:10:43.355769 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:10:43.432506 systemd-networkd[1846]: lo: Link UP
Jan 13 21:10:43.433144 systemd-networkd[1846]: lo: Gained carrier
Jan 13 21:10:43.436158 systemd-networkd[1846]: Enumeration completed
Jan 13 21:10:43.436449 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:10:43.437897 systemd-networkd[1846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:10:43.437905 systemd-networkd[1846]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:10:43.440351 systemd-networkd[1846]: eth0: Link UP
Jan 13 21:10:43.440838 systemd-networkd[1846]: eth0: Gained carrier
Jan 13 21:10:43.441007 systemd-networkd[1846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:10:43.448617 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:10:43.455365 systemd-networkd[1846]: eth0: DHCPv4 address 172.31.22.69/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 21:10:43.455844 systemd-resolved[1847]: Positive Trust Anchors:
Jan 13 21:10:43.455881 systemd-resolved[1847]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:10:43.455944 systemd-resolved[1847]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:10:43.463475 systemd-resolved[1847]: Defaulting to hostname 'linux'.
Jan 13 21:10:43.467207 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:10:43.469547 systemd[1]: Reached target network.target - Network.
Jan 13 21:10:43.471758 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:10:43.474663 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:10:43.476905 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:10:43.479263 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:10:43.481854 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:10:43.484274 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:10:43.486712 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:10:43.488948 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:10:43.489002 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:10:43.490692 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:10:43.494112 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:10:43.500059 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:10:43.512785 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
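[Editor's note: the DHCPv4 lease line above is internally consistent: a /20 prefix puts both the instance address 172.31.22.69 and the gateway 172.31.16.1 in the 172.31.16.0/20 network, which sits inside the RFC 1918 block 172.16.0.0/12; that same block is why 16.172.in-addr.arpa through 31.172.in-addr.arpa appear among resolved's negative trust anchors. A quick check with the standard library:]

```python
import ipaddress

# Values taken from the systemd-networkd lease line above.
iface = ipaddress.ip_interface("172.31.22.69/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                                        # 172.31.16.0/20
print(gateway in iface.network)                             # True: gateway is on-link
print(iface.ip in ipaddress.ip_network("172.16.0.0/12"))    # True: RFC 1918 space
```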
Jan 13 21:10:43.515977 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:10:43.518580 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:10:43.520675 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:10:43.523168 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:10:43.523222 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:10:43.535563 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:10:43.541099 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 21:10:43.547736 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:10:43.555570 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:10:43.564709 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:10:43.566764 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:10:43.580734 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:10:43.599582 systemd[1]: Started ntpd.service - Network Time Service.
Jan 13 21:10:43.605775 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 21:10:43.619820 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 13 21:10:43.624578 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:10:43.630483 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:10:43.647531 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:10:43.651731 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:10:43.652724 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:10:43.666172 jq[1898]: false
Jan 13 21:10:43.688969 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:10:43.705202 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:10:43.713444 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:10:43.713863 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:10:43.751221 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:10:43.751662 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:10:43.774585 dbus-daemon[1897]: [system] SELinux support is enabled
Jan 13 21:10:43.784912 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:33 UTC 2025 (1): Starting
Jan 13 21:10:43.784912 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 21:10:43.775733 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:10:43.775170 ntpd[1901]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:33 UTC 2025 (1): Starting
Jan 13 21:10:43.775215 ntpd[1901]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 21:10:43.799310 jq[1912]: true
Jan 13 21:10:43.799690 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: ----------------------------------------------------
Jan 13 21:10:43.799690 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: ntp-4 is maintained by Network Time Foundation,
Jan 13 21:10:43.799690 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 21:10:43.799690 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: corporation. Support and training for ntp-4 are
Jan 13 21:10:43.799690 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: available at https://www.nwtime.org/support
Jan 13 21:10:43.799690 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: ----------------------------------------------------
Jan 13 21:10:43.775237 ntpd[1901]: ----------------------------------------------------
Jan 13 21:10:43.786786 ntpd[1901]: ntp-4 is maintained by Network Time Foundation,
Jan 13 21:10:43.786821 ntpd[1901]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 21:10:43.786845 ntpd[1901]: corporation. Support and training for ntp-4 are
Jan 13 21:10:43.786865 ntpd[1901]: available at https://www.nwtime.org/support
Jan 13 21:10:43.786883 ntpd[1901]: ----------------------------------------------------
Jan 13 21:10:43.804981 dbus-daemon[1897]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1846 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 13 21:10:43.820277 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: proto: precision = 0.096 usec (-23)
Jan 13 21:10:43.818046 ntpd[1901]: proto: precision = 0.096 usec (-23)
Jan 13 21:10:43.829797 ntpd[1901]: basedate set to 2025-01-01
Jan 13 21:10:43.831691 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: basedate set to 2025-01-01
Jan 13 21:10:43.831691 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: gps base set to 2025-01-05 (week 2348)
Jan 13 21:10:43.829847 ntpd[1901]: gps base set to 2025-01-05 (week 2348)
Jan 13 21:10:43.842261 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:10:43.847806 tar[1921]: linux-arm64/helm
Jan 13 21:10:43.842329 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:10:43.845669 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:10:43.845709 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:10:43.849851 ntpd[1901]: Listen and drop on 0 v6wildcard [::]:123
Jan 13 21:10:43.856582 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: Listen and drop on 0 v6wildcard [::]:123
Jan 13 21:10:43.856582 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 13 21:10:43.849941 ntpd[1901]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 13 21:10:43.863510 extend-filesystems[1899]: Found loop4
Jan 13 21:10:43.863510 extend-filesystems[1899]: Found loop5
Jan 13 21:10:43.863510 extend-filesystems[1899]: Found loop6
Jan 13 21:10:43.863510 extend-filesystems[1899]: Found loop7
Jan 13 21:10:43.863510 extend-filesystems[1899]: Found nvme0n1
Jan 13 21:10:43.863510 extend-filesystems[1899]: Found nvme0n1p1
Jan 13 21:10:43.863510 extend-filesystems[1899]: Found nvme0n1p2
Jan 13 21:10:43.863510 extend-filesystems[1899]: Found nvme0n1p3
Jan 13 21:10:43.863510 extend-filesystems[1899]: Found usr
Jan 13 21:10:43.863510 extend-filesystems[1899]: Found nvme0n1p4
Jan 13 21:10:43.863510 extend-filesystems[1899]: Found nvme0n1p6
Jan 13 21:10:43.863510 extend-filesystems[1899]: Found nvme0n1p7
Jan 13 21:10:43.863510 extend-filesystems[1899]: Found nvme0n1p9
Jan 13 21:10:43.863510 extend-filesystems[1899]: Checking size of /dev/nvme0n1p9
Jan 13 21:10:43.982497 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jan 13 21:10:43.861832 ntpd[1901]: Listen normally on 2 lo 127.0.0.1:123
Jan 13 21:10:43.982634 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: Listen normally on 2 lo 127.0.0.1:123
Jan 13 21:10:43.982634 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: Listen normally on 3 eth0 172.31.22.69:123
Jan 13 21:10:43.982634 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: Listen normally on 4 lo [::1]:123
Jan 13 21:10:43.982634 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: bind(21) AF_INET6 fe80::4f2:b1ff:fe49:40c9%2#123 flags 0x11 failed: Cannot assign requested address
Jan 13 21:10:43.982634 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: unable to create socket on eth0 (5) for fe80::4f2:b1ff:fe49:40c9%2#123
Jan 13 21:10:43.982634 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: failed to init interface for address fe80::4f2:b1ff:fe49:40c9%2
Jan 13 21:10:43.982634 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: Listening on routing socket on fd #21 for interface updates
Jan 13 21:10:43.982634 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 21:10:43.982634 ntpd[1901]: 13 Jan 21:10:43 ntpd[1901]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 21:10:43.983018 jq[1928]: true
Jan 13 21:10:43.911621 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 13 21:10:43.983534 extend-filesystems[1899]: Resized partition /dev/nvme0n1p9
Jan 13 21:10:43.861943 ntpd[1901]: Listen normally on 3 eth0 172.31.22.69:123
Jan 13 21:10:43.933549 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:10:43.989199 update_engine[1909]: I20250113 21:10:43.986862 1909 main.cc:92] Flatcar Update Engine starting
Jan 13 21:10:43.989609 extend-filesystems[1941]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:10:43.862010 ntpd[1901]: Listen normally on 4 lo [::1]:123
Jan 13 21:10:43.933884 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:10:43.862087 ntpd[1901]: bind(21) AF_INET6 fe80::4f2:b1ff:fe49:40c9%2#123 flags 0x11 failed: Cannot assign requested address
Jan 13 21:10:43.862126 ntpd[1901]: unable to create socket on eth0 (5) for fe80::4f2:b1ff:fe49:40c9%2#123
Jan 13 21:10:43.862153 ntpd[1901]: failed to init interface for address fe80::4f2:b1ff:fe49:40c9%2
Jan 13 21:10:43.862210 ntpd[1901]: Listening on routing socket on fd #21 for interface updates
Jan 13 21:10:43.869638 dbus-daemon[1897]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 13 21:10:43.924124 ntpd[1901]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 21:10:43.924217 ntpd[1901]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 21:10:44.007757 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:10:44.017611 update_engine[1909]: I20250113 21:10:44.009794 1909 update_check_scheduler.cc:74] Next update check in 6m13s
Jan 13 21:10:44.029485 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:10:44.031598 (ntainerd)[1943]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:10:44.056344 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jan 13 21:10:44.084680 extend-filesystems[1941]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 13 21:10:44.084680 extend-filesystems[1941]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 21:10:44.084680 extend-filesystems[1941]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jan 13 21:10:44.097351 extend-filesystems[1899]: Resized filesystem in /dev/nvme0n1p9
Jan 13 21:10:44.089660 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 21:10:44.099789 coreos-metadata[1896]: Jan 13 21:10:44.078 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 13 21:10:44.099789 coreos-metadata[1896]: Jan 13 21:10:44.088 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 13 21:10:44.099789 coreos-metadata[1896]: Jan 13 21:10:44.088 INFO Fetch successful
Jan 13 21:10:44.099789 coreos-metadata[1896]: Jan 13 21:10:44.088 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 13 21:10:44.099789 coreos-metadata[1896]: Jan 13 21:10:44.088 INFO Fetch successful
Jan 13 21:10:44.099789 coreos-metadata[1896]: Jan 13 21:10:44.088 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 13 21:10:44.099789 coreos-metadata[1896]: Jan 13 21:10:44.088 INFO Fetch successful
Jan 13 21:10:44.099789 coreos-metadata[1896]: Jan 13 21:10:44.088 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 13 21:10:44.099789 coreos-metadata[1896]: Jan 13 21:10:44.088 INFO Fetch successful
Jan 13 21:10:44.099789 coreos-metadata[1896]: Jan 13 21:10:44.088 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 13 21:10:44.099789 coreos-metadata[1896]: Jan 13 21:10:44.099 INFO Fetch failed with 404: resource not found
Jan 13 21:10:44.099789 coreos-metadata[1896]: Jan 13 21:10:44.099 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 13 21:10:44.113472 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
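[Editor's note: the EXT4-fs and resize2fs lines above are consistent with each other: with the 4 KiB block size reported by resize2fs, the online resize grew the root filesystem from 553472 blocks (about 2.1 GiB) to 1489915 blocks (about 5.7 GiB). The arithmetic, for reference:]

```python
# Block counts from the EXT4-fs / resize2fs messages above; the filesystem
# uses 4 KiB blocks ("1489915 (4k) blocks long").
BLOCK = 4096
old_blocks, new_blocks = 553472, 1489915

for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: 2.11 GiB
# after: 5.68 GiB
```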
Jan 13 21:10:44.122170 coreos-metadata[1896]: Jan 13 21:10:44.106 INFO Fetch successful
Jan 13 21:10:44.122170 coreos-metadata[1896]: Jan 13 21:10:44.106 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 13 21:10:44.122170 coreos-metadata[1896]: Jan 13 21:10:44.106 INFO Fetch successful
Jan 13 21:10:44.122170 coreos-metadata[1896]: Jan 13 21:10:44.106 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 13 21:10:44.122170 coreos-metadata[1896]: Jan 13 21:10:44.106 INFO Fetch successful
Jan 13 21:10:44.122170 coreos-metadata[1896]: Jan 13 21:10:44.106 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 13 21:10:44.122170 coreos-metadata[1896]: Jan 13 21:10:44.106 INFO Fetch successful
Jan 13 21:10:44.122170 coreos-metadata[1896]: Jan 13 21:10:44.106 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 13 21:10:44.122170 coreos-metadata[1896]: Jan 13 21:10:44.108 INFO Fetch successful
Jan 13 21:10:44.129425 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 13 21:10:44.222436 systemd-logind[1908]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 21:10:44.224381 systemd-logind[1908]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jan 13 21:10:44.224986 systemd-logind[1908]: New seat seat0.
Jan 13 21:10:44.227666 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:10:44.279295 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1693)
Jan 13 21:10:44.293142 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 21:10:44.317785 bash[1981]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:10:44.323523 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 21:10:44.334539 systemd[1]: Starting sshkeys.service...
Jan 13 21:10:44.372349 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 21:10:44.375555 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 21:10:44.410111 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 21:10:44.435894 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 21:10:44.558503 dbus-daemon[1897]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 13 21:10:44.560438 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 13 21:10:44.566711 dbus-daemon[1897]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1937 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 13 21:10:44.626126 systemd[1]: Starting polkit.service - Authorization Manager...
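[Editor's note: Ignition and the coreos-metadata agent above both use the IMDSv2 session flow against the EC2 instance metadata service: a PUT to /latest/api/token, then GETs that carry the returned token. A minimal stdlib sketch of that flow follows; the header names are the documented IMDSv2 ones, the 2021-01-03 API version matches the endpoints the agent fetches above, and this of course only works when run on an EC2 instance.]

```python
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl=21600):
    # PUT /latest/api/token, as in the "Putting http://..." log lines above.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    # Subsequent GETs present the session token in a header.
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    # The same endpoints the metadata agent fetches in the log above.
    for path in ("instance-id", "instance-type", "local-ipv4", "hostname"):
        print(path, "=", imds_get(path, token))
```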
Jan 13 21:10:44.644845 locksmithd[1949]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:10:44.674642 polkitd[2011]: Started polkitd version 121 Jan 13 21:10:44.700416 polkitd[2011]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 21:10:44.700535 polkitd[2011]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 21:10:44.709687 containerd[1943]: time="2025-01-13T21:10:44.706965107Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:10:44.712090 polkitd[2011]: Finished loading, compiling and executing 2 rules Jan 13 21:10:44.725473 dbus-daemon[1897]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 21:10:44.725953 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 21:10:44.729851 polkitd[2011]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 21:10:44.767736 coreos-metadata[1991]: Jan 13 21:10:44.767 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:10:44.772403 coreos-metadata[1991]: Jan 13 21:10:44.771 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 21:10:44.772403 coreos-metadata[1991]: Jan 13 21:10:44.772 INFO Fetch successful Jan 13 21:10:44.772403 coreos-metadata[1991]: Jan 13 21:10:44.772 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 21:10:44.774011 coreos-metadata[1991]: Jan 13 21:10:44.773 INFO Fetch successful Jan 13 21:10:44.781346 unknown[1991]: wrote ssh authorized keys file for user: core Jan 13 21:10:44.789340 ntpd[1901]: bind(24) AF_INET6 fe80::4f2:b1ff:fe49:40c9%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:10:44.790584 ntpd[1901]: 13 Jan 21:10:44 ntpd[1901]: bind(24) AF_INET6 fe80::4f2:b1ff:fe49:40c9%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:10:44.790584 ntpd[1901]: 13 Jan 21:10:44 ntpd[1901]: unable to create socket on eth0 (6) for fe80::4f2:b1ff:fe49:40c9%2#123 Jan 13 21:10:44.790584 ntpd[1901]: 13 Jan 21:10:44 ntpd[1901]: failed to init interface for address fe80::4f2:b1ff:fe49:40c9%2 Jan 13 21:10:44.789410 ntpd[1901]: unable to create socket on eth0 (6) for fe80::4f2:b1ff:fe49:40c9%2#123 Jan 13 21:10:44.789440 ntpd[1901]: failed to init interface for address fe80::4f2:b1ff:fe49:40c9%2 Jan 13 21:10:44.803824 systemd-hostnamed[1937]: Hostname set to (transient) Jan 13 21:10:44.804000 systemd-resolved[1847]: System hostname changed to 'ip-172-31-22-69'. Jan 13 21:10:44.884734 update-ssh-keys[2059]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:10:44.883318 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:10:44.895019 systemd[1]: Finished sshkeys.service. Jan 13 21:10:44.915589 containerd[1943]: time="2025-01-13T21:10:44.915277692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:10:44.922284 containerd[1943]: time="2025-01-13T21:10:44.921284712Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:10:44.922284 containerd[1943]: time="2025-01-13T21:10:44.921371364Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 13 21:10:44.922284 containerd[1943]: time="2025-01-13T21:10:44.921406800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:10:44.922284 containerd[1943]: time="2025-01-13T21:10:44.921715416Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:10:44.922284 containerd[1943]: time="2025-01-13T21:10:44.921753084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:10:44.922284 containerd[1943]: time="2025-01-13T21:10:44.921887100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:10:44.922284 containerd[1943]: time="2025-01-13T21:10:44.921916368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:10:44.924299 containerd[1943]: time="2025-01-13T21:10:44.922214808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:10:44.924299 containerd[1943]: time="2025-01-13T21:10:44.923857572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:10:44.924299 containerd[1943]: time="2025-01-13T21:10:44.923910096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:10:44.924299 containerd[1943]: time="2025-01-13T21:10:44.923936676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:10:44.924299 containerd[1943]: time="2025-01-13T21:10:44.924138192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:10:44.924797 containerd[1943]: time="2025-01-13T21:10:44.924583788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:10:44.924893 containerd[1943]: time="2025-01-13T21:10:44.924844404Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:10:44.924963 containerd[1943]: time="2025-01-13T21:10:44.924890400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:10:44.925105 containerd[1943]: time="2025-01-13T21:10:44.925066260Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:10:44.925212 containerd[1943]: time="2025-01-13T21:10:44.925175004Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:10:44.932844 containerd[1943]: time="2025-01-13T21:10:44.932767608Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:10:44.932966 containerd[1943]: time="2025-01-13T21:10:44.932873472Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 13 21:10:44.932966 containerd[1943]: time="2025-01-13T21:10:44.932911272Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:10:44.933085 containerd[1943]: time="2025-01-13T21:10:44.932961276Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:10:44.933085 containerd[1943]: time="2025-01-13T21:10:44.933003144Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:10:44.933356 containerd[1943]: time="2025-01-13T21:10:44.933313656Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:10:44.933921 containerd[1943]: time="2025-01-13T21:10:44.933869364Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:10:44.934162 containerd[1943]: time="2025-01-13T21:10:44.934118616Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:10:44.934260 containerd[1943]: time="2025-01-13T21:10:44.934165344Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:10:44.934260 containerd[1943]: time="2025-01-13T21:10:44.934198008Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:10:44.937664 containerd[1943]: time="2025-01-13T21:10:44.934228992Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:10:44.937664 containerd[1943]: time="2025-01-13T21:10:44.937204032Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:10:44.937664 containerd[1943]: time="2025-01-13T21:10:44.937316700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:10:44.937664 containerd[1943]: time="2025-01-13T21:10:44.937358376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:10:44.937664 containerd[1943]: time="2025-01-13T21:10:44.937395912Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:10:44.937664 containerd[1943]: time="2025-01-13T21:10:44.937435008Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:10:44.937664 containerd[1943]: time="2025-01-13T21:10:44.937478220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:10:44.937664 containerd[1943]: time="2025-01-13T21:10:44.937507464Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:10:44.937664 containerd[1943]: time="2025-01-13T21:10:44.937557672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.937664 containerd[1943]: time="2025-01-13T21:10:44.937591176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.937664 containerd[1943]: time="2025-01-13T21:10:44.937627092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 13 21:10:44.937664 containerd[1943]: time="2025-01-13T21:10:44.937660596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.938235 containerd[1943]: time="2025-01-13T21:10:44.937692456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.938235 containerd[1943]: time="2025-01-13T21:10:44.937724712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.938235 containerd[1943]: time="2025-01-13T21:10:44.937754880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.938235 containerd[1943]: time="2025-01-13T21:10:44.937784952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.938235 containerd[1943]: time="2025-01-13T21:10:44.937815000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.938235 containerd[1943]: time="2025-01-13T21:10:44.937860000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.938235 containerd[1943]: time="2025-01-13T21:10:44.937897632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.938235 containerd[1943]: time="2025-01-13T21:10:44.937928976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.938235 containerd[1943]: time="2025-01-13T21:10:44.937957980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.938235 containerd[1943]: time="2025-01-13T21:10:44.938004276Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:10:44.938235 containerd[1943]: time="2025-01-13T21:10:44.938057652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.938235 containerd[1943]: time="2025-01-13T21:10:44.938099004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.938235 containerd[1943]: time="2025-01-13T21:10:44.938126904Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:10:44.938796 containerd[1943]: time="2025-01-13T21:10:44.938260296Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:10:44.938796 containerd[1943]: time="2025-01-13T21:10:44.938307096Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:10:44.938796 containerd[1943]: time="2025-01-13T21:10:44.938342928Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:10:44.938796 containerd[1943]: time="2025-01-13T21:10:44.938372232Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:10:44.938796 containerd[1943]: time="2025-01-13T21:10:44.938406924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 13 21:10:44.938796 containerd[1943]: time="2025-01-13T21:10:44.938436132Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:10:44.938796 containerd[1943]: time="2025-01-13T21:10:44.938459724Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:10:44.938796 containerd[1943]: time="2025-01-13T21:10:44.938485152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 21:10:44.942022 containerd[1943]: time="2025-01-13T21:10:44.940068024Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:10:44.942022 containerd[1943]: time="2025-01-13T21:10:44.940877772Z" level=info msg="Connect containerd service" Jan 13 21:10:44.942022 containerd[1943]: time="2025-01-13T21:10:44.940985664Z" level=info msg="using legacy CRI server" Jan 13 21:10:44.942022 containerd[1943]: time="2025-01-13T21:10:44.941027964Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:10:44.942022 containerd[1943]: 
time="2025-01-13T21:10:44.941332044Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:10:44.946527 containerd[1943]: time="2025-01-13T21:10:44.945686592Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:10:44.946527 containerd[1943]: time="2025-01-13T21:10:44.945879480Z" level=info msg="Start subscribing containerd event" Jan 13 21:10:44.946527 containerd[1943]: time="2025-01-13T21:10:44.945972852Z" level=info msg="Start recovering state" Jan 13 21:10:44.946527 containerd[1943]: time="2025-01-13T21:10:44.946125468Z" level=info msg="Start event monitor" Jan 13 21:10:44.946527 containerd[1943]: time="2025-01-13T21:10:44.946151856Z" level=info msg="Start snapshots syncer" Jan 13 21:10:44.946527 containerd[1943]: time="2025-01-13T21:10:44.946174764Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:10:44.946527 containerd[1943]: time="2025-01-13T21:10:44.946209924Z" level=info msg="Start streaming server" Jan 13 21:10:44.951441 containerd[1943]: time="2025-01-13T21:10:44.949832940Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:10:44.951441 containerd[1943]: time="2025-01-13T21:10:44.950057796Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:10:44.952486 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:10:44.956425 containerd[1943]: time="2025-01-13T21:10:44.952890516Z" level=info msg="containerd successfully booted in 0.253175s" Jan 13 21:10:45.315438 systemd-networkd[1846]: eth0: Gained IPv6LL Jan 13 21:10:45.323963 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:10:45.327795 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:10:45.340373 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 21:10:45.345773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:10:45.351379 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:10:45.357875 sshd_keygen[1939]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:10:45.461116 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:10:45.480203 amazon-ssm-agent[2103]: Initializing new seelog logger Jan 13 21:10:45.480203 amazon-ssm-agent[2103]: New Seelog Logger Creation Complete Jan 13 21:10:45.480203 amazon-ssm-agent[2103]: 2025/01/13 21:10:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:45.480203 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:45.485130 amazon-ssm-agent[2103]: 2025/01/13 21:10:45 processing appconfig overrides Jan 13 21:10:45.486456 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:10:45.488495 amazon-ssm-agent[2103]: 2025/01/13 21:10:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:45.488495 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:45.488495 amazon-ssm-agent[2103]: 2025/01/13 21:10:45 processing appconfig overrides Jan 13 21:10:45.488694 amazon-ssm-agent[2103]: 2025/01/13 21:10:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 13 21:10:45.488694 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:45.488812 amazon-ssm-agent[2103]: 2025/01/13 21:10:45 processing appconfig overrides Jan 13 21:10:45.494880 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO Proxy environment variables: Jan 13 21:10:45.497790 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:10:45.504794 amazon-ssm-agent[2103]: 2025/01/13 21:10:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:45.504794 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:45.506031 amazon-ssm-agent[2103]: 2025/01/13 21:10:45 processing appconfig overrides Jan 13 21:10:45.511955 systemd[1]: Started sshd@0-172.31.22.69:22-139.178.89.65:40262.service - OpenSSH per-connection server daemon (139.178.89.65:40262). Jan 13 21:10:45.567077 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:10:45.568068 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:10:45.584826 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:10:45.593508 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO https_proxy: Jan 13 21:10:45.646327 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:10:45.661816 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:10:45.671885 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:10:45.674972 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:10:45.693152 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO http_proxy: Jan 13 21:10:45.746533 tar[1921]: linux-arm64/LICENSE Jan 13 21:10:45.747068 tar[1921]: linux-arm64/README.md Jan 13 21:10:45.773278 sshd[2126]: Accepted publickey for core from 139.178.89.65 port 40262 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:10:45.776046 sshd[2126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:10:45.782700 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:10:45.793399 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO no_proxy: Jan 13 21:10:45.814344 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:10:45.825786 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:10:45.837336 systemd-logind[1908]: New session 1 of user core. Jan 13 21:10:45.882291 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:10:45.891554 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO Checking if agent identity type OnPrem can be assumed Jan 13 21:10:45.899835 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:10:45.923508 (systemd)[2144]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:10:45.990364 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO Checking if agent identity type EC2 can be assumed Jan 13 21:10:46.089696 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO Agent will take identity from EC2 Jan 13 21:10:46.188402 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:10:46.189600 systemd[2144]: Queued start job for default target default.target. Jan 13 21:10:46.205949 systemd[2144]: Created slice app.slice - User Application Slice. 
Jan 13 21:10:46.206073 systemd[2144]: Reached target paths.target - Paths. Jan 13 21:10:46.206105 systemd[2144]: Reached target timers.target - Timers. Jan 13 21:10:46.211493 systemd[2144]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:10:46.231758 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:10:46.231758 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:10:46.231758 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 21:10:46.231758 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 21:10:46.231758 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 21:10:46.231758 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 21:10:46.231758 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO [Registrar] Starting registrar module Jan 13 21:10:46.231758 amazon-ssm-agent[2103]: 2025-01-13 21:10:45 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 21:10:46.231758 amazon-ssm-agent[2103]: 2025-01-13 21:10:46 INFO [EC2Identity] EC2 registration was successful. Jan 13 21:10:46.231758 amazon-ssm-agent[2103]: 2025-01-13 21:10:46 INFO [CredentialRefresher] credentialRefresher has started Jan 13 21:10:46.231758 amazon-ssm-agent[2103]: 2025-01-13 21:10:46 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 21:10:46.231758 amazon-ssm-agent[2103]: 2025-01-13 21:10:46 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 21:10:46.236216 systemd[2144]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:10:46.236495 systemd[2144]: Reached target sockets.target - Sockets. Jan 13 21:10:46.236531 systemd[2144]: Reached target basic.target - Basic System. Jan 13 21:10:46.236614 systemd[2144]: Reached target default.target - Main User Target. Jan 13 21:10:46.236682 systemd[2144]: Startup finished in 298ms. Jan 13 21:10:46.236916 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:10:46.249745 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:10:46.287203 amazon-ssm-agent[2103]: 2025-01-13 21:10:46 INFO [CredentialRefresher] Next credential rotation will be in 32.09163257626667 minutes Jan 13 21:10:46.411902 systemd[1]: Started sshd@1-172.31.22.69:22-139.178.89.65:40266.service - OpenSSH per-connection server daemon (139.178.89.65:40266). Jan 13 21:10:46.589218 sshd[2156]: Accepted publickey for core from 139.178.89.65 port 40266 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:10:46.592616 sshd[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:10:46.601772 systemd-logind[1908]: New session 2 of user core. Jan 13 21:10:46.609700 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:10:46.745630 sshd[2156]: pam_unix(sshd:session): session closed for user core Jan 13 21:10:46.753488 systemd[1]: sshd@1-172.31.22.69:22-139.178.89.65:40266.service: Deactivated successfully. Jan 13 21:10:46.758205 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:10:46.759833 systemd-logind[1908]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:10:46.761808 systemd-logind[1908]: Removed session 2. 
Jan 13 21:10:46.779521 systemd[1]: Started sshd@2-172.31.22.69:22-139.178.89.65:40276.service - OpenSSH per-connection server daemon (139.178.89.65:40276). Jan 13 21:10:46.966952 sshd[2163]: Accepted publickey for core from 139.178.89.65 port 40276 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:10:46.970011 sshd[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:10:46.978399 systemd-logind[1908]: New session 3 of user core. Jan 13 21:10:46.988588 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:10:47.122605 sshd[2163]: pam_unix(sshd:session): session closed for user core Jan 13 21:10:47.130319 systemd[1]: sshd@2-172.31.22.69:22-139.178.89.65:40276.service: Deactivated successfully. Jan 13 21:10:47.134524 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:10:47.136588 systemd-logind[1908]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:10:47.139365 systemd-logind[1908]: Removed session 3. Jan 13 21:10:47.261869 amazon-ssm-agent[2103]: 2025-01-13 21:10:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 21:10:47.363002 amazon-ssm-agent[2103]: 2025-01-13 21:10:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2170) started Jan 13 21:10:47.463971 amazon-ssm-agent[2103]: 2025-01-13 21:10:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 21:10:47.778100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:10:47.781410 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:10:47.784856 systemd[1]: Startup finished in 1.172s (kernel) + 8.764s (initrd) + 9.267s (userspace) = 19.204s. Jan 13 21:10:47.790679 ntpd[1901]: Listen normally on 7 eth0 [fe80::4f2:b1ff:fe49:40c9%2]:123 Jan 13 21:10:47.791664 ntpd[1901]: 13 Jan 21:10:47 ntpd[1901]: Listen normally on 7 eth0 [fe80::4f2:b1ff:fe49:40c9%2]:123 Jan 13 21:10:47.804080 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:10:48.774351 kubelet[2185]: E0113 21:10:48.774171 2185 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:10:48.779270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:10:48.779625 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:10:48.780152 systemd[1]: kubelet.service: Consumed 1.344s CPU time. Jan 13 21:10:57.161849 systemd[1]: Started sshd@3-172.31.22.69:22-139.178.89.65:45996.service - OpenSSH per-connection server daemon (139.178.89.65:45996). Jan 13 21:10:57.328654 sshd[2198]: Accepted publickey for core from 139.178.89.65 port 45996 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:10:57.331951 sshd[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:10:57.341700 systemd-logind[1908]: New session 4 of user core. Jan 13 21:10:57.349606 systemd[1]: Started session-4.scope - Session 4 of User core. 
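A thread running through the ntpd entries above: bind(2) on the scoped link-local address fe80::4f2:b1ff:fe49:40c9%2 keeps failing with "Cannot assign requested address" (EADDRNOTAVAIL) until eth0 gains IPv6 at 21:10:45, after which ntpd finally listens normally at 21:10:47. A sketch that reproduces the failure mode; binding port 123 needs root, and the interface name "eth0" mirrors the log's %2 scope:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The Zone scopes the link-local address to an interface, like
	// ntpd's fe80::...%2 target. bind(2) returns EADDRNOTAVAIL
	// ("cannot assign requested address") until the kernel has
	// actually configured that address on the interface.
	addr := &net.UDPAddr{
		IP:   net.ParseIP("fe80::4f2:b1ff:fe49:40c9"), // address from the log
		Port: 123,
		Zone: "eth0",
	}
	conn, err := net.ListenUDP("udp6", addr)
	if err != nil {
		fmt.Println("bind failed:", err) // what ntpd keeps retrying through
		return
	}
	conn.Close()
	fmt.Println("bound; interface has its IPv6 link-local address")
}
```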
Jan 13 21:10:57.478717 sshd[2198]: pam_unix(sshd:session): session closed for user core Jan 13 21:10:57.483183 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:10:57.484582 systemd[1]: sshd@3-172.31.22.69:22-139.178.89.65:45996.service: Deactivated successfully. Jan 13 21:10:57.490088 systemd-logind[1908]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:10:57.492115 systemd-logind[1908]: Removed session 4. Jan 13 21:10:57.517756 systemd[1]: Started sshd@4-172.31.22.69:22-139.178.89.65:46008.service - OpenSSH per-connection server daemon (139.178.89.65:46008). Jan 13 21:10:57.695109 sshd[2205]: Accepted publickey for core from 139.178.89.65 port 46008 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:10:57.698431 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:10:57.708431 systemd-logind[1908]: New session 5 of user core. Jan 13 21:10:57.720663 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:10:57.842540 sshd[2205]: pam_unix(sshd:session): session closed for user core Jan 13 21:10:57.849802 systemd[1]: sshd@4-172.31.22.69:22-139.178.89.65:46008.service: Deactivated successfully. Jan 13 21:10:57.854062 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:10:57.855207 systemd-logind[1908]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:10:57.856871 systemd-logind[1908]: Removed session 5. Jan 13 21:10:57.881809 systemd[1]: Started sshd@5-172.31.22.69:22-139.178.89.65:46016.service - OpenSSH per-connection server daemon (139.178.89.65:46016). Jan 13 21:10:58.064332 sshd[2212]: Accepted publickey for core from 139.178.89.65 port 46016 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:10:58.067080 sshd[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:10:58.077102 systemd-logind[1908]: New session 6 of user core. Jan 13 21:10:58.083548 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:10:58.212600 sshd[2212]: pam_unix(sshd:session): session closed for user core Jan 13 21:10:58.219849 systemd[1]: sshd@5-172.31.22.69:22-139.178.89.65:46016.service: Deactivated successfully. Jan 13 21:10:58.223196 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:10:58.226475 systemd-logind[1908]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:10:58.228615 systemd-logind[1908]: Removed session 6. Jan 13 21:10:58.262228 systemd[1]: Started sshd@6-172.31.22.69:22-139.178.89.65:46018.service - OpenSSH per-connection server daemon (139.178.89.65:46018). Jan 13 21:10:58.434537 sshd[2219]: Accepted publickey for core from 139.178.89.65 port 46018 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:10:58.437363 sshd[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:10:58.446156 systemd-logind[1908]: New session 7 of user core. Jan 13 21:10:58.454550 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:10:58.574662 sudo[2222]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:10:58.575729 sudo[2222]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:10:58.597021 sudo[2222]: pam_unix(sudo:session): session closed for user root Jan 13 21:10:58.623689 sshd[2219]: pam_unix(sshd:session): session closed for user core Jan 13 21:10:58.632832 systemd-logind[1908]: Session 7 logged out. Waiting for processes to exit. 
Jan 13 21:10:58.633858 systemd[1]: sshd@6-172.31.22.69:22-139.178.89.65:46018.service: Deactivated successfully. Jan 13 21:10:58.639216 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:10:58.656944 systemd-logind[1908]: Removed session 7. Jan 13 21:10:58.668719 systemd[1]: Started sshd@7-172.31.22.69:22-139.178.89.65:46022.service - OpenSSH per-connection server daemon (139.178.89.65:46022). Jan 13 21:10:58.806202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:10:58.814655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:10:58.839930 sshd[2227]: Accepted publickey for core from 139.178.89.65 port 46022 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:10:58.843782 sshd[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:10:58.858779 systemd-logind[1908]: New session 8 of user core. Jan 13 21:10:58.865720 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:10:59.000998 sudo[2234]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:10:59.001801 sudo[2234]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:10:59.018764 sudo[2234]: pam_unix(sudo:session): session closed for user root Jan 13 21:10:59.031373 sudo[2233]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:10:59.032657 sudo[2233]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:10:59.057075 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:10:59.086564 auditctl[2237]: No rules Jan 13 21:10:59.087946 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:10:59.088307 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:10:59.103594 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:10:59.167889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:10:59.177307 augenrules[2261]: No rules Jan 13 21:10:59.182137 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:10:59.183186 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:10:59.187149 sudo[2233]: pam_unix(sudo:session): session closed for user root Jan 13 21:10:59.216580 sshd[2227]: pam_unix(sshd:session): session closed for user core Jan 13 21:10:59.225109 systemd[1]: sshd@7-172.31.22.69:22-139.178.89.65:46022.service: Deactivated successfully. Jan 13 21:10:59.232108 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:10:59.236676 systemd-logind[1908]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:10:59.260876 systemd[1]: Started sshd@8-172.31.22.69:22-139.178.89.65:46024.service - OpenSSH per-connection server daemon (139.178.89.65:46024). Jan 13 21:10:59.263827 systemd-logind[1908]: Removed session 8. 
Jan 13 21:10:59.296331 kubelet[2258]: E0113 21:10:59.295456 2258 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:10:59.305342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:10:59.305698 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:10:59.447340 sshd[2275]: Accepted publickey for core from 139.178.89.65 port 46024 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:10:59.450752 sshd[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:10:59.458488 systemd-logind[1908]: New session 9 of user core. Jan 13 21:10:59.471526 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:10:59.577654 sudo[2279]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:10:59.578408 sudo[2279]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:11:00.023755 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:11:00.027525 (dockerd)[2296]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:11:00.403923 dockerd[2296]: time="2025-01-13T21:11:00.403737674Z" level=info msg="Starting up" Jan 13 21:11:00.544045 dockerd[2296]: time="2025-01-13T21:11:00.543645969Z" level=info msg="Loading containers: start." Jan 13 21:11:00.715393 kernel: Initializing XFRM netlink socket Jan 13 21:11:00.748165 (udev-worker)[2318]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:11:00.830519 systemd-networkd[1846]: docker0: Link UP Jan 13 21:11:00.853699 dockerd[2296]: time="2025-01-13T21:11:00.853553682Z" level=info msg="Loading containers: done." Jan 13 21:11:00.878666 dockerd[2296]: time="2025-01-13T21:11:00.878125059Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:11:00.878666 dockerd[2296]: time="2025-01-13T21:11:00.878619583Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:11:00.878953 dockerd[2296]: time="2025-01-13T21:11:00.878845790Z" level=info msg="Daemon has completed initialization" Jan 13 21:11:00.937603 dockerd[2296]: time="2025-01-13T21:11:00.936538945Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:11:00.936908 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:11:02.213076 containerd[1943]: time="2025-01-13T21:11:02.213008171Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 21:11:02.939091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3441865331.mount: Deactivated successfully. 
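dockerd above completes initialization and reports "API listen on /run/docker.sock". A minimal liveness check over that socket with the Docker Go SDK; FromEnv falls back to the default socket when DOCKER_HOST is unset:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv defaults to /var/run/docker.sock, the socket dockerd
	// reports listening on in the log above.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker API version:", ping.APIVersion)
}
```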
Jan 13 21:11:05.465753 containerd[1943]: time="2025-01-13T21:11:05.465693041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:05.467683 containerd[1943]: time="2025-01-13T21:11:05.467592663Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Jan 13 21:11:05.468605 containerd[1943]: time="2025-01-13T21:11:05.468512171Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:05.474492 containerd[1943]: time="2025-01-13T21:11:05.474439474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:05.477340 containerd[1943]: time="2025-01-13T21:11:05.476813794Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 3.263738276s" Jan 13 21:11:05.477340 containerd[1943]: time="2025-01-13T21:11:05.476887845Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 13 21:11:05.520062 containerd[1943]: time="2025-01-13T21:11:05.520003396Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 21:11:08.989673 containerd[1943]: time="2025-01-13T21:11:08.989590341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:08.991848 containerd[1943]: time="2025-01-13T21:11:08.991778035Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Jan 13 21:11:08.992778 containerd[1943]: time="2025-01-13T21:11:08.992692290Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:09.000116 containerd[1943]: time="2025-01-13T21:11:09.000012228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:09.002468 containerd[1943]: time="2025-01-13T21:11:09.002225841Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 3.482150504s" Jan 13 21:11:09.002468 containerd[1943]: time="2025-01-13T21:11:09.002312497Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Jan 13 
21:11:09.046562 containerd[1943]: time="2025-01-13T21:11:09.046389092Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 21:11:09.556022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:11:09.562604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:09.852924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:09.872755 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:11:09.978002 kubelet[2515]: E0113 21:11:09.977166 2515 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:11:09.983713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:11:09.984079 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:11:10.595627 containerd[1943]: time="2025-01-13T21:11:10.595570610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:10.598285 containerd[1943]: time="2025-01-13T21:11:10.597362202Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Jan 13 21:11:10.603374 containerd[1943]: time="2025-01-13T21:11:10.603298848Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:10.610022 containerd[1943]: time="2025-01-13T21:11:10.609951824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:10.612391 containerd[1943]: time="2025-01-13T21:11:10.612328183Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.565877742s" Jan 13 21:11:10.612935 containerd[1943]: time="2025-01-13T21:11:10.612388585Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Jan 13 21:11:10.657906 containerd[1943]: time="2025-01-13T21:11:10.657833420Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 21:11:12.154653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991799423.mount: Deactivated successfully. 
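The pull entries report exact byte counts and durations, so effective registry throughput falls straight out of the log; for example, using the kube-apiserver numbers above:

```go
package main

import "fmt"

func main() {
	// Numbers copied from the kube-apiserver pull entry above.
	const bytes = 32198050      // image size reported by containerd
	const seconds = 3.263738276 // pull duration reported by containerd
	mib := float64(bytes) / (1024 * 1024)
	fmt.Printf("%.1f MiB in %.2fs = %.1f MiB/s\n", mib, seconds, mib/seconds)
	// => 30.7 MiB in 3.26s = 9.4 MiB/s
}
```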
Jan 13 21:11:12.736117 containerd[1943]: time="2025-01-13T21:11:12.735933464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:12.737747 containerd[1943]: time="2025-01-13T21:11:12.737693499Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Jan 13 21:11:12.738966 containerd[1943]: time="2025-01-13T21:11:12.738880078Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:12.742842 containerd[1943]: time="2025-01-13T21:11:12.742669198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:12.744885 containerd[1943]: time="2025-01-13T21:11:12.744308982Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 2.086412846s" Jan 13 21:11:12.744885 containerd[1943]: time="2025-01-13T21:11:12.744367057Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 21:11:12.786580 containerd[1943]: time="2025-01-13T21:11:12.786518363Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:11:13.344197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2611672234.mount: Deactivated successfully. 
Jan 13 21:11:14.364192 containerd[1943]: time="2025-01-13T21:11:14.364043634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:14.366698 containerd[1943]: time="2025-01-13T21:11:14.366619526Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 13 21:11:14.367426 containerd[1943]: time="2025-01-13T21:11:14.367306182Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:14.373587 containerd[1943]: time="2025-01-13T21:11:14.373473017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:14.376175 containerd[1943]: time="2025-01-13T21:11:14.375958942Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.5893805s" Jan 13 21:11:14.376175 containerd[1943]: time="2025-01-13T21:11:14.376023002Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 21:11:14.417186 containerd[1943]: time="2025-01-13T21:11:14.417120215Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:11:14.829455 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 21:11:14.938573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457070444.mount: Deactivated successfully. 
Jan 13 21:11:14.945303 containerd[1943]: time="2025-01-13T21:11:14.944879149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:14.948009 containerd[1943]: time="2025-01-13T21:11:14.947950009Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 13 21:11:14.949676 containerd[1943]: time="2025-01-13T21:11:14.949585595Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:14.955859 containerd[1943]: time="2025-01-13T21:11:14.954000058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:14.956335 containerd[1943]: time="2025-01-13T21:11:14.955788292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 538.598044ms" Jan 13 21:11:14.956505 containerd[1943]: time="2025-01-13T21:11:14.956468879Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 21:11:14.999543 containerd[1943]: time="2025-01-13T21:11:14.999476161Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 21:11:15.573484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2811597247.mount: Deactivated successfully. Jan 13 21:11:19.494284 containerd[1943]: time="2025-01-13T21:11:19.492848257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:19.496683 containerd[1943]: time="2025-01-13T21:11:19.496617527Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jan 13 21:11:19.497540 containerd[1943]: time="2025-01-13T21:11:19.497479188Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:19.503378 containerd[1943]: time="2025-01-13T21:11:19.503319786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:19.505899 containerd[1943]: time="2025-01-13T21:11:19.505835672Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 4.50629761s" Jan 13 21:11:19.506042 containerd[1943]: time="2025-01-13T21:11:19.505894886Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 13 21:11:20.042909 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
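The PullImage entries above go through containerd's CRI plugin, which keeps Kubernetes images in the k8s.io namespace. A sketch of the same pull issued directly with the containerd Go client, using the pause:3.9 reference from the log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin stores Kubernetes images under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```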
Jan 13 21:11:20.054756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:20.444749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:20.458828 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:11:20.583152 kubelet[2697]: E0113 21:11:20.583056 2697 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:11:20.587143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:11:20.587735 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:11:27.868896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:27.876778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:27.919678 systemd[1]: Reloading requested from client PID 2731 ('systemctl') (unit session-9.scope)... Jan 13 21:11:27.919717 systemd[1]: Reloading... Jan 13 21:11:28.197326 zram_generator::config[2774]: No configuration found. Jan 13 21:11:28.426521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:11:28.601901 systemd[1]: Reloading finished in 681 ms. Jan 13 21:11:28.694540 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:11:28.694767 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:11:28.696514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:28.703834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:28.980509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:28.996909 (kubelet)[2835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:11:29.084873 kubelet[2835]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:11:29.084873 kubelet[2835]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:11:29.084873 kubelet[2835]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:11:29.085474 kubelet[2835]: I0113 21:11:29.084969 2835 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:11:29.596236 update_engine[1909]: I20250113 21:11:29.595283 1909 update_attempter.cc:509] Updating boot flags... 
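Every kubelet start above dies in run.go because /var/lib/kubelet/config.yaml does not exist yet, and the deprecation warnings here note that flags like --container-runtime-endpoint belong in that same file; in a kubeadm flow the file appears once kubeadm init/join writes a KubeletConfiguration there. A sketch of the failing check, shaped like the log's error:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// Path from the kubelet run.go errors above. Until a
	// KubeletConfiguration is written here (kubeadm does this during
	// init/join), every restart fails the same way and systemd
	// reschedules the unit, incrementing the restart counter.
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.ReadFile(path); err != nil {
		if errors.Is(err, fs.ErrNotExist) {
			fmt.Printf("failed to load Kubelet config file %s: %v\n", path, err)
			os.Exit(1)
		}
		panic(err)
	}
	fmt.Println("kubelet config present")
}
```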
Jan 13 21:11:29.693304 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (2856) Jan 13 21:11:29.727201 kubelet[2835]: I0113 21:11:29.727133 2835 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:11:29.727201 kubelet[2835]: I0113 21:11:29.727196 2835 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:11:29.727604 kubelet[2835]: I0113 21:11:29.727560 2835 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:11:29.780197 kubelet[2835]: E0113 21:11:29.780155 2835 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.22.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:29.782131 kubelet[2835]: I0113 21:11:29.781890 2835 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:11:29.801008 kubelet[2835]: I0113 21:11:29.800967 2835 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:11:29.801756 kubelet[2835]: I0113 21:11:29.801724 2835 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:11:29.802714 kubelet[2835]: I0113 21:11:29.802303 2835 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:11:29.802714 kubelet[2835]: I0113 21:11:29.802364 2835 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:11:29.802714 kubelet[2835]: I0113 21:11:29.802390 2835 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:11:29.804975 kubelet[2835]: I0113 21:11:29.804909 2835 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:11:29.811619 kubelet[2835]: I0113 21:11:29.811143 2835 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:11:29.811619 kubelet[2835]: I0113 21:11:29.811211 2835 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Jan 13 21:11:29.811619 kubelet[2835]: I0113 21:11:29.811297 2835 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:11:29.811619 kubelet[2835]: I0113 21:11:29.811337 2835 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:11:29.812002 kubelet[2835]: W0113 21:11:29.811922 2835 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.22.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-69&limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:29.812073 kubelet[2835]: E0113 21:11:29.812015 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-69&limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:29.816863 kubelet[2835]: W0113 21:11:29.816384 2835 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.22.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:29.816863 kubelet[2835]: E0113 21:11:29.816467 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:29.818221 kubelet[2835]: I0113 21:11:29.817268 2835 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:11:29.818221 kubelet[2835]: I0113 21:11:29.817898 2835 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:11:29.819932 kubelet[2835]: W0113 21:11:29.819868 2835 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 21:11:29.823320 kubelet[2835]: I0113 21:11:29.822382 2835 server.go:1256] "Started kubelet" Jan 13 21:11:29.847393 kubelet[2835]: I0113 21:11:29.847174 2835 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:11:29.856581 kubelet[2835]: E0113 21:11:29.856525 2835 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.69:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.69:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-69.181a5cd82bcd15e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-69,UID:ip-172-31-22-69,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-69,},FirstTimestamp:2025-01-13 21:11:29.822299618 +0000 UTC m=+0.816514834,LastTimestamp:2025-01-13 21:11:29.822299618 +0000 UTC m=+0.816514834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-69,}" Jan 13 21:11:29.864360 kubelet[2835]: I0113 21:11:29.862712 2835 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:11:29.866681 kubelet[2835]: I0113 21:11:29.866632 2835 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:11:29.868901 kubelet[2835]: I0113 21:11:29.868861 2835 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:11:29.869108 kubelet[2835]: I0113 21:11:29.869063 2835 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:11:29.869602 kubelet[2835]: I0113 21:11:29.869566 2835 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:11:29.870308 kubelet[2835]: I0113 21:11:29.870212 2835 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:11:29.873965 kubelet[2835]: I0113 21:11:29.873902 2835 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:11:29.874574 kubelet[2835]: I0113 21:11:29.874545 2835 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:11:29.874842 kubelet[2835]: I0113 21:11:29.874811 2835 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:11:29.875666 kubelet[2835]: E0113 21:11:29.875608 2835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-69?timeout=10s\": dial tcp 172.31.22.69:6443: connect: connection refused" interval="200ms" Jan 13 21:11:29.886299 kubelet[2835]: W0113 21:11:29.883813 2835 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.22.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:29.886550 kubelet[2835]: E0113 21:11:29.886513 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:29.890641 kubelet[2835]: I0113 
21:11:29.890572 2835 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:11:29.905271 kubelet[2835]: E0113 21:11:29.898936 2835 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:11:30.005679 kubelet[2835]: I0113 21:11:30.005628 2835 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-69" Jan 13 21:11:30.008805 kubelet[2835]: E0113 21:11:30.008755 2835 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.69:6443/api/v1/nodes\": dial tcp 172.31.22.69:6443: connect: connection refused" node="ip-172-31-22-69" Jan 13 21:11:30.016874 kubelet[2835]: I0113 21:11:30.016802 2835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:11:30.018381 kubelet[2835]: I0113 21:11:30.018208 2835 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:11:30.018767 kubelet[2835]: I0113 21:11:30.018699 2835 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:11:30.019021 kubelet[2835]: I0113 21:11:30.019001 2835 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:11:30.023553 kubelet[2835]: I0113 21:11:30.023493 2835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:11:30.023553 kubelet[2835]: I0113 21:11:30.023542 2835 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:11:30.024400 kubelet[2835]: I0113 21:11:30.023576 2835 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:11:30.024400 kubelet[2835]: E0113 21:11:30.023667 2835 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:11:30.025319 kubelet[2835]: I0113 21:11:30.024840 2835 policy_none.go:49] "None policy: Start" Jan 13 21:11:30.033895 kubelet[2835]: W0113 21:11:30.033830 2835 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.22.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:30.033895 kubelet[2835]: E0113 21:11:30.033902 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:30.034508 kubelet[2835]: I0113 21:11:30.034070 2835 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:11:30.034508 kubelet[2835]: I0113 21:11:30.034137 2835 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:11:30.048135 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:11:30.065008 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:11:30.071652 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
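
systemd has now created kubepods.slice and its burstable/besteffort children, the cgroup tiers the kubelet uses for pod QoS classes. A small sketch that lists what sits under that slice; the cgroup v2 mount point /sys/fs/cgroup is an assumption about this host's layout:

package main

import (
	"fmt"
	"os"
)

func main() {
	root := "/sys/fs/cgroup/kubepods.slice" // assumed unified-hierarchy path
	entries, err := os.ReadDir(root)
	if err != nil {
		fmt.Println("cannot read", root, ":", err)
		return
	}
	for _, e := range entries {
		if e.IsDir() {
			fmt.Println(e.Name()) // e.g. kubepods-burstable.slice, kubepods-besteffort.slice
		}
	}
}
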
Jan 13 21:11:30.077071 kubelet[2835]: E0113 21:11:30.077014 2835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-69?timeout=10s\": dial tcp 172.31.22.69:6443: connect: connection refused" interval="400ms" Jan 13 21:11:30.081310 kubelet[2835]: I0113 21:11:30.081262 2835 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:11:30.082792 kubelet[2835]: I0113 21:11:30.081714 2835 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:11:30.089591 kubelet[2835]: E0113 21:11:30.089511 2835 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-69\" not found" Jan 13 21:11:30.124175 kubelet[2835]: I0113 21:11:30.123968 2835 topology_manager.go:215] "Topology Admit Handler" podUID="df4adad0af788819c5663c18ad45ec79" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-69" Jan 13 21:11:30.127547 kubelet[2835]: I0113 21:11:30.127133 2835 topology_manager.go:215] "Topology Admit Handler" podUID="2017de283be89adf90bfc38703308e5f" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-69" Jan 13 21:11:30.133538 kubelet[2835]: I0113 21:11:30.133497 2835 topology_manager.go:215] "Topology Admit Handler" podUID="bd055251af529f8bb1e44858b5f1401a" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-69" Jan 13 21:11:30.146139 systemd[1]: Created slice kubepods-burstable-poddf4adad0af788819c5663c18ad45ec79.slice - libcontainer container kubepods-burstable-poddf4adad0af788819c5663c18ad45ec79.slice. Jan 13 21:11:30.178425 systemd[1]: Created slice kubepods-burstable-pod2017de283be89adf90bfc38703308e5f.slice - libcontainer container kubepods-burstable-pod2017de283be89adf90bfc38703308e5f.slice. 
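
Note the per-pod slice names just created: each embeds the pod UID, and (as the kube-proxy slice later in this log shows) dashes in a UID are mapped to underscores, since systemd reserves "-" for slice hierarchy. A sketch of that naming rule as inferred from these entries; the real logic lives in the kubelet's cgroup manager, not here:

package main

import (
	"fmt"
	"strings"
)

// sliceFor reproduces the naming pattern visible in the journal:
// kubepods-<qos>-pod<uid-with-underscores>.slice
func sliceFor(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UIDs taken from the log entries above and below.
	fmt.Println(sliceFor("burstable", "df4adad0af788819c5663c18ad45ec79"))
	fmt.Println(sliceFor("besteffort", "8a4929fa-3779-4d94-b48f-c479234fd9bb"))
}
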
Jan 13 21:11:30.179275 kubelet[2835]: I0113 21:11:30.178563 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df4adad0af788819c5663c18ad45ec79-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-69\" (UID: \"df4adad0af788819c5663c18ad45ec79\") " pod="kube-system/kube-apiserver-ip-172-31-22-69" Jan 13 21:11:30.179275 kubelet[2835]: I0113 21:11:30.178624 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2017de283be89adf90bfc38703308e5f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-69\" (UID: \"2017de283be89adf90bfc38703308e5f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-69" Jan 13 21:11:30.179275 kubelet[2835]: I0113 21:11:30.178670 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2017de283be89adf90bfc38703308e5f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-69\" (UID: \"2017de283be89adf90bfc38703308e5f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-69" Jan 13 21:11:30.179275 kubelet[2835]: I0113 21:11:30.178720 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2017de283be89adf90bfc38703308e5f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-69\" (UID: \"2017de283be89adf90bfc38703308e5f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-69" Jan 13 21:11:30.179275 kubelet[2835]: I0113 21:11:30.178764 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df4adad0af788819c5663c18ad45ec79-ca-certs\") pod \"kube-apiserver-ip-172-31-22-69\" (UID: \"df4adad0af788819c5663c18ad45ec79\") " pod="kube-system/kube-apiserver-ip-172-31-22-69" Jan 13 21:11:30.179600 kubelet[2835]: I0113 21:11:30.178808 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df4adad0af788819c5663c18ad45ec79-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-69\" (UID: \"df4adad0af788819c5663c18ad45ec79\") " pod="kube-system/kube-apiserver-ip-172-31-22-69" Jan 13 21:11:30.179600 kubelet[2835]: I0113 21:11:30.178851 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2017de283be89adf90bfc38703308e5f-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-69\" (UID: \"2017de283be89adf90bfc38703308e5f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-69" Jan 13 21:11:30.179600 kubelet[2835]: I0113 21:11:30.178897 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2017de283be89adf90bfc38703308e5f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-69\" (UID: \"2017de283be89adf90bfc38703308e5f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-69" Jan 13 21:11:30.179600 kubelet[2835]: I0113 21:11:30.178943 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/bd055251af529f8bb1e44858b5f1401a-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-69\" (UID: \"bd055251af529f8bb1e44858b5f1401a\") " pod="kube-system/kube-scheduler-ip-172-31-22-69" Jan 13 21:11:30.190568 systemd[1]: Created slice kubepods-burstable-podbd055251af529f8bb1e44858b5f1401a.slice - libcontainer container kubepods-burstable-podbd055251af529f8bb1e44858b5f1401a.slice. Jan 13 21:11:30.211969 kubelet[2835]: I0113 21:11:30.211903 2835 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-69" Jan 13 21:11:30.212679 kubelet[2835]: E0113 21:11:30.212642 2835 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.69:6443/api/v1/nodes\": dial tcp 172.31.22.69:6443: connect: connection refused" node="ip-172-31-22-69" Jan 13 21:11:30.472528 containerd[1943]: time="2025-01-13T21:11:30.471995338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-69,Uid:df4adad0af788819c5663c18ad45ec79,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:30.478492 kubelet[2835]: E0113 21:11:30.478448 2835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-69?timeout=10s\": dial tcp 172.31.22.69:6443: connect: connection refused" interval="800ms" Jan 13 21:11:30.485821 containerd[1943]: time="2025-01-13T21:11:30.485538303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-69,Uid:2017de283be89adf90bfc38703308e5f,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:30.496209 containerd[1943]: time="2025-01-13T21:11:30.496150162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-69,Uid:bd055251af529f8bb1e44858b5f1401a,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:30.615463 kubelet[2835]: I0113 21:11:30.615358 2835 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-69" Jan 13 21:11:30.615958 kubelet[2835]: E0113 21:11:30.615901 2835 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.69:6443/api/v1/nodes\": dial tcp 172.31.22.69:6443: connect: connection refused" node="ip-172-31-22-69" Jan 13 21:11:30.998358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248113058.mount: Deactivated successfully. 
Jan 13 21:11:31.008957 containerd[1943]: time="2025-01-13T21:11:31.008811133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:31.010403 containerd[1943]: time="2025-01-13T21:11:31.010275528Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 13 21:11:31.011839 containerd[1943]: time="2025-01-13T21:11:31.011740931Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:31.014556 containerd[1943]: time="2025-01-13T21:11:31.014460091Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:31.016277 containerd[1943]: time="2025-01-13T21:11:31.016171779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:11:31.016938 containerd[1943]: time="2025-01-13T21:11:31.016889248Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:31.017849 containerd[1943]: time="2025-01-13T21:11:31.017685578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:11:31.023127 containerd[1943]: time="2025-01-13T21:11:31.023025462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:31.027331 containerd[1943]: time="2025-01-13T21:11:31.026952766Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 541.292363ms" Jan 13 21:11:31.031546 containerd[1943]: time="2025-01-13T21:11:31.031423937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 559.311262ms" Jan 13 21:11:31.035702 containerd[1943]: time="2025-01-13T21:11:31.035349273Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.066937ms" Jan 13 21:11:31.055154 kubelet[2835]: W0113 21:11:31.055035 2835 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.22.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:31.055154 kubelet[2835]: E0113 
21:11:31.055129 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:31.060412 kubelet[2835]: W0113 21:11:31.057929 2835 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.22.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:31.060412 kubelet[2835]: E0113 21:11:31.060337 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:31.106402 kubelet[2835]: W0113 21:11:31.106350 2835 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.22.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:31.107369 kubelet[2835]: E0113 21:11:31.107319 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:31.227633 containerd[1943]: time="2025-01-13T21:11:31.226958833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:31.227633 containerd[1943]: time="2025-01-13T21:11:31.227069514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:31.227633 containerd[1943]: time="2025-01-13T21:11:31.227106983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:31.227633 containerd[1943]: time="2025-01-13T21:11:31.227311421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:31.236273 containerd[1943]: time="2025-01-13T21:11:31.234520486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:31.236273 containerd[1943]: time="2025-01-13T21:11:31.234642022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:31.236273 containerd[1943]: time="2025-01-13T21:11:31.234668492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:31.236273 containerd[1943]: time="2025-01-13T21:11:31.234876300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:31.236273 containerd[1943]: time="2025-01-13T21:11:31.234201374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:31.236273 containerd[1943]: time="2025-01-13T21:11:31.235688258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:31.236273 containerd[1943]: time="2025-01-13T21:11:31.236072762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:31.238833 containerd[1943]: time="2025-01-13T21:11:31.238666944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:31.281744 kubelet[2835]: E0113 21:11:31.280229 2835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-69?timeout=10s\": dial tcp 172.31.22.69:6443: connect: connection refused" interval="1.6s" Jan 13 21:11:31.282124 systemd[1]: Started cri-containerd-b8a602a3352add36a520df1d4717a8b6f206719ba5eab5aed7217b615d511ebe.scope - libcontainer container b8a602a3352add36a520df1d4717a8b6f206719ba5eab5aed7217b615d511ebe. Jan 13 21:11:31.299831 systemd[1]: Started cri-containerd-ee9ed06fa75479089ad54adece6b5c8e7c29e71d72af88e63ed92ca6ff550108.scope - libcontainer container ee9ed06fa75479089ad54adece6b5c8e7c29e71d72af88e63ed92ca6ff550108. Jan 13 21:11:31.317682 systemd[1]: Started cri-containerd-f84e3fbe5dfb1524622a1522e30adae653e0ee37cbcc5a61f48ecc172a7ecc7e.scope - libcontainer container f84e3fbe5dfb1524622a1522e30adae653e0ee37cbcc5a61f48ecc172a7ecc7e. Jan 13 21:11:31.383742 kubelet[2835]: W0113 21:11:31.382635 2835 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.22.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-69&limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:31.383991 kubelet[2835]: E0113 21:11:31.383761 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-69&limit=500&resourceVersion=0": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:31.421986 kubelet[2835]: I0113 21:11:31.420882 2835 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-69" Jan 13 21:11:31.424294 kubelet[2835]: E0113 21:11:31.423162 2835 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.69:6443/api/v1/nodes\": dial tcp 172.31.22.69:6443: connect: connection refused" node="ip-172-31-22-69" Jan 13 21:11:31.427688 containerd[1943]: time="2025-01-13T21:11:31.427629824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-69,Uid:2017de283be89adf90bfc38703308e5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8a602a3352add36a520df1d4717a8b6f206719ba5eab5aed7217b615d511ebe\"" Jan 13 21:11:31.453415 containerd[1943]: time="2025-01-13T21:11:31.453354543Z" level=info msg="CreateContainer within sandbox \"b8a602a3352add36a520df1d4717a8b6f206719ba5eab5aed7217b615d511ebe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:11:31.468031 containerd[1943]: time="2025-01-13T21:11:31.467971643Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-69,Uid:df4adad0af788819c5663c18ad45ec79,Namespace:kube-system,Attempt:0,} returns sandbox id \"f84e3fbe5dfb1524622a1522e30adae653e0ee37cbcc5a61f48ecc172a7ecc7e\"" Jan 13 21:11:31.477881 containerd[1943]: time="2025-01-13T21:11:31.477807299Z" level=info msg="CreateContainer within sandbox \"f84e3fbe5dfb1524622a1522e30adae653e0ee37cbcc5a61f48ecc172a7ecc7e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:11:31.482999 containerd[1943]: time="2025-01-13T21:11:31.482144593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-69,Uid:bd055251af529f8bb1e44858b5f1401a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee9ed06fa75479089ad54adece6b5c8e7c29e71d72af88e63ed92ca6ff550108\"" Jan 13 21:11:31.488961 containerd[1943]: time="2025-01-13T21:11:31.488869809Z" level=info msg="CreateContainer within sandbox \"ee9ed06fa75479089ad54adece6b5c8e7c29e71d72af88e63ed92ca6ff550108\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:11:31.494473 containerd[1943]: time="2025-01-13T21:11:31.494391727Z" level=info msg="CreateContainer within sandbox \"b8a602a3352add36a520df1d4717a8b6f206719ba5eab5aed7217b615d511ebe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b7363ac32885807a5303e41c14e8bad5f06dc07fb7edb5bf4cbbeb891ab79142\"" Jan 13 21:11:31.496305 containerd[1943]: time="2025-01-13T21:11:31.495574143Z" level=info msg="StartContainer for \"b7363ac32885807a5303e41c14e8bad5f06dc07fb7edb5bf4cbbeb891ab79142\"" Jan 13 21:11:31.505778 containerd[1943]: time="2025-01-13T21:11:31.505706243Z" level=info msg="CreateContainer within sandbox \"f84e3fbe5dfb1524622a1522e30adae653e0ee37cbcc5a61f48ecc172a7ecc7e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0eed08746dfe13a67b3888b213e69462e04b36eb0d65978141988a86e28afa86\"" Jan 13 21:11:31.506739 containerd[1943]: time="2025-01-13T21:11:31.506653733Z" level=info msg="StartContainer for \"0eed08746dfe13a67b3888b213e69462e04b36eb0d65978141988a86e28afa86\"" Jan 13 21:11:31.521157 containerd[1943]: time="2025-01-13T21:11:31.520976320Z" level=info msg="CreateContainer within sandbox \"ee9ed06fa75479089ad54adece6b5c8e7c29e71d72af88e63ed92ca6ff550108\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fa919c108277839929b4cf248d8fc5768878d1f936f3a287b5eaa420a7086673\"" Jan 13 21:11:31.522085 containerd[1943]: time="2025-01-13T21:11:31.522035474Z" level=info msg="StartContainer for \"fa919c108277839929b4cf248d8fc5768878d1f936f3a287b5eaa420a7086673\"" Jan 13 21:11:31.573566 systemd[1]: Started cri-containerd-b7363ac32885807a5303e41c14e8bad5f06dc07fb7edb5bf4cbbeb891ab79142.scope - libcontainer container b7363ac32885807a5303e41c14e8bad5f06dc07fb7edb5bf4cbbeb891ab79142. Jan 13 21:11:31.611577 systemd[1]: Started cri-containerd-fa919c108277839929b4cf248d8fc5768878d1f936f3a287b5eaa420a7086673.scope - libcontainer container fa919c108277839929b4cf248d8fc5768878d1f936f3a287b5eaa420a7086673. Jan 13 21:11:31.624944 systemd[1]: Started cri-containerd-0eed08746dfe13a67b3888b213e69462e04b36eb0d65978141988a86e28afa86.scope - libcontainer container 0eed08746dfe13a67b3888b213e69462e04b36eb0d65978141988a86e28afa86. 
Jan 13 21:11:31.704002 containerd[1943]: time="2025-01-13T21:11:31.703911999Z" level=info msg="StartContainer for \"b7363ac32885807a5303e41c14e8bad5f06dc07fb7edb5bf4cbbeb891ab79142\" returns successfully" Jan 13 21:11:31.778067 containerd[1943]: time="2025-01-13T21:11:31.777817348Z" level=info msg="StartContainer for \"0eed08746dfe13a67b3888b213e69462e04b36eb0d65978141988a86e28afa86\" returns successfully" Jan 13 21:11:31.792609 containerd[1943]: time="2025-01-13T21:11:31.791702574Z" level=info msg="StartContainer for \"fa919c108277839929b4cf248d8fc5768878d1f936f3a287b5eaa420a7086673\" returns successfully" Jan 13 21:11:31.951626 kubelet[2835]: E0113 21:11:31.951570 2835 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.22.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.22.69:6443: connect: connection refused Jan 13 21:11:33.026133 kubelet[2835]: I0113 21:11:33.026084 2835 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-69" Jan 13 21:11:38.820732 kubelet[2835]: I0113 21:11:38.820669 2835 apiserver.go:52] "Watching apiserver" Jan 13 21:11:38.859038 kubelet[2835]: E0113 21:11:38.858974 2835 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-69\" not found" node="ip-172-31-22-69" Jan 13 21:11:38.874056 kubelet[2835]: I0113 21:11:38.873908 2835 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:11:38.907922 kubelet[2835]: I0113 21:11:38.907815 2835 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-22-69" Jan 13 21:11:38.961286 kubelet[2835]: E0113 21:11:38.959279 2835 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-22-69.181a5cd82bcd15e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-69,UID:ip-172-31-22-69,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-69,},FirstTimestamp:2025-01-13 21:11:29.822299618 +0000 UTC m=+0.816514834,LastTimestamp:2025-01-13 21:11:29.822299618 +0000 UTC m=+0.816514834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-69,}" Jan 13 21:11:39.032879 kubelet[2835]: E0113 21:11:39.032580 2835 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-22-69.181a5cd8305df925 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-69,UID:ip-172-31-22-69,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-22-69,},FirstTimestamp:2025-01-13 21:11:29.898903845 +0000 UTC m=+0.893118893,LastTimestamp:2025-01-13 21:11:29.898903845 +0000 UTC m=+0.893118893,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-69,}" Jan 13 21:11:41.690664 systemd[1]: Reloading requested from client PID 3211 ('systemctl') (unit session-9.scope)... 
Jan 13 21:11:41.690699 systemd[1]: Reloading... Jan 13 21:11:41.915357 zram_generator::config[3257]: No configuration found. Jan 13 21:11:42.139892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:11:42.342202 systemd[1]: Reloading finished in 650 ms. Jan 13 21:11:42.432941 kubelet[2835]: I0113 21:11:42.432313 2835 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:11:42.432922 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:42.443436 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:11:42.443896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:42.443993 systemd[1]: kubelet.service: Consumed 1.593s CPU time, 114.8M memory peak, 0B memory swap peak. Jan 13 21:11:42.456049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:42.812566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:42.828069 (kubelet)[3311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:11:42.957292 kubelet[3311]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:11:42.957292 kubelet[3311]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:11:42.957292 kubelet[3311]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:11:42.957292 kubelet[3311]: I0113 21:11:42.956581 3311 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:11:42.966834 kubelet[3311]: I0113 21:11:42.966791 3311 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:11:42.967336 kubelet[3311]: I0113 21:11:42.967142 3311 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:11:42.968330 kubelet[3311]: I0113 21:11:42.967722 3311 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:11:42.970985 kubelet[3311]: I0113 21:11:42.970945 3311 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:11:42.974890 kubelet[3311]: I0113 21:11:42.974800 3311 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:11:42.997274 kubelet[3311]: I0113 21:11:42.996084 3311 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:11:42.996405 sudo[3324]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 21:11:42.997875 kubelet[3311]: I0113 21:11:42.997677 3311 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:11:42.999626 kubelet[3311]: I0113 21:11:42.997952 3311 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:11:42.999626 kubelet[3311]: I0113 21:11:42.998009 3311 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:11:42.999626 kubelet[3311]: I0113 21:11:42.998032 3311 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:11:42.999626 kubelet[3311]: I0113 21:11:42.998081 3311 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:11:42.999626 kubelet[3311]: I0113 21:11:42.998300 3311 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:11:42.999626 kubelet[3311]: I0113 21:11:42.998336 3311 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:11:42.999626 kubelet[3311]: I0113 21:11:42.998389 3311 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:11:42.999357 sudo[3324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 21:11:43.001768 kubelet[3311]: I0113 21:11:42.998418 3311 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:11:43.005701 kubelet[3311]: I0113 21:11:43.004499 3311 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:11:43.005701 kubelet[3311]: I0113 21:11:43.004865 3311 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:11:43.008071 kubelet[3311]: I0113 21:11:43.007957 3311 server.go:1256] "Started kubelet" Jan 13 21:11:43.014685 kubelet[3311]: I0113 21:11:43.014620 3311 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:11:43.028649 kubelet[3311]: I0113 21:11:43.028396 3311 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 
21:11:43.030714 kubelet[3311]: I0113 21:11:43.030466 3311 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:11:43.034951 kubelet[3311]: I0113 21:11:43.034751 3311 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:11:43.037797 kubelet[3311]: I0113 21:11:43.037457 3311 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:11:43.038010 kubelet[3311]: I0113 21:11:43.037844 3311 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:11:43.060992 kubelet[3311]: I0113 21:11:43.060747 3311 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:11:43.061358 kubelet[3311]: I0113 21:11:43.061165 3311 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:11:43.062850 kubelet[3311]: I0113 21:11:43.062714 3311 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:11:43.072661 kubelet[3311]: I0113 21:11:43.072431 3311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:11:43.074338 kubelet[3311]: I0113 21:11:43.074183 3311 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:11:43.078271 kubelet[3311]: I0113 21:11:43.076964 3311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:11:43.078271 kubelet[3311]: I0113 21:11:43.076999 3311 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:11:43.078271 kubelet[3311]: I0113 21:11:43.077028 3311 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:11:43.078271 kubelet[3311]: E0113 21:11:43.077144 3311 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:11:43.097413 kubelet[3311]: I0113 21:11:43.097228 3311 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:11:43.126407 kubelet[3311]: E0113 21:11:43.126370 3311 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:11:43.187214 kubelet[3311]: E0113 21:11:43.186681 3311 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:11:43.201029 kubelet[3311]: I0113 21:11:43.200755 3311 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-69" Jan 13 21:11:43.245435 kubelet[3311]: I0113 21:11:43.245001 3311 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-22-69" Jan 13 21:11:43.245435 kubelet[3311]: I0113 21:11:43.245136 3311 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-22-69" Jan 13 21:11:43.338771 kubelet[3311]: I0113 21:11:43.338704 3311 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:11:43.338771 kubelet[3311]: I0113 21:11:43.338744 3311 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:11:43.338771 kubelet[3311]: I0113 21:11:43.338778 3311 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:11:43.339061 kubelet[3311]: I0113 21:11:43.339015 3311 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:11:43.339129 kubelet[3311]: I0113 21:11:43.339064 3311 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:11:43.339129 kubelet[3311]: I0113 21:11:43.339084 3311 policy_none.go:49] "None policy: Start" Jan 13 21:11:43.341543 kubelet[3311]: I0113 21:11:43.340963 3311 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:11:43.341543 kubelet[3311]: I0113 21:11:43.341013 3311 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:11:43.343231 kubelet[3311]: I0113 21:11:43.341968 3311 state_mem.go:75] "Updated machine memory state" Jan 13 21:11:43.356722 kubelet[3311]: I0113 21:11:43.356384 3311 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:11:43.357493 kubelet[3311]: I0113 21:11:43.357450 3311 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:11:43.390542 kubelet[3311]: I0113 21:11:43.388705 3311 topology_manager.go:215] "Topology Admit Handler" podUID="df4adad0af788819c5663c18ad45ec79" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-69" Jan 13 21:11:43.390542 kubelet[3311]: I0113 21:11:43.388834 3311 topology_manager.go:215] "Topology Admit Handler" podUID="2017de283be89adf90bfc38703308e5f" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-69" Jan 13 21:11:43.390542 kubelet[3311]: I0113 21:11:43.388931 3311 topology_manager.go:215] "Topology Admit Handler" podUID="bd055251af529f8bb1e44858b5f1401a" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-69" Jan 13 21:11:43.441765 kubelet[3311]: I0113 21:11:43.441706 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2017de283be89adf90bfc38703308e5f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-69\" (UID: \"2017de283be89adf90bfc38703308e5f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-69" Jan 13 21:11:43.441926 kubelet[3311]: I0113 21:11:43.441802 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd055251af529f8bb1e44858b5f1401a-kubeconfig\") pod 
\"kube-scheduler-ip-172-31-22-69\" (UID: \"bd055251af529f8bb1e44858b5f1401a\") " pod="kube-system/kube-scheduler-ip-172-31-22-69" Jan 13 21:11:43.441926 kubelet[3311]: I0113 21:11:43.441898 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df4adad0af788819c5663c18ad45ec79-ca-certs\") pod \"kube-apiserver-ip-172-31-22-69\" (UID: \"df4adad0af788819c5663c18ad45ec79\") " pod="kube-system/kube-apiserver-ip-172-31-22-69" Jan 13 21:11:43.443161 kubelet[3311]: I0113 21:11:43.443061 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2017de283be89adf90bfc38703308e5f-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-69\" (UID: \"2017de283be89adf90bfc38703308e5f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-69" Jan 13 21:11:43.443360 kubelet[3311]: I0113 21:11:43.443185 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2017de283be89adf90bfc38703308e5f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-69\" (UID: \"2017de283be89adf90bfc38703308e5f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-69" Jan 13 21:11:43.443360 kubelet[3311]: I0113 21:11:43.443261 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2017de283be89adf90bfc38703308e5f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-69\" (UID: \"2017de283be89adf90bfc38703308e5f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-69" Jan 13 21:11:43.443360 kubelet[3311]: I0113 21:11:43.443313 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df4adad0af788819c5663c18ad45ec79-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-69\" (UID: \"df4adad0af788819c5663c18ad45ec79\") " pod="kube-system/kube-apiserver-ip-172-31-22-69" Jan 13 21:11:43.443514 kubelet[3311]: I0113 21:11:43.443363 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df4adad0af788819c5663c18ad45ec79-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-69\" (UID: \"df4adad0af788819c5663c18ad45ec79\") " pod="kube-system/kube-apiserver-ip-172-31-22-69" Jan 13 21:11:43.443514 kubelet[3311]: I0113 21:11:43.443410 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2017de283be89adf90bfc38703308e5f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-69\" (UID: \"2017de283be89adf90bfc38703308e5f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-69" Jan 13 21:11:44.001869 kubelet[3311]: I0113 21:11:44.000012 3311 apiserver.go:52] "Watching apiserver" Jan 13 21:11:44.023728 sudo[3324]: pam_unix(sudo:session): session closed for user root Jan 13 21:11:44.038911 kubelet[3311]: I0113 21:11:44.038791 3311 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:11:44.272949 kubelet[3311]: I0113 21:11:44.272766 3311 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-69" 
podStartSLOduration=1.272679953 podStartE2EDuration="1.272679953s" podCreationTimestamp="2025-01-13 21:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:11:44.259760417 +0000 UTC m=+1.420535852" watchObservedRunningTime="2025-01-13 21:11:44.272679953 +0000 UTC m=+1.433455364" Jan 13 21:11:44.300998 kubelet[3311]: E0113 21:11:44.299645 3311 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-22-69\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-69" Jan 13 21:11:44.308272 kubelet[3311]: I0113 21:11:44.306383 3311 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-69" podStartSLOduration=1.306232313 podStartE2EDuration="1.306232313s" podCreationTimestamp="2025-01-13 21:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:11:44.273336677 +0000 UTC m=+1.434112160" watchObservedRunningTime="2025-01-13 21:11:44.306232313 +0000 UTC m=+1.467007736" Jan 13 21:11:44.324428 kubelet[3311]: I0113 21:11:44.324195 3311 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-69" podStartSLOduration=1.324138461 podStartE2EDuration="1.324138461s" podCreationTimestamp="2025-01-13 21:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:11:44.307272713 +0000 UTC m=+1.468048136" watchObservedRunningTime="2025-01-13 21:11:44.324138461 +0000 UTC m=+1.484913884" Jan 13 21:11:46.807523 sudo[2279]: pam_unix(sudo:session): session closed for user root Jan 13 21:11:46.831509 sshd[2275]: pam_unix(sshd:session): session closed for user core Jan 13 21:11:46.837469 systemd[1]: sshd@8-172.31.22.69:22-139.178.89.65:46024.service: Deactivated successfully. Jan 13 21:11:46.842431 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:11:46.843232 systemd[1]: session-9.scope: Consumed 12.331s CPU time, 184.1M memory peak, 0B memory swap peak. Jan 13 21:11:46.847139 systemd-logind[1908]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:11:46.849558 systemd-logind[1908]: Removed session 9. Jan 13 21:11:55.848198 kubelet[3311]: I0113 21:11:55.848108 3311 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:11:55.850457 kubelet[3311]: I0113 21:11:55.849815 3311 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:11:55.850557 containerd[1943]: time="2025-01-13T21:11:55.849404178Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:11:56.747054 kubelet[3311]: I0113 21:11:56.746972 3311 topology_manager.go:215] "Topology Admit Handler" podUID="8a4929fa-3779-4d94-b48f-c479234fd9bb" podNamespace="kube-system" podName="kube-proxy-dhnkv" Jan 13 21:11:56.770457 systemd[1]: Created slice kubepods-besteffort-pod8a4929fa_3779_4d94_b48f_c479234fd9bb.slice - libcontainer container kubepods-besteffort-pod8a4929fa_3779_4d94_b48f_c479234fd9bb.slice. 
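
The runtime-config update above hands this node's pod CIDR, 192.168.0.0/24, to the CRI so the CNI plugin (cilium, admitted just below) can allocate pod IPs from it. A quick Go sketch of what that range provides:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24") // pod CIDR from the log above
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	total := 1 << (bits - ones)
	// 256 addresses in total; CNIs reserve a few (network, gateway), so
	// roughly 250+ pod IPs are available on this node.
	fmt.Printf("network=%v addresses=%d\n", ipnet, total)
}
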
Jan 13 21:11:56.792753 kubelet[3311]: I0113 21:11:56.788101 3311 topology_manager.go:215] "Topology Admit Handler" podUID="575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" podNamespace="kube-system" podName="cilium-r47hz" Jan 13 21:11:56.809304 systemd[1]: Created slice kubepods-burstable-pod575b19ff_95b7_4f56_b6b6_bfb62aaddc3a.slice - libcontainer container kubepods-burstable-pod575b19ff_95b7_4f56_b6b6_bfb62aaddc3a.slice. Jan 13 21:11:56.821446 kubelet[3311]: I0113 21:11:56.821398 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-etc-cni-netd\") pod \"cilium-r47hz\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " pod="kube-system/cilium-r47hz" Jan 13 21:11:56.821781 kubelet[3311]: I0113 21:11:56.821757 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-hubble-tls\") pod \"cilium-r47hz\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " pod="kube-system/cilium-r47hz" Jan 13 21:11:56.821781 kubelet[3311]: I0113 21:11:56.821857 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-xtables-lock\") pod \"cilium-r47hz\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " pod="kube-system/cilium-r47hz" Jan 13 21:11:56.822627 kubelet[3311]: I0113 21:11:56.822108 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cni-path\") pod \"cilium-r47hz\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " pod="kube-system/cilium-r47hz" Jan 13 21:11:56.822889 kubelet[3311]: I0113 21:11:56.822597 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-clustermesh-secrets\") pod \"cilium-r47hz\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " pod="kube-system/cilium-r47hz" Jan 13 21:11:56.823375 kubelet[3311]: I0113 21:11:56.823024 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a4929fa-3779-4d94-b48f-c479234fd9bb-lib-modules\") pod \"kube-proxy-dhnkv\" (UID: \"8a4929fa-3779-4d94-b48f-c479234fd9bb\") " pod="kube-system/kube-proxy-dhnkv" Jan 13 21:11:56.823660 kubelet[3311]: I0113 21:11:56.823516 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-hostproc\") pod \"cilium-r47hz\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " pod="kube-system/cilium-r47hz" Jan 13 21:11:56.823660 kubelet[3311]: I0113 21:11:56.823621 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cilium-cgroup\") pod \"cilium-r47hz\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " pod="kube-system/cilium-r47hz" Jan 13 21:11:56.824406 kubelet[3311]: I0113 21:11:56.824150 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-lib-modules\") pod \"cilium-r47hz\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " pod="kube-system/cilium-r47hz" Jan 13 21:11:56.824406 kubelet[3311]: I0113 21:11:56.824284 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a4929fa-3779-4d94-b48f-c479234fd9bb-kube-proxy\") pod \"kube-proxy-dhnkv\" (UID: \"8a4929fa-3779-4d94-b48f-c479234fd9bb\") " pod="kube-system/kube-proxy-dhnkv" Jan 13 21:11:56.825212 kubelet[3311]: I0113 21:11:56.824960 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfn29\" (UniqueName: \"kubernetes.io/projected/8a4929fa-3779-4d94-b48f-c479234fd9bb-kube-api-access-lfn29\") pod \"kube-proxy-dhnkv\" (UID: \"8a4929fa-3779-4d94-b48f-c479234fd9bb\") " pod="kube-system/kube-proxy-dhnkv" Jan 13 21:11:56.825212 kubelet[3311]: I0113 21:11:56.825094 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-host-proc-sys-kernel\") pod \"cilium-r47hz\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " pod="kube-system/cilium-r47hz" Jan 13 21:11:56.825212 kubelet[3311]: I0113 21:11:56.825177 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw2kq\" (UniqueName: \"kubernetes.io/projected/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-kube-api-access-hw2kq\") pod \"cilium-r47hz\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " pod="kube-system/cilium-r47hz" Jan 13 21:11:56.826765 kubelet[3311]: I0113 21:11:56.826371 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a4929fa-3779-4d94-b48f-c479234fd9bb-xtables-lock\") pod \"kube-proxy-dhnkv\" (UID: \"8a4929fa-3779-4d94-b48f-c479234fd9bb\") " pod="kube-system/kube-proxy-dhnkv" Jan 13 21:11:56.826765 kubelet[3311]: I0113 21:11:56.826443 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-host-proc-sys-net\") pod \"cilium-r47hz\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " pod="kube-system/cilium-r47hz" Jan 13 21:11:56.826765 kubelet[3311]: I0113 21:11:56.826488 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-bpf-maps\") pod \"cilium-r47hz\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " pod="kube-system/cilium-r47hz" Jan 13 21:11:56.826765 kubelet[3311]: I0113 21:11:56.826533 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cilium-config-path\") pod \"cilium-r47hz\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " pod="kube-system/cilium-r47hz" Jan 13 21:11:56.826765 kubelet[3311]: I0113 21:11:56.826579 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cilium-run\") pod \"cilium-r47hz\" (UID: 
\"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " pod="kube-system/cilium-r47hz" Jan 13 21:11:56.964280 kubelet[3311]: I0113 21:11:56.961786 3311 topology_manager.go:215] "Topology Admit Handler" podUID="f6ec314f-a219-44ac-86c3-1313601fb2d1" podNamespace="kube-system" podName="cilium-operator-5cc964979-5686z" Jan 13 21:11:56.993015 systemd[1]: Created slice kubepods-besteffort-podf6ec314f_a219_44ac_86c3_1313601fb2d1.slice - libcontainer container kubepods-besteffort-podf6ec314f_a219_44ac_86c3_1313601fb2d1.slice. Jan 13 21:11:57.029698 kubelet[3311]: I0113 21:11:57.028899 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb49c\" (UniqueName: \"kubernetes.io/projected/f6ec314f-a219-44ac-86c3-1313601fb2d1-kube-api-access-vb49c\") pod \"cilium-operator-5cc964979-5686z\" (UID: \"f6ec314f-a219-44ac-86c3-1313601fb2d1\") " pod="kube-system/cilium-operator-5cc964979-5686z" Jan 13 21:11:57.029698 kubelet[3311]: I0113 21:11:57.029003 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6ec314f-a219-44ac-86c3-1313601fb2d1-cilium-config-path\") pod \"cilium-operator-5cc964979-5686z\" (UID: \"f6ec314f-a219-44ac-86c3-1313601fb2d1\") " pod="kube-system/cilium-operator-5cc964979-5686z" Jan 13 21:11:57.119238 containerd[1943]: time="2025-01-13T21:11:57.118194916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r47hz,Uid:575b19ff-95b7-4f56-b6b6-bfb62aaddc3a,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:57.200380 containerd[1943]: time="2025-01-13T21:11:57.199857113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:57.200380 containerd[1943]: time="2025-01-13T21:11:57.199955033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:57.201010 containerd[1943]: time="2025-01-13T21:11:57.200024045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:57.201010 containerd[1943]: time="2025-01-13T21:11:57.200347997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:57.241605 systemd[1]: Started cri-containerd-c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e.scope - libcontainer container c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e. 
Jan 13 21:11:57.291149 containerd[1943]: time="2025-01-13T21:11:57.290262677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r47hz,Uid:575b19ff-95b7-4f56-b6b6-bfb62aaddc3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\"" Jan 13 21:11:57.298402 containerd[1943]: time="2025-01-13T21:11:57.298048373Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:11:57.303222 containerd[1943]: time="2025-01-13T21:11:57.303017741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-5686z,Uid:f6ec314f-a219-44ac-86c3-1313601fb2d1,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:57.344302 containerd[1943]: time="2025-01-13T21:11:57.343680006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:57.344302 containerd[1943]: time="2025-01-13T21:11:57.343872570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:57.344302 containerd[1943]: time="2025-01-13T21:11:57.343952274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:57.344302 containerd[1943]: time="2025-01-13T21:11:57.344136690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:57.379775 systemd[1]: Started cri-containerd-fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399.scope - libcontainer container fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399. Jan 13 21:11:57.384742 containerd[1943]: time="2025-01-13T21:11:57.384644250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dhnkv,Uid:8a4929fa-3779-4d94-b48f-c479234fd9bb,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:57.433685 containerd[1943]: time="2025-01-13T21:11:57.432518454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:57.433685 containerd[1943]: time="2025-01-13T21:11:57.433539534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:57.433685 containerd[1943]: time="2025-01-13T21:11:57.433620678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:57.435700 containerd[1943]: time="2025-01-13T21:11:57.434472990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:57.485457 systemd[1]: Started cri-containerd-5c80a024200a1637df2b6ca5ff68a7caa950986121731aee5fca40cd5f4be288.scope - libcontainer container 5c80a024200a1637df2b6ca5ff68a7caa950986121731aee5fca40cd5f4be288. 
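The PullImage requests above use the pinned name:tag@digest form, in which the digest rather than the tag determines what is fetched; that is why the completed pull later in the log reports repo tag "" and only the repo digest. A naive helper to split such a reference (illustrative only, not a full OCI reference parser; it would mishandle a registry host with a port):

def split_pinned_ref(ref: str):
    name_and_tag, _, digest = ref.partition("@")
    name, _, tag = name_and_tag.rpartition(":")
    return name, tag, digest

print(split_pinned_ref("quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"))
# -> ('quay.io/cilium/cilium', 'v1.12.5', 'sha256:06ce2b0a...')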
Jan 13 21:11:57.490600 containerd[1943]: time="2025-01-13T21:11:57.489450270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-5686z,Uid:f6ec314f-a219-44ac-86c3-1313601fb2d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399\"" Jan 13 21:11:57.548055 containerd[1943]: time="2025-01-13T21:11:57.545891683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dhnkv,Uid:8a4929fa-3779-4d94-b48f-c479234fd9bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c80a024200a1637df2b6ca5ff68a7caa950986121731aee5fca40cd5f4be288\"" Jan 13 21:11:57.556141 containerd[1943]: time="2025-01-13T21:11:57.555977515Z" level=info msg="CreateContainer within sandbox \"5c80a024200a1637df2b6ca5ff68a7caa950986121731aee5fca40cd5f4be288\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:11:57.579986 containerd[1943]: time="2025-01-13T21:11:57.579775267Z" level=info msg="CreateContainer within sandbox \"5c80a024200a1637df2b6ca5ff68a7caa950986121731aee5fca40cd5f4be288\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fc6d0da9b542b087094f6bc690ac50ca790d5cf2bbdd430d01a2955d78d2f746\"" Jan 13 21:11:57.581215 containerd[1943]: time="2025-01-13T21:11:57.581113135Z" level=info msg="StartContainer for \"fc6d0da9b542b087094f6bc690ac50ca790d5cf2bbdd430d01a2955d78d2f746\"" Jan 13 21:11:57.626633 systemd[1]: Started cri-containerd-fc6d0da9b542b087094f6bc690ac50ca790d5cf2bbdd430d01a2955d78d2f746.scope - libcontainer container fc6d0da9b542b087094f6bc690ac50ca790d5cf2bbdd430d01a2955d78d2f746. Jan 13 21:11:57.690623 containerd[1943]: time="2025-01-13T21:11:57.690546295Z" level=info msg="StartContainer for \"fc6d0da9b542b087094f6bc690ac50ca790d5cf2bbdd430d01a2955d78d2f746\" returns successfully" Jan 13 21:11:58.339122 kubelet[3311]: I0113 21:11:58.339020 3311 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dhnkv" podStartSLOduration=2.338938927 podStartE2EDuration="2.338938927s" podCreationTimestamp="2025-01-13 21:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:11:58.336047227 +0000 UTC m=+15.496822650" watchObservedRunningTime="2025-01-13 21:11:58.338938927 +0000 UTC m=+15.499714338" Jan 13 21:12:12.103713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount624121877.mount: Deactivated successfully. 
Jan 13 21:12:14.691846 containerd[1943]: time="2025-01-13T21:12:14.691755828Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:14.693691 containerd[1943]: time="2025-01-13T21:12:14.693632352Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651554" Jan 13 21:12:14.694711 containerd[1943]: time="2025-01-13T21:12:14.694545132Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:14.698226 containerd[1943]: time="2025-01-13T21:12:14.698085048Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 17.399976711s" Jan 13 21:12:14.698226 containerd[1943]: time="2025-01-13T21:12:14.698151756Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 21:12:14.699843 containerd[1943]: time="2025-01-13T21:12:14.699423396Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:12:14.703415 containerd[1943]: time="2025-01-13T21:12:14.702535776Z" level=info msg="CreateContainer within sandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:12:14.734756 containerd[1943]: time="2025-01-13T21:12:14.734609604Z" level=info msg="CreateContainer within sandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893\"" Jan 13 21:12:14.737296 containerd[1943]: time="2025-01-13T21:12:14.735788100Z" level=info msg="StartContainer for \"713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893\"" Jan 13 21:12:14.803559 systemd[1]: Started cri-containerd-713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893.scope - libcontainer container 713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893. Jan 13 21:12:14.855330 containerd[1943]: time="2025-01-13T21:12:14.855172813Z" level=info msg="StartContainer for \"713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893\" returns successfully" Jan 13 21:12:14.885291 systemd[1]: cri-containerd-713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893.scope: Deactivated successfully. Jan 13 21:12:15.717088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893-rootfs.mount: Deactivated successfully. 
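The pull-completion entry above reports both the byte count and the wall time, so the average transfer rate falls out directly; a quick check with the figures quoted for the cilium image:

bytes_read = 157_651_554    # "bytes read=157651554"
elapsed_s = 17.399976711    # "in 17.399976711s"
print(f"{bytes_read / elapsed_s / 1e6:.1f} MB/s")  # -> 9.1 MB/s average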
Jan 13 21:12:16.158200 containerd[1943]: time="2025-01-13T21:12:16.158004887Z" level=info msg="shim disconnected" id=713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893 namespace=k8s.io Jan 13 21:12:16.158200 containerd[1943]: time="2025-01-13T21:12:16.158194943Z" level=warning msg="cleaning up after shim disconnected" id=713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893 namespace=k8s.io Jan 13 21:12:16.158200 containerd[1943]: time="2025-01-13T21:12:16.158221151Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:12:16.385330 containerd[1943]: time="2025-01-13T21:12:16.385227888Z" level=info msg="CreateContainer within sandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:12:16.421509 containerd[1943]: time="2025-01-13T21:12:16.421358232Z" level=info msg="CreateContainer within sandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6\"" Jan 13 21:12:16.422877 containerd[1943]: time="2025-01-13T21:12:16.422804964Z" level=info msg="StartContainer for \"0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6\"" Jan 13 21:12:16.481702 systemd[1]: Started cri-containerd-0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6.scope - libcontainer container 0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6. Jan 13 21:12:16.542431 containerd[1943]: time="2025-01-13T21:12:16.538441021Z" level=info msg="StartContainer for \"0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6\" returns successfully" Jan 13 21:12:16.562753 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:12:16.563642 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:12:16.563955 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:12:16.573203 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:12:16.574044 systemd[1]: cri-containerd-0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6.scope: Deactivated successfully. Jan 13 21:12:16.625998 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:12:16.643827 containerd[1943]: time="2025-01-13T21:12:16.643457425Z" level=info msg="shim disconnected" id=0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6 namespace=k8s.io Jan 13 21:12:16.643827 containerd[1943]: time="2025-01-13T21:12:16.643573297Z" level=warning msg="cleaning up after shim disconnected" id=0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6 namespace=k8s.io Jan 13 21:12:16.643827 containerd[1943]: time="2025-01-13T21:12:16.643595905Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:12:16.718223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6-rootfs.mount: Deactivated successfully. 
Jan 13 21:12:17.399356 containerd[1943]: time="2025-01-13T21:12:17.396740197Z" level=info msg="CreateContainer within sandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:12:17.438302 containerd[1943]: time="2025-01-13T21:12:17.438201205Z" level=info msg="CreateContainer within sandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf\"" Jan 13 21:12:17.439783 containerd[1943]: time="2025-01-13T21:12:17.439680637Z" level=info msg="StartContainer for \"0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf\"" Jan 13 21:12:17.531589 systemd[1]: Started cri-containerd-0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf.scope - libcontainer container 0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf. Jan 13 21:12:17.591755 containerd[1943]: time="2025-01-13T21:12:17.590756090Z" level=info msg="StartContainer for \"0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf\" returns successfully" Jan 13 21:12:17.599877 systemd[1]: cri-containerd-0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf.scope: Deactivated successfully. Jan 13 21:12:17.654204 containerd[1943]: time="2025-01-13T21:12:17.654032282Z" level=info msg="shim disconnected" id=0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf namespace=k8s.io Jan 13 21:12:17.654204 containerd[1943]: time="2025-01-13T21:12:17.654109466Z" level=warning msg="cleaning up after shim disconnected" id=0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf namespace=k8s.io Jan 13 21:12:17.654204 containerd[1943]: time="2025-01-13T21:12:17.654131834Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:12:17.718402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf-rootfs.mount: Deactivated successfully. Jan 13 21:12:18.397903 containerd[1943]: time="2025-01-13T21:12:18.397700726Z" level=info msg="CreateContainer within sandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:12:18.429661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2951169182.mount: Deactivated successfully. Jan 13 21:12:18.431700 containerd[1943]: time="2025-01-13T21:12:18.431635922Z" level=info msg="CreateContainer within sandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e\"" Jan 13 21:12:18.436832 containerd[1943]: time="2025-01-13T21:12:18.436304774Z" level=info msg="StartContainer for \"17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e\"" Jan 13 21:12:18.497521 systemd[1]: Started cri-containerd-17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e.scope - libcontainer container 17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e. Jan 13 21:12:18.546066 systemd[1]: cri-containerd-17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e.scope: Deactivated successfully. 
Jan 13 21:12:18.549416 containerd[1943]: time="2025-01-13T21:12:18.548218551Z" level=info msg="StartContainer for \"17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e\" returns successfully" Jan 13 21:12:18.591173 containerd[1943]: time="2025-01-13T21:12:18.591088803Z" level=info msg="shim disconnected" id=17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e namespace=k8s.io Jan 13 21:12:18.591173 containerd[1943]: time="2025-01-13T21:12:18.591165315Z" level=warning msg="cleaning up after shim disconnected" id=17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e namespace=k8s.io Jan 13 21:12:18.591660 containerd[1943]: time="2025-01-13T21:12:18.591186591Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:12:18.718157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e-rootfs.mount: Deactivated successfully. Jan 13 21:12:19.407667 containerd[1943]: time="2025-01-13T21:12:19.407583183Z" level=info msg="CreateContainer within sandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:12:19.443516 containerd[1943]: time="2025-01-13T21:12:19.443455311Z" level=info msg="CreateContainer within sandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee\"" Jan 13 21:12:19.444822 containerd[1943]: time="2025-01-13T21:12:19.444731211Z" level=info msg="StartContainer for \"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee\"" Jan 13 21:12:19.509536 systemd[1]: Started cri-containerd-3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee.scope - libcontainer container 3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee. 
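Between 21:12:14 and 21:12:19 the cilium-r47hz pod works through its init containers strictly in sequence (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) before the long-running cilium-agent container is created: each step is created, started, and its scope deactivated when it exits, and only then is the next CreateContainer issued. That ordering can be recovered mechanically from the journal; a minimal sketch that pulls container names out of the CreateContainer request entries:

import re

CREATE = re.compile(r"CreateContainer within sandbox .+? for container &ContainerMetadata\{Name:([\w-]+),")

def creation_order(journal_lines):
    """Yield container names in the order kubelet asked the runtime to create them."""
    for line in journal_lines:
        m = CREATE.search(line)
        if m:
            yield m.group(1)

Note the pattern matches only the request entries ("for container &ContainerMetadata{...}"), not the "returns container id" acknowledgements, so each container is reported once.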
Jan 13 21:12:19.569965 containerd[1943]: time="2025-01-13T21:12:19.568685968Z" level=info msg="StartContainer for \"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee\" returns successfully" Jan 13 21:12:19.785769 kubelet[3311]: I0113 21:12:19.785529 3311 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:12:19.841131 kubelet[3311]: I0113 21:12:19.840995 3311 topology_manager.go:215] "Topology Admit Handler" podUID="125decff-4d16-4972-86ca-f81dbb906126" podNamespace="kube-system" podName="coredns-76f75df574-hx8sp" Jan 13 21:12:19.846944 kubelet[3311]: I0113 21:12:19.845681 3311 topology_manager.go:215] "Topology Admit Handler" podUID="1bb81c1e-cdda-45dd-ad96-bc24124a4ac6" podNamespace="kube-system" podName="coredns-76f75df574-2cl29" Jan 13 21:12:19.858010 kubelet[3311]: W0113 21:12:19.857427 3311 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-22-69" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-69' and this object Jan 13 21:12:19.858010 kubelet[3311]: E0113 21:12:19.857687 3311 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-22-69" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-69' and this object Jan 13 21:12:19.878222 systemd[1]: Created slice kubepods-burstable-pod125decff_4d16_4972_86ca_f81dbb906126.slice - libcontainer container kubepods-burstable-pod125decff_4d16_4972_86ca_f81dbb906126.slice. Jan 13 21:12:19.900349 systemd[1]: Created slice kubepods-burstable-pod1bb81c1e_cdda_45dd_ad96_bc24124a4ac6.slice - libcontainer container kubepods-burstable-pod1bb81c1e_cdda_45dd_ad96_bc24124a4ac6.slice. 
Jan 13 21:12:19.906677 kubelet[3311]: I0113 21:12:19.906503 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bb81c1e-cdda-45dd-ad96-bc24124a4ac6-config-volume\") pod \"coredns-76f75df574-2cl29\" (UID: \"1bb81c1e-cdda-45dd-ad96-bc24124a4ac6\") " pod="kube-system/coredns-76f75df574-2cl29" Jan 13 21:12:19.906677 kubelet[3311]: I0113 21:12:19.906578 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fhkp\" (UniqueName: \"kubernetes.io/projected/1bb81c1e-cdda-45dd-ad96-bc24124a4ac6-kube-api-access-4fhkp\") pod \"coredns-76f75df574-2cl29\" (UID: \"1bb81c1e-cdda-45dd-ad96-bc24124a4ac6\") " pod="kube-system/coredns-76f75df574-2cl29" Jan 13 21:12:19.906677 kubelet[3311]: I0113 21:12:19.906632 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/125decff-4d16-4972-86ca-f81dbb906126-config-volume\") pod \"coredns-76f75df574-hx8sp\" (UID: \"125decff-4d16-4972-86ca-f81dbb906126\") " pod="kube-system/coredns-76f75df574-hx8sp" Jan 13 21:12:19.906956 kubelet[3311]: I0113 21:12:19.906701 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d6dg\" (UniqueName: \"kubernetes.io/projected/125decff-4d16-4972-86ca-f81dbb906126-kube-api-access-5d6dg\") pod \"coredns-76f75df574-hx8sp\" (UID: \"125decff-4d16-4972-86ca-f81dbb906126\") " pod="kube-system/coredns-76f75df574-hx8sp" Jan 13 21:12:21.009413 kubelet[3311]: E0113 21:12:21.008855 3311 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:12:21.009413 kubelet[3311]: E0113 21:12:21.008987 3311 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1bb81c1e-cdda-45dd-ad96-bc24124a4ac6-config-volume podName:1bb81c1e-cdda-45dd-ad96-bc24124a4ac6 nodeName:}" failed. No retries permitted until 2025-01-13 21:12:21.508953815 +0000 UTC m=+38.669729214 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1bb81c1e-cdda-45dd-ad96-bc24124a4ac6-config-volume") pod "coredns-76f75df574-2cl29" (UID: "1bb81c1e-cdda-45dd-ad96-bc24124a4ac6") : failed to sync configmap cache: timed out waiting for the condition Jan 13 21:12:21.009413 kubelet[3311]: E0113 21:12:21.009318 3311 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:12:21.009413 kubelet[3311]: E0113 21:12:21.009372 3311 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/125decff-4d16-4972-86ca-f81dbb906126-config-volume podName:125decff-4d16-4972-86ca-f81dbb906126 nodeName:}" failed. No retries permitted until 2025-01-13 21:12:21.509354399 +0000 UTC m=+38.670129810 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/125decff-4d16-4972-86ca-f81dbb906126-config-volume") pod "coredns-76f75df574-hx8sp" (UID: "125decff-4d16-4972-86ca-f81dbb906126") : failed to sync configmap cache: timed out waiting for the condition Jan 13 21:12:21.690435 containerd[1943]: time="2025-01-13T21:12:21.690150127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hx8sp,Uid:125decff-4d16-4972-86ca-f81dbb906126,Namespace:kube-system,Attempt:0,}" Jan 13 21:12:21.712300 containerd[1943]: time="2025-01-13T21:12:21.710585743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2cl29,Uid:1bb81c1e-cdda-45dd-ad96-bc24124a4ac6,Namespace:kube-system,Attempt:0,}" Jan 13 21:12:22.580963 systemd[1]: Started sshd@9-172.31.22.69:22-139.178.89.65:55960.service - OpenSSH per-connection server daemon (139.178.89.65:55960). Jan 13 21:12:22.761923 sshd[4092]: Accepted publickey for core from 139.178.89.65 port 55960 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:22.765090 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:22.773357 systemd-logind[1908]: New session 10 of user core. Jan 13 21:12:22.781539 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:12:23.071890 sshd[4092]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:23.080175 systemd[1]: sshd@9-172.31.22.69:22-139.178.89.65:55960.service: Deactivated successfully. Jan 13 21:12:23.085937 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:12:23.088540 systemd-logind[1908]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:12:23.092307 systemd-logind[1908]: Removed session 10. 
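The failed MountVolume.SetUp operations above are rescheduled with backoff: the log attests only the initial durationBeforeRetry of 500ms, after which kubelet backs off further on repeated failures up to a cap. A sketch of such a schedule; the doubling factor and the cap below are illustrative assumptions, only the 500ms starting point comes from the log:

def backoff_schedule(initial_s=0.5, factor=2.0, cap_s=120.0, attempts=8):
    # assumed parameters: only initial_s is attested by "durationBeforeRetry 500ms"
    delays, delay = [], initial_s
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * factor, cap_s)
    return delays

print(backoff_schedule())  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0]

In this boot the retry never escalates: the coredns configmap cache syncs within the first 500ms window, and RunPodSandbox is issued for both coredns pods at 21:12:21.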
Jan 13 21:12:25.969738 containerd[1943]: time="2025-01-13T21:12:25.969653580Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:25.971620 containerd[1943]: time="2025-01-13T21:12:25.971562300Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137730" Jan 13 21:12:25.972703 containerd[1943]: time="2025-01-13T21:12:25.972586248Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:25.976369 containerd[1943]: time="2025-01-13T21:12:25.976149588Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 11.276657552s" Jan 13 21:12:25.976369 containerd[1943]: time="2025-01-13T21:12:25.976211376Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 21:12:25.980360 containerd[1943]: time="2025-01-13T21:12:25.980067312Z" level=info msg="CreateContainer within sandbox \"fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 21:12:26.005150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1680109472.mount: Deactivated successfully. Jan 13 21:12:26.007799 containerd[1943]: time="2025-01-13T21:12:26.006410840Z" level=info msg="CreateContainer within sandbox \"fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11\"" Jan 13 21:12:26.012165 containerd[1943]: time="2025-01-13T21:12:26.012073640Z" level=info msg="StartContainer for \"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11\"" Jan 13 21:12:26.074564 systemd[1]: Started cri-containerd-13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11.scope - libcontainer container 13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11. 
Jan 13 21:12:26.141523 containerd[1943]: time="2025-01-13T21:12:26.141319821Z" level=info msg="StartContainer for \"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11\" returns successfully" Jan 13 21:12:26.473518 kubelet[3311]: I0113 21:12:26.473255 3311 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-r47hz" podStartSLOduration=13.067174095 podStartE2EDuration="30.47317693s" podCreationTimestamp="2025-01-13 21:11:56 +0000 UTC" firstStartedPulling="2025-01-13 21:11:57.292971989 +0000 UTC m=+14.453747388" lastFinishedPulling="2025-01-13 21:12:14.698974812 +0000 UTC m=+31.859750223" observedRunningTime="2025-01-13 21:12:20.452284228 +0000 UTC m=+37.613059663" watchObservedRunningTime="2025-01-13 21:12:26.47317693 +0000 UTC m=+43.633952365" Jan 13 21:12:26.476114 kubelet[3311]: I0113 21:12:26.475526 3311 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-5686z" podStartSLOduration=1.992157092 podStartE2EDuration="30.475468762s" podCreationTimestamp="2025-01-13 21:11:56 +0000 UTC" firstStartedPulling="2025-01-13 21:11:57.493438494 +0000 UTC m=+14.654213893" lastFinishedPulling="2025-01-13 21:12:25.976750152 +0000 UTC m=+43.137525563" observedRunningTime="2025-01-13 21:12:26.47541019 +0000 UTC m=+43.636185625" watchObservedRunningTime="2025-01-13 21:12:26.475468762 +0000 UTC m=+43.636244257" Jan 13 21:12:28.110790 systemd[1]: Started sshd@10-172.31.22.69:22-139.178.89.65:55976.service - OpenSSH per-connection server daemon (139.178.89.65:55976). Jan 13 21:12:28.300480 sshd[4154]: Accepted publickey for core from 139.178.89.65 port 55976 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:28.304048 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:28.313108 systemd-logind[1908]: New session 11 of user core. Jan 13 21:12:28.321566 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:12:28.666505 sshd[4154]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:28.673150 systemd[1]: sshd@10-172.31.22.69:22-139.178.89.65:55976.service: Deactivated successfully. Jan 13 21:12:28.678402 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:12:28.680411 systemd-logind[1908]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:12:28.682122 systemd-logind[1908]: Removed session 11. Jan 13 21:12:29.312020 systemd-networkd[1846]: cilium_host: Link UP Jan 13 21:12:29.314928 systemd-networkd[1846]: cilium_net: Link UP Jan 13 21:12:29.316511 systemd-networkd[1846]: cilium_net: Gained carrier Jan 13 21:12:29.317697 systemd-networkd[1846]: cilium_host: Gained carrier Jan 13 21:12:29.322228 (udev-worker)[4171]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:12:29.324207 (udev-worker)[4172]: Network interface NamePolicy= disabled on kernel command line. 
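For the cilium-operator pod the tracker separates podStartSLOduration from podStartE2EDuration: the SLO figure excludes the image-pull window bounded by firstStartedPulling and lastFinishedPulling. The numbers in the entry above are consistent to within clock rounding:

from datetime import datetime, timezone

def t(s):  # timestamps from the entry, truncated to microseconds
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

e2e = 30.475468762  # podStartE2EDuration
pull = (t("2025-01-13 21:12:25.976750") - t("2025-01-13 21:11:57.493438")).total_seconds()
print(e2e - pull)   # ~1.992157, matching podStartSLOduration=1.992157092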
Jan 13 21:12:29.520948 systemd-networkd[1846]: cilium_vxlan: Link UP Jan 13 21:12:29.520972 systemd-networkd[1846]: cilium_vxlan: Gained carrier Jan 13 21:12:30.005348 kernel: NET: Registered PF_ALG protocol family Jan 13 21:12:30.019449 systemd-networkd[1846]: cilium_net: Gained IPv6LL Jan 13 21:12:30.275501 systemd-networkd[1846]: cilium_host: Gained IPv6LL Jan 13 21:12:30.659534 systemd-networkd[1846]: cilium_vxlan: Gained IPv6LL Jan 13 21:12:31.392544 systemd-networkd[1846]: lxc_health: Link UP Jan 13 21:12:31.406568 systemd-networkd[1846]: lxc_health: Gained carrier Jan 13 21:12:31.790086 systemd-networkd[1846]: lxcca46c48133f1: Link UP Jan 13 21:12:31.801089 kernel: eth0: renamed from tmp36538 Jan 13 21:12:31.805745 systemd-networkd[1846]: lxcca46c48133f1: Gained carrier Jan 13 21:12:31.828037 (udev-worker)[4502]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:12:31.851009 systemd-networkd[1846]: lxc55b3f345b23c: Link UP Jan 13 21:12:31.856447 kernel: eth0: renamed from tmpd5fc4 Jan 13 21:12:31.861420 systemd-networkd[1846]: lxc55b3f345b23c: Gained carrier Jan 13 21:12:32.771610 systemd-networkd[1846]: lxc_health: Gained IPv6LL Jan 13 21:12:32.900940 systemd-networkd[1846]: lxc55b3f345b23c: Gained IPv6LL Jan 13 21:12:33.717795 systemd[1]: Started sshd@11-172.31.22.69:22-139.178.89.65:45564.service - OpenSSH per-connection server daemon (139.178.89.65:45564). Jan 13 21:12:33.795950 systemd-networkd[1846]: lxcca46c48133f1: Gained IPv6LL Jan 13 21:12:33.943293 sshd[4529]: Accepted publickey for core from 139.178.89.65 port 45564 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:33.946640 sshd[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:33.962364 systemd-logind[1908]: New session 12 of user core. Jan 13 21:12:33.970597 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:12:34.282696 sshd[4529]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:34.293618 systemd[1]: sshd@11-172.31.22.69:22-139.178.89.65:45564.service: Deactivated successfully. Jan 13 21:12:34.303483 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:12:34.317457 systemd-logind[1908]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:12:34.320855 systemd-logind[1908]: Removed session 12. 
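The interfaces brought up above acquire addresses next: in the ntpd entries just below, cilium_host is listening on 192.168.0.144, which sits inside the 192.168.0.0/24 pod CIDR kubelet pushed to the runtime at 21:11:55. A stdlib one-liner to confirm the containment:

import ipaddress
print(ipaddress.ip_address("192.168.0.144") in ipaddress.ip_network("192.168.0.0/24"))  # True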
Jan 13 21:12:36.788155 ntpd[1901]: Listen normally on 8 cilium_host 192.168.0.144:123 Jan 13 21:12:36.789049 ntpd[1901]: Listen normally on 9 cilium_net [fe80::4061:2bff:fedc:b43a%4]:123 Jan 13 21:12:36.789156 ntpd[1901]: Listen normally on 10 cilium_host [fe80::a4f1:d2ff:fed7:d7bd%5]:123 Jan 13 21:12:36.789806 ntpd[1901]: Listen normally on 11 cilium_vxlan [fe80::86c:bfff:fe14:abbc%6]:123 Jan 13 21:12:36.789970 ntpd[1901]: Listen normally on 12 lxc_health [fe80::642b:dcff:fe7e:8e2b%8]:123 Jan 13 21:12:36.790047 ntpd[1901]: Listen normally on 13 lxcca46c48133f1 [fe80::c07a:53ff:fe5d:8611%10]:123 Jan 13 21:12:36.790119 ntpd[1901]: Listen normally on 14 lxc55b3f345b23c [fe80::bce7:c4ff:fe8c:f0f2%12]:123 Jan 13 21:12:39.327719 systemd[1]: Started sshd@12-172.31.22.69:22-139.178.89.65:45570.service - OpenSSH per-connection server daemon (139.178.89.65:45570). Jan 13 21:12:39.524784 sshd[4551]: Accepted publickey for core from 139.178.89.65 port 45570 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:39.528312 sshd[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:39.537984 systemd-logind[1908]: New session 13 of user core. Jan 13 21:12:39.547833 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:12:39.850963 sshd[4551]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:39.863310 systemd[1]: sshd@12-172.31.22.69:22-139.178.89.65:45570.service: Deactivated successfully. Jan 13 21:12:39.871984 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:12:39.874098 systemd-logind[1908]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:12:39.877876 systemd-logind[1908]: Removed session 13. Jan 13 21:12:40.985939 containerd[1943]: time="2025-01-13T21:12:40.985697654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:12:40.989461 containerd[1943]: time="2025-01-13T21:12:40.988820018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:12:40.989461 containerd[1943]: time="2025-01-13T21:12:40.988877738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:40.989461 containerd[1943]: time="2025-01-13T21:12:40.989226890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:41.048224 systemd[1]: Started cri-containerd-36538e9b489551fd7c1cff00f42169ed5d7a14bc40fc9979870fe1dff3306228.scope - libcontainer container 36538e9b489551fd7c1cff00f42169ed5d7a14bc40fc9979870fe1dff3306228. Jan 13 21:12:41.123121 containerd[1943]: time="2025-01-13T21:12:41.122920679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:12:41.123121 containerd[1943]: time="2025-01-13T21:12:41.123036167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:12:41.124082 containerd[1943]: time="2025-01-13T21:12:41.123778823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:41.125161 containerd[1943]: time="2025-01-13T21:12:41.124646591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:41.190132 systemd[1]: Started cri-containerd-d5fc458933505dcdeb71a3b2d0bb019ade08bda76306aa82ef5df85f6da8da20.scope - libcontainer container d5fc458933505dcdeb71a3b2d0bb019ade08bda76306aa82ef5df85f6da8da20. Jan 13 21:12:41.228916 containerd[1943]: time="2025-01-13T21:12:41.228645612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hx8sp,Uid:125decff-4d16-4972-86ca-f81dbb906126,Namespace:kube-system,Attempt:0,} returns sandbox id \"36538e9b489551fd7c1cff00f42169ed5d7a14bc40fc9979870fe1dff3306228\"" Jan 13 21:12:41.241639 containerd[1943]: time="2025-01-13T21:12:41.241088988Z" level=info msg="CreateContainer within sandbox \"36538e9b489551fd7c1cff00f42169ed5d7a14bc40fc9979870fe1dff3306228\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:12:41.275266 containerd[1943]: time="2025-01-13T21:12:41.271619628Z" level=info msg="CreateContainer within sandbox \"36538e9b489551fd7c1cff00f42169ed5d7a14bc40fc9979870fe1dff3306228\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dbb5f75f978356c26a209d0e12ab2076f79ef8c929e397260276287323a1905a\"" Jan 13 21:12:41.277744 containerd[1943]: time="2025-01-13T21:12:41.275860008Z" level=info msg="StartContainer for \"dbb5f75f978356c26a209d0e12ab2076f79ef8c929e397260276287323a1905a\"" Jan 13 21:12:41.377582 systemd[1]: Started cri-containerd-dbb5f75f978356c26a209d0e12ab2076f79ef8c929e397260276287323a1905a.scope - libcontainer container dbb5f75f978356c26a209d0e12ab2076f79ef8c929e397260276287323a1905a. 
Jan 13 21:12:41.386985 containerd[1943]: time="2025-01-13T21:12:41.386861256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2cl29,Uid:1bb81c1e-cdda-45dd-ad96-bc24124a4ac6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5fc458933505dcdeb71a3b2d0bb019ade08bda76306aa82ef5df85f6da8da20\"" Jan 13 21:12:41.405091 containerd[1943]: time="2025-01-13T21:12:41.404873148Z" level=info msg="CreateContainer within sandbox \"d5fc458933505dcdeb71a3b2d0bb019ade08bda76306aa82ef5df85f6da8da20\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:12:41.428857 containerd[1943]: time="2025-01-13T21:12:41.428792617Z" level=info msg="CreateContainer within sandbox \"d5fc458933505dcdeb71a3b2d0bb019ade08bda76306aa82ef5df85f6da8da20\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf3d33dc51ae5ef97874a94897ea1b321643e9347ee7b38ae6e0d2397757b303\"" Jan 13 21:12:41.431786 containerd[1943]: time="2025-01-13T21:12:41.431387557Z" level=info msg="StartContainer for \"bf3d33dc51ae5ef97874a94897ea1b321643e9347ee7b38ae6e0d2397757b303\"" Jan 13 21:12:41.485035 containerd[1943]: time="2025-01-13T21:12:41.484897849Z" level=info msg="StartContainer for \"dbb5f75f978356c26a209d0e12ab2076f79ef8c929e397260276287323a1905a\" returns successfully" Jan 13 21:12:41.547568 systemd[1]: Started cri-containerd-bf3d33dc51ae5ef97874a94897ea1b321643e9347ee7b38ae6e0d2397757b303.scope - libcontainer container bf3d33dc51ae5ef97874a94897ea1b321643e9347ee7b38ae6e0d2397757b303. Jan 13 21:12:41.567479 kubelet[3311]: I0113 21:12:41.567410 3311 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hx8sp" podStartSLOduration=45.567317821 podStartE2EDuration="45.567317821s" podCreationTimestamp="2025-01-13 21:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:12:41.565858129 +0000 UTC m=+58.726633528" watchObservedRunningTime="2025-01-13 21:12:41.567317821 +0000 UTC m=+58.728093328" Jan 13 21:12:41.676842 containerd[1943]: time="2025-01-13T21:12:41.676767590Z" level=info msg="StartContainer for \"bf3d33dc51ae5ef97874a94897ea1b321643e9347ee7b38ae6e0d2397757b303\" returns successfully" Jan 13 21:12:42.595116 kubelet[3311]: I0113 21:12:42.594945 3311 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2cl29" podStartSLOduration=46.594786506 podStartE2EDuration="46.594786506s" podCreationTimestamp="2025-01-13 21:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:12:42.594669746 +0000 UTC m=+59.755445181" watchObservedRunningTime="2025-01-13 21:12:42.594786506 +0000 UTC m=+59.755562337" Jan 13 21:12:44.892895 systemd[1]: Started sshd@13-172.31.22.69:22-139.178.89.65:40832.service - OpenSSH per-connection server daemon (139.178.89.65:40832). Jan 13 21:12:45.071848 sshd[4737]: Accepted publickey for core from 139.178.89.65 port 40832 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:45.074981 sshd[4737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:45.088399 systemd-logind[1908]: New session 14 of user core. Jan 13 21:12:45.097782 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 13 21:12:45.367497 sshd[4737]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:45.378187 systemd[1]: sshd@13-172.31.22.69:22-139.178.89.65:40832.service: Deactivated successfully. Jan 13 21:12:45.384326 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:12:45.387619 systemd-logind[1908]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:12:45.392031 systemd-logind[1908]: Removed session 14. Jan 13 21:12:50.421007 systemd[1]: Started sshd@14-172.31.22.69:22-139.178.89.65:40846.service - OpenSSH per-connection server daemon (139.178.89.65:40846). Jan 13 21:12:50.602026 sshd[4751]: Accepted publickey for core from 139.178.89.65 port 40846 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:50.605773 sshd[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:50.617560 systemd-logind[1908]: New session 15 of user core. Jan 13 21:12:50.623651 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:12:50.882973 sshd[4751]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:50.890085 systemd-logind[1908]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:12:50.890602 systemd[1]: sshd@14-172.31.22.69:22-139.178.89.65:40846.service: Deactivated successfully. Jan 13 21:12:50.895369 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:12:50.899839 systemd-logind[1908]: Removed session 15. Jan 13 21:12:50.920806 systemd[1]: Started sshd@15-172.31.22.69:22-139.178.89.65:40854.service - OpenSSH per-connection server daemon (139.178.89.65:40854). Jan 13 21:12:51.108990 sshd[4765]: Accepted publickey for core from 139.178.89.65 port 40854 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:51.112062 sshd[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:51.120591 systemd-logind[1908]: New session 16 of user core. Jan 13 21:12:51.135520 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:12:51.477198 sshd[4765]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:51.495117 systemd[1]: sshd@15-172.31.22.69:22-139.178.89.65:40854.service: Deactivated successfully. Jan 13 21:12:51.504044 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:12:51.508794 systemd-logind[1908]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:12:51.529865 systemd[1]: Started sshd@16-172.31.22.69:22-139.178.89.65:40098.service - OpenSSH per-connection server daemon (139.178.89.65:40098). Jan 13 21:12:51.532854 systemd-logind[1908]: Removed session 16. Jan 13 21:12:51.717227 sshd[4776]: Accepted publickey for core from 139.178.89.65 port 40098 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:51.720501 sshd[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:51.730442 systemd-logind[1908]: New session 17 of user core. Jan 13 21:12:51.737635 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:12:52.009312 sshd[4776]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:52.015815 systemd-logind[1908]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:12:52.018599 systemd[1]: sshd@16-172.31.22.69:22-139.178.89.65:40098.service: Deactivated successfully. Jan 13 21:12:52.023877 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:12:52.026020 systemd-logind[1908]: Removed session 17. 
Jan 13 21:12:57.050822 systemd[1]: Started sshd@17-172.31.22.69:22-139.178.89.65:40100.service - OpenSSH per-connection server daemon (139.178.89.65:40100). Jan 13 21:12:57.229284 sshd[4791]: Accepted publickey for core from 139.178.89.65 port 40100 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:57.232734 sshd[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:57.242455 systemd-logind[1908]: New session 18 of user core. Jan 13 21:12:57.248998 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:12:57.511170 sshd[4791]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:57.517517 systemd[1]: sshd@17-172.31.22.69:22-139.178.89.65:40100.service: Deactivated successfully. Jan 13 21:12:57.523058 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:12:57.529361 systemd-logind[1908]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:12:57.531427 systemd-logind[1908]: Removed session 18. Jan 13 21:13:02.550774 systemd[1]: Started sshd@18-172.31.22.69:22-139.178.89.65:38974.service - OpenSSH per-connection server daemon (139.178.89.65:38974). Jan 13 21:13:02.728436 sshd[4809]: Accepted publickey for core from 139.178.89.65 port 38974 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:02.731119 sshd[4809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:02.739522 systemd-logind[1908]: New session 19 of user core. Jan 13 21:13:02.744533 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:13:02.991501 sshd[4809]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:02.997397 systemd-logind[1908]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:13:02.997997 systemd[1]: sshd@18-172.31.22.69:22-139.178.89.65:38974.service: Deactivated successfully. Jan 13 21:13:03.005086 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:13:03.012013 systemd-logind[1908]: Removed session 19. Jan 13 21:13:08.036784 systemd[1]: Started sshd@19-172.31.22.69:22-139.178.89.65:38978.service - OpenSSH per-connection server daemon (139.178.89.65:38978). Jan 13 21:13:08.224774 sshd[4823]: Accepted publickey for core from 139.178.89.65 port 38978 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:08.228538 sshd[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:08.237258 systemd-logind[1908]: New session 20 of user core. Jan 13 21:13:08.243537 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:13:08.494630 sshd[4823]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:08.501596 systemd[1]: sshd@19-172.31.22.69:22-139.178.89.65:38978.service: Deactivated successfully. Jan 13 21:13:08.506079 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:13:08.508754 systemd-logind[1908]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:13:08.512119 systemd-logind[1908]: Removed session 20. Jan 13 21:13:08.538777 systemd[1]: Started sshd@20-172.31.22.69:22-139.178.89.65:38986.service - OpenSSH per-connection server daemon (139.178.89.65:38986). 
Jan 13 21:13:08.724328 sshd[4836]: Accepted publickey for core from 139.178.89.65 port 38986 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:08.726971 sshd[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:08.734656 systemd-logind[1908]: New session 21 of user core. Jan 13 21:13:08.742505 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:13:09.043523 sshd[4836]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:09.050809 systemd[1]: sshd@20-172.31.22.69:22-139.178.89.65:38986.service: Deactivated successfully. Jan 13 21:13:09.056114 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:13:09.058558 systemd-logind[1908]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:13:09.062010 systemd-logind[1908]: Removed session 21. Jan 13 21:13:09.087894 systemd[1]: Started sshd@21-172.31.22.69:22-139.178.89.65:38994.service - OpenSSH per-connection server daemon (139.178.89.65:38994). Jan 13 21:13:09.279151 sshd[4847]: Accepted publickey for core from 139.178.89.65 port 38994 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:09.282176 sshd[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:09.292136 systemd-logind[1908]: New session 22 of user core. Jan 13 21:13:09.300607 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 21:13:11.898737 sshd[4847]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:11.909180 systemd[1]: sshd@21-172.31.22.69:22-139.178.89.65:38994.service: Deactivated successfully. Jan 13 21:13:11.918148 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:13:11.927598 systemd-logind[1908]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:13:11.955818 systemd[1]: Started sshd@22-172.31.22.69:22-139.178.89.65:56406.service - OpenSSH per-connection server daemon (139.178.89.65:56406). Jan 13 21:13:11.961915 systemd-logind[1908]: Removed session 22. Jan 13 21:13:12.150281 sshd[4864]: Accepted publickey for core from 139.178.89.65 port 56406 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:12.153823 sshd[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:12.161429 systemd-logind[1908]: New session 23 of user core. Jan 13 21:13:12.170510 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:13:12.688510 sshd[4864]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:12.696301 systemd[1]: sshd@22-172.31.22.69:22-139.178.89.65:56406.service: Deactivated successfully. Jan 13 21:13:12.699850 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:13:12.703813 systemd-logind[1908]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:13:12.705859 systemd-logind[1908]: Removed session 23. Jan 13 21:13:12.728370 systemd[1]: Started sshd@23-172.31.22.69:22-139.178.89.65:56416.service - OpenSSH per-connection server daemon (139.178.89.65:56416). Jan 13 21:13:12.907903 sshd[4876]: Accepted publickey for core from 139.178.89.65 port 56416 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:12.912599 sshd[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:12.921942 systemd-logind[1908]: New session 24 of user core. Jan 13 21:13:12.930515 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 13 21:13:13.168061 sshd[4876]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:13.174969 systemd[1]: sshd@23-172.31.22.69:22-139.178.89.65:56416.service: Deactivated successfully. Jan 13 21:13:13.178820 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:13:13.185043 systemd-logind[1908]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:13:13.187800 systemd-logind[1908]: Removed session 24. Jan 13 21:13:18.211771 systemd[1]: Started sshd@24-172.31.22.69:22-139.178.89.65:56418.service - OpenSSH per-connection server daemon (139.178.89.65:56418). Jan 13 21:13:18.391357 sshd[4888]: Accepted publickey for core from 139.178.89.65 port 56418 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:18.393790 sshd[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:18.404642 systemd-logind[1908]: New session 25 of user core. Jan 13 21:13:18.410504 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 21:13:18.659414 sshd[4888]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:18.665780 systemd[1]: sshd@24-172.31.22.69:22-139.178.89.65:56418.service: Deactivated successfully. Jan 13 21:13:18.669061 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 21:13:18.673498 systemd-logind[1908]: Session 25 logged out. Waiting for processes to exit. Jan 13 21:13:18.676789 systemd-logind[1908]: Removed session 25. Jan 13 21:13:23.699796 systemd[1]: Started sshd@25-172.31.22.69:22-139.178.89.65:33408.service - OpenSSH per-connection server daemon (139.178.89.65:33408). Jan 13 21:13:23.884045 sshd[4903]: Accepted publickey for core from 139.178.89.65 port 33408 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:23.885201 sshd[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:23.898585 systemd-logind[1908]: New session 26 of user core. Jan 13 21:13:23.902867 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 21:13:24.168154 sshd[4903]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:24.175794 systemd[1]: sshd@25-172.31.22.69:22-139.178.89.65:33408.service: Deactivated successfully. Jan 13 21:13:24.180768 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 21:13:24.183342 systemd-logind[1908]: Session 26 logged out. Waiting for processes to exit. Jan 13 21:13:24.186562 systemd-logind[1908]: Removed session 26. Jan 13 21:13:29.212824 systemd[1]: Started sshd@26-172.31.22.69:22-139.178.89.65:33412.service - OpenSSH per-connection server daemon (139.178.89.65:33412). Jan 13 21:13:29.393309 sshd[4918]: Accepted publickey for core from 139.178.89.65 port 33412 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:29.396335 sshd[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:29.404432 systemd-logind[1908]: New session 27 of user core. Jan 13 21:13:29.416565 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 21:13:29.659622 sshd[4918]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:29.668878 systemd[1]: sshd@26-172.31.22.69:22-139.178.89.65:33412.service: Deactivated successfully. Jan 13 21:13:29.675295 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 21:13:29.678502 systemd-logind[1908]: Session 27 logged out. Waiting for processes to exit. Jan 13 21:13:29.682042 systemd-logind[1908]: Removed session 27. 
Jan 13 21:13:34.702799 systemd[1]: Started sshd@27-172.31.22.69:22-139.178.89.65:56794.service - OpenSSH per-connection server daemon (139.178.89.65:56794). Jan 13 21:13:34.883098 sshd[4931]: Accepted publickey for core from 139.178.89.65 port 56794 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:34.886143 sshd[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:34.894072 systemd-logind[1908]: New session 28 of user core. Jan 13 21:13:34.903556 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 13 21:13:35.153677 sshd[4931]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:35.161645 systemd-logind[1908]: Session 28 logged out. Waiting for processes to exit. Jan 13 21:13:35.162360 systemd[1]: sshd@27-172.31.22.69:22-139.178.89.65:56794.service: Deactivated successfully. Jan 13 21:13:35.167020 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 21:13:35.169580 systemd-logind[1908]: Removed session 28. Jan 13 21:13:35.201693 systemd[1]: Started sshd@28-172.31.22.69:22-139.178.89.65:56806.service - OpenSSH per-connection server daemon (139.178.89.65:56806). Jan 13 21:13:35.386795 sshd[4943]: Accepted publickey for core from 139.178.89.65 port 56806 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:35.389564 sshd[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:35.398577 systemd-logind[1908]: New session 29 of user core. Jan 13 21:13:35.403525 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 13 21:13:38.631937 containerd[1943]: time="2025-01-13T21:13:38.631795377Z" level=info msg="StopContainer for \"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11\" with timeout 30 (s)" Jan 13 21:13:38.634476 containerd[1943]: time="2025-01-13T21:13:38.633486621Z" level=info msg="Stop container \"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11\" with signal terminated" Jan 13 21:13:38.666150 systemd[1]: cri-containerd-13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11.scope: Deactivated successfully. Jan 13 21:13:38.689908 containerd[1943]: time="2025-01-13T21:13:38.689732001Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:13:38.712322 containerd[1943]: time="2025-01-13T21:13:38.711618597Z" level=info msg="StopContainer for \"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee\" with timeout 2 (s)" Jan 13 21:13:38.712653 containerd[1943]: time="2025-01-13T21:13:38.712571121Z" level=info msg="Stop container \"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee\" with signal terminated" Jan 13 21:13:38.735049 systemd-networkd[1846]: lxc_health: Link DOWN Jan 13 21:13:38.735063 systemd-networkd[1846]: lxc_health: Lost carrier Jan 13 21:13:38.755653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11-rootfs.mount: Deactivated successfully. Jan 13 21:13:38.776336 systemd[1]: cri-containerd-3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee.scope: Deactivated successfully. Jan 13 21:13:38.776792 systemd[1]: cri-containerd-3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee.scope: Consumed 15.373s CPU time. 
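The containerd entries above show the two-phase container stop: the CRI plugin delivers the configured stop signal ("Stop container ... with signal terminated", under the logged "timeout 30 (s)") and escalates to SIGKILL if the task has not exited in time, after which systemd reaps the per-container cgroup scope ("cri-containerd-....scope: Deactivated successfully", here accounting for the 15.373s of CPU time the container had consumed). Below is a hedged sketch of the same stop pattern against the public containerd Go client; the socket path and the "k8s.io" namespace are the usual Kubernetes defaults, assumed rather than taken from this log, and this is not the kubelet/CRI code itself:

```go
// stop_task.go: hedged sketch of the SIGTERM-then-SIGKILL stop sequence
// visible in the log, written against containerd's public Go client.
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Container ID taken from the log lines above.
	cont, err := client.LoadContainer(ctx, "13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11")
	if err != nil {
		log.Fatal(err)
	}
	task, err := cont.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	select {
	case st := <-exitCh:
		log.Printf("exited: code=%d", st.ExitCode())
	case <-time.After(30 * time.Second): // the "timeout 30 (s)" in the log
		_ = task.Kill(ctx, syscall.SIGKILL) // escalate, as the CRI plugin does
	}
}
```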
Jan 13 21:13:38.788426 containerd[1943]: time="2025-01-13T21:13:38.787461969Z" level=info msg="shim disconnected" id=13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11 namespace=k8s.io Jan 13 21:13:38.788426 containerd[1943]: time="2025-01-13T21:13:38.787871673Z" level=warning msg="cleaning up after shim disconnected" id=13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11 namespace=k8s.io Jan 13 21:13:38.788426 containerd[1943]: time="2025-01-13T21:13:38.787897353Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:13:38.833026 containerd[1943]: time="2025-01-13T21:13:38.832383874Z" level=info msg="StopContainer for \"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11\" returns successfully" Jan 13 21:13:38.835459 containerd[1943]: time="2025-01-13T21:13:38.834236362Z" level=info msg="StopPodSandbox for \"fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399\"" Jan 13 21:13:38.835737 containerd[1943]: time="2025-01-13T21:13:38.835577314Z" level=info msg="Container to stop \"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:13:38.839292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee-rootfs.mount: Deactivated successfully. Jan 13 21:13:38.847746 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399-shm.mount: Deactivated successfully. Jan 13 21:13:38.859157 containerd[1943]: time="2025-01-13T21:13:38.858828586Z" level=info msg="shim disconnected" id=3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee namespace=k8s.io Jan 13 21:13:38.859157 containerd[1943]: time="2025-01-13T21:13:38.858906754Z" level=warning msg="cleaning up after shim disconnected" id=3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee namespace=k8s.io Jan 13 21:13:38.859157 containerd[1943]: time="2025-01-13T21:13:38.858928018Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:13:38.861446 systemd[1]: cri-containerd-fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399.scope: Deactivated successfully. 
Jan 13 21:13:38.904944 containerd[1943]: time="2025-01-13T21:13:38.903554854Z" level=info msg="StopContainer for \"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee\" returns successfully" Jan 13 21:13:38.907118 containerd[1943]: time="2025-01-13T21:13:38.907046902Z" level=info msg="StopPodSandbox for \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\"" Jan 13 21:13:38.907345 containerd[1943]: time="2025-01-13T21:13:38.907156570Z" level=info msg="Container to stop \"0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:13:38.907345 containerd[1943]: time="2025-01-13T21:13:38.907188286Z" level=info msg="Container to stop \"17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:13:38.907345 containerd[1943]: time="2025-01-13T21:13:38.907304158Z" level=info msg="Container to stop \"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:13:38.907345 containerd[1943]: time="2025-01-13T21:13:38.907330558Z" level=info msg="Container to stop \"713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:13:38.908390 containerd[1943]: time="2025-01-13T21:13:38.907352986Z" level=info msg="Container to stop \"0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:13:38.930930 systemd[1]: cri-containerd-c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e.scope: Deactivated successfully. 
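The repeated "Container to stop ... must be in running or unknown state, current state CONTAINER_EXITED" messages are informational rather than errors: before StopPodSandbox tears a sandbox down, the runtime checks that every container inside it has already left the running state, and these five had. The operation behind the block is a single CRI RPC; a hedged sketch of issuing it directly over the containerd socket (socket path assumed, sandbox ID taken from the log):

```go
// stop_sandbox.go: hedged sketch calling CRI StopPodSandbox directly,
// the RPC kubelet drives behind the log lines above.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Sandbox ID from the StopPodSandbox entry above.
	_, err = rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: "c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e",
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("sandbox stopped")
}
```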
Jan 13 21:13:38.945311 containerd[1943]: time="2025-01-13T21:13:38.943198426Z" level=info msg="shim disconnected" id=fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399 namespace=k8s.io Jan 13 21:13:38.945311 containerd[1943]: time="2025-01-13T21:13:38.944406298Z" level=warning msg="cleaning up after shim disconnected" id=fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399 namespace=k8s.io Jan 13 21:13:38.945311 containerd[1943]: time="2025-01-13T21:13:38.944593462Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:13:38.981583 containerd[1943]: time="2025-01-13T21:13:38.981491158Z" level=info msg="TearDown network for sandbox \"fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399\" successfully" Jan 13 21:13:38.981583 containerd[1943]: time="2025-01-13T21:13:38.981570430Z" level=info msg="StopPodSandbox for \"fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399\" returns successfully" Jan 13 21:13:38.990163 containerd[1943]: time="2025-01-13T21:13:38.988050490Z" level=info msg="shim disconnected" id=c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e namespace=k8s.io Jan 13 21:13:38.990163 containerd[1943]: time="2025-01-13T21:13:38.988142170Z" level=warning msg="cleaning up after shim disconnected" id=c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e namespace=k8s.io Jan 13 21:13:38.990163 containerd[1943]: time="2025-01-13T21:13:38.988163854Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:13:39.022511 containerd[1943]: time="2025-01-13T21:13:39.022447471Z" level=info msg="TearDown network for sandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" successfully" Jan 13 21:13:39.022511 containerd[1943]: time="2025-01-13T21:13:39.022501651Z" level=info msg="StopPodSandbox for \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" returns successfully" Jan 13 21:13:39.037384 kubelet[3311]: I0113 21:13:39.037177 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb49c\" (UniqueName: \"kubernetes.io/projected/f6ec314f-a219-44ac-86c3-1313601fb2d1-kube-api-access-vb49c\") pod \"f6ec314f-a219-44ac-86c3-1313601fb2d1\" (UID: \"f6ec314f-a219-44ac-86c3-1313601fb2d1\") " Jan 13 21:13:39.037384 kubelet[3311]: I0113 21:13:39.037305 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6ec314f-a219-44ac-86c3-1313601fb2d1-cilium-config-path\") pod \"f6ec314f-a219-44ac-86c3-1313601fb2d1\" (UID: \"f6ec314f-a219-44ac-86c3-1313601fb2d1\") " Jan 13 21:13:39.046465 kubelet[3311]: I0113 21:13:39.046027 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ec314f-a219-44ac-86c3-1313601fb2d1-kube-api-access-vb49c" (OuterVolumeSpecName: "kube-api-access-vb49c") pod "f6ec314f-a219-44ac-86c3-1313601fb2d1" (UID: "f6ec314f-a219-44ac-86c3-1313601fb2d1"). InnerVolumeSpecName "kube-api-access-vb49c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:13:39.050784 kubelet[3311]: I0113 21:13:39.050696 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6ec314f-a219-44ac-86c3-1313601fb2d1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f6ec314f-a219-44ac-86c3-1313601fb2d1" (UID: "f6ec314f-a219-44ac-86c3-1313601fb2d1"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:13:39.096520 systemd[1]: Removed slice kubepods-besteffort-podf6ec314f_a219_44ac_86c3_1313601fb2d1.slice - libcontainer container kubepods-besteffort-podf6ec314f_a219_44ac_86c3_1313601fb2d1.slice. Jan 13 21:13:39.138392 kubelet[3311]: I0113 21:13:39.137787 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-xtables-lock\") pod \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " Jan 13 21:13:39.138392 kubelet[3311]: I0113 21:13:39.137857 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-hostproc\") pod \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " Jan 13 21:13:39.138392 kubelet[3311]: I0113 21:13:39.137860 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" (UID: "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.138392 kubelet[3311]: I0113 21:13:39.137899 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cilium-run\") pod \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " Jan 13 21:13:39.138392 kubelet[3311]: I0113 21:13:39.137929 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-hostproc" (OuterVolumeSpecName: "hostproc") pod "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" (UID: "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.138392 kubelet[3311]: I0113 21:13:39.137950 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-clustermesh-secrets\") pod \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " Jan 13 21:13:39.139138 kubelet[3311]: I0113 21:13:39.137970 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" (UID: "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.139138 kubelet[3311]: I0113 21:13:39.137994 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-host-proc-sys-kernel\") pod \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " Jan 13 21:13:39.139138 kubelet[3311]: I0113 21:13:39.138043 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cilium-config-path\") pod \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " Jan 13 21:13:39.139138 kubelet[3311]: I0113 21:13:39.138082 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-etc-cni-netd\") pod \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " Jan 13 21:13:39.139138 kubelet[3311]: I0113 21:13:39.138119 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-lib-modules\") pod \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " Jan 13 21:13:39.139138 kubelet[3311]: I0113 21:13:39.138159 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-host-proc-sys-net\") pod \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " Jan 13 21:13:39.139983 kubelet[3311]: I0113 21:13:39.138198 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-bpf-maps\") pod \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " Jan 13 21:13:39.139983 kubelet[3311]: I0113 21:13:39.138319 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cni-path\") pod \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " Jan 13 21:13:39.139983 kubelet[3311]: I0113 21:13:39.138369 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cilium-cgroup\") pod \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " Jan 13 21:13:39.139983 kubelet[3311]: I0113 21:13:39.138420 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw2kq\" (UniqueName: \"kubernetes.io/projected/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-kube-api-access-hw2kq\") pod \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\" (UID: \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " Jan 13 21:13:39.139983 kubelet[3311]: I0113 21:13:39.138468 3311 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-hubble-tls\") pod \"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\" (UID: 
\"575b19ff-95b7-4f56-b6b6-bfb62aaddc3a\") " Jan 13 21:13:39.139983 kubelet[3311]: I0113 21:13:39.138547 3311 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cilium-run\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.139983 kubelet[3311]: I0113 21:13:39.138576 3311 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-xtables-lock\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.140780 kubelet[3311]: I0113 21:13:39.138600 3311 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-hostproc\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.140780 kubelet[3311]: I0113 21:13:39.138630 3311 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6ec314f-a219-44ac-86c3-1313601fb2d1-cilium-config-path\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.140780 kubelet[3311]: I0113 21:13:39.138656 3311 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vb49c\" (UniqueName: \"kubernetes.io/projected/f6ec314f-a219-44ac-86c3-1313601fb2d1-kube-api-access-vb49c\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.142103 kubelet[3311]: I0113 21:13:39.141551 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" (UID: "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.142370 kubelet[3311]: I0113 21:13:39.142327 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" (UID: "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.143791 kubelet[3311]: I0113 21:13:39.142475 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" (UID: "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.144113 kubelet[3311]: I0113 21:13:39.142506 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" (UID: "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.144113 kubelet[3311]: I0113 21:13:39.142564 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" (UID: "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a"). 
InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.144113 kubelet[3311]: I0113 21:13:39.142592 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cni-path" (OuterVolumeSpecName: "cni-path") pod "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" (UID: "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.144113 kubelet[3311]: I0113 21:13:39.142655 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" (UID: "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.146419 kubelet[3311]: I0113 21:13:39.146356 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" (UID: "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:13:39.152429 kubelet[3311]: I0113 21:13:39.152337 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" (UID: "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:13:39.153656 kubelet[3311]: I0113 21:13:39.153586 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-kube-api-access-hw2kq" (OuterVolumeSpecName: "kube-api-access-hw2kq") pod "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" (UID: "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a"). InnerVolumeSpecName "kube-api-access-hw2kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:13:39.153780 kubelet[3311]: I0113 21:13:39.153709 3311 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" (UID: "575b19ff-95b7-4f56-b6b6-bfb62aaddc3a"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:13:39.239769 kubelet[3311]: I0113 21:13:39.239569 3311 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-host-proc-sys-net\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.239769 kubelet[3311]: I0113 21:13:39.239654 3311 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-bpf-maps\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.239769 kubelet[3311]: I0113 21:13:39.239683 3311 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cni-path\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.239769 kubelet[3311]: I0113 21:13:39.239708 3311 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cilium-cgroup\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.239769 kubelet[3311]: I0113 21:13:39.239738 3311 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hw2kq\" (UniqueName: \"kubernetes.io/projected/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-kube-api-access-hw2kq\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.239769 kubelet[3311]: I0113 21:13:39.239763 3311 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-hubble-tls\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.240165 kubelet[3311]: I0113 21:13:39.239790 3311 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-clustermesh-secrets\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.240165 kubelet[3311]: I0113 21:13:39.239814 3311 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-host-proc-sys-kernel\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.240165 kubelet[3311]: I0113 21:13:39.239840 3311 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-cilium-config-path\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.240165 kubelet[3311]: I0113 21:13:39.239865 3311 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-etc-cni-netd\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.240165 kubelet[3311]: I0113 21:13:39.239888 3311 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a-lib-modules\") on node \"ip-172-31-22-69\" DevicePath \"\"" Jan 13 21:13:39.638741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399-rootfs.mount: Deactivated successfully. Jan 13 21:13:39.638936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e-rootfs.mount: Deactivated successfully. 
Jan 13 21:13:39.639068 systemd[1]: var-lib-kubelet-pods-f6ec314f\x2da219\x2d44ac\x2d86c3\x2d1313601fb2d1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvb49c.mount: Deactivated successfully. Jan 13 21:13:39.639204 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e-shm.mount: Deactivated successfully. Jan 13 21:13:39.639401 systemd[1]: var-lib-kubelet-pods-575b19ff\x2d95b7\x2d4f56\x2db6b6\x2dbfb62aaddc3a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhw2kq.mount: Deactivated successfully. Jan 13 21:13:39.639536 systemd[1]: var-lib-kubelet-pods-575b19ff\x2d95b7\x2d4f56\x2db6b6\x2dbfb62aaddc3a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 21:13:39.639696 systemd[1]: var-lib-kubelet-pods-575b19ff\x2d95b7\x2d4f56\x2db6b6\x2dbfb62aaddc3a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:13:39.696333 kubelet[3311]: I0113 21:13:39.695171 3311 scope.go:117] "RemoveContainer" containerID="3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee" Jan 13 21:13:39.701092 containerd[1943]: time="2025-01-13T21:13:39.700898650Z" level=info msg="RemoveContainer for \"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee\"" Jan 13 21:13:39.715404 containerd[1943]: time="2025-01-13T21:13:39.715088470Z" level=info msg="RemoveContainer for \"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee\" returns successfully" Jan 13 21:13:39.715922 kubelet[3311]: I0113 21:13:39.715691 3311 scope.go:117] "RemoveContainer" containerID="17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e" Jan 13 21:13:39.718063 systemd[1]: Removed slice kubepods-burstable-pod575b19ff_95b7_4f56_b6b6_bfb62aaddc3a.slice - libcontainer container kubepods-burstable-pod575b19ff_95b7_4f56_b6b6_bfb62aaddc3a.slice. Jan 13 21:13:39.718428 systemd[1]: kubepods-burstable-pod575b19ff_95b7_4f56_b6b6_bfb62aaddc3a.slice: Consumed 15.531s CPU time. 
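The mount unit names in this block ("var-lib-kubelet-pods-575b19ff\x2d95b7\x2d...") are systemd's path escaping at work: "/" becomes "-", and any byte outside [A-Za-z0-9:_.] is spelled \xNN, which is why the literal hyphens in the pod UID appear as \x2d and the "~" in "kubernetes.io~projected" appears as \x7e. A simplified sketch of the transform (the full rules behind systemd-escape --path also cover leading dots and the empty path, which this ignores):

```go
// unit_escape.go: hedged, simplified systemd path escaping, enough to
// reproduce the .mount unit names in the journal lines above.
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default: // '-', '~', and anything else -> \xNN
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	path := "/var/lib/kubelet/pods/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a/volumes/kubernetes.io~projected/hubble-tls"
	fmt.Println(escapePath(path) + ".mount")
	// -> var-lib-kubelet-pods-575b19ff\x2d95b7\x2d4f56\x2db6b6\x2dbfb62aaddc3a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount
}
```

The printed name matches the hubble-tls mount unit deactivated in the log above byte for byte.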
Jan 13 21:13:39.722622 containerd[1943]: time="2025-01-13T21:13:39.721931518Z" level=info msg="RemoveContainer for \"17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e\"" Jan 13 21:13:39.731118 containerd[1943]: time="2025-01-13T21:13:39.731035306Z" level=info msg="RemoveContainer for \"17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e\" returns successfully" Jan 13 21:13:39.731570 kubelet[3311]: I0113 21:13:39.731525 3311 scope.go:117] "RemoveContainer" containerID="0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf" Jan 13 21:13:39.736269 containerd[1943]: time="2025-01-13T21:13:39.735718810Z" level=info msg="RemoveContainer for \"0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf\"" Jan 13 21:13:39.752641 containerd[1943]: time="2025-01-13T21:13:39.752486074Z" level=info msg="RemoveContainer for \"0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf\" returns successfully" Jan 13 21:13:39.754214 kubelet[3311]: I0113 21:13:39.753888 3311 scope.go:117] "RemoveContainer" containerID="0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6" Jan 13 21:13:39.761236 containerd[1943]: time="2025-01-13T21:13:39.760676158Z" level=info msg="RemoveContainer for \"0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6\"" Jan 13 21:13:39.767787 containerd[1943]: time="2025-01-13T21:13:39.767724094Z" level=info msg="RemoveContainer for \"0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6\" returns successfully" Jan 13 21:13:39.768650 kubelet[3311]: I0113 21:13:39.768527 3311 scope.go:117] "RemoveContainer" containerID="713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893" Jan 13 21:13:39.772450 containerd[1943]: time="2025-01-13T21:13:39.772399918Z" level=info msg="RemoveContainer for \"713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893\"" Jan 13 21:13:39.780446 containerd[1943]: time="2025-01-13T21:13:39.780372538Z" level=info msg="RemoveContainer for \"713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893\" returns successfully" Jan 13 21:13:39.781115 kubelet[3311]: I0113 21:13:39.780940 3311 scope.go:117] "RemoveContainer" containerID="3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee" Jan 13 21:13:39.781544 containerd[1943]: time="2025-01-13T21:13:39.781453354Z" level=error msg="ContainerStatus for \"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee\": not found" Jan 13 21:13:39.781974 kubelet[3311]: E0113 21:13:39.781923 3311 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee\": not found" containerID="3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee" Jan 13 21:13:39.782598 kubelet[3311]: I0113 21:13:39.782534 3311 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee"} err="failed to get container status \"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e9c67594eae2a9cb3408e1184ce778d9d580fca36498320e447e20e684c9dee\": not found" Jan 13 21:13:39.782728 kubelet[3311]: I0113 
21:13:39.782606 3311 scope.go:117] "RemoveContainer" containerID="17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e" Jan 13 21:13:39.783507 containerd[1943]: time="2025-01-13T21:13:39.783264286Z" level=error msg="ContainerStatus for \"17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e\": not found" Jan 13 21:13:39.784115 kubelet[3311]: E0113 21:13:39.783938 3311 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e\": not found" containerID="17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e" Jan 13 21:13:39.784115 kubelet[3311]: I0113 21:13:39.784002 3311 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e"} err="failed to get container status \"17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e\": rpc error: code = NotFound desc = an error occurred when try to find container \"17d48d2d419a85ca3d811c94ef05913d316deb88130307e70315ae48d352354e\": not found" Jan 13 21:13:39.784115 kubelet[3311]: I0113 21:13:39.784027 3311 scope.go:117] "RemoveContainer" containerID="0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf" Jan 13 21:13:39.784834 containerd[1943]: time="2025-01-13T21:13:39.784706410Z" level=error msg="ContainerStatus for \"0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf\": not found" Jan 13 21:13:39.784986 kubelet[3311]: E0113 21:13:39.784958 3311 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf\": not found" containerID="0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf" Jan 13 21:13:39.785048 kubelet[3311]: I0113 21:13:39.785013 3311 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf"} err="failed to get container status \"0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf\": rpc error: code = NotFound desc = an error occurred when try to find container \"0037195b527b35b840a0090976dbf1b5ed28ec78d655a4afa955bdac9fb27ccf\": not found" Jan 13 21:13:39.785048 kubelet[3311]: I0113 21:13:39.785039 3311 scope.go:117] "RemoveContainer" containerID="0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6" Jan 13 21:13:39.785691 containerd[1943]: time="2025-01-13T21:13:39.785574946Z" level=error msg="ContainerStatus for \"0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6\": not found" Jan 13 21:13:39.785911 kubelet[3311]: E0113 21:13:39.785879 3311 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6\": not found" containerID="0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6" Jan 13 21:13:39.785999 kubelet[3311]: I0113 21:13:39.785934 3311 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6"} err="failed to get container status \"0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6\": rpc error: code = NotFound desc = an error occurred when try to find container \"0980cb8d1e92a19d5df9202bcd447a220d2a8ac10953d2f9b4362cad0107fdf6\": not found" Jan 13 21:13:39.785999 kubelet[3311]: I0113 21:13:39.785991 3311 scope.go:117] "RemoveContainer" containerID="713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893" Jan 13 21:13:39.786562 containerd[1943]: time="2025-01-13T21:13:39.786452674Z" level=error msg="ContainerStatus for \"713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893\": not found" Jan 13 21:13:39.786746 kubelet[3311]: E0113 21:13:39.786675 3311 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893\": not found" containerID="713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893" Jan 13 21:13:39.786746 kubelet[3311]: I0113 21:13:39.786740 3311 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893"} err="failed to get container status \"713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893\": rpc error: code = NotFound desc = an error occurred when try to find container \"713aaea5de1d2b0dacd0ad354093a1c387fca53f98e5d8de04bd7d8de8ec0893\": not found" Jan 13 21:13:39.786931 kubelet[3311]: I0113 21:13:39.786764 3311 scope.go:117] "RemoveContainer" containerID="13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11" Jan 13 21:13:39.789024 containerd[1943]: time="2025-01-13T21:13:39.788850262Z" level=info msg="RemoveContainer for \"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11\"" Jan 13 21:13:39.795577 containerd[1943]: time="2025-01-13T21:13:39.795444238Z" level=info msg="RemoveContainer for \"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11\" returns successfully" Jan 13 21:13:39.795975 kubelet[3311]: I0113 21:13:39.795795 3311 scope.go:117] "RemoveContainer" containerID="13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11" Jan 13 21:13:39.796436 containerd[1943]: time="2025-01-13T21:13:39.796280878Z" level=error msg="ContainerStatus for \"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11\": not found" Jan 13 21:13:39.796696 kubelet[3311]: E0113 21:13:39.796670 3311 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11\": not found" containerID="13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11" 
Jan 13 21:13:39.797205 kubelet[3311]: I0113 21:13:39.796821 3311 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11"} err="failed to get container status \"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11\": rpc error: code = NotFound desc = an error occurred when try to find container \"13ffc22640ec16399b3cfbfe2372c5f8b52aabbf595885225cf3f7de22465c11\": not found" Jan 13 21:13:40.547843 sshd[4943]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:40.555693 systemd[1]: sshd@28-172.31.22.69:22-139.178.89.65:56806.service: Deactivated successfully. Jan 13 21:13:40.558991 systemd[1]: session-29.scope: Deactivated successfully. Jan 13 21:13:40.560469 systemd[1]: session-29.scope: Consumed 2.448s CPU time. Jan 13 21:13:40.563493 systemd-logind[1908]: Session 29 logged out. Waiting for processes to exit. Jan 13 21:13:40.566290 systemd-logind[1908]: Removed session 29. Jan 13 21:13:40.588771 systemd[1]: Started sshd@29-172.31.22.69:22-139.178.89.65:56822.service - OpenSSH per-connection server daemon (139.178.89.65:56822). Jan 13 21:13:40.775679 sshd[5105]: Accepted publickey for core from 139.178.89.65 port 56822 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:40.779051 sshd[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:40.787870 ntpd[1901]: Deleting interface #12 lxc_health, fe80::642b:dcff:fe7e:8e2b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=64 secs Jan 13 21:13:40.788802 ntpd[1901]: 13 Jan 21:13:40 ntpd[1901]: Deleting interface #12 lxc_health, fe80::642b:dcff:fe7e:8e2b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=64 secs Jan 13 21:13:40.790718 systemd-logind[1908]: New session 30 of user core. Jan 13 21:13:40.795567 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 13 21:13:41.085895 kubelet[3311]: I0113 21:13:41.085842 3311 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" path="/var/lib/kubelet/pods/575b19ff-95b7-4f56-b6b6-bfb62aaddc3a/volumes" Jan 13 21:13:41.088609 kubelet[3311]: I0113 21:13:41.088515 3311 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f6ec314f-a219-44ac-86c3-1313601fb2d1" path="/var/lib/kubelet/pods/f6ec314f-a219-44ac-86c3-1313601fb2d1/volumes" Jan 13 21:13:41.939777 sshd[5105]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:41.950470 systemd[1]: sshd@29-172.31.22.69:22-139.178.89.65:56822.service: Deactivated successfully. 
Jan 13 21:13:41.954582 kubelet[3311]: I0113 21:13:41.954495 3311 topology_manager.go:215] "Topology Admit Handler" podUID="baa91f66-7041-4527-b8fb-e470ad99ce06" podNamespace="kube-system" podName="cilium-plpwg" Jan 13 21:13:41.954868 kubelet[3311]: E0113 21:13:41.954634 3311 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" containerName="mount-bpf-fs" Jan 13 21:13:41.954868 kubelet[3311]: E0113 21:13:41.954659 3311 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" containerName="clean-cilium-state" Jan 13 21:13:41.954868 kubelet[3311]: E0113 21:13:41.954677 3311 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" containerName="cilium-agent" Jan 13 21:13:41.954868 kubelet[3311]: E0113 21:13:41.954718 3311 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" containerName="mount-cgroup" Jan 13 21:13:41.954868 kubelet[3311]: E0113 21:13:41.954740 3311 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" containerName="apply-sysctl-overwrites" Jan 13 21:13:41.954868 kubelet[3311]: E0113 21:13:41.954760 3311 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6ec314f-a219-44ac-86c3-1313601fb2d1" containerName="cilium-operator" Jan 13 21:13:41.954868 kubelet[3311]: I0113 21:13:41.954835 3311 memory_manager.go:354] "RemoveStaleState removing state" podUID="575b19ff-95b7-4f56-b6b6-bfb62aaddc3a" containerName="cilium-agent" Jan 13 21:13:41.954868 kubelet[3311]: I0113 21:13:41.954856 3311 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ec314f-a219-44ac-86c3-1313601fb2d1" containerName="cilium-operator" Jan 13 21:13:41.961437 systemd[1]: session-30.scope: Deactivated successfully. Jan 13 21:13:41.966973 systemd-logind[1908]: Session 30 logged out. Waiting for processes to exit. 
Jan 13 21:13:41.979126 kubelet[3311]: W0113 21:13:41.979004 3311 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-22-69" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-69' and this object Jan 13 21:13:41.979126 kubelet[3311]: E0113 21:13:41.979059 3311 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-22-69" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-69' and this object Jan 13 21:13:41.981881 kubelet[3311]: W0113 21:13:41.980455 3311 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-22-69" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-69' and this object Jan 13 21:13:41.981881 kubelet[3311]: E0113 21:13:41.980570 3311 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-22-69" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-69' and this object Jan 13 21:13:41.982769 kubelet[3311]: W0113 21:13:41.982347 3311 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-22-69" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-69' and this object Jan 13 21:13:41.982769 kubelet[3311]: E0113 21:13:41.982519 3311 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-22-69" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-69' and this object Jan 13 21:13:41.983206 kubelet[3311]: W0113 21:13:41.983117 3311 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-22-69" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-69' and this object Jan 13 21:13:41.983206 kubelet[3311]: E0113 21:13:41.983164 3311 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-22-69" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-69' and this object Jan 13 21:13:42.006738 systemd[1]: Started sshd@30-172.31.22.69:22-139.178.89.65:46276.service - OpenSSH per-connection server daemon (139.178.89.65:46276). Jan 13 21:13:42.016115 systemd-logind[1908]: Removed session 30. 
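The reflector warnings above come from the Kubernetes node authorizer: a kubelet may read a secret or configmap only once some pod bound to its node references it, and when these watches started, the freshly admitted cilium-plpwg pod's objects ("cilium-ipsec-keys", "cilium-config", "cilium-clustermesh", "hubble-server-certs") were not yet linked to ip-172-31-22-69, hence "no relationship found between node ... and this object". Warnings like these normally clear on retry once the binding propagates. For probing such a denial explicitly, a hedged client-go sketch (the kubeconfig path is hypothetical; on a node the kubelet's own credentials would apply):

```go
// can_i.go: hedged sketch asking the API server whether the current
// credentials may list secrets in kube-system, the access denied above.
package main

import (
	"context"
	"fmt"
	"log"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; adjust for the environment at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "kube-system",
				Verb:      "list",
				Resource:  "secrets",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
```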
Jan 13 21:13:42.029432 systemd[1]: Created slice kubepods-burstable-podbaa91f66_7041_4527_b8fb_e470ad99ce06.slice - libcontainer container kubepods-burstable-podbaa91f66_7041_4527_b8fb_e470ad99ce06.slice.
Jan 13 21:13:42.062314 kubelet[3311]: I0113 21:13:42.060547 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/baa91f66-7041-4527-b8fb-e470ad99ce06-cilium-run\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.062314 kubelet[3311]: I0113 21:13:42.060634 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmwdf\" (UniqueName: \"kubernetes.io/projected/baa91f66-7041-4527-b8fb-e470ad99ce06-kube-api-access-pmwdf\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.062314 kubelet[3311]: I0113 21:13:42.060685 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/baa91f66-7041-4527-b8fb-e470ad99ce06-hostproc\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.062314 kubelet[3311]: I0113 21:13:42.060742 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/baa91f66-7041-4527-b8fb-e470ad99ce06-cilium-cgroup\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.062314 kubelet[3311]: I0113 21:13:42.060789 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baa91f66-7041-4527-b8fb-e470ad99ce06-xtables-lock\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.062314 kubelet[3311]: I0113 21:13:42.060836 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/baa91f66-7041-4527-b8fb-e470ad99ce06-cilium-config-path\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.062826 kubelet[3311]: I0113 21:13:42.060881 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/baa91f66-7041-4527-b8fb-e470ad99ce06-cilium-ipsec-secrets\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.062826 kubelet[3311]: I0113 21:13:42.060925 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/baa91f66-7041-4527-b8fb-e470ad99ce06-host-proc-sys-net\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.062826 kubelet[3311]: I0113 21:13:42.060970 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/baa91f66-7041-4527-b8fb-e470ad99ce06-hubble-tls\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.062826 kubelet[3311]: I0113 21:13:42.061016 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/baa91f66-7041-4527-b8fb-e470ad99ce06-clustermesh-secrets\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.062826 kubelet[3311]: I0113 21:13:42.061060 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baa91f66-7041-4527-b8fb-e470ad99ce06-lib-modules\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.062826 kubelet[3311]: I0113 21:13:42.061101 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/baa91f66-7041-4527-b8fb-e470ad99ce06-bpf-maps\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.063119 kubelet[3311]: I0113 21:13:42.061142 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/baa91f66-7041-4527-b8fb-e470ad99ce06-cni-path\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.063119 kubelet[3311]: I0113 21:13:42.061187 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/baa91f66-7041-4527-b8fb-e470ad99ce06-etc-cni-netd\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.063119 kubelet[3311]: I0113 21:13:42.061232 3311 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/baa91f66-7041-4527-b8fb-e470ad99ce06-host-proc-sys-kernel\") pod \"cilium-plpwg\" (UID: \"baa91f66-7041-4527-b8fb-e470ad99ce06\") " pod="kube-system/cilium-plpwg"
Jan 13 21:13:42.233166 sshd[5117]: Accepted publickey for core from 139.178.89.65 port 46276 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:42.237705 sshd[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:42.252366 systemd-logind[1908]: New session 31 of user core.
Jan 13 21:13:42.263850 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 13 21:13:42.397565 sshd[5117]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:42.404501 systemd[1]: sshd@30-172.31.22.69:22-139.178.89.65:46276.service: Deactivated successfully.
Jan 13 21:13:42.409477 systemd[1]: session-31.scope: Deactivated successfully.
Jan 13 21:13:42.410973 systemd-logind[1908]: Session 31 logged out. Waiting for processes to exit.
Jan 13 21:13:42.413052 systemd-logind[1908]: Removed session 31.
Jan 13 21:13:42.440888 systemd[1]: Started sshd@31-172.31.22.69:22-139.178.89.65:46292.service - OpenSSH per-connection server daemon (139.178.89.65:46292).
Jan 13 21:13:42.620075 sshd[5126]: Accepted publickey for core from 139.178.89.65 port 46292 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:42.622736 sshd[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:42.630721 systemd-logind[1908]: New session 32 of user core.
Jan 13 21:13:42.641532 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 13 21:13:43.102533 containerd[1943]: time="2025-01-13T21:13:43.102450779Z" level=info msg="StopPodSandbox for \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\""
Jan 13 21:13:43.104792 containerd[1943]: time="2025-01-13T21:13:43.102594395Z" level=info msg="TearDown network for sandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" successfully"
Jan 13 21:13:43.104792 containerd[1943]: time="2025-01-13T21:13:43.102620927Z" level=info msg="StopPodSandbox for \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" returns successfully"
Jan 13 21:13:43.104792 containerd[1943]: time="2025-01-13T21:13:43.103660499Z" level=info msg="RemovePodSandbox for \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\""
Jan 13 21:13:43.104792 containerd[1943]: time="2025-01-13T21:13:43.103709387Z" level=info msg="Forcibly stopping sandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\""
Jan 13 21:13:43.104792 containerd[1943]: time="2025-01-13T21:13:43.103811267Z" level=info msg="TearDown network for sandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" successfully"
Jan 13 21:13:43.110954 containerd[1943]: time="2025-01-13T21:13:43.110875955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:13:43.111131 containerd[1943]: time="2025-01-13T21:13:43.110981183Z" level=info msg="RemovePodSandbox \"c0c2266fd5873488104ada5d4af7a2550a0fb05759472605e8fd28590b0c483e\" returns successfully"
Jan 13 21:13:43.111893 containerd[1943]: time="2025-01-13T21:13:43.111812759Z" level=info msg="StopPodSandbox for \"fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399\""
Jan 13 21:13:43.112637 containerd[1943]: time="2025-01-13T21:13:43.112504583Z" level=info msg="TearDown network for sandbox \"fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399\" successfully"
Jan 13 21:13:43.113229 containerd[1943]: time="2025-01-13T21:13:43.112541735Z" level=info msg="StopPodSandbox for \"fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399\" returns successfully"
Jan 13 21:13:43.114334 containerd[1943]: time="2025-01-13T21:13:43.113863787Z" level=info msg="RemovePodSandbox for \"fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399\""
Jan 13 21:13:43.114334 containerd[1943]: time="2025-01-13T21:13:43.113912807Z" level=info msg="Forcibly stopping sandbox \"fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399\""
Jan 13 21:13:43.114334 containerd[1943]: time="2025-01-13T21:13:43.114016235Z" level=info msg="TearDown network for sandbox \"fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399\" successfully"
Jan 13 21:13:43.121137 containerd[1943]: time="2025-01-13T21:13:43.121045307Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:13:43.121532 containerd[1943]: time="2025-01-13T21:13:43.121155947Z" level=info msg="RemovePodSandbox \"fbd18f67344946d55bcaa6e43b5039b19a1b675ec4c2bf8ef172456e9623e399\" returns successfully"
Jan 13 21:13:43.164324 kubelet[3311]: E0113 21:13:43.164008 3311 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jan 13 21:13:43.164324 kubelet[3311]: E0113 21:13:43.164038 3311 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Jan 13 21:13:43.164324 kubelet[3311]: E0113 21:13:43.164012 3311 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Jan 13 21:13:43.164324 kubelet[3311]: E0113 21:13:43.164116 3311 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/baa91f66-7041-4527-b8fb-e470ad99ce06-cilium-config-path podName:baa91f66-7041-4527-b8fb-e470ad99ce06 nodeName:}" failed. No retries permitted until 2025-01-13 21:13:43.664087255 +0000 UTC m=+120.824862654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/baa91f66-7041-4527-b8fb-e470ad99ce06-cilium-config-path") pod "cilium-plpwg" (UID: "baa91f66-7041-4527-b8fb-e470ad99ce06") : failed to sync configmap cache: timed out waiting for the condition
Jan 13 21:13:43.164324 kubelet[3311]: E0113 21:13:43.164153 3311 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baa91f66-7041-4527-b8fb-e470ad99ce06-cilium-ipsec-secrets podName:baa91f66-7041-4527-b8fb-e470ad99ce06 nodeName:}" failed. No retries permitted until 2025-01-13 21:13:43.664135771 +0000 UTC m=+120.824911182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/baa91f66-7041-4527-b8fb-e470ad99ce06-cilium-ipsec-secrets") pod "cilium-plpwg" (UID: "baa91f66-7041-4527-b8fb-e470ad99ce06") : failed to sync secret cache: timed out waiting for the condition
Jan 13 21:13:43.166736 kubelet[3311]: E0113 21:13:43.164529 3311 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baa91f66-7041-4527-b8fb-e470ad99ce06-clustermesh-secrets podName:baa91f66-7041-4527-b8fb-e470ad99ce06 nodeName:}" failed. No retries permitted until 2025-01-13 21:13:43.664503427 +0000 UTC m=+120.825278838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/baa91f66-7041-4527-b8fb-e470ad99ce06-clustermesh-secrets") pod "cilium-plpwg" (UID: "baa91f66-7041-4527-b8fb-e470ad99ce06") : failed to sync secret cache: timed out waiting for the condition
Jan 13 21:13:43.167405 kubelet[3311]: E0113 21:13:43.167331 3311 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Jan 13 21:13:43.167405 kubelet[3311]: E0113 21:13:43.167384 3311 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-plpwg: failed to sync secret cache: timed out waiting for the condition
Jan 13 21:13:43.167747 kubelet[3311]: E0113 21:13:43.167648 3311 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/baa91f66-7041-4527-b8fb-e470ad99ce06-hubble-tls podName:baa91f66-7041-4527-b8fb-e470ad99ce06 nodeName:}" failed. No retries permitted until 2025-01-13 21:13:43.667570099 +0000 UTC m=+120.828345498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/baa91f66-7041-4527-b8fb-e470ad99ce06-hubble-tls") pod "cilium-plpwg" (UID: "baa91f66-7041-4527-b8fb-e470ad99ce06") : failed to sync secret cache: timed out waiting for the condition
Jan 13 21:13:43.396087 kubelet[3311]: E0113 21:13:43.395931 3311 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:13:43.840299 containerd[1943]: time="2025-01-13T21:13:43.840181563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-plpwg,Uid:baa91f66-7041-4527-b8fb-e470ad99ce06,Namespace:kube-system,Attempt:0,}"
Jan 13 21:13:43.899971 containerd[1943]: time="2025-01-13T21:13:43.899599275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:13:43.899971 containerd[1943]: time="2025-01-13T21:13:43.899858931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:13:43.902413 containerd[1943]: time="2025-01-13T21:13:43.901860303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:13:43.902540 containerd[1943]: time="2025-01-13T21:13:43.902421411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:13:43.950563 systemd[1]: Started cri-containerd-4f321cd23f92a185f1a79fa9d18b41789e71f896e9a33f860ad0eb189dc7deb6.scope - libcontainer container 4f321cd23f92a185f1a79fa9d18b41789e71f896e9a33f860ad0eb189dc7deb6.
Jan 13 21:13:43.997924 containerd[1943]: time="2025-01-13T21:13:43.997619079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-plpwg,Uid:baa91f66-7041-4527-b8fb-e470ad99ce06,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f321cd23f92a185f1a79fa9d18b41789e71f896e9a33f860ad0eb189dc7deb6\""
Jan 13 21:13:44.004824 containerd[1943]: time="2025-01-13T21:13:44.004654895Z" level=info msg="CreateContainer within sandbox \"4f321cd23f92a185f1a79fa9d18b41789e71f896e9a33f860ad0eb189dc7deb6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:13:44.040377 containerd[1943]: time="2025-01-13T21:13:44.040315812Z" level=info msg="CreateContainer within sandbox \"4f321cd23f92a185f1a79fa9d18b41789e71f896e9a33f860ad0eb189dc7deb6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"80d4e37813a70dc5eac47f07bb96fcc05277ae1493134a1aa1bef1a90d20914f\""
Jan 13 21:13:44.043153 containerd[1943]: time="2025-01-13T21:13:44.041622468Z" level=info msg="StartContainer for \"80d4e37813a70dc5eac47f07bb96fcc05277ae1493134a1aa1bef1a90d20914f\""
Jan 13 21:13:44.084584 systemd[1]: Started cri-containerd-80d4e37813a70dc5eac47f07bb96fcc05277ae1493134a1aa1bef1a90d20914f.scope - libcontainer container 80d4e37813a70dc5eac47f07bb96fcc05277ae1493134a1aa1bef1a90d20914f.
Jan 13 21:13:44.133492 containerd[1943]: time="2025-01-13T21:13:44.133356744Z" level=info msg="StartContainer for \"80d4e37813a70dc5eac47f07bb96fcc05277ae1493134a1aa1bef1a90d20914f\" returns successfully"
Jan 13 21:13:44.150008 systemd[1]: cri-containerd-80d4e37813a70dc5eac47f07bb96fcc05277ae1493134a1aa1bef1a90d20914f.scope: Deactivated successfully.
Jan 13 21:13:44.212967 containerd[1943]: time="2025-01-13T21:13:44.212841000Z" level=info msg="shim disconnected" id=80d4e37813a70dc5eac47f07bb96fcc05277ae1493134a1aa1bef1a90d20914f namespace=k8s.io
Jan 13 21:13:44.212967 containerd[1943]: time="2025-01-13T21:13:44.212952000Z" level=warning msg="cleaning up after shim disconnected" id=80d4e37813a70dc5eac47f07bb96fcc05277ae1493134a1aa1bef1a90d20914f namespace=k8s.io
Jan 13 21:13:44.212967 containerd[1943]: time="2025-01-13T21:13:44.212979432Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:44.738368 containerd[1943]: time="2025-01-13T21:13:44.738267651Z" level=info msg="CreateContainer within sandbox \"4f321cd23f92a185f1a79fa9d18b41789e71f896e9a33f860ad0eb189dc7deb6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:13:44.774300 containerd[1943]: time="2025-01-13T21:13:44.774145167Z" level=info msg="CreateContainer within sandbox \"4f321cd23f92a185f1a79fa9d18b41789e71f896e9a33f860ad0eb189dc7deb6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aa54c7d965ef323e65d0bd65539350d66c4f5d6887d2f93c9f1c1646e45daef2\""
Jan 13 21:13:44.776309 containerd[1943]: time="2025-01-13T21:13:44.775675635Z" level=info msg="StartContainer for \"aa54c7d965ef323e65d0bd65539350d66c4f5d6887d2f93c9f1c1646e45daef2\""
Jan 13 21:13:44.843590 systemd[1]: Started cri-containerd-aa54c7d965ef323e65d0bd65539350d66c4f5d6887d2f93c9f1c1646e45daef2.scope - libcontainer container aa54c7d965ef323e65d0bd65539350d66c4f5d6887d2f93c9f1c1646e45daef2.
Jan 13 21:13:44.896979 containerd[1943]: time="2025-01-13T21:13:44.896431168Z" level=info msg="StartContainer for \"aa54c7d965ef323e65d0bd65539350d66c4f5d6887d2f93c9f1c1646e45daef2\" returns successfully"
Jan 13 21:13:44.929415 systemd[1]: cri-containerd-aa54c7d965ef323e65d0bd65539350d66c4f5d6887d2f93c9f1c1646e45daef2.scope: Deactivated successfully.
Jan 13 21:13:44.993643 containerd[1943]: time="2025-01-13T21:13:44.993423976Z" level=info msg="shim disconnected" id=aa54c7d965ef323e65d0bd65539350d66c4f5d6887d2f93c9f1c1646e45daef2 namespace=k8s.io
Jan 13 21:13:44.993643 containerd[1943]: time="2025-01-13T21:13:44.993522712Z" level=warning msg="cleaning up after shim disconnected" id=aa54c7d965ef323e65d0bd65539350d66c4f5d6887d2f93c9f1c1646e45daef2 namespace=k8s.io
Jan 13 21:13:44.993643 containerd[1943]: time="2025-01-13T21:13:44.993572032Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:45.014129 containerd[1943]: time="2025-01-13T21:13:45.013992792Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:13:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:13:45.493535 kubelet[3311]: I0113 21:13:45.493494 3311 setters.go:568] "Node became not ready" node="ip-172-31-22-69" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:13:45Z","lastTransitionTime":"2025-01-13T21:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 21:13:45.686660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa54c7d965ef323e65d0bd65539350d66c4f5d6887d2f93c9f1c1646e45daef2-rootfs.mount: Deactivated successfully.
Jan 13 21:13:45.740538 containerd[1943]: time="2025-01-13T21:13:45.740472928Z" level=info msg="CreateContainer within sandbox \"4f321cd23f92a185f1a79fa9d18b41789e71f896e9a33f860ad0eb189dc7deb6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:13:45.782714 containerd[1943]: time="2025-01-13T21:13:45.781269016Z" level=info msg="CreateContainer within sandbox \"4f321cd23f92a185f1a79fa9d18b41789e71f896e9a33f860ad0eb189dc7deb6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dc0bb9aefa843a0b7a2ce2233356612b3507bd908de5d93a0297236db5d33d1c\""
Jan 13 21:13:45.785192 containerd[1943]: time="2025-01-13T21:13:45.783846712Z" level=info msg="StartContainer for \"dc0bb9aefa843a0b7a2ce2233356612b3507bd908de5d93a0297236db5d33d1c\""
Jan 13 21:13:45.849718 systemd[1]: Started cri-containerd-dc0bb9aefa843a0b7a2ce2233356612b3507bd908de5d93a0297236db5d33d1c.scope - libcontainer container dc0bb9aefa843a0b7a2ce2233356612b3507bd908de5d93a0297236db5d33d1c.
Jan 13 21:13:45.902444 containerd[1943]: time="2025-01-13T21:13:45.902381033Z" level=info msg="StartContainer for \"dc0bb9aefa843a0b7a2ce2233356612b3507bd908de5d93a0297236db5d33d1c\" returns successfully"
Jan 13 21:13:45.909563 systemd[1]: cri-containerd-dc0bb9aefa843a0b7a2ce2233356612b3507bd908de5d93a0297236db5d33d1c.scope: Deactivated successfully.
Jan 13 21:13:45.964222 containerd[1943]: time="2025-01-13T21:13:45.964146065Z" level=info msg="shim disconnected" id=dc0bb9aefa843a0b7a2ce2233356612b3507bd908de5d93a0297236db5d33d1c namespace=k8s.io
Jan 13 21:13:45.964971 containerd[1943]: time="2025-01-13T21:13:45.964611257Z" level=warning msg="cleaning up after shim disconnected" id=dc0bb9aefa843a0b7a2ce2233356612b3507bd908de5d93a0297236db5d33d1c namespace=k8s.io
Jan 13 21:13:45.964971 containerd[1943]: time="2025-01-13T21:13:45.964650869Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:46.686803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc0bb9aefa843a0b7a2ce2233356612b3507bd908de5d93a0297236db5d33d1c-rootfs.mount: Deactivated successfully.
Jan 13 21:13:46.748388 containerd[1943]: time="2025-01-13T21:13:46.748085201Z" level=info msg="CreateContainer within sandbox \"4f321cd23f92a185f1a79fa9d18b41789e71f896e9a33f860ad0eb189dc7deb6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:13:46.781777 containerd[1943]: time="2025-01-13T21:13:46.781679201Z" level=info msg="CreateContainer within sandbox \"4f321cd23f92a185f1a79fa9d18b41789e71f896e9a33f860ad0eb189dc7deb6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bf22af51ac9a15cc6341094941e51e96851bf6f74ef487201ba5881524b9e828\""
Jan 13 21:13:46.787354 containerd[1943]: time="2025-01-13T21:13:46.785957309Z" level=info msg="StartContainer for \"bf22af51ac9a15cc6341094941e51e96851bf6f74ef487201ba5881524b9e828\""
Jan 13 21:13:46.880663 systemd[1]: Started cri-containerd-bf22af51ac9a15cc6341094941e51e96851bf6f74ef487201ba5881524b9e828.scope - libcontainer container bf22af51ac9a15cc6341094941e51e96851bf6f74ef487201ba5881524b9e828.
Jan 13 21:13:46.924640 systemd[1]: cri-containerd-bf22af51ac9a15cc6341094941e51e96851bf6f74ef487201ba5881524b9e828.scope: Deactivated successfully.
Jan 13 21:13:46.933787 containerd[1943]: time="2025-01-13T21:13:46.933534522Z" level=info msg="StartContainer for \"bf22af51ac9a15cc6341094941e51e96851bf6f74ef487201ba5881524b9e828\" returns successfully"
Jan 13 21:13:46.987038 containerd[1943]: time="2025-01-13T21:13:46.986559042Z" level=info msg="shim disconnected" id=bf22af51ac9a15cc6341094941e51e96851bf6f74ef487201ba5881524b9e828 namespace=k8s.io
Jan 13 21:13:46.987038 containerd[1943]: time="2025-01-13T21:13:46.986647314Z" level=warning msg="cleaning up after shim disconnected" id=bf22af51ac9a15cc6341094941e51e96851bf6f74ef487201ba5881524b9e828 namespace=k8s.io
Jan 13 21:13:46.987038 containerd[1943]: time="2025-01-13T21:13:46.986668458Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:47.079007 kubelet[3311]: E0113 21:13:47.078631 3311 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2cl29" podUID="1bb81c1e-cdda-45dd-ad96-bc24124a4ac6"
Jan 13 21:13:47.686973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf22af51ac9a15cc6341094941e51e96851bf6f74ef487201ba5881524b9e828-rootfs.mount: Deactivated successfully.
Jan 13 21:13:47.764168 containerd[1943]: time="2025-01-13T21:13:47.763659486Z" level=info msg="CreateContainer within sandbox \"4f321cd23f92a185f1a79fa9d18b41789e71f896e9a33f860ad0eb189dc7deb6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:13:47.799043 containerd[1943]: time="2025-01-13T21:13:47.798960006Z" level=info msg="CreateContainer within sandbox \"4f321cd23f92a185f1a79fa9d18b41789e71f896e9a33f860ad0eb189dc7deb6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a8677dca23fc6f7227572d3655061c2a16a603e61182175b3fb90370bcfcf36c\""
Jan 13 21:13:47.801042 containerd[1943]: time="2025-01-13T21:13:47.800972982Z" level=info msg="StartContainer for \"a8677dca23fc6f7227572d3655061c2a16a603e61182175b3fb90370bcfcf36c\""
Jan 13 21:13:47.864672 systemd[1]: Started cri-containerd-a8677dca23fc6f7227572d3655061c2a16a603e61182175b3fb90370bcfcf36c.scope - libcontainer container a8677dca23fc6f7227572d3655061c2a16a603e61182175b3fb90370bcfcf36c.
Jan 13 21:13:47.927443 containerd[1943]: time="2025-01-13T21:13:47.927342595Z" level=info msg="StartContainer for \"a8677dca23fc6f7227572d3655061c2a16a603e61182175b3fb90370bcfcf36c\" returns successfully"
Jan 13 21:13:48.744327 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 21:13:48.804757 kubelet[3311]: I0113 21:13:48.804692 3311 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-plpwg" podStartSLOduration=7.8046299470000005 podStartE2EDuration="7.804629947s" podCreationTimestamp="2025-01-13 21:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:13:48.804437371 +0000 UTC m=+125.965212818" watchObservedRunningTime="2025-01-13 21:13:48.804629947 +0000 UTC m=+125.965405370"
Jan 13 21:13:49.276853 kubelet[3311]: E0113 21:13:49.276772 3311 upgradeaware.go:439] Error proxying data from backend to client: read tcp 127.0.0.1:54688->127.0.0.1:43575: read: connection reset by peer
Jan 13 21:13:53.261693 systemd-networkd[1846]: lxc_health: Link UP
Jan 13 21:13:53.265930 (udev-worker)[5966]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:13:53.274152 (udev-worker)[5967]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:13:53.301179 systemd-networkd[1846]: lxc_health: Gained carrier
Jan 13 21:13:55.075723 systemd-networkd[1846]: lxc_health: Gained IPv6LL
Jan 13 21:13:57.787897 ntpd[1901]: Listen normally on 15 lxc_health [fe80::bc4c:dbff:fefa:4345%14]:123
Jan 13 21:13:57.788648 ntpd[1901]: 13 Jan 21:13:57 ntpd[1901]: Listen normally on 15 lxc_health [fe80::bc4c:dbff:fefa:4345%14]:123
Jan 13 21:13:58.501410 systemd[1]: run-containerd-runc-k8s.io-a8677dca23fc6f7227572d3655061c2a16a603e61182175b3fb90370bcfcf36c-runc.wbUpL2.mount: Deactivated successfully.
Jan 13 21:13:58.641918 sshd[5126]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:58.652919 systemd[1]: sshd@31-172.31.22.69:22-139.178.89.65:46292.service: Deactivated successfully.
Jan 13 21:13:58.660483 systemd[1]: session-32.scope: Deactivated successfully.
Jan 13 21:13:58.666976 systemd-logind[1908]: Session 32 logged out. Waiting for processes to exit.
Jan 13 21:13:58.671863 systemd-logind[1908]: Removed session 32.
Jan 13 21:14:13.484479 systemd[1]: cri-containerd-b7363ac32885807a5303e41c14e8bad5f06dc07fb7edb5bf4cbbeb891ab79142.scope: Deactivated successfully.
Jan 13 21:14:13.486288 systemd[1]: cri-containerd-b7363ac32885807a5303e41c14e8bad5f06dc07fb7edb5bf4cbbeb891ab79142.scope: Consumed 6.476s CPU time, 22.2M memory peak, 0B memory swap peak.
Jan 13 21:14:13.533980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7363ac32885807a5303e41c14e8bad5f06dc07fb7edb5bf4cbbeb891ab79142-rootfs.mount: Deactivated successfully.
Jan 13 21:14:13.550010 containerd[1943]: time="2025-01-13T21:14:13.549897018Z" level=info msg="shim disconnected" id=b7363ac32885807a5303e41c14e8bad5f06dc07fb7edb5bf4cbbeb891ab79142 namespace=k8s.io
Jan 13 21:14:13.550010 containerd[1943]: time="2025-01-13T21:14:13.550000470Z" level=warning msg="cleaning up after shim disconnected" id=b7363ac32885807a5303e41c14e8bad5f06dc07fb7edb5bf4cbbeb891ab79142 namespace=k8s.io
Jan 13 21:14:13.551234 containerd[1943]: time="2025-01-13T21:14:13.550023906Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:14:13.867978 kubelet[3311]: I0113 21:14:13.867813 3311 scope.go:117] "RemoveContainer" containerID="b7363ac32885807a5303e41c14e8bad5f06dc07fb7edb5bf4cbbeb891ab79142"
Jan 13 21:14:13.874101 containerd[1943]: time="2025-01-13T21:14:13.874041248Z" level=info msg="CreateContainer within sandbox \"b8a602a3352add36a520df1d4717a8b6f206719ba5eab5aed7217b615d511ebe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 21:14:13.907307 containerd[1943]: time="2025-01-13T21:14:13.906740168Z" level=info msg="CreateContainer within sandbox \"b8a602a3352add36a520df1d4717a8b6f206719ba5eab5aed7217b615d511ebe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f7ba13ffce31cebf9834c7e93702815b954f4a93810e96abaf96ac300b2c6dba\""
Jan 13 21:14:13.908785 containerd[1943]: time="2025-01-13T21:14:13.908730392Z" level=info msg="StartContainer for \"f7ba13ffce31cebf9834c7e93702815b954f4a93810e96abaf96ac300b2c6dba\""
Jan 13 21:14:13.961133 systemd[1]: run-containerd-runc-k8s.io-f7ba13ffce31cebf9834c7e93702815b954f4a93810e96abaf96ac300b2c6dba-runc.hDFM6J.mount: Deactivated successfully.
Jan 13 21:14:13.972633 systemd[1]: Started cri-containerd-f7ba13ffce31cebf9834c7e93702815b954f4a93810e96abaf96ac300b2c6dba.scope - libcontainer container f7ba13ffce31cebf9834c7e93702815b954f4a93810e96abaf96ac300b2c6dba.
Jan 13 21:14:14.057274 containerd[1943]: time="2025-01-13T21:14:14.054014045Z" level=info msg="StartContainer for \"f7ba13ffce31cebf9834c7e93702815b954f4a93810e96abaf96ac300b2c6dba\" returns successfully"
Jan 13 21:14:16.305274 kubelet[3311]: E0113 21:14:16.302359 3311 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-69?timeout=10s\": context deadline exceeded"
Jan 13 21:14:18.865744 systemd[1]: cri-containerd-fa919c108277839929b4cf248d8fc5768878d1f936f3a287b5eaa420a7086673.scope: Deactivated successfully.
Jan 13 21:14:18.866311 systemd[1]: cri-containerd-fa919c108277839929b4cf248d8fc5768878d1f936f3a287b5eaa420a7086673.scope: Consumed 2.701s CPU time, 15.9M memory peak, 0B memory swap peak.
Jan 13 21:14:18.916387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa919c108277839929b4cf248d8fc5768878d1f936f3a287b5eaa420a7086673-rootfs.mount: Deactivated successfully.
Jan 13 21:14:18.932307 containerd[1943]: time="2025-01-13T21:14:18.932174677Z" level=info msg="shim disconnected" id=fa919c108277839929b4cf248d8fc5768878d1f936f3a287b5eaa420a7086673 namespace=k8s.io
Jan 13 21:14:18.932307 containerd[1943]: time="2025-01-13T21:14:18.932280901Z" level=warning msg="cleaning up after shim disconnected" id=fa919c108277839929b4cf248d8fc5768878d1f936f3a287b5eaa420a7086673 namespace=k8s.io
Jan 13 21:14:18.932307 containerd[1943]: time="2025-01-13T21:14:18.932306329Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:14:19.895285 kubelet[3311]: I0113 21:14:19.894934 3311 scope.go:117] "RemoveContainer" containerID="fa919c108277839929b4cf248d8fc5768878d1f936f3a287b5eaa420a7086673"
Jan 13 21:14:19.900672 containerd[1943]: time="2025-01-13T21:14:19.900443126Z" level=info msg="CreateContainer within sandbox \"ee9ed06fa75479089ad54adece6b5c8e7c29e71d72af88e63ed92ca6ff550108\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 21:14:19.932532 containerd[1943]: time="2025-01-13T21:14:19.932398778Z" level=info msg="CreateContainer within sandbox \"ee9ed06fa75479089ad54adece6b5c8e7c29e71d72af88e63ed92ca6ff550108\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c7515980eeec6fb4450b139eddf160bceedd49452a1977627e8b6a86f09c87b8\""
Jan 13 21:14:19.933484 containerd[1943]: time="2025-01-13T21:14:19.933423794Z" level=info msg="StartContainer for \"c7515980eeec6fb4450b139eddf160bceedd49452a1977627e8b6a86f09c87b8\""
Jan 13 21:14:19.989572 systemd[1]: Started cri-containerd-c7515980eeec6fb4450b139eddf160bceedd49452a1977627e8b6a86f09c87b8.scope - libcontainer container c7515980eeec6fb4450b139eddf160bceedd49452a1977627e8b6a86f09c87b8.
Jan 13 21:14:20.071565 containerd[1943]: time="2025-01-13T21:14:20.071178899Z" level=info msg="StartContainer for \"c7515980eeec6fb4450b139eddf160bceedd49452a1977627e8b6a86f09c87b8\" returns successfully"
Jan 13 21:14:26.304052 kubelet[3311]: E0113 21:14:26.303981 3311 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-69?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"