Sep 4 17:16:25.192479 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Sep 4 17:16:25.192540 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Sep 4 15:58:01 -00 2024 Sep 4 17:16:25.192569 kernel: KASLR disabled due to lack of seed Sep 4 17:16:25.192586 kernel: efi: EFI v2.7 by EDK II Sep 4 17:16:25.192603 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 Sep 4 17:16:25.192619 kernel: ACPI: Early table checksum verification disabled Sep 4 17:16:25.192637 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Sep 4 17:16:25.192653 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Sep 4 17:16:25.192669 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 4 17:16:25.192685 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Sep 4 17:16:25.192706 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 4 17:16:25.192722 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Sep 4 17:16:25.192738 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Sep 4 17:16:25.192754 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Sep 4 17:16:25.192773 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 4 17:16:25.192794 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Sep 4 17:16:25.192811 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Sep 4 17:16:25.192828 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Sep 4 17:16:25.192845 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Sep 4 17:16:25.192861 kernel: printk: bootconsole [uart0] enabled Sep 4 17:16:25.192878 kernel: NUMA: Failed to initialise from firmware Sep 4 17:16:25.192895 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Sep 4 17:16:25.192912 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Sep 4 17:16:25.192929 kernel: Zone ranges: Sep 4 17:16:25.192946 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 4 17:16:25.192962 kernel: DMA32 empty Sep 4 17:16:25.192983 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Sep 4 17:16:25.193000 kernel: Movable zone start for each node Sep 4 17:16:25.193016 kernel: Early memory node ranges Sep 4 17:16:25.193033 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Sep 4 17:16:25.193049 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Sep 4 17:16:25.193066 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Sep 4 17:16:25.193114 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Sep 4 17:16:25.193132 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Sep 4 17:16:25.193150 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Sep 4 17:16:25.193167 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Sep 4 17:16:25.193184 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Sep 4 17:16:25.193202 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Sep 4 17:16:25.193227 kernel: On node 0, zone Normal: 8192 pages in 
unavailable ranges Sep 4 17:16:25.193246 kernel: psci: probing for conduit method from ACPI. Sep 4 17:16:25.193271 kernel: psci: PSCIv1.0 detected in firmware. Sep 4 17:16:25.193292 kernel: psci: Using standard PSCI v0.2 function IDs Sep 4 17:16:25.193310 kernel: psci: Trusted OS migration not required Sep 4 17:16:25.193334 kernel: psci: SMC Calling Convention v1.1 Sep 4 17:16:25.193353 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 4 17:16:25.193374 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 4 17:16:25.193392 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 4 17:16:25.193411 kernel: Detected PIPT I-cache on CPU0 Sep 4 17:16:25.193430 kernel: CPU features: detected: GIC system register CPU interface Sep 4 17:16:25.193448 kernel: CPU features: detected: Spectre-v2 Sep 4 17:16:25.193466 kernel: CPU features: detected: Spectre-v3a Sep 4 17:16:25.193483 kernel: CPU features: detected: Spectre-BHB Sep 4 17:16:25.193501 kernel: CPU features: detected: ARM erratum 1742098 Sep 4 17:16:25.193519 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Sep 4 17:16:25.193542 kernel: alternatives: applying boot alternatives Sep 4 17:16:25.193562 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9 Sep 4 17:16:25.193582 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:16:25.193600 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:16:25.193620 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:16:25.193639 kernel: Fallback order for Node 0: 0 Sep 4 17:16:25.193657 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Sep 4 17:16:25.193676 kernel: Policy zone: Normal Sep 4 17:16:25.193696 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:16:25.193714 kernel: software IO TLB: area num 2. Sep 4 17:16:25.193732 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Sep 4 17:16:25.193759 kernel: Memory: 3820280K/4030464K available (10240K kernel code, 2184K rwdata, 8084K rodata, 39296K init, 897K bss, 210184K reserved, 0K cma-reserved) Sep 4 17:16:25.193778 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 17:16:25.193796 kernel: trace event string verifier disabled Sep 4 17:16:25.193815 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:16:25.193835 kernel: rcu: RCU event tracing is enabled. Sep 4 17:16:25.193854 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 17:16:25.193873 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:16:25.193891 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:16:25.193909 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 4 17:16:25.193927 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 17:16:25.193945 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 4 17:16:25.193967 kernel: GICv3: 96 SPIs implemented Sep 4 17:16:25.193985 kernel: GICv3: 0 Extended SPIs implemented Sep 4 17:16:25.194002 kernel: Root IRQ handler: gic_handle_irq Sep 4 17:16:25.194019 kernel: GICv3: GICv3 features: 16 PPIs Sep 4 17:16:25.194037 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Sep 4 17:16:25.194055 kernel: ITS [mem 0x10080000-0x1009ffff] Sep 4 17:16:25.194111 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1) Sep 4 17:16:25.194137 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1) Sep 4 17:16:25.194155 kernel: GICv3: using LPI property table @0x00000004000e0000 Sep 4 17:16:25.194172 kernel: ITS: Using hypervisor restricted LPI range [128] Sep 4 17:16:25.194190 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000 Sep 4 17:16:25.194208 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:16:25.194233 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Sep 4 17:16:25.194251 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Sep 4 17:16:25.194270 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Sep 4 17:16:25.194287 kernel: Console: colour dummy device 80x25 Sep 4 17:16:25.194306 kernel: printk: console [tty1] enabled Sep 4 17:16:25.194324 kernel: ACPI: Core revision 20230628 Sep 4 17:16:25.194342 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Sep 4 17:16:25.194361 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:16:25.194379 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 17:16:25.194397 kernel: landlock: Up and running. Sep 4 17:16:25.194419 kernel: SELinux: Initializing. Sep 4 17:16:25.194438 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:16:25.194456 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:16:25.194476 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:16:25.194496 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:16:25.194516 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:16:25.194536 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:16:25.194556 kernel: Platform MSI: ITS@0x10080000 domain created Sep 4 17:16:25.194575 kernel: PCI/MSI: ITS@0x10080000 domain created Sep 4 17:16:25.194599 kernel: Remapping and enabling EFI services. Sep 4 17:16:25.194617 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:16:25.194636 kernel: Detected PIPT I-cache on CPU1 Sep 4 17:16:25.194654 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Sep 4 17:16:25.194673 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000 Sep 4 17:16:25.194691 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Sep 4 17:16:25.194709 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 17:16:25.194728 kernel: SMP: Total of 2 processors activated. 
Sep 4 17:16:25.194746 kernel: CPU features: detected: 32-bit EL0 Support Sep 4 17:16:25.194769 kernel: CPU features: detected: 32-bit EL1 Support Sep 4 17:16:25.194787 kernel: CPU features: detected: CRC32 instructions Sep 4 17:16:25.194817 kernel: CPU: All CPU(s) started at EL1 Sep 4 17:16:25.194840 kernel: alternatives: applying system-wide alternatives Sep 4 17:16:25.194860 kernel: devtmpfs: initialized Sep 4 17:16:25.194879 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:16:25.194898 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 17:16:25.194917 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:16:25.194936 kernel: SMBIOS 3.0.0 present. Sep 4 17:16:25.194960 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Sep 4 17:16:25.194979 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:16:25.194998 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 4 17:16:25.195017 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 4 17:16:25.195056 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 4 17:16:25.195154 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:16:25.195176 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1 Sep 4 17:16:25.195203 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:16:25.195222 kernel: cpuidle: using governor menu Sep 4 17:16:25.195242 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 4 17:16:25.195260 kernel: ASID allocator initialised with 65536 entries Sep 4 17:16:25.195280 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:16:25.195299 kernel: Serial: AMBA PL011 UART driver Sep 4 17:16:25.195318 kernel: Modules: 17536 pages in range for non-PLT usage Sep 4 17:16:25.195337 kernel: Modules: 509056 pages in range for PLT usage Sep 4 17:16:25.195355 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:16:25.195379 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:16:25.195398 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 4 17:16:25.195417 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 4 17:16:25.195437 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:16:25.195456 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:16:25.195475 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 4 17:16:25.195494 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 4 17:16:25.195513 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:16:25.195532 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:16:25.195555 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:16:25.195574 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:16:25.195593 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:16:25.195612 kernel: ACPI: Interpreter enabled Sep 4 17:16:25.195631 kernel: ACPI: Using GIC for interrupt routing Sep 4 17:16:25.195649 kernel: ACPI: MCFG table detected, 1 entries Sep 4 17:16:25.195668 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Sep 4 17:16:25.195994 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:16:25.196258 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] 
Sep 4 17:16:25.196461 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 4 17:16:25.196659 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Sep 4 17:16:25.196857 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Sep 4 17:16:25.196883 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Sep 4 17:16:25.196903 kernel: acpiphp: Slot [1] registered Sep 4 17:16:25.196922 kernel: acpiphp: Slot [2] registered Sep 4 17:16:25.196941 kernel: acpiphp: Slot [3] registered Sep 4 17:16:25.196959 kernel: acpiphp: Slot [4] registered Sep 4 17:16:25.196985 kernel: acpiphp: Slot [5] registered Sep 4 17:16:25.197003 kernel: acpiphp: Slot [6] registered Sep 4 17:16:25.197022 kernel: acpiphp: Slot [7] registered Sep 4 17:16:25.197041 kernel: acpiphp: Slot [8] registered Sep 4 17:16:25.197059 kernel: acpiphp: Slot [9] registered Sep 4 17:16:25.197126 kernel: acpiphp: Slot [10] registered Sep 4 17:16:25.197148 kernel: acpiphp: Slot [11] registered Sep 4 17:16:25.197167 kernel: acpiphp: Slot [12] registered Sep 4 17:16:25.197186 kernel: acpiphp: Slot [13] registered Sep 4 17:16:25.197211 kernel: acpiphp: Slot [14] registered Sep 4 17:16:25.197231 kernel: acpiphp: Slot [15] registered Sep 4 17:16:25.197249 kernel: acpiphp: Slot [16] registered Sep 4 17:16:25.197268 kernel: acpiphp: Slot [17] registered Sep 4 17:16:25.197287 kernel: acpiphp: Slot [18] registered Sep 4 17:16:25.197305 kernel: acpiphp: Slot [19] registered Sep 4 17:16:25.197324 kernel: acpiphp: Slot [20] registered Sep 4 17:16:25.197343 kernel: acpiphp: Slot [21] registered Sep 4 17:16:25.197361 kernel: acpiphp: Slot [22] registered Sep 4 17:16:25.197381 kernel: acpiphp: Slot [23] registered Sep 4 17:16:25.197405 kernel: acpiphp: Slot [24] registered Sep 4 17:16:25.197423 kernel: acpiphp: Slot [25] registered Sep 4 17:16:25.197442 kernel: acpiphp: Slot [26] registered Sep 4 17:16:25.197461 kernel: acpiphp: Slot [27] registered Sep 4 17:16:25.197480 kernel: acpiphp: Slot [28] registered Sep 4 17:16:25.197498 kernel: acpiphp: Slot [29] registered Sep 4 17:16:25.197517 kernel: acpiphp: Slot [30] registered Sep 4 17:16:25.197535 kernel: acpiphp: Slot [31] registered Sep 4 17:16:25.197554 kernel: PCI host bridge to bus 0000:00 Sep 4 17:16:25.197778 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Sep 4 17:16:25.197971 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 4 17:16:25.199866 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Sep 4 17:16:25.200163 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Sep 4 17:16:25.200413 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Sep 4 17:16:25.200636 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Sep 4 17:16:25.200853 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Sep 4 17:16:25.201092 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Sep 4 17:16:25.201306 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Sep 4 17:16:25.201517 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 4 17:16:25.201739 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Sep 4 17:16:25.201948 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Sep 4 17:16:25.202179 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Sep 4 17:16:25.202400 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] 
Sep 4 17:16:25.202612 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 4 17:16:25.203322 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Sep 4 17:16:25.203544 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Sep 4 17:16:25.203756 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Sep 4 17:16:25.203968 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Sep 4 17:16:25.204238 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Sep 4 17:16:25.204443 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Sep 4 17:16:25.204629 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 4 17:16:25.204815 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Sep 4 17:16:25.204841 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 4 17:16:25.204860 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 4 17:16:25.204880 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 4 17:16:25.204899 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 4 17:16:25.204918 kernel: iommu: Default domain type: Translated Sep 4 17:16:25.204943 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 4 17:16:25.204962 kernel: efivars: Registered efivars operations Sep 4 17:16:25.204981 kernel: vgaarb: loaded Sep 4 17:16:25.205000 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 4 17:16:25.205019 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:16:25.205037 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:16:25.205056 kernel: pnp: PnP ACPI init Sep 4 17:16:25.207410 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Sep 4 17:16:25.207461 kernel: pnp: PnP ACPI: found 1 devices Sep 4 17:16:25.207483 kernel: NET: Registered PF_INET protocol family Sep 4 17:16:25.207541 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:16:25.207574 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 17:16:25.207595 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:16:25.207615 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:16:25.207634 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 17:16:25.207654 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 17:16:25.207673 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:16:25.207699 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:16:25.207718 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:16:25.207737 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:16:25.207756 kernel: kvm [1]: HYP mode not available Sep 4 17:16:25.207775 kernel: Initialise system trusted keyrings Sep 4 17:16:25.207794 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 17:16:25.207813 kernel: Key type asymmetric registered Sep 4 17:16:25.207831 kernel: Asymmetric key parser 'x509' registered Sep 4 17:16:25.207850 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 4 17:16:25.207873 kernel: io scheduler mq-deadline registered Sep 4 17:16:25.207892 kernel: io scheduler kyber registered Sep 4 17:16:25.207911 kernel: io scheduler bfq registered Sep 4 17:16:25.208151 kernel: pl061_gpio 
ARMH0061:00: PL061 GPIO chip registered Sep 4 17:16:25.208180 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 4 17:16:25.208200 kernel: ACPI: button: Power Button [PWRB] Sep 4 17:16:25.208219 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Sep 4 17:16:25.208238 kernel: ACPI: button: Sleep Button [SLPB] Sep 4 17:16:25.208257 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:16:25.208283 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 4 17:16:25.208495 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Sep 4 17:16:25.208523 kernel: printk: console [ttyS0] disabled Sep 4 17:16:25.208544 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Sep 4 17:16:25.208563 kernel: printk: console [ttyS0] enabled Sep 4 17:16:25.208582 kernel: printk: bootconsole [uart0] disabled Sep 4 17:16:25.208602 kernel: thunder_xcv, ver 1.0 Sep 4 17:16:25.208620 kernel: thunder_bgx, ver 1.0 Sep 4 17:16:25.208639 kernel: nicpf, ver 1.0 Sep 4 17:16:25.208664 kernel: nicvf, ver 1.0 Sep 4 17:16:25.208890 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 4 17:16:25.211189 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:16:24 UTC (1725470184) Sep 4 17:16:25.211242 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:16:25.211264 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Sep 4 17:16:25.211284 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 4 17:16:25.211306 kernel: watchdog: Hard watchdog permanently disabled Sep 4 17:16:25.211326 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:16:25.211354 kernel: Segment Routing with IPv6 Sep 4 17:16:25.211373 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:16:25.211392 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:16:25.211411 kernel: Key type dns_resolver registered Sep 4 17:16:25.211430 kernel: registered taskstats version 1 Sep 4 17:16:25.211450 kernel: Loading compiled-in X.509 certificates Sep 4 17:16:25.211469 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 6782952639b29daf968f5d0c3e73fb25e5af1d5e' Sep 4 17:16:25.211487 kernel: Key type .fscrypt registered Sep 4 17:16:25.211506 kernel: Key type fscrypt-provisioning registered Sep 4 17:16:25.211529 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:16:25.211549 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:16:25.211568 kernel: ima: No architecture policies found Sep 4 17:16:25.211586 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 4 17:16:25.211605 kernel: clk: Disabling unused clocks Sep 4 17:16:25.211624 kernel: Freeing unused kernel memory: 39296K Sep 4 17:16:25.211643 kernel: Run /init as init process Sep 4 17:16:25.211662 kernel: with arguments: Sep 4 17:16:25.211681 kernel: /init Sep 4 17:16:25.211703 kernel: with environment: Sep 4 17:16:25.211722 kernel: HOME=/ Sep 4 17:16:25.211741 kernel: TERM=linux Sep 4 17:16:25.211760 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:16:25.211783 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:16:25.211808 systemd[1]: Detected virtualization amazon. 
Sep 4 17:16:25.211830 systemd[1]: Detected architecture arm64. Sep 4 17:16:25.211850 systemd[1]: Running in initrd. Sep 4 17:16:25.211933 systemd[1]: No hostname configured, using default hostname. Sep 4 17:16:25.211960 systemd[1]: Hostname set to . Sep 4 17:16:25.212015 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:16:25.212038 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:16:25.212059 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:16:25.214526 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:16:25.214561 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:16:25.214583 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:16:25.214615 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:16:25.214637 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:16:25.214661 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:16:25.214683 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:16:25.214704 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:16:25.214725 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:16:25.214750 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:16:25.214771 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:16:25.214791 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:16:25.214811 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:16:25.214832 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:16:25.214853 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:16:25.214874 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:16:25.214895 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:16:25.214916 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:16:25.214941 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:16:25.214962 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:16:25.214982 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:16:25.215003 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:16:25.215024 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:16:25.215066 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:16:25.215121 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:16:25.215143 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:16:25.215164 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:16:25.215192 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:16:25.215213 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:16:25.215278 systemd-journald[251]: Collecting audit messages is disabled. 
Sep 4 17:16:25.215329 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:16:25.215351 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:16:25.215373 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:16:25.215392 kernel: Bridge firewalling registered Sep 4 17:16:25.215413 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:16:25.215438 systemd-journald[251]: Journal started Sep 4 17:16:25.215476 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2fffad6f1f34cb675774406e8f3f76) is 8.0M, max 75.3M, 67.3M free. Sep 4 17:16:25.175859 systemd-modules-load[252]: Inserted module 'overlay' Sep 4 17:16:25.209173 systemd-modules-load[252]: Inserted module 'br_netfilter' Sep 4 17:16:25.255196 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:16:25.227590 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:16:25.228640 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:16:25.235339 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:16:25.239493 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:16:25.244363 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:16:25.286161 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:16:25.303641 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:16:25.310739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:16:25.317128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:16:25.322213 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:16:25.343005 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:16:25.371235 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:16:25.392478 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:16:25.427484 systemd-resolved[278]: Positive Trust Anchors: Sep 4 17:16:25.427968 systemd-resolved[278]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:16:25.428034 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:16:25.462585 dracut-cmdline[288]: dracut-dracut-053 Sep 4 17:16:25.469835 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9 Sep 4 17:16:25.623191 kernel: SCSI subsystem initialized Sep 4 17:16:25.630197 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:16:25.643202 kernel: iscsi: registered transport (tcp) Sep 4 17:16:25.664656 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:16:25.664727 kernel: QLogic iSCSI HBA Driver Sep 4 17:16:25.720114 kernel: random: crng init done Sep 4 17:16:25.720484 systemd-resolved[278]: Defaulting to hostname 'linux'. Sep 4 17:16:25.724743 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:16:25.732819 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:16:25.753431 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:16:25.767442 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:16:25.797650 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:16:25.797727 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:16:25.797754 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:16:25.865116 kernel: raid6: neonx8 gen() 6714 MB/s Sep 4 17:16:25.882104 kernel: raid6: neonx4 gen() 6537 MB/s Sep 4 17:16:25.899104 kernel: raid6: neonx2 gen() 5472 MB/s Sep 4 17:16:25.916104 kernel: raid6: neonx1 gen() 3958 MB/s Sep 4 17:16:25.933103 kernel: raid6: int64x8 gen() 3801 MB/s Sep 4 17:16:25.950103 kernel: raid6: int64x4 gen() 3723 MB/s Sep 4 17:16:25.967103 kernel: raid6: int64x2 gen() 3600 MB/s Sep 4 17:16:25.984878 kernel: raid6: int64x1 gen() 2774 MB/s Sep 4 17:16:25.984916 kernel: raid6: using algorithm neonx8 gen() 6714 MB/s Sep 4 17:16:26.002865 kernel: raid6: .... 
xor() 4883 MB/s, rmw enabled Sep 4 17:16:26.002906 kernel: raid6: using neon recovery algorithm Sep 4 17:16:26.011831 kernel: xor: measuring software checksum speed Sep 4 17:16:26.011881 kernel: 8regs : 11029 MB/sec Sep 4 17:16:26.014104 kernel: 32regs : 11925 MB/sec Sep 4 17:16:26.016314 kernel: arm64_neon : 9633 MB/sec Sep 4 17:16:26.016350 kernel: xor: using function: 32regs (11925 MB/sec) Sep 4 17:16:26.099121 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:16:26.118506 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:16:26.132443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:16:26.175233 systemd-udevd[469]: Using default interface naming scheme 'v255'. Sep 4 17:16:26.184328 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:16:26.206392 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:16:26.235210 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Sep 4 17:16:26.292480 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:16:26.305390 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:16:26.430468 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:16:26.444770 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:16:26.490111 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:16:26.499616 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:16:26.503681 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:16:26.513206 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:16:26.532600 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:16:26.563157 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:16:26.656946 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 4 17:16:26.657013 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 4 17:16:26.659576 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:16:26.670630 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 4 17:16:26.670951 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 4 17:16:26.659815 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:16:26.670833 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:16:26.687916 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:76:35:07:eb:87 Sep 4 17:16:26.677300 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:16:26.685298 (udev-worker)[529]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:16:26.688181 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:16:26.696937 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:16:26.723323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 4 17:16:26.724235 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 4 17:16:26.727109 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 4 17:16:26.738600 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 4 17:16:26.748100 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 17:16:26.748165 kernel: GPT:9289727 != 16777215 Sep 4 17:16:26.748191 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 17:16:26.748216 kernel: GPT:9289727 != 16777215 Sep 4 17:16:26.748240 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 17:16:26.748264 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:26.763407 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:16:26.776368 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:16:26.830122 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:16:26.891261 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (517) Sep 4 17:16:26.917009 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 4 17:16:26.930272 kernel: BTRFS: device fsid 3e706a0f-a579-4862-bc52-e66e95e66d87 devid 1 transid 42 /dev/nvme0n1p3 scanned by (udev-worker) (518) Sep 4 17:16:26.992283 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 4 17:16:27.009965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 4 17:16:27.037995 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 4 17:16:27.041484 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 4 17:16:27.067489 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:16:27.083362 disk-uuid[662]: Primary Header is updated. Sep 4 17:16:27.083362 disk-uuid[662]: Secondary Entries is updated. Sep 4 17:16:27.083362 disk-uuid[662]: Secondary Header is updated. Sep 4 17:16:27.092611 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:27.102138 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:27.109118 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:28.111113 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:28.112866 disk-uuid[663]: The operation has completed successfully. Sep 4 17:16:28.299666 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:16:28.299883 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:16:28.337395 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:16:28.353968 sh[1006]: Success Sep 4 17:16:28.373245 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 4 17:16:28.478779 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:16:28.487250 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:16:28.501154 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
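The GPT messages above ("GPT:9289727 != 16777215", "Alternate GPT header not at the end of the disk") are what a first boot normally reports when the written image is smaller than the provisioned EBS volume: the primary header still points at a backup header located at the end of the original image rather than at the end of the device, and the disk-uuid entries above show the headers being rewritten. A small sketch of the arithmetic behind the two LBAs, assuming the usual 512-byte logical sectors (the sector size itself is not printed in the log):

    SECTOR = 512                 # assumed logical sector size; not stated in the log
    image_last_lba = 9_289_727   # where the primary header expects the backup header
    disk_last_lba = 16_777_215   # actual last LBA of the volume

    image_bytes = (image_last_lba + 1) * SECTOR
    disk_bytes = (disk_last_lba + 1) * SECTOR
    print(f"image end: {image_bytes / 2**30:.2f} GiB")   # ~4.43 GiB
    print(f"volume:    {disk_bytes / 2**30:.2f} GiB")    # 8.00 GiB

Under those assumptions the image ends at roughly 4.43 GiB while the volume is 8 GiB, which fits the kernel's advice to correct the GPT (move the backup header to the true end of the disk) with a partitioning tool such as GNU Parted.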
Sep 4 17:16:28.525154 kernel: BTRFS info (device dm-0): first mount of filesystem 3e706a0f-a579-4862-bc52-e66e95e66d87 Sep 4 17:16:28.525215 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:16:28.526963 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:16:28.528246 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:16:28.529313 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:16:28.630104 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 4 17:16:28.662544 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:16:28.663059 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:16:28.680433 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:16:28.686588 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:16:28.724087 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:28.724162 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:16:28.724190 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 4 17:16:28.731168 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 17:16:28.747797 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:16:28.752445 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:28.765656 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:16:28.776904 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:16:28.869211 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:16:28.883441 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:16:28.942890 systemd-networkd[1210]: lo: Link UP Sep 4 17:16:28.942912 systemd-networkd[1210]: lo: Gained carrier Sep 4 17:16:28.945767 systemd-networkd[1210]: Enumeration completed Sep 4 17:16:28.946206 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:16:28.947032 systemd-networkd[1210]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:16:28.947040 systemd-networkd[1210]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:16:28.951865 systemd[1]: Reached target network.target - Network. Sep 4 17:16:28.960880 systemd-networkd[1210]: eth0: Link UP Sep 4 17:16:28.960888 systemd-networkd[1210]: eth0: Gained carrier Sep 4 17:16:28.960905 systemd-networkd[1210]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:16:29.000160 systemd-networkd[1210]: eth0: DHCPv4 address 172.31.17.160/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 4 17:16:29.114172 ignition[1143]: Ignition 2.19.0 Sep 4 17:16:29.115957 ignition[1143]: Stage: fetch-offline Sep 4 17:16:29.116514 ignition[1143]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:29.120572 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Sep 4 17:16:29.116539 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:29.117010 ignition[1143]: Ignition finished successfully Sep 4 17:16:29.144469 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 4 17:16:29.172433 ignition[1222]: Ignition 2.19.0 Sep 4 17:16:29.172456 ignition[1222]: Stage: fetch Sep 4 17:16:29.173039 ignition[1222]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:29.173064 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:29.173245 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:29.186361 ignition[1222]: PUT result: OK Sep 4 17:16:29.195797 ignition[1222]: parsed url from cmdline: "" Sep 4 17:16:29.195813 ignition[1222]: no config URL provided Sep 4 17:16:29.195830 ignition[1222]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:16:29.195855 ignition[1222]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:16:29.195889 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:29.197981 ignition[1222]: PUT result: OK Sep 4 17:16:29.198055 ignition[1222]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 4 17:16:29.209141 ignition[1222]: GET result: OK Sep 4 17:16:29.209425 ignition[1222]: parsing config with SHA512: 7693b8d2d8a7a5062e87a5ba559cd69ea190a1e82e61e49dc30f16824d57e96b26f2586b6aa4532ee5bd0fecc21685a790013523eac293998b50b68ef692899a Sep 4 17:16:29.222318 unknown[1222]: fetched base config from "system" Sep 4 17:16:29.222351 unknown[1222]: fetched base config from "system" Sep 4 17:16:29.224672 ignition[1222]: fetch: fetch complete Sep 4 17:16:29.222366 unknown[1222]: fetched user config from "aws" Sep 4 17:16:29.224686 ignition[1222]: fetch: fetch passed Sep 4 17:16:29.224785 ignition[1222]: Ignition finished successfully Sep 4 17:16:29.234860 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 17:16:29.262463 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:16:29.286973 ignition[1228]: Ignition 2.19.0 Sep 4 17:16:29.287502 ignition[1228]: Stage: kargs Sep 4 17:16:29.288152 ignition[1228]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:29.288177 ignition[1228]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:29.288346 ignition[1228]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:29.293143 ignition[1228]: PUT result: OK Sep 4 17:16:29.302674 ignition[1228]: kargs: kargs passed Sep 4 17:16:29.302991 ignition[1228]: Ignition finished successfully Sep 4 17:16:29.309503 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:16:29.328532 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:16:29.355153 ignition[1235]: Ignition 2.19.0 Sep 4 17:16:29.355173 ignition[1235]: Stage: disks Sep 4 17:16:29.355815 ignition[1235]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:29.355840 ignition[1235]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:29.355996 ignition[1235]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:29.359451 ignition[1235]: PUT result: OK Sep 4 17:16:29.368899 ignition[1235]: disks: disks passed Sep 4 17:16:29.371886 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:16:29.368993 ignition[1235]: Ignition finished successfully Sep 4 17:16:29.381478 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
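The ignition[1222] fetch entries above record the usual EC2 IMDSv2 exchange: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then a GET of http://169.254.169.254/2019-10-01/user-data with that token. A minimal sketch of the same two requests, with the URLs taken from the log; the token header names are the standard IMDSv2 ones and are an assumption here, since the log does not print headers:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # PUT /latest/api/token -- logged as "PUT http://169.254.169.254/latest/api/token: attempt #1"
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},  # standard IMDSv2 header (not shown in the log)
    )
    token = urllib.request.urlopen(token_req, timeout=5).read().decode()

    # GET /2019-10-01/user-data with the session token -- the config Ignition then parses
    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    user_data = urllib.request.urlopen(data_req, timeout=5).read()
    print(user_data[:200])

Off-instance the link-local address is unreachable and the calls simply time out; the sketch only illustrates the sequence behind the "PUT result: OK" and "GET result: OK" lines.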
Sep 4 17:16:29.384187 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:16:29.387083 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:16:29.389514 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:16:29.391939 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:16:29.412015 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:16:29.459303 systemd-fsck[1243]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 17:16:29.465872 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:16:29.479589 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:16:29.568090 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 901d46b0-2319-4536-8a6d-46889db73e8c r/w with ordered data mode. Quota mode: none. Sep 4 17:16:29.569540 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:16:29.570409 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:16:29.591364 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:16:29.595363 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:16:29.605765 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 17:16:29.605867 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:16:29.605917 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:16:29.632385 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1262) Sep 4 17:16:29.619118 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:16:29.651545 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:29.651605 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:16:29.651633 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 4 17:16:29.636179 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:16:29.669106 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 17:16:29.671420 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:16:29.971673 initrd-setup-root[1286]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:16:29.989432 initrd-setup-root[1293]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:16:29.999727 initrd-setup-root[1300]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:16:30.008132 initrd-setup-root[1307]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:16:30.104250 systemd-networkd[1210]: eth0: Gained IPv6LL Sep 4 17:16:30.295683 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:16:30.307286 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:16:30.314323 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:16:30.342894 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:16:30.346601 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:30.373954 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 4 17:16:30.391744 ignition[1376]: INFO : Ignition 2.19.0 Sep 4 17:16:30.394720 ignition[1376]: INFO : Stage: mount Sep 4 17:16:30.396661 ignition[1376]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:30.396661 ignition[1376]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:30.396661 ignition[1376]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:30.405061 ignition[1376]: INFO : PUT result: OK Sep 4 17:16:30.410570 ignition[1376]: INFO : mount: mount passed Sep 4 17:16:30.412492 ignition[1376]: INFO : Ignition finished successfully Sep 4 17:16:30.418137 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:16:30.436391 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:16:30.577413 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:16:30.619117 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1387) Sep 4 17:16:30.623289 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:30.623335 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:16:30.623362 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 4 17:16:30.630111 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 17:16:30.633459 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:16:30.682129 ignition[1404]: INFO : Ignition 2.19.0 Sep 4 17:16:30.682129 ignition[1404]: INFO : Stage: files Sep 4 17:16:30.682129 ignition[1404]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:30.682129 ignition[1404]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:30.682129 ignition[1404]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:30.694574 ignition[1404]: INFO : PUT result: OK Sep 4 17:16:30.705640 ignition[1404]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:16:30.708814 ignition[1404]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:16:30.708814 ignition[1404]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:16:30.728513 ignition[1404]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:16:30.731979 ignition[1404]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:16:30.735644 unknown[1404]: wrote ssh authorized keys file for user: core Sep 4 17:16:30.738195 ignition[1404]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:16:30.743839 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:16:30.743839 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 4 17:16:30.802777 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:16:30.889466 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:16:30.889466 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:16:30.897896 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 4 17:16:31.353569 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 17:16:31.490602 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:16:31.490602 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:16:31.500147 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:16:31.500147 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:16:31.500147 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:16:31.500147 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:16:31.500147 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:16:31.500147 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:16:31.500147 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:16:31.500147 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:16:31.500147 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:16:31.500147 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Sep 4 17:16:31.500147 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Sep 4 17:16:31.500147 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Sep 4 17:16:31.500147 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Sep 4 17:16:31.869601 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 17:16:32.221958 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Sep 4 17:16:32.221958 ignition[1404]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 4 17:16:32.232615 ignition[1404]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:16:32.232615 ignition[1404]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:16:32.232615 ignition[1404]: INFO : files: op(c): [finished] processing unit 
"prepare-helm.service" Sep 4 17:16:32.232615 ignition[1404]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:16:32.232615 ignition[1404]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:16:32.232615 ignition[1404]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:16:32.232615 ignition[1404]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:16:32.232615 ignition[1404]: INFO : files: files passed Sep 4 17:16:32.232615 ignition[1404]: INFO : Ignition finished successfully Sep 4 17:16:32.250764 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:16:32.286530 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:16:32.293481 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:16:32.303380 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:16:32.306374 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:16:32.334575 initrd-setup-root-after-ignition[1433]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:16:32.334575 initrd-setup-root-after-ignition[1433]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:16:32.342908 initrd-setup-root-after-ignition[1437]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:16:32.341893 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:16:32.345691 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:16:32.364344 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:16:32.430587 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:16:32.432133 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:16:32.437513 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:16:32.439630 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:16:32.441764 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:16:32.445348 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:16:32.488574 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:16:32.507499 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:16:32.533998 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:16:32.540635 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:16:32.543987 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:16:32.549092 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:16:32.549808 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:16:32.556541 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:16:32.559746 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:16:32.562459 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Sep 4 17:16:32.567912 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:16:32.575003 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:16:32.577945 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:16:32.587615 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:16:32.590573 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:16:32.593218 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:16:32.595794 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:16:32.606599 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:16:32.606829 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:16:32.609742 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:16:32.612489 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:16:32.615473 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:16:32.626825 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:16:32.629308 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:16:32.629526 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:16:32.632093 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:16:32.632666 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:16:32.638771 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:16:32.639021 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:16:32.661509 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:16:32.675781 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:16:32.682265 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:16:32.684414 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:16:32.691496 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:16:32.691732 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:16:32.713796 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:16:32.714751 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:16:32.726767 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:16:32.735605 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:16:32.736156 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:16:32.744118 ignition[1457]: INFO : Ignition 2.19.0 Sep 4 17:16:32.744118 ignition[1457]: INFO : Stage: umount Sep 4 17:16:32.744118 ignition[1457]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:32.744118 ignition[1457]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:32.753620 ignition[1457]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:32.753620 ignition[1457]: INFO : PUT result: OK Sep 4 17:16:32.761543 ignition[1457]: INFO : umount: umount passed Sep 4 17:16:32.761543 ignition[1457]: INFO : Ignition finished successfully Sep 4 17:16:32.764419 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:16:32.766361 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Sep 4 17:16:32.770859 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:16:32.771042 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:16:32.774088 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:16:32.774179 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:16:32.786910 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:16:32.787043 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:16:32.793704 systemd[1]: Stopped target network.target - Network. Sep 4 17:16:32.795778 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:16:32.795867 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:16:32.798633 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:16:32.800712 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:16:32.812153 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:16:32.814538 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:16:32.816310 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:16:32.818192 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:16:32.818272 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:16:32.820243 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:16:32.820312 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:16:32.822312 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:16:32.822392 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:16:32.824390 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:16:32.824469 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:16:32.826548 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:16:32.826622 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:16:32.829674 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:16:32.832926 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:16:32.874204 systemd-networkd[1210]: eth0: DHCPv6 lease lost Sep 4 17:16:32.884345 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:16:32.884570 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:16:32.887174 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:16:32.887246 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:16:32.904619 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:16:32.907102 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:16:32.907217 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:16:32.910341 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:16:32.913462 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:16:32.913667 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:16:32.939059 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:16:32.940776 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 17:16:32.946547 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:16:32.946649 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:16:32.949321 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:16:32.949402 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:16:32.972517 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:16:32.974155 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:16:32.978817 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:16:32.980197 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:16:32.984602 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:16:32.984723 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:16:32.988311 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:16:32.988384 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:16:33.003266 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:16:33.003361 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:16:33.005993 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:16:33.006090 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:16:33.017342 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:16:33.017430 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:16:33.031348 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:16:33.035134 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:16:33.035251 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:16:33.038592 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 17:16:33.038675 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:16:33.041843 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:16:33.041921 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:16:33.045022 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:16:33.045122 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:16:33.085798 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:16:33.086227 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:16:33.092690 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:16:33.109316 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:16:33.127129 systemd[1]: Switching root. Sep 4 17:16:33.174624 systemd-journald[251]: Journal stopped Sep 4 17:16:35.405602 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Sep 4 17:16:35.405733 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:16:35.405777 kernel: SELinux: policy capability open_perms=1 Sep 4 17:16:35.405809 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:16:35.405849 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:16:35.405884 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:16:35.405918 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:16:35.405949 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:16:35.405977 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:16:35.406006 kernel: audit: type=1403 audit(1725470193.775:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:16:35.406048 systemd[1]: Successfully loaded SELinux policy in 48.790ms. Sep 4 17:16:35.406121 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.295ms. Sep 4 17:16:35.406164 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:16:35.406198 systemd[1]: Detected virtualization amazon. Sep 4 17:16:35.406235 systemd[1]: Detected architecture arm64. Sep 4 17:16:35.406267 systemd[1]: Detected first boot. Sep 4 17:16:35.406300 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:16:35.406331 zram_generator::config[1498]: No configuration found. Sep 4 17:16:35.406366 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:16:35.406398 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:16:35.406431 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 17:16:35.406463 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:16:35.406509 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:16:35.406540 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:16:35.406572 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:16:35.406603 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:16:35.406637 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:16:35.406668 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:16:35.406698 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:16:35.406729 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:16:35.406764 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:16:35.406796 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:16:35.406827 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:16:35.406859 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:16:35.406891 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:16:35.406921 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Sep 4 17:16:35.406955 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 17:16:35.407008 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:16:35.407041 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 17:16:35.407825 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:16:35.407889 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:16:35.407923 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:16:35.407955 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:16:35.407987 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:16:35.408021 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:16:35.408054 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:16:35.408152 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:16:35.408195 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:16:35.408229 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:16:35.408259 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:16:35.408291 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:16:35.408322 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:16:35.408362 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:16:35.408395 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:16:35.408430 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:16:35.408461 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:16:35.408497 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:16:35.408531 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:16:35.408569 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:16:35.408600 systemd[1]: Reached target machines.target - Containers. Sep 4 17:16:35.408631 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:16:35.408666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:16:35.408698 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:16:35.408729 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:16:35.408765 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:16:35.408797 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:16:35.408828 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:16:35.408859 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:16:35.408890 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:16:35.408924 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Sep 4 17:16:35.408954 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:16:35.408990 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:16:35.409020 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:16:35.409064 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 17:16:35.409131 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:16:35.409162 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:16:35.409192 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:16:35.409222 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:16:35.409254 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:16:35.409284 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:16:35.409313 systemd[1]: Stopped verity-setup.service. Sep 4 17:16:35.409341 kernel: fuse: init (API version 7.39) Sep 4 17:16:35.409379 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:16:35.409409 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:16:35.409441 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:16:35.409470 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:16:35.409502 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:16:35.409531 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:16:35.409562 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:16:35.409597 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:16:35.409627 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:16:35.409656 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:16:35.409688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:16:35.409718 kernel: loop: module loaded Sep 4 17:16:35.409752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:16:35.409788 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:16:35.409817 kernel: ACPI: bus type drm_connector registered Sep 4 17:16:35.409845 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:16:35.409877 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:16:35.409908 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:16:35.409938 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:16:35.409976 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:16:35.410006 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:16:35.410036 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:16:35.410065 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:16:35.410129 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:16:35.410161 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:16:35.410245 systemd-journald[1583]: Collecting audit messages is disabled. Sep 4 17:16:35.410302 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Sep 4 17:16:35.410333 systemd-journald[1583]: Journal started Sep 4 17:16:35.410382 systemd-journald[1583]: Runtime Journal (/run/log/journal/ec2fffad6f1f34cb675774406e8f3f76) is 8.0M, max 75.3M, 67.3M free. Sep 4 17:16:35.423934 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:16:34.754274 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:16:34.776650 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 4 17:16:34.777458 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:16:35.433824 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:16:35.433899 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:16:35.447153 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:16:35.477107 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:16:35.496470 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:16:35.496580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:16:35.496620 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:16:35.496667 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:16:35.496705 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:16:35.496746 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:16:35.501109 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:16:35.516117 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:16:35.541124 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:16:35.551533 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:16:35.556696 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:16:35.562425 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:16:35.567781 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:16:35.572977 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:16:35.579535 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:16:35.593846 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:16:35.638249 kernel: loop0: detected capacity change from 0 to 65520 Sep 4 17:16:35.646946 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:16:35.667579 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:16:35.678553 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:16:35.698365 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:16:35.705017 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 17:16:35.716634 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:16:35.716362 systemd-tmpfiles[1609]: ACLs are not supported, ignoring. Sep 4 17:16:35.716387 systemd-tmpfiles[1609]: ACLs are not supported, ignoring. Sep 4 17:16:35.740436 systemd-journald[1583]: Time spent on flushing to /var/log/journal/ec2fffad6f1f34cb675774406e8f3f76 is 133.020ms for 923 entries. Sep 4 17:16:35.740436 systemd-journald[1583]: System Journal (/var/log/journal/ec2fffad6f1f34cb675774406e8f3f76) is 8.0M, max 195.6M, 187.6M free. Sep 4 17:16:35.895728 systemd-journald[1583]: Received client request to flush runtime journal. Sep 4 17:16:35.895803 kernel: loop1: detected capacity change from 0 to 52536 Sep 4 17:16:35.751485 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:16:35.770464 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:16:35.783635 udevadm[1640]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 4 17:16:35.808531 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:16:35.820402 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:16:35.902141 kernel: loop2: detected capacity change from 0 to 114288 Sep 4 17:16:35.904578 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:16:35.924267 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:16:35.957702 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:16:35.988144 kernel: loop3: detected capacity change from 0 to 193208 Sep 4 17:16:36.025832 systemd-tmpfiles[1651]: ACLs are not supported, ignoring. Sep 4 17:16:36.025865 systemd-tmpfiles[1651]: ACLs are not supported, ignoring. Sep 4 17:16:36.041904 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:16:36.237593 kernel: loop4: detected capacity change from 0 to 65520 Sep 4 17:16:36.267106 kernel: loop5: detected capacity change from 0 to 52536 Sep 4 17:16:36.281234 kernel: loop6: detected capacity change from 0 to 114288 Sep 4 17:16:36.306509 kernel: loop7: detected capacity change from 0 to 193208 Sep 4 17:16:36.339526 (sd-merge)[1656]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 4 17:16:36.340530 (sd-merge)[1656]: Merged extensions into '/usr'. Sep 4 17:16:36.352799 systemd[1]: Reloading requested from client PID 1608 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:16:36.352831 systemd[1]: Reloading... Sep 4 17:16:36.479625 ldconfig[1604]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:16:36.538105 zram_generator::config[1677]: No configuration found. Sep 4 17:16:36.819258 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:16:36.928746 systemd[1]: Reloading finished in 574 ms. Sep 4 17:16:36.967536 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:16:36.971563 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Sep 4 17:16:36.976207 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:16:36.991355 systemd[1]: Starting ensure-sysext.service... Sep 4 17:16:36.998311 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:16:37.012408 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:16:37.024437 systemd[1]: Reloading requested from client PID 1733 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:16:37.024472 systemd[1]: Reloading... Sep 4 17:16:37.056299 systemd-tmpfiles[1734]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:16:37.056945 systemd-tmpfiles[1734]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:16:37.065605 systemd-tmpfiles[1734]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:16:37.066205 systemd-tmpfiles[1734]: ACLs are not supported, ignoring. Sep 4 17:16:37.066370 systemd-tmpfiles[1734]: ACLs are not supported, ignoring. Sep 4 17:16:37.085999 systemd-tmpfiles[1734]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:16:37.086021 systemd-tmpfiles[1734]: Skipping /boot Sep 4 17:16:37.116633 systemd-udevd[1735]: Using default interface naming scheme 'v255'. Sep 4 17:16:37.125607 systemd-tmpfiles[1734]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:16:37.127141 systemd-tmpfiles[1734]: Skipping /boot Sep 4 17:16:37.251114 zram_generator::config[1773]: No configuration found. Sep 4 17:16:37.360314 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1765) Sep 4 17:16:37.367125 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1765) Sep 4 17:16:37.399338 (udev-worker)[1777]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:16:37.578295 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1777) Sep 4 17:16:37.655759 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:16:37.802230 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 17:16:37.803562 systemd[1]: Reloading finished in 778 ms. Sep 4 17:16:37.825641 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:16:37.830871 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:16:37.922467 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:16:37.928136 systemd[1]: Finished ensure-sysext.service. Sep 4 17:16:37.937004 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 4 17:16:37.951470 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:16:37.961434 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:16:37.966614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 4 17:16:37.976521 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:16:37.985404 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:16:37.995418 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:16:38.005399 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:16:38.015817 lvm[1931]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:16:38.020535 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:16:38.025020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:16:38.030426 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:16:38.050373 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:16:38.063389 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:16:38.087920 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:16:38.092448 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:16:38.104422 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:16:38.116404 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:16:38.127650 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:16:38.131132 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:16:38.144227 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:16:38.157357 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:16:38.160208 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:16:38.172227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:16:38.172565 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:16:38.181675 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:16:38.182214 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:16:38.189409 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:16:38.195057 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:16:38.208621 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:16:38.235664 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:16:38.241687 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:16:38.249690 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:16:38.256486 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:16:38.256603 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:16:38.259787 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:16:38.266604 augenrules[1968]: No rules Sep 4 17:16:38.271691 lvm[1966]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Sep 4 17:16:38.274795 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:16:38.278974 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:16:38.281179 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:16:38.307541 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:16:38.333414 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:16:38.373226 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:16:38.384206 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:16:38.486853 systemd-networkd[1943]: lo: Link UP Sep 4 17:16:38.487557 systemd-networkd[1943]: lo: Gained carrier Sep 4 17:16:38.490586 systemd-networkd[1943]: Enumeration completed Sep 4 17:16:38.490994 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:16:38.495861 systemd-resolved[1945]: Positive Trust Anchors: Sep 4 17:16:38.495884 systemd-resolved[1945]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:16:38.495948 systemd-resolved[1945]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:16:38.497476 systemd-networkd[1943]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:16:38.497483 systemd-networkd[1943]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:16:38.501831 systemd-networkd[1943]: eth0: Link UP Sep 4 17:16:38.502324 systemd-networkd[1943]: eth0: Gained carrier Sep 4 17:16:38.502360 systemd-networkd[1943]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:16:38.503527 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:16:38.507821 systemd-resolved[1945]: Defaulting to hostname 'linux'. Sep 4 17:16:38.511182 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:16:38.514008 systemd[1]: Reached target network.target - Network. Sep 4 17:16:38.517231 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:16:38.522539 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:16:38.525384 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:16:38.528286 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:16:38.531463 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:16:38.534044 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Sep 4 17:16:38.536792 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:16:38.539687 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:16:38.539735 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:16:38.541824 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:16:38.545870 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:16:38.545951 systemd-networkd[1943]: eth0: DHCPv4 address 172.31.17.160/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 4 17:16:38.556295 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:16:38.564687 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:16:38.569368 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:16:38.572722 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:16:38.575046 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:16:38.577288 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:16:38.577338 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:16:38.584359 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:16:38.594404 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:16:38.601234 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:16:38.606860 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:16:38.612370 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:16:38.616252 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:16:38.629418 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:16:38.644438 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 17:16:38.652354 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:16:38.661334 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 4 17:16:38.667748 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:16:38.675140 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:16:38.684222 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:16:38.687502 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:16:38.688577 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:16:38.702366 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:16:38.709232 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:16:38.769841 jq[2007]: true Sep 4 17:16:38.783196 jq[1994]: false Sep 4 17:16:38.791676 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:16:38.796887 jq[2014]: true Sep 4 17:16:38.793286 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 4 17:16:38.815425 (ntainerd)[2020]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:16:38.834293 extend-filesystems[1995]: Found loop4 Sep 4 17:16:38.834293 extend-filesystems[1995]: Found loop5 Sep 4 17:16:38.847490 extend-filesystems[1995]: Found loop6 Sep 4 17:16:38.847490 extend-filesystems[1995]: Found loop7 Sep 4 17:16:38.847490 extend-filesystems[1995]: Found nvme0n1 Sep 4 17:16:38.847490 extend-filesystems[1995]: Found nvme0n1p1 Sep 4 17:16:38.847490 extend-filesystems[1995]: Found nvme0n1p2 Sep 4 17:16:38.847490 extend-filesystems[1995]: Found nvme0n1p3 Sep 4 17:16:38.847490 extend-filesystems[1995]: Found usr Sep 4 17:16:38.847490 extend-filesystems[1995]: Found nvme0n1p4 Sep 4 17:16:38.847490 extend-filesystems[1995]: Found nvme0n1p6 Sep 4 17:16:38.847490 extend-filesystems[1995]: Found nvme0n1p7 Sep 4 17:16:38.847490 extend-filesystems[1995]: Found nvme0n1p9 Sep 4 17:16:38.847490 extend-filesystems[1995]: Checking size of /dev/nvme0n1p9 Sep 4 17:16:38.878125 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:16:38.970328 extend-filesystems[1995]: Resized partition /dev/nvme0n1p9 Sep 4 17:16:38.991841 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 4 17:16:38.877809 dbus-daemon[1993]: [system] SELinux support is enabled Sep 4 17:16:38.898577 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:16:39.024262 extend-filesystems[2047]: resize2fs 1.47.1 (20-May-2024) Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:38 ntpd[1997]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:18:26 UTC 2024 (1): Starting Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:38 ntpd[1997]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:38 ntpd[1997]: ---------------------------------------------------- Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:38 ntpd[1997]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:38 ntpd[1997]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:38 ntpd[1997]: corporation. 
Support and training for ntp-4 are Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:38 ntpd[1997]: available at https://www.nwtime.org/support Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:38 ntpd[1997]: ---------------------------------------------------- Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:38 ntpd[1997]: proto: precision = 0.096 usec (-23) Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:38 ntpd[1997]: basedate set to 2024-08-23 Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:38 ntpd[1997]: gps base set to 2024-08-25 (week 2329) Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:39 ntpd[1997]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:39 ntpd[1997]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:39 ntpd[1997]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:39 ntpd[1997]: Listen normally on 3 eth0 172.31.17.160:123 Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:39 ntpd[1997]: Listen normally on 4 lo [::1]:123 Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:39 ntpd[1997]: bind(21) AF_INET6 fe80::476:35ff:fe07:eb87%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:39 ntpd[1997]: unable to create socket on eth0 (5) for fe80::476:35ff:fe07:eb87%2#123 Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:39 ntpd[1997]: failed to init interface for address fe80::476:35ff:fe07:eb87%2 Sep 4 17:16:39.036797 ntpd[1997]: 4 Sep 17:16:39 ntpd[1997]: Listening on routing socket on fd #21 for interface updates Sep 4 17:16:38.906740 dbus-daemon[1993]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1943 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 17:16:38.899004 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:16:39.051770 update_engine[2005]: I0904 17:16:39.039173 2005 main.cc:92] Flatcar Update Engine starting Sep 4 17:16:38.939179 dbus-daemon[1993]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 17:16:38.922033 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:16:39.071861 tar[2017]: linux-arm64/helm Sep 4 17:16:39.089184 update_engine[2005]: I0904 17:16:39.055294 2005 update_check_scheduler.cc:74] Next update check in 3m3s Sep 4 17:16:38.976413 ntpd[1997]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:18:26 UTC 2024 (1): Starting Sep 4 17:16:38.922113 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:16:39.098384 ntpd[1997]: 4 Sep 17:16:39 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:16:39.098384 ntpd[1997]: 4 Sep 17:16:39 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:16:38.976458 ntpd[1997]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:16:38.927318 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 4 17:16:38.976480 ntpd[1997]: ---------------------------------------------------- Sep 4 17:16:38.927360 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:16:38.976499 ntpd[1997]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:16:38.956777 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:16:38.976518 ntpd[1997]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:16:38.957154 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:16:38.976537 ntpd[1997]: corporation. Support and training for ntp-4 are Sep 4 17:16:38.999398 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 4 17:16:38.976556 ntpd[1997]: available at https://www.nwtime.org/support Sep 4 17:16:39.053381 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:16:38.976575 ntpd[1997]: ---------------------------------------------------- Sep 4 17:16:39.075451 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:16:39.000385 ntpd[1997]: proto: precision = 0.096 usec (-23) Sep 4 17:16:39.082129 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 4 17:16:39.000796 ntpd[1997]: basedate set to 2024-08-23 Sep 4 17:16:39.000822 ntpd[1997]: gps base set to 2024-08-25 (week 2329) Sep 4 17:16:39.022752 ntpd[1997]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:16:39.022837 ntpd[1997]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:16:39.029486 ntpd[1997]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:16:39.029559 ntpd[1997]: Listen normally on 3 eth0 172.31.17.160:123 Sep 4 17:16:39.029628 ntpd[1997]: Listen normally on 4 lo [::1]:123 Sep 4 17:16:39.029706 ntpd[1997]: bind(21) AF_INET6 fe80::476:35ff:fe07:eb87%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:16:39.122341 coreos-metadata[1992]: Sep 04 17:16:39.107 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:16:39.122341 coreos-metadata[1992]: Sep 04 17:16:39.108 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 4 17:16:39.122341 coreos-metadata[1992]: Sep 04 17:16:39.117 INFO Fetch successful Sep 4 17:16:39.122341 coreos-metadata[1992]: Sep 04 17:16:39.117 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 4 17:16:39.122341 coreos-metadata[1992]: Sep 04 17:16:39.121 INFO Fetch successful Sep 4 17:16:39.122341 coreos-metadata[1992]: Sep 04 17:16:39.121 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 4 17:16:39.029745 ntpd[1997]: unable to create socket on eth0 (5) for fe80::476:35ff:fe07:eb87%2#123 Sep 4 17:16:39.029778 ntpd[1997]: failed to init interface for address fe80::476:35ff:fe07:eb87%2 Sep 4 17:16:39.029833 ntpd[1997]: Listening on routing socket on fd #21 for interface updates Sep 4 17:16:39.073491 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:16:39.073545 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:16:39.137096 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 4 17:16:39.137197 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1779) Sep 4 17:16:39.137234 coreos-metadata[1992]: Sep 04 17:16:39.123 INFO Fetch successful Sep 4 17:16:39.137234 coreos-metadata[1992]: Sep 04 17:16:39.123 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 4 17:16:39.137234 coreos-metadata[1992]: Sep 04 17:16:39.127 INFO Fetch 
successful Sep 4 17:16:39.137234 coreos-metadata[1992]: Sep 04 17:16:39.133 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 4 17:16:39.137234 coreos-metadata[1992]: Sep 04 17:16:39.136 INFO Fetch failed with 404: resource not found Sep 4 17:16:39.137234 coreos-metadata[1992]: Sep 04 17:16:39.136 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 4 17:16:39.154757 coreos-metadata[1992]: Sep 04 17:16:39.140 INFO Fetch successful Sep 4 17:16:39.154757 coreos-metadata[1992]: Sep 04 17:16:39.140 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 4 17:16:39.154757 coreos-metadata[1992]: Sep 04 17:16:39.148 INFO Fetch successful Sep 4 17:16:39.154757 coreos-metadata[1992]: Sep 04 17:16:39.148 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 4 17:16:39.154757 coreos-metadata[1992]: Sep 04 17:16:39.152 INFO Fetch successful Sep 4 17:16:39.154757 coreos-metadata[1992]: Sep 04 17:16:39.152 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 4 17:16:39.154757 coreos-metadata[1992]: Sep 04 17:16:39.153 INFO Fetch successful Sep 4 17:16:39.154757 coreos-metadata[1992]: Sep 04 17:16:39.153 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 4 17:16:39.157505 extend-filesystems[2047]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 4 17:16:39.157505 extend-filesystems[2047]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:16:39.157505 extend-filesystems[2047]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 4 17:16:39.179019 extend-filesystems[1995]: Resized filesystem in /dev/nvme0n1p9 Sep 4 17:16:39.183380 coreos-metadata[1992]: Sep 04 17:16:39.169 INFO Fetch successful Sep 4 17:16:39.191796 bash[2067]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:16:39.203895 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:16:39.204282 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:16:39.212951 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:16:39.232441 systemd[1]: Starting sshkeys.service... Sep 4 17:16:39.259288 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 17:16:39.264466 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:16:39.293278 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 17:16:39.306602 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 17:16:39.310363 systemd-logind[2004]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 17:16:39.310448 systemd-logind[2004]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 4 17:16:39.310806 systemd-logind[2004]: New seat seat0. Sep 4 17:16:39.317231 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:16:39.486271 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Sep 4 17:16:39.537204 containerd[2020]: time="2024-09-04T17:16:39.537051670Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:16:39.607865 locksmithd[2066]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:16:39.618582 containerd[2020]: time="2024-09-04T17:16:39.618518339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:39.634905 containerd[2020]: time="2024-09-04T17:16:39.634812191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:39.634905 containerd[2020]: time="2024-09-04T17:16:39.634894415Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:16:39.635097 containerd[2020]: time="2024-09-04T17:16:39.634958807Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:16:39.636181 containerd[2020]: time="2024-09-04T17:16:39.635298635Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:16:39.636181 containerd[2020]: time="2024-09-04T17:16:39.635346611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:39.636181 containerd[2020]: time="2024-09-04T17:16:39.635606615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:39.636181 containerd[2020]: time="2024-09-04T17:16:39.635640287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:39.639193 coreos-metadata[2096]: Sep 04 17:16:39.639 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:16:39.639686 containerd[2020]: time="2024-09-04T17:16:39.639304883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:39.639686 containerd[2020]: time="2024-09-04T17:16:39.639384299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:39.639686 containerd[2020]: time="2024-09-04T17:16:39.639443963Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:39.639686 containerd[2020]: time="2024-09-04T17:16:39.639473219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:39.639870 containerd[2020]: time="2024-09-04T17:16:39.639797771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:16:39.644868 coreos-metadata[2096]: Sep 04 17:16:39.644 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 4 17:16:39.644988 containerd[2020]: time="2024-09-04T17:16:39.644246855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:39.645623 coreos-metadata[2096]: Sep 04 17:16:39.645 INFO Fetch successful Sep 4 17:16:39.645718 coreos-metadata[2096]: Sep 04 17:16:39.645 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 4 17:16:39.647522 coreos-metadata[2096]: Sep 04 17:16:39.646 INFO Fetch successful Sep 4 17:16:39.648437 containerd[2020]: time="2024-09-04T17:16:39.648338591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:39.648540 containerd[2020]: time="2024-09-04T17:16:39.648425615Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:16:39.651445 containerd[2020]: time="2024-09-04T17:16:39.651351095Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:16:39.651643 containerd[2020]: time="2024-09-04T17:16:39.651590471Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:16:39.654229 unknown[2096]: wrote ssh authorized keys file for user: core Sep 4 17:16:39.661308 containerd[2020]: time="2024-09-04T17:16:39.661135559Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:16:39.661308 containerd[2020]: time="2024-09-04T17:16:39.661235843Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:16:39.661308 containerd[2020]: time="2024-09-04T17:16:39.661276895Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:16:39.661477 containerd[2020]: time="2024-09-04T17:16:39.661313639Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:16:39.661477 containerd[2020]: time="2024-09-04T17:16:39.661347395Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:16:39.662621 containerd[2020]: time="2024-09-04T17:16:39.661625651Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:16:39.662621 containerd[2020]: time="2024-09-04T17:16:39.662020199Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:16:39.667126 containerd[2020]: time="2024-09-04T17:16:39.664334903Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:16:39.667126 containerd[2020]: time="2024-09-04T17:16:39.664400675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:16:39.667126 containerd[2020]: time="2024-09-04T17:16:39.664444307Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Sep 4 17:16:39.667126 containerd[2020]: time="2024-09-04T17:16:39.664485299Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:16:39.667126 containerd[2020]: time="2024-09-04T17:16:39.664516787Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:16:39.667126 containerd[2020]: time="2024-09-04T17:16:39.664549019Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:16:39.667126 containerd[2020]: time="2024-09-04T17:16:39.664594535Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:16:39.667126 containerd[2020]: time="2024-09-04T17:16:39.664628603Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:16:39.667126 containerd[2020]: time="2024-09-04T17:16:39.664659215Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:16:39.667126 containerd[2020]: time="2024-09-04T17:16:39.664687799Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:16:39.667126 containerd[2020]: time="2024-09-04T17:16:39.664714499Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:16:39.667126 containerd[2020]: time="2024-09-04T17:16:39.664754447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.667126 containerd[2020]: time="2024-09-04T17:16:39.664785635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.668805 containerd[2020]: time="2024-09-04T17:16:39.668732771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.668925 containerd[2020]: time="2024-09-04T17:16:39.668803739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.668925 containerd[2020]: time="2024-09-04T17:16:39.668838707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.668925 containerd[2020]: time="2024-09-04T17:16:39.668876771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.669062 containerd[2020]: time="2024-09-04T17:16:39.668920379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.669062 containerd[2020]: time="2024-09-04T17:16:39.668952263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.669062 containerd[2020]: time="2024-09-04T17:16:39.668983367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.669062 containerd[2020]: time="2024-09-04T17:16:39.669017939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.669062 containerd[2020]: time="2024-09-04T17:16:39.669048863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Sep 4 17:16:39.669301 containerd[2020]: time="2024-09-04T17:16:39.669097379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.669301 containerd[2020]: time="2024-09-04T17:16:39.669133919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.669301 containerd[2020]: time="2024-09-04T17:16:39.669170831Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:16:39.669301 containerd[2020]: time="2024-09-04T17:16:39.669227159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.669301 containerd[2020]: time="2024-09-04T17:16:39.669257207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.669502 containerd[2020]: time="2024-09-04T17:16:39.669310343Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:16:39.674315 containerd[2020]: time="2024-09-04T17:16:39.671888771Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:16:39.674315 containerd[2020]: time="2024-09-04T17:16:39.671974163Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:16:39.674315 containerd[2020]: time="2024-09-04T17:16:39.672003047Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:16:39.674315 containerd[2020]: time="2024-09-04T17:16:39.672035951Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:16:39.677040 containerd[2020]: time="2024-09-04T17:16:39.672063767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:16:39.677040 containerd[2020]: time="2024-09-04T17:16:39.676790975Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:16:39.677040 containerd[2020]: time="2024-09-04T17:16:39.676826891Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:16:39.677040 containerd[2020]: time="2024-09-04T17:16:39.676861211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 17:16:39.677952 containerd[2020]: time="2024-09-04T17:16:39.677401931Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:16:39.677952 containerd[2020]: time="2024-09-04T17:16:39.677537447Z" level=info msg="Connect containerd service" Sep 4 17:16:39.677952 containerd[2020]: time="2024-09-04T17:16:39.677601083Z" level=info msg="using legacy CRI server" Sep 4 17:16:39.677952 containerd[2020]: time="2024-09-04T17:16:39.677618831Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:16:39.677952 containerd[2020]: time="2024-09-04T17:16:39.677767679Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:16:39.690170 containerd[2020]: time="2024-09-04T17:16:39.689163635Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:16:39.690170 
containerd[2020]: time="2024-09-04T17:16:39.689794595Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:16:39.690170 containerd[2020]: time="2024-09-04T17:16:39.689891087Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:16:39.690170 containerd[2020]: time="2024-09-04T17:16:39.689979455Z" level=info msg="Start subscribing containerd event" Sep 4 17:16:39.690170 containerd[2020]: time="2024-09-04T17:16:39.690041891Z" level=info msg="Start recovering state" Sep 4 17:16:39.690517 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:16:39.691090 containerd[2020]: time="2024-09-04T17:16:39.690190703Z" level=info msg="Start event monitor" Sep 4 17:16:39.691090 containerd[2020]: time="2024-09-04T17:16:39.690216467Z" level=info msg="Start snapshots syncer" Sep 4 17:16:39.691090 containerd[2020]: time="2024-09-04T17:16:39.690242099Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:16:39.691090 containerd[2020]: time="2024-09-04T17:16:39.690263963Z" level=info msg="Start streaming server" Sep 4 17:16:39.691090 containerd[2020]: time="2024-09-04T17:16:39.690397307Z" level=info msg="containerd successfully booted in 0.160814s" Sep 4 17:16:39.714974 update-ssh-keys[2167]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:16:39.717156 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 17:16:39.736309 systemd[1]: Finished sshkeys.service. Sep 4 17:16:39.741777 dbus-daemon[1993]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 17:16:39.751662 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 4 17:16:39.753774 dbus-daemon[1993]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2055 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 17:16:39.773643 systemd[1]: Starting polkit.service - Authorization Manager... Sep 4 17:16:39.837672 polkitd[2176]: Started polkitd version 121 Sep 4 17:16:39.854029 polkitd[2176]: Loading rules from directory /etc/polkit-1/rules.d Sep 4 17:16:39.854176 polkitd[2176]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 4 17:16:39.862001 polkitd[2176]: Finished loading, compiling and executing 2 rules Sep 4 17:16:39.871306 dbus-daemon[1993]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 4 17:16:39.877816 polkitd[2176]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 4 17:16:39.881064 systemd[1]: Started polkit.service - Authorization Manager. Sep 4 17:16:39.950175 systemd-resolved[1945]: System hostname changed to 'ip-172-31-17-160'. 
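
Aside: the "failed to load cni during init ... no network config found in /etc/cni/net.d" error above is expected on a node that has no CNI plugin installed yet; the CRI plugin retries once a conflist appears in its NetworkPluginConfDir. A hypothetical minimal bridge conflist, written from Python purely to illustrate the file format (in practice a network add-on such as flannel or Calico drops its own file here, not this one):

```python
import json
import pathlib

# Hypothetical example only: a minimal CNI conflist of the kind the CRI
# plugin looks for in /etc/cni/net.d (NetworkPluginConfDir in the config
# dump above). Real clusters get this file from their network add-on.
conflist = {
    "cniVersion": "0.4.0",
    "name": "example-bridge",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-example-bridge.conflist")
path.write_text(json.dumps(conflist, indent=2))
print("wrote", path)
```
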
Sep 4 17:16:39.950716 systemd-hostnamed[2055]: Hostname set to (transient) Sep 4 17:16:39.977161 ntpd[1997]: bind(24) AF_INET6 fe80::476:35ff:fe07:eb87%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:16:39.978949 ntpd[1997]: 4 Sep 17:16:39 ntpd[1997]: bind(24) AF_INET6 fe80::476:35ff:fe07:eb87%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:16:39.978949 ntpd[1997]: 4 Sep 17:16:39 ntpd[1997]: unable to create socket on eth0 (6) for fe80::476:35ff:fe07:eb87%2#123 Sep 4 17:16:39.978949 ntpd[1997]: 4 Sep 17:16:39 ntpd[1997]: failed to init interface for address fe80::476:35ff:fe07:eb87%2 Sep 4 17:16:39.977222 ntpd[1997]: unable to create socket on eth0 (6) for fe80::476:35ff:fe07:eb87%2#123 Sep 4 17:16:39.977252 ntpd[1997]: failed to init interface for address fe80::476:35ff:fe07:eb87%2 Sep 4 17:16:40.409302 tar[2017]: linux-arm64/LICENSE Sep 4 17:16:40.410657 tar[2017]: linux-arm64/README.md Sep 4 17:16:40.429455 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:16:40.536263 systemd-networkd[1943]: eth0: Gained IPv6LL Sep 4 17:16:40.541485 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:16:40.547837 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:16:40.560669 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 4 17:16:40.573769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:16:40.586993 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:16:40.672026 amazon-ssm-agent[2201]: Initializing new seelog logger Sep 4 17:16:40.672782 amazon-ssm-agent[2201]: New Seelog Logger Creation Complete Sep 4 17:16:40.673975 amazon-ssm-agent[2201]: 2024/09/04 17:16:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:40.673975 amazon-ssm-agent[2201]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:40.673975 amazon-ssm-agent[2201]: 2024/09/04 17:16:40 processing appconfig overrides Sep 4 17:16:40.674975 amazon-ssm-agent[2201]: 2024/09/04 17:16:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:40.675114 amazon-ssm-agent[2201]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:40.675322 amazon-ssm-agent[2201]: 2024/09/04 17:16:40 processing appconfig overrides Sep 4 17:16:40.675723 amazon-ssm-agent[2201]: 2024/09/04 17:16:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:40.675819 amazon-ssm-agent[2201]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:40.676019 amazon-ssm-agent[2201]: 2024/09/04 17:16:40 processing appconfig overrides Sep 4 17:16:40.676928 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO Proxy environment variables: Sep 4 17:16:40.680249 amazon-ssm-agent[2201]: 2024/09/04 17:16:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:40.681617 amazon-ssm-agent[2201]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:40.681617 amazon-ssm-agent[2201]: 2024/09/04 17:16:40 processing appconfig overrides Sep 4 17:16:40.687818 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
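
Aside: the repeated ntpd "Cannot assign requested address" errors earlier in the log happen because the IPv6 link-local address on eth0 is not yet usable when ntpd tries to bind to it; once systemd-networkd reports "eth0: Gained IPv6LL" above, a later interface rescan succeeds (see the "Listen normally on 7 eth0" line further down). A small Python sketch of the same kind of bind attempt, illustrative only:

```python
import errno
import socket

# Illustrative only: binding to an IPv6 link-local address needs the
# interface scope id, and fails with EADDRNOTAVAIL ("Cannot assign
# requested address") until the kernel has the address live on eth0 --
# which is what ntpd keeps hitting above.
addr = "fe80::476:35ff:fe07:eb87"
scope = socket.if_nametoindex("eth0")

s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
try:
    s.bind((addr, 0, 0, scope))   # port 0: we only care whether bind() works
    print("bound to", s.getsockname())
except OSError as err:
    if err.errno == errno.EADDRNOTAVAIL:
        print("address not ready yet:", err)
    else:
        raise
finally:
    s.close()
```
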
Sep 4 17:16:40.776671 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO https_proxy: Sep 4 17:16:40.842574 sshd_keygen[2045]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:16:40.875179 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO http_proxy: Sep 4 17:16:40.889159 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:16:40.903727 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:16:40.915573 systemd[1]: Started sshd@0-172.31.17.160:22-139.178.89.65:39774.service - OpenSSH per-connection server daemon (139.178.89.65:39774). Sep 4 17:16:40.953331 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:16:40.953710 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:16:40.970574 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:16:40.974434 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO no_proxy: Sep 4 17:16:41.008766 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:16:41.026640 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:16:41.032954 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:16:41.037970 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:16:41.073158 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO Checking if agent identity type OnPrem can be assumed Sep 4 17:16:41.159197 sshd[2226]: Accepted publickey for core from 139.178.89.65 port 39774 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:41.161806 sshd[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:41.173762 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO Checking if agent identity type EC2 can be assumed Sep 4 17:16:41.183161 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:16:41.195986 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:16:41.206678 systemd-logind[2004]: New session 1 of user core. Sep 4 17:16:41.246965 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:16:41.263565 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:16:41.274263 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO Agent will take identity from EC2 Sep 4 17:16:41.281462 (systemd)[2238]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:16:41.371927 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:16:41.473543 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:16:41.539975 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:16:41.539975 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 4 17:16:41.539975 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 4 17:16:41.539975 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO [amazon-ssm-agent] Starting Core Agent Sep 4 17:16:41.539975 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Sep 4 17:16:41.539975 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO [Registrar] Starting registrar module Sep 4 17:16:41.539975 amazon-ssm-agent[2201]: 2024-09-04 17:16:40 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 4 17:16:41.539975 amazon-ssm-agent[2201]: 2024-09-04 17:16:41 INFO [EC2Identity] EC2 registration was successful. Sep 4 17:16:41.539975 amazon-ssm-agent[2201]: 2024-09-04 17:16:41 INFO [CredentialRefresher] credentialRefresher has started Sep 4 17:16:41.539975 amazon-ssm-agent[2201]: 2024-09-04 17:16:41 INFO [CredentialRefresher] Starting credentials refresher loop Sep 4 17:16:41.539975 amazon-ssm-agent[2201]: 2024-09-04 17:16:41 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 4 17:16:41.542465 systemd[2238]: Queued start job for default target default.target. Sep 4 17:16:41.554544 systemd[2238]: Created slice app.slice - User Application Slice. Sep 4 17:16:41.554601 systemd[2238]: Reached target paths.target - Paths. Sep 4 17:16:41.554634 systemd[2238]: Reached target timers.target - Timers. Sep 4 17:16:41.557180 systemd[2238]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:16:41.572536 amazon-ssm-agent[2201]: 2024-09-04 17:16:41 INFO [CredentialRefresher] Next credential rotation will be in 31.516656207666667 minutes Sep 4 17:16:41.594786 systemd[2238]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:16:41.595045 systemd[2238]: Reached target sockets.target - Sockets. Sep 4 17:16:41.595422 systemd[2238]: Reached target basic.target - Basic System. Sep 4 17:16:41.595634 systemd[2238]: Reached target default.target - Main User Target. Sep 4 17:16:41.595808 systemd[2238]: Startup finished in 299ms. Sep 4 17:16:41.596146 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:16:41.610339 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:16:41.776789 systemd[1]: Started sshd@1-172.31.17.160:22-139.178.89.65:39784.service - OpenSSH per-connection server daemon (139.178.89.65:39784). Sep 4 17:16:41.968812 sshd[2251]: Accepted publickey for core from 139.178.89.65 port 39784 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:41.971786 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:41.981284 systemd-logind[2004]: New session 2 of user core. Sep 4 17:16:41.984338 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:16:42.115193 sshd[2251]: pam_unix(sshd:session): session closed for user core Sep 4 17:16:42.122633 systemd-logind[2004]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:16:42.123124 systemd[1]: sshd@1-172.31.17.160:22-139.178.89.65:39784.service: Deactivated successfully. Sep 4 17:16:42.126149 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:16:42.129708 systemd-logind[2004]: Removed session 2. Sep 4 17:16:42.156583 systemd[1]: Started sshd@2-172.31.17.160:22-139.178.89.65:39792.service - OpenSSH per-connection server daemon (139.178.89.65:39792). Sep 4 17:16:42.328797 sshd[2258]: Accepted publickey for core from 139.178.89.65 port 39792 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:42.331320 sshd[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:42.339411 systemd-logind[2004]: New session 3 of user core. Sep 4 17:16:42.350336 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 4 17:16:42.486421 sshd[2258]: pam_unix(sshd:session): session closed for user core Sep 4 17:16:42.491618 systemd-logind[2004]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:16:42.492550 systemd[1]: sshd@2-172.31.17.160:22-139.178.89.65:39792.service: Deactivated successfully. Sep 4 17:16:42.496523 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:16:42.499827 systemd-logind[2004]: Removed session 3. Sep 4 17:16:42.568403 amazon-ssm-agent[2201]: 2024-09-04 17:16:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 4 17:16:42.669288 amazon-ssm-agent[2201]: 2024-09-04 17:16:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2265) started Sep 4 17:16:42.769616 amazon-ssm-agent[2201]: 2024-09-04 17:16:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 4 17:16:42.977165 ntpd[1997]: Listen normally on 7 eth0 [fe80::476:35ff:fe07:eb87%2]:123 Sep 4 17:16:42.977705 ntpd[1997]: 4 Sep 17:16:42 ntpd[1997]: Listen normally on 7 eth0 [fe80::476:35ff:fe07:eb87%2]:123 Sep 4 17:16:43.218353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:16:43.222161 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:16:43.222738 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:16:43.232202 systemd[1]: Startup finished in 1.139s (kernel) + 8.979s (initrd) + 9.503s (userspace) = 19.623s. Sep 4 17:16:44.794320 kubelet[2279]: E0904 17:16:44.794163 2279 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:16:44.799438 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:16:44.800233 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:16:44.801037 systemd[1]: kubelet.service: Consumed 1.308s CPU time. Sep 4 17:16:46.350275 systemd-resolved[1945]: Clock change detected. Flushing caches. Sep 4 17:16:52.901565 systemd[1]: Started sshd@3-172.31.17.160:22-139.178.89.65:50650.service - OpenSSH per-connection server daemon (139.178.89.65:50650). Sep 4 17:16:53.070969 sshd[2293]: Accepted publickey for core from 139.178.89.65 port 50650 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:53.073538 sshd[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:53.081447 systemd-logind[2004]: New session 4 of user core. Sep 4 17:16:53.089399 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:16:53.216528 sshd[2293]: pam_unix(sshd:session): session closed for user core Sep 4 17:16:53.221274 systemd-logind[2004]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:16:53.221993 systemd[1]: sshd@3-172.31.17.160:22-139.178.89.65:50650.service: Deactivated successfully. Sep 4 17:16:53.225879 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:16:53.230263 systemd-logind[2004]: Removed session 4. 
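
Aside: the kubelet exit above (and the scheduled restarts that follow) is the normal pre-bootstrap state on this image: kubelet.service is enabled, but /var/lib/kubelet/config.yaml does not exist until something like `kubeadm init`/`kubeadm join` writes it. Purely to illustrate the file the error refers to, a hypothetical minimal KubeletConfiguration written from Python (field values are assumptions, not taken from this host):

```python
import pathlib

# Hypothetical minimal KubeletConfiguration; normally kubeadm generates
# this file during init/join. cgroupDriver: systemd matches the
# SystemdCgroup:true runc option in the containerd config dump earlier.
config_yaml = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
authentication:
  anonymous:
    enabled: false
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(config_yaml)
print("wrote", path)
```
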
Sep 4 17:16:53.257637 systemd[1]: Started sshd@4-172.31.17.160:22-139.178.89.65:50664.service - OpenSSH per-connection server daemon (139.178.89.65:50664). Sep 4 17:16:53.424306 sshd[2300]: Accepted publickey for core from 139.178.89.65 port 50664 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:53.426851 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:53.434097 systemd-logind[2004]: New session 5 of user core. Sep 4 17:16:53.443662 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:16:53.561062 sshd[2300]: pam_unix(sshd:session): session closed for user core Sep 4 17:16:53.567618 systemd[1]: sshd@4-172.31.17.160:22-139.178.89.65:50664.service: Deactivated successfully. Sep 4 17:16:53.570878 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:16:53.572703 systemd-logind[2004]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:16:53.574406 systemd-logind[2004]: Removed session 5. Sep 4 17:16:53.606846 systemd[1]: Started sshd@5-172.31.17.160:22-139.178.89.65:50678.service - OpenSSH per-connection server daemon (139.178.89.65:50678). Sep 4 17:16:53.774368 sshd[2307]: Accepted publickey for core from 139.178.89.65 port 50678 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:53.776914 sshd[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:53.785385 systemd-logind[2004]: New session 6 of user core. Sep 4 17:16:53.794377 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:16:53.920002 sshd[2307]: pam_unix(sshd:session): session closed for user core Sep 4 17:16:53.925100 systemd[1]: sshd@5-172.31.17.160:22-139.178.89.65:50678.service: Deactivated successfully. Sep 4 17:16:53.928229 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:16:53.931088 systemd-logind[2004]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:16:53.933079 systemd-logind[2004]: Removed session 6. Sep 4 17:16:53.962657 systemd[1]: Started sshd@6-172.31.17.160:22-139.178.89.65:50692.service - OpenSSH per-connection server daemon (139.178.89.65:50692). Sep 4 17:16:54.142799 sshd[2314]: Accepted publickey for core from 139.178.89.65 port 50692 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:54.145453 sshd[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:54.154449 systemd-logind[2004]: New session 7 of user core. Sep 4 17:16:54.159401 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:16:54.275094 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:16:54.275835 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:16:54.290647 sudo[2317]: pam_unix(sudo:session): session closed for user root Sep 4 17:16:54.314919 sshd[2314]: pam_unix(sshd:session): session closed for user core Sep 4 17:16:54.321553 systemd-logind[2004]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:16:54.323435 systemd[1]: sshd@6-172.31.17.160:22-139.178.89.65:50692.service: Deactivated successfully. Sep 4 17:16:54.329214 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:16:54.330837 systemd-logind[2004]: Removed session 7. Sep 4 17:16:54.358622 systemd[1]: Started sshd@7-172.31.17.160:22-139.178.89.65:50698.service - OpenSSH per-connection server daemon (139.178.89.65:50698). 
Sep 4 17:16:54.534949 sshd[2322]: Accepted publickey for core from 139.178.89.65 port 50698 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:54.537185 sshd[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:54.544410 systemd-logind[2004]: New session 8 of user core. Sep 4 17:16:54.556429 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:16:54.662288 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:16:54.663428 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:16:54.669731 sudo[2326]: pam_unix(sudo:session): session closed for user root Sep 4 17:16:54.679402 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:16:54.680024 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:16:54.705619 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:16:54.709185 auditctl[2329]: No rules Sep 4 17:16:54.709502 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:16:54.709838 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:16:54.723747 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:16:54.764064 augenrules[2347]: No rules Sep 4 17:16:54.766428 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:16:54.768718 sudo[2325]: pam_unix(sudo:session): session closed for user root Sep 4 17:16:54.794452 sshd[2322]: pam_unix(sshd:session): session closed for user core Sep 4 17:16:54.801091 systemd[1]: sshd@7-172.31.17.160:22-139.178.89.65:50698.service: Deactivated successfully. Sep 4 17:16:54.804175 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:16:54.805419 systemd-logind[2004]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:16:54.807021 systemd-logind[2004]: Removed session 8. Sep 4 17:16:54.839600 systemd[1]: Started sshd@8-172.31.17.160:22-139.178.89.65:50710.service - OpenSSH per-connection server daemon (139.178.89.65:50710). Sep 4 17:16:55.010408 sshd[2355]: Accepted publickey for core from 139.178.89.65 port 50710 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:55.012907 sshd[2355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:55.021436 systemd-logind[2004]: New session 9 of user core. Sep 4 17:16:55.029382 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:16:55.134337 sudo[2358]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:16:55.134970 sudo[2358]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:16:55.181420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:16:55.194349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:16:55.332614 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:16:55.333940 (dockerd)[2371]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:16:55.605089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:16:55.619212 (kubelet)[2381]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:16:55.757886 dockerd[2371]: time="2024-09-04T17:16:55.757790414Z" level=info msg="Starting up" Sep 4 17:16:55.759917 kubelet[2381]: E0904 17:16:55.759816 2381 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:16:55.769099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:16:55.769470 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:16:55.874875 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3080290019-merged.mount: Deactivated successfully. Sep 4 17:16:55.906744 dockerd[2371]: time="2024-09-04T17:16:55.906680355Z" level=info msg="Loading containers: start." Sep 4 17:16:56.066180 kernel: Initializing XFRM netlink socket Sep 4 17:16:56.097162 (udev-worker)[2407]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:16:56.177607 systemd-networkd[1943]: docker0: Link UP Sep 4 17:16:56.203437 dockerd[2371]: time="2024-09-04T17:16:56.203388228Z" level=info msg="Loading containers: done." Sep 4 17:16:56.229251 dockerd[2371]: time="2024-09-04T17:16:56.229041265Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:16:56.229251 dockerd[2371]: time="2024-09-04T17:16:56.229224121Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 17:16:56.229530 dockerd[2371]: time="2024-09-04T17:16:56.229409437Z" level=info msg="Daemon has completed initialization" Sep 4 17:16:56.291633 dockerd[2371]: time="2024-09-04T17:16:56.290490709Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:16:56.290757 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:16:57.348056 containerd[2020]: time="2024-09-04T17:16:57.347312966Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\"" Sep 4 17:16:57.985491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount677719348.mount: Deactivated successfully. 
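
Aside: after the "API listen on /run/docker.sock" line above, the daemon answers on that unix socket. A quick check with the Docker Python SDK (an assumption here; the SDK is not part of this image) might look like:

```python
import docker  # pip install docker; assumed available for this sketch

# Talk to the daemon over the socket the log says it is listening on.
client = docker.DockerClient(base_url="unix:///run/docker.sock")
print("ping:", client.ping())                      # True if the daemon answers
print("version:", client.version()["Version"])     # e.g. "26.1.0" per the log
print("storage driver:", client.info()["Driver"])  # "overlay2" per the log
```
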
Sep 4 17:16:59.550832 containerd[2020]: time="2024-09-04T17:16:59.550752713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:16:59.554637 containerd[2020]: time="2024-09-04T17:16:59.554591141Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=31599022" Sep 4 17:16:59.557190 containerd[2020]: time="2024-09-04T17:16:59.556967033Z" level=info msg="ImageCreate event name:\"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:16:59.561862 containerd[2020]: time="2024-09-04T17:16:59.561761225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:16:59.564816 containerd[2020]: time="2024-09-04T17:16:59.564206261Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"31595822\" in 2.216802251s" Sep 4 17:16:59.564816 containerd[2020]: time="2024-09-04T17:16:59.564266861Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\"" Sep 4 17:16:59.605644 containerd[2020]: time="2024-09-04T17:16:59.605539073Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\"" Sep 4 17:17:01.343213 containerd[2020]: time="2024-09-04T17:17:01.342728514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:01.344518 containerd[2020]: time="2024-09-04T17:17:01.344443674Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=29019496" Sep 4 17:17:01.345737 containerd[2020]: time="2024-09-04T17:17:01.345654258Z" level=info msg="ImageCreate event name:\"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:01.351610 containerd[2020]: time="2024-09-04T17:17:01.351549990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:01.355987 containerd[2020]: time="2024-09-04T17:17:01.355837638Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"30506763\" in 1.750232097s" Sep 4 17:17:01.355987 containerd[2020]: time="2024-09-04T17:17:01.355909410Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\"" Sep 4 17:17:01.394832 
containerd[2020]: time="2024-09-04T17:17:01.394719726Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\"" Sep 4 17:17:02.572303 containerd[2020]: time="2024-09-04T17:17:02.572239076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:02.574311 containerd[2020]: time="2024-09-04T17:17:02.574260416Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=15533681" Sep 4 17:17:02.576145 containerd[2020]: time="2024-09-04T17:17:02.576040328Z" level=info msg="ImageCreate event name:\"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:02.581723 containerd[2020]: time="2024-09-04T17:17:02.581620076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:02.589083 containerd[2020]: time="2024-09-04T17:17:02.587984444Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"17020966\" in 1.193205318s" Sep 4 17:17:02.589083 containerd[2020]: time="2024-09-04T17:17:02.588059504Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\"" Sep 4 17:17:02.627605 containerd[2020]: time="2024-09-04T17:17:02.627537092Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\"" Sep 4 17:17:03.923899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount552706838.mount: Deactivated successfully. 
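
Aside: the pull messages above report both image size and wall-clock duration, so effective pull throughput can be read straight off the log. Using the numbers reported for the three control-plane images pulled so far:

```python
# Sizes (bytes) and durations (seconds) copied from the pull messages above.
pulls = {
    "kube-apiserver:v1.28.13":          (31_595_822, 2.216802251),
    "kube-controller-manager:v1.28.13": (30_506_763, 1.750232097),
    "kube-scheduler:v1.28.13":          (17_020_966, 1.193205318),
}

for image, (size, seconds) in pulls.items():
    mb_per_s = size / seconds / 1_000_000
    print(f"{image}: {mb_per_s:.1f} MB/s")   # roughly 14-17 MB/s per pull
```
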
Sep 4 17:17:04.415649 containerd[2020]: time="2024-09-04T17:17:04.415582365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:04.417119 containerd[2020]: time="2024-09-04T17:17:04.417049881Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=24977930" Sep 4 17:17:04.418575 containerd[2020]: time="2024-09-04T17:17:04.418506993Z" level=info msg="ImageCreate event name:\"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:04.423028 containerd[2020]: time="2024-09-04T17:17:04.422962257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:04.424240 containerd[2020]: time="2024-09-04T17:17:04.424192953Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"24976949\" in 1.796592285s" Sep 4 17:17:04.424329 containerd[2020]: time="2024-09-04T17:17:04.424245213Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\"" Sep 4 17:17:04.465266 containerd[2020]: time="2024-09-04T17:17:04.465208833Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:17:04.941466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3370174026.mount: Deactivated successfully. 
Sep 4 17:17:04.950312 containerd[2020]: time="2024-09-04T17:17:04.950238384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:04.952036 containerd[2020]: time="2024-09-04T17:17:04.951968508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Sep 4 17:17:04.953744 containerd[2020]: time="2024-09-04T17:17:04.953668620Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:04.958643 containerd[2020]: time="2024-09-04T17:17:04.958545000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:04.960646 containerd[2020]: time="2024-09-04T17:17:04.960467040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 495.198459ms" Sep 4 17:17:04.960646 containerd[2020]: time="2024-09-04T17:17:04.960518892Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Sep 4 17:17:05.000486 containerd[2020]: time="2024-09-04T17:17:05.000413540Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:17:05.555190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4112093944.mount: Deactivated successfully. Sep 4 17:17:05.932030 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:17:05.951632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:06.416741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:06.431692 (kubelet)[2649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:17:06.538081 kubelet[2649]: E0904 17:17:06.537986 2649 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:17:06.543432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:17:06.543905 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 17:17:07.878908 containerd[2020]: time="2024-09-04T17:17:07.878848766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:07.882458 containerd[2020]: time="2024-09-04T17:17:07.882402242Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Sep 4 17:17:07.883170 containerd[2020]: time="2024-09-04T17:17:07.882954758Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:07.889475 containerd[2020]: time="2024-09-04T17:17:07.889391534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:07.892601 containerd[2020]: time="2024-09-04T17:17:07.891960374Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.891486834s" Sep 4 17:17:07.892601 containerd[2020]: time="2024-09-04T17:17:07.892019606Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Sep 4 17:17:07.930353 containerd[2020]: time="2024-09-04T17:17:07.930023163Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Sep 4 17:17:08.529370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608280379.mount: Deactivated successfully. 
Sep 4 17:17:09.026193 containerd[2020]: time="2024-09-04T17:17:09.025390812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:09.027410 containerd[2020]: time="2024-09-04T17:17:09.027343980Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462" Sep 4 17:17:09.029200 containerd[2020]: time="2024-09-04T17:17:09.029104512Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:09.033338 containerd[2020]: time="2024-09-04T17:17:09.033243540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:09.035267 containerd[2020]: time="2024-09-04T17:17:09.035015616Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.104935813s" Sep 4 17:17:09.035267 containerd[2020]: time="2024-09-04T17:17:09.035076936Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Sep 4 17:17:10.362249 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 4 17:17:14.499862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:14.511628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:14.554456 systemd[1]: Reloading requested from client PID 2768 ('systemctl') (unit session-9.scope)... Sep 4 17:17:14.554491 systemd[1]: Reloading... Sep 4 17:17:14.774168 zram_generator::config[2806]: No configuration found. Sep 4 17:17:15.001780 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:17:15.168751 systemd[1]: Reloading finished in 613 ms. Sep 4 17:17:15.260170 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:17:15.260426 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:17:15.260998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:15.271940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:15.609259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:15.624732 (kubelet)[2870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:17:15.703834 kubelet[2870]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:17:15.703834 kubelet[2870]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Sep 4 17:17:15.703834 kubelet[2870]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:17:15.704440 kubelet[2870]: I0904 17:17:15.703934 2870 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:17:17.705384 kubelet[2870]: I0904 17:17:17.705323 2870 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:17:17.705384 kubelet[2870]: I0904 17:17:17.705372 2870 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:17:17.706070 kubelet[2870]: I0904 17:17:17.705705 2870 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:17:17.734945 kubelet[2870]: I0904 17:17:17.734431 2870 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:17:17.739263 kubelet[2870]: E0904 17:17:17.739220 2870 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.17.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:17.753517 kubelet[2870]: W0904 17:17:17.753481 2870 machine.go:65] Cannot read vendor id correctly, set empty. Sep 4 17:17:17.754912 kubelet[2870]: I0904 17:17:17.754863 2870 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:17:17.756190 kubelet[2870]: I0904 17:17:17.755503 2870 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:17:17.756190 kubelet[2870]: I0904 17:17:17.755813 2870 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:17:17.756190 kubelet[2870]: I0904 17:17:17.755855 2870 
topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:17:17.756190 kubelet[2870]: I0904 17:17:17.755875 2870 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:17:17.756190 kubelet[2870]: I0904 17:17:17.756065 2870 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:17:17.758421 kubelet[2870]: I0904 17:17:17.758366 2870 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:17:17.758587 kubelet[2870]: I0904 17:17:17.758546 2870 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:17:17.758658 kubelet[2870]: I0904 17:17:17.758630 2870 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:17:17.758658 kubelet[2870]: I0904 17:17:17.758655 2870 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:17:17.760244 kubelet[2870]: W0904 17:17:17.760150 2870 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.17.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-160&limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:17.760369 kubelet[2870]: E0904 17:17:17.760250 2870 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-160&limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:17.762158 kubelet[2870]: W0904 17:17:17.761904 2870 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.17.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:17.762158 kubelet[2870]: E0904 17:17:17.762000 2870 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:17.762874 kubelet[2870]: I0904 17:17:17.762418 2870 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:17:17.766033 kubelet[2870]: W0904 17:17:17.765995 2870 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 4 17:17:17.767247 kubelet[2870]: I0904 17:17:17.767211 2870 server.go:1232] "Started kubelet" Sep 4 17:17:17.769255 kubelet[2870]: I0904 17:17:17.768526 2870 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:17:17.769871 kubelet[2870]: I0904 17:17:17.769830 2870 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:17:17.773241 kubelet[2870]: I0904 17:17:17.773194 2870 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:17:17.773798 kubelet[2870]: I0904 17:17:17.773771 2870 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:17:17.774821 kubelet[2870]: E0904 17:17:17.774666 2870 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-17-160.17f21a0a109f018c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-17-160", UID:"ip-172-31-17-160", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-17-160"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 17, 17, 767172492, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 17, 17, 767172492, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-17-160"}': 'Post "https://172.31.17.160:6443/api/v1/namespaces/default/events": dial tcp 172.31.17.160:6443: connect: connection refused'(may retry after sleeping) Sep 4 17:17:17.775741 kubelet[2870]: I0904 17:17:17.775683 2870 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:17:17.777555 kubelet[2870]: E0904 17:17:17.777003 2870 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:17:17.777555 kubelet[2870]: E0904 17:17:17.777056 2870 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:17:17.781086 kubelet[2870]: E0904 17:17:17.781055 2870 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-17-160\" not found" Sep 4 17:17:17.781447 kubelet[2870]: I0904 17:17:17.781425 2870 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:17:17.781882 kubelet[2870]: I0904 17:17:17.781835 2870 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:17:17.782193 kubelet[2870]: I0904 17:17:17.782172 2870 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:17:17.783187 kubelet[2870]: W0904 17:17:17.783092 2870 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.17.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:17.783415 kubelet[2870]: E0904 17:17:17.783393 2870 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:17.785427 kubelet[2870]: E0904 17:17:17.784980 2870 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-160?timeout=10s\": dial tcp 172.31.17.160:6443: connect: connection refused" interval="200ms" Sep 4 17:17:17.823906 kubelet[2870]: I0904 17:17:17.823851 2870 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:17:17.829873 kubelet[2870]: I0904 17:17:17.829807 2870 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:17:17.829873 kubelet[2870]: I0904 17:17:17.829867 2870 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:17:17.830067 kubelet[2870]: I0904 17:17:17.829898 2870 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:17:17.830067 kubelet[2870]: E0904 17:17:17.829973 2870 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:17:17.836318 kubelet[2870]: W0904 17:17:17.836213 2870 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.17.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:17.836478 kubelet[2870]: E0904 17:17:17.836362 2870 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:17.862986 kubelet[2870]: I0904 17:17:17.862950 2870 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:17:17.863286 kubelet[2870]: I0904 17:17:17.863266 2870 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:17:17.863441 kubelet[2870]: I0904 17:17:17.863423 2870 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:17:17.866688 kubelet[2870]: I0904 17:17:17.866658 2870 policy_none.go:49] "None policy: Start" Sep 4 17:17:17.868177 kubelet[2870]: I0904 17:17:17.868147 2870 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:17:17.868802 kubelet[2870]: I0904 17:17:17.868372 2870 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:17:17.878331 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:17:17.884857 kubelet[2870]: I0904 17:17:17.884786 2870 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-17-160" Sep 4 17:17:17.885410 kubelet[2870]: E0904 17:17:17.885344 2870 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.17.160:6443/api/v1/nodes\": dial tcp 172.31.17.160:6443: connect: connection refused" node="ip-172-31-17-160" Sep 4 17:17:17.895866 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 17:17:17.909846 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 4 17:17:17.913215 kubelet[2870]: I0904 17:17:17.912507 2870 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:17:17.913215 kubelet[2870]: I0904 17:17:17.912901 2870 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:17:17.914538 kubelet[2870]: E0904 17:17:17.914219 2870 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-160\" not found" Sep 4 17:17:17.930420 kubelet[2870]: I0904 17:17:17.930266 2870 topology_manager.go:215] "Topology Admit Handler" podUID="d0300d24d0cc8b8d3322a352d5c7897c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-160" Sep 4 17:17:17.933065 kubelet[2870]: I0904 17:17:17.932839 2870 topology_manager.go:215] "Topology Admit Handler" podUID="61052014591654bb039c42745c0d9edd" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-160" Sep 4 17:17:17.935414 kubelet[2870]: I0904 17:17:17.935006 2870 topology_manager.go:215] "Topology Admit Handler" podUID="cd8c6aa91a7b9353fa15d53c4191792c" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-160" Sep 4 17:17:17.947873 systemd[1]: Created slice kubepods-burstable-podd0300d24d0cc8b8d3322a352d5c7897c.slice - libcontainer container kubepods-burstable-podd0300d24d0cc8b8d3322a352d5c7897c.slice. Sep 4 17:17:17.964307 systemd[1]: Created slice kubepods-burstable-pod61052014591654bb039c42745c0d9edd.slice - libcontainer container kubepods-burstable-pod61052014591654bb039c42745c0d9edd.slice. Sep 4 17:17:17.982728 systemd[1]: Created slice kubepods-burstable-podcd8c6aa91a7b9353fa15d53c4191792c.slice - libcontainer container kubepods-burstable-podcd8c6aa91a7b9353fa15d53c4191792c.slice. 
Sep 4 17:17:17.986918 kubelet[2870]: E0904 17:17:17.986331 2870 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-160?timeout=10s\": dial tcp 172.31.17.160:6443: connect: connection refused" interval="400ms" Sep 4 17:17:18.082847 kubelet[2870]: I0904 17:17:18.082723 2870 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd8c6aa91a7b9353fa15d53c4191792c-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-160\" (UID: \"cd8c6aa91a7b9353fa15d53c4191792c\") " pod="kube-system/kube-apiserver-ip-172-31-17-160" Sep 4 17:17:18.082847 kubelet[2870]: I0904 17:17:18.082801 2870 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d0300d24d0cc8b8d3322a352d5c7897c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-160\" (UID: \"d0300d24d0cc8b8d3322a352d5c7897c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-160" Sep 4 17:17:18.082847 kubelet[2870]: I0904 17:17:18.082852 2870 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0300d24d0cc8b8d3322a352d5c7897c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-160\" (UID: \"d0300d24d0cc8b8d3322a352d5c7897c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-160" Sep 4 17:17:18.083154 kubelet[2870]: I0904 17:17:18.082898 2870 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61052014591654bb039c42745c0d9edd-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-160\" (UID: \"61052014591654bb039c42745c0d9edd\") " pod="kube-system/kube-scheduler-ip-172-31-17-160" Sep 4 17:17:18.083154 kubelet[2870]: I0904 17:17:18.082943 2870 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd8c6aa91a7b9353fa15d53c4191792c-ca-certs\") pod \"kube-apiserver-ip-172-31-17-160\" (UID: \"cd8c6aa91a7b9353fa15d53c4191792c\") " pod="kube-system/kube-apiserver-ip-172-31-17-160" Sep 4 17:17:18.083154 kubelet[2870]: I0904 17:17:18.083002 2870 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0300d24d0cc8b8d3322a352d5c7897c-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-160\" (UID: \"d0300d24d0cc8b8d3322a352d5c7897c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-160" Sep 4 17:17:18.083154 kubelet[2870]: I0904 17:17:18.083047 2870 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0300d24d0cc8b8d3322a352d5c7897c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-160\" (UID: \"d0300d24d0cc8b8d3322a352d5c7897c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-160" Sep 4 17:17:18.083154 kubelet[2870]: I0904 17:17:18.083096 2870 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0300d24d0cc8b8d3322a352d5c7897c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-160\" (UID: 
\"d0300d24d0cc8b8d3322a352d5c7897c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-160" Sep 4 17:17:18.083427 kubelet[2870]: I0904 17:17:18.083173 2870 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd8c6aa91a7b9353fa15d53c4191792c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-160\" (UID: \"cd8c6aa91a7b9353fa15d53c4191792c\") " pod="kube-system/kube-apiserver-ip-172-31-17-160" Sep 4 17:17:18.087566 kubelet[2870]: I0904 17:17:18.087516 2870 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-17-160" Sep 4 17:17:18.088099 kubelet[2870]: E0904 17:17:18.088066 2870 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.17.160:6443/api/v1/nodes\": dial tcp 172.31.17.160:6443: connect: connection refused" node="ip-172-31-17-160" Sep 4 17:17:18.260180 containerd[2020]: time="2024-09-04T17:17:18.259798630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-160,Uid:d0300d24d0cc8b8d3322a352d5c7897c,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:18.277831 containerd[2020]: time="2024-09-04T17:17:18.277769506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-160,Uid:61052014591654bb039c42745c0d9edd,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:18.296021 containerd[2020]: time="2024-09-04T17:17:18.295785298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-160,Uid:cd8c6aa91a7b9353fa15d53c4191792c,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:18.387839 kubelet[2870]: E0904 17:17:18.387782 2870 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-160?timeout=10s\": dial tcp 172.31.17.160:6443: connect: connection refused" interval="800ms" Sep 4 17:17:18.491188 kubelet[2870]: I0904 17:17:18.490784 2870 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-17-160" Sep 4 17:17:18.491358 kubelet[2870]: E0904 17:17:18.491315 2870 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.17.160:6443/api/v1/nodes\": dial tcp 172.31.17.160:6443: connect: connection refused" node="ip-172-31-17-160" Sep 4 17:17:18.747083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3033184611.mount: Deactivated successfully. 
Sep 4 17:17:18.755583 containerd[2020]: time="2024-09-04T17:17:18.755513520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:18.757593 containerd[2020]: time="2024-09-04T17:17:18.757526736Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:18.759435 containerd[2020]: time="2024-09-04T17:17:18.759314328Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 4 17:17:18.759753 containerd[2020]: time="2024-09-04T17:17:18.759677340Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:17:18.760668 containerd[2020]: time="2024-09-04T17:17:18.760566492Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:18.762303 containerd[2020]: time="2024-09-04T17:17:18.762250704Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:18.762869 containerd[2020]: time="2024-09-04T17:17:18.762812544Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:17:18.768399 containerd[2020]: time="2024-09-04T17:17:18.768257665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:18.772461 containerd[2020]: time="2024-09-04T17:17:18.771862933Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 475.969575ms" Sep 4 17:17:18.779803 containerd[2020]: time="2024-09-04T17:17:18.779633065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 519.711795ms" Sep 4 17:17:18.779993 containerd[2020]: time="2024-09-04T17:17:18.779942485Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.063035ms" Sep 4 17:17:18.926169 kubelet[2870]: W0904 17:17:18.925809 2870 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.17.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-160&limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 
17:17:18.926169 kubelet[2870]: E0904 17:17:18.925915 2870 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-160&limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:18.931500 kubelet[2870]: W0904 17:17:18.931328 2870 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.17.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:18.931500 kubelet[2870]: E0904 17:17:18.931457 2870 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:18.980531 containerd[2020]: time="2024-09-04T17:17:18.980362094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:18.982520 containerd[2020]: time="2024-09-04T17:17:18.980484650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:18.982666 containerd[2020]: time="2024-09-04T17:17:18.982519514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:18.982890 containerd[2020]: time="2024-09-04T17:17:18.982699190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:18.994236 containerd[2020]: time="2024-09-04T17:17:18.994031714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:18.994506 containerd[2020]: time="2024-09-04T17:17:18.994114982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:18.995899 containerd[2020]: time="2024-09-04T17:17:18.994628570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:18.996062 containerd[2020]: time="2024-09-04T17:17:18.995862014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:18.998219 containerd[2020]: time="2024-09-04T17:17:18.997210070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:18.998219 containerd[2020]: time="2024-09-04T17:17:18.997311278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:18.998219 containerd[2020]: time="2024-09-04T17:17:18.997366118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:19.001176 containerd[2020]: time="2024-09-04T17:17:18.997541630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:19.030479 systemd[1]: Started cri-containerd-037885901ba9342d3851f74c16bcc443f73496de92a3770649679947cc51fb8d.scope - libcontainer container 037885901ba9342d3851f74c16bcc443f73496de92a3770649679947cc51fb8d. Sep 4 17:17:19.052460 systemd[1]: Started cri-containerd-3b37f15693dcb6fd5b19d04ec3aaa497d33dab856c3a1e26ffd61bc775b3d48a.scope - libcontainer container 3b37f15693dcb6fd5b19d04ec3aaa497d33dab856c3a1e26ffd61bc775b3d48a. Sep 4 17:17:19.069798 systemd[1]: Started cri-containerd-59a44922ee1dd3eb3b571a0276e31178ab136b9e04455dbd4a295985839a3bf4.scope - libcontainer container 59a44922ee1dd3eb3b571a0276e31178ab136b9e04455dbd4a295985839a3bf4. Sep 4 17:17:19.175506 containerd[2020]: time="2024-09-04T17:17:19.175309175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-160,Uid:cd8c6aa91a7b9353fa15d53c4191792c,Namespace:kube-system,Attempt:0,} returns sandbox id \"037885901ba9342d3851f74c16bcc443f73496de92a3770649679947cc51fb8d\"" Sep 4 17:17:19.188197 containerd[2020]: time="2024-09-04T17:17:19.186291515Z" level=info msg="CreateContainer within sandbox \"037885901ba9342d3851f74c16bcc443f73496de92a3770649679947cc51fb8d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:17:19.188793 kubelet[2870]: E0904 17:17:19.188656 2870 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-160?timeout=10s\": dial tcp 172.31.17.160:6443: connect: connection refused" interval="1.6s" Sep 4 17:17:19.194414 containerd[2020]: time="2024-09-04T17:17:19.194358059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-160,Uid:d0300d24d0cc8b8d3322a352d5c7897c,Namespace:kube-system,Attempt:0,} returns sandbox id \"59a44922ee1dd3eb3b571a0276e31178ab136b9e04455dbd4a295985839a3bf4\"" Sep 4 17:17:19.201652 containerd[2020]: time="2024-09-04T17:17:19.200955779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-160,Uid:61052014591654bb039c42745c0d9edd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b37f15693dcb6fd5b19d04ec3aaa497d33dab856c3a1e26ffd61bc775b3d48a\"" Sep 4 17:17:19.203166 containerd[2020]: time="2024-09-04T17:17:19.202952063Z" level=info msg="CreateContainer within sandbox \"59a44922ee1dd3eb3b571a0276e31178ab136b9e04455dbd4a295985839a3bf4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:17:19.206732 containerd[2020]: time="2024-09-04T17:17:19.206622671Z" level=info msg="CreateContainer within sandbox \"3b37f15693dcb6fd5b19d04ec3aaa497d33dab856c3a1e26ffd61bc775b3d48a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:17:19.224114 containerd[2020]: time="2024-09-04T17:17:19.224050427Z" level=info msg="CreateContainer within sandbox \"037885901ba9342d3851f74c16bcc443f73496de92a3770649679947cc51fb8d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"34fbdcb0642762abde7b4a75d679181349fa5ed59ad53536c0663999b491039d\"" Sep 4 17:17:19.225078 containerd[2020]: time="2024-09-04T17:17:19.225028223Z" level=info msg="StartContainer for \"34fbdcb0642762abde7b4a75d679181349fa5ed59ad53536c0663999b491039d\"" Sep 4 17:17:19.233232 kubelet[2870]: W0904 17:17:19.233090 2870 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get 
"https://172.31.17.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:19.233232 kubelet[2870]: E0904 17:17:19.233223 2870 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:19.234641 containerd[2020]: time="2024-09-04T17:17:19.234395891Z" level=info msg="CreateContainer within sandbox \"59a44922ee1dd3eb3b571a0276e31178ab136b9e04455dbd4a295985839a3bf4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d08573d28ba007652ee08b6df0cfbc87330ad5aafff2011a78760de23932ab8a\"" Sep 4 17:17:19.235449 containerd[2020]: time="2024-09-04T17:17:19.235216139Z" level=info msg="StartContainer for \"d08573d28ba007652ee08b6df0cfbc87330ad5aafff2011a78760de23932ab8a\"" Sep 4 17:17:19.237812 containerd[2020]: time="2024-09-04T17:17:19.237660011Z" level=info msg="CreateContainer within sandbox \"3b37f15693dcb6fd5b19d04ec3aaa497d33dab856c3a1e26ffd61bc775b3d48a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f5fddc45a8411d97ffdde6e33ddcf07442179b778076ee882e02a47c96dd694e\"" Sep 4 17:17:19.239875 containerd[2020]: time="2024-09-04T17:17:19.239658911Z" level=info msg="StartContainer for \"f5fddc45a8411d97ffdde6e33ddcf07442179b778076ee882e02a47c96dd694e\"" Sep 4 17:17:19.251435 kubelet[2870]: W0904 17:17:19.250099 2870 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.17.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:19.251435 kubelet[2870]: E0904 17:17:19.250189 2870 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.160:6443: connect: connection refused Sep 4 17:17:19.288502 systemd[1]: Started cri-containerd-34fbdcb0642762abde7b4a75d679181349fa5ed59ad53536c0663999b491039d.scope - libcontainer container 34fbdcb0642762abde7b4a75d679181349fa5ed59ad53536c0663999b491039d. Sep 4 17:17:19.295295 kubelet[2870]: I0904 17:17:19.295243 2870 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-17-160" Sep 4 17:17:19.298202 kubelet[2870]: E0904 17:17:19.298027 2870 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.17.160:6443/api/v1/nodes\": dial tcp 172.31.17.160:6443: connect: connection refused" node="ip-172-31-17-160" Sep 4 17:17:19.321599 systemd[1]: Started cri-containerd-f5fddc45a8411d97ffdde6e33ddcf07442179b778076ee882e02a47c96dd694e.scope - libcontainer container f5fddc45a8411d97ffdde6e33ddcf07442179b778076ee882e02a47c96dd694e. Sep 4 17:17:19.343407 systemd[1]: Started cri-containerd-d08573d28ba007652ee08b6df0cfbc87330ad5aafff2011a78760de23932ab8a.scope - libcontainer container d08573d28ba007652ee08b6df0cfbc87330ad5aafff2011a78760de23932ab8a. 
Sep 4 17:17:19.441347 containerd[2020]: time="2024-09-04T17:17:19.441197712Z" level=info msg="StartContainer for \"34fbdcb0642762abde7b4a75d679181349fa5ed59ad53536c0663999b491039d\" returns successfully" Sep 4 17:17:19.465776 containerd[2020]: time="2024-09-04T17:17:19.465678348Z" level=info msg="StartContainer for \"f5fddc45a8411d97ffdde6e33ddcf07442179b778076ee882e02a47c96dd694e\" returns successfully" Sep 4 17:17:19.494764 containerd[2020]: time="2024-09-04T17:17:19.494596872Z" level=info msg="StartContainer for \"d08573d28ba007652ee08b6df0cfbc87330ad5aafff2011a78760de23932ab8a\" returns successfully" Sep 4 17:17:20.900621 kubelet[2870]: I0904 17:17:20.900575 2870 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-17-160" Sep 4 17:17:23.763919 kubelet[2870]: I0904 17:17:23.763783 2870 apiserver.go:52] "Watching apiserver" Sep 4 17:17:23.771561 kubelet[2870]: E0904 17:17:23.771435 2870 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-160\" not found" node="ip-172-31-17-160" Sep 4 17:17:23.782974 kubelet[2870]: I0904 17:17:23.782891 2870 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:17:23.870329 kubelet[2870]: I0904 17:17:23.868859 2870 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-17-160" Sep 4 17:17:24.989177 update_engine[2005]: I0904 17:17:24.988178 2005 update_attempter.cc:509] Updating boot flags... Sep 4 17:17:25.148859 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3164) Sep 4 17:17:25.661709 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3166) Sep 4 17:17:26.808868 systemd[1]: Reloading requested from client PID 3334 ('systemctl') (unit session-9.scope)... Sep 4 17:17:26.808902 systemd[1]: Reloading... Sep 4 17:17:26.952232 zram_generator::config[3370]: No configuration found. Sep 4 17:17:27.281404 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:17:27.507599 systemd[1]: Reloading finished in 698 ms. Sep 4 17:17:27.607093 kubelet[2870]: I0904 17:17:27.606752 2870 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:17:27.607103 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:27.626592 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:17:27.627186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:27.627274 systemd[1]: kubelet.service: Consumed 2.958s CPU time, 116.3M memory peak, 0B memory swap peak. Sep 4 17:17:27.637803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:27.993445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:28.011244 (kubelet)[3432]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:17:28.136652 kubelet[3432]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:17:28.136652 kubelet[3432]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:17:28.136652 kubelet[3432]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:17:28.137279 kubelet[3432]: I0904 17:17:28.136734 3432 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:17:28.147061 kubelet[3432]: I0904 17:17:28.146499 3432 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:17:28.147061 kubelet[3432]: I0904 17:17:28.146553 3432 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:17:28.147061 kubelet[3432]: I0904 17:17:28.146961 3432 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:17:28.150877 kubelet[3432]: I0904 17:17:28.150827 3432 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:17:28.153596 kubelet[3432]: I0904 17:17:28.152621 3432 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:17:28.158714 sudo[3446]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 17:17:28.159439 sudo[3446]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 17:17:28.173493 kubelet[3432]: W0904 17:17:28.172929 3432 machine.go:65] Cannot read vendor id correctly, set empty. Sep 4 17:17:28.175173 kubelet[3432]: I0904 17:17:28.174723 3432 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:17:28.175343 kubelet[3432]: I0904 17:17:28.175319 3432 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:17:28.175878 kubelet[3432]: I0904 17:17:28.175844 3432 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:17:28.176112 kubelet[3432]: I0904 17:17:28.176091 3432 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:17:28.176266 kubelet[3432]: I0904 17:17:28.176247 3432 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:17:28.176438 kubelet[3432]: I0904 17:17:28.176417 3432 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:17:28.176693 kubelet[3432]: I0904 17:17:28.176672 3432 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:17:28.176858 kubelet[3432]: I0904 17:17:28.176838 3432 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:17:28.176997 kubelet[3432]: I0904 17:17:28.176977 3432 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:17:28.177115 kubelet[3432]: I0904 17:17:28.177095 3432 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:17:28.179058 kubelet[3432]: I0904 17:17:28.179002 3432 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:17:28.179980 kubelet[3432]: I0904 17:17:28.179930 3432 server.go:1232] "Started kubelet" Sep 4 17:17:28.194177 kubelet[3432]: I0904 17:17:28.190946 3432 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:17:28.206931 kubelet[3432]: I0904 17:17:28.206887 3432 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:17:28.210919 kubelet[3432]: I0904 17:17:28.210851 3432 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:17:28.220153 kubelet[3432]: I0904 17:17:28.219015 3432 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:17:28.220717 kubelet[3432]: I0904 17:17:28.220691 3432 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:17:28.224697 kubelet[3432]: E0904 17:17:28.224656 3432 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:17:28.224911 kubelet[3432]: E0904 17:17:28.224891 3432 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:17:28.235808 kubelet[3432]: I0904 17:17:28.235750 3432 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:17:28.240346 kubelet[3432]: I0904 17:17:28.238828 3432 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:17:28.280726 kubelet[3432]: I0904 17:17:28.241980 3432 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:17:28.301850 kubelet[3432]: I0904 17:17:28.301528 3432 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:17:28.317718 kubelet[3432]: I0904 17:17:28.317582 3432 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:17:28.317718 kubelet[3432]: I0904 17:17:28.317650 3432 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:17:28.317718 kubelet[3432]: I0904 17:17:28.317683 3432 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:17:28.318859 kubelet[3432]: E0904 17:17:28.318531 3432 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:17:28.365621 kubelet[3432]: I0904 17:17:28.365477 3432 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-17-160" Sep 4 17:17:28.389535 kubelet[3432]: I0904 17:17:28.387914 3432 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-17-160" Sep 4 17:17:28.389535 kubelet[3432]: I0904 17:17:28.388064 3432 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-17-160" Sep 4 17:17:28.418998 kubelet[3432]: E0904 17:17:28.418589 3432 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:17:28.534739 kubelet[3432]: I0904 17:17:28.534373 3432 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:17:28.534739 kubelet[3432]: I0904 17:17:28.534437 3432 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:17:28.534739 kubelet[3432]: I0904 17:17:28.534497 3432 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:17:28.535577 kubelet[3432]: I0904 17:17:28.535011 3432 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:17:28.535577 kubelet[3432]: I0904 17:17:28.535086 3432 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:17:28.535577 kubelet[3432]: I0904 17:17:28.535118 3432 policy_none.go:49] "None policy: Start" Sep 4 17:17:28.539156 kubelet[3432]: I0904 17:17:28.538093 3432 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:17:28.539156 kubelet[3432]: I0904 17:17:28.538174 3432 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:17:28.539156 kubelet[3432]: I0904 17:17:28.538613 3432 state_mem.go:75] "Updated machine memory state" Sep 4 17:17:28.554111 kubelet[3432]: I0904 17:17:28.554057 3432 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 
4 17:17:28.556826 kubelet[3432]: I0904 17:17:28.555926 3432 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:17:28.619152 kubelet[3432]: I0904 17:17:28.619084 3432 topology_manager.go:215] "Topology Admit Handler" podUID="61052014591654bb039c42745c0d9edd" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-160" Sep 4 17:17:28.619320 kubelet[3432]: I0904 17:17:28.619282 3432 topology_manager.go:215] "Topology Admit Handler" podUID="cd8c6aa91a7b9353fa15d53c4191792c" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-160" Sep 4 17:17:28.619418 kubelet[3432]: I0904 17:17:28.619377 3432 topology_manager.go:215] "Topology Admit Handler" podUID="d0300d24d0cc8b8d3322a352d5c7897c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-160" Sep 4 17:17:28.634854 kubelet[3432]: E0904 17:17:28.634782 3432 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-160\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-160" Sep 4 17:17:28.690153 kubelet[3432]: I0904 17:17:28.688921 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd8c6aa91a7b9353fa15d53c4191792c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-160\" (UID: \"cd8c6aa91a7b9353fa15d53c4191792c\") " pod="kube-system/kube-apiserver-ip-172-31-17-160" Sep 4 17:17:28.690153 kubelet[3432]: I0904 17:17:28.689013 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0300d24d0cc8b8d3322a352d5c7897c-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-160\" (UID: \"d0300d24d0cc8b8d3322a352d5c7897c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-160" Sep 4 17:17:28.690153 kubelet[3432]: I0904 17:17:28.689063 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d0300d24d0cc8b8d3322a352d5c7897c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-160\" (UID: \"d0300d24d0cc8b8d3322a352d5c7897c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-160" Sep 4 17:17:28.690153 kubelet[3432]: I0904 17:17:28.689106 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0300d24d0cc8b8d3322a352d5c7897c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-160\" (UID: \"d0300d24d0cc8b8d3322a352d5c7897c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-160" Sep 4 17:17:28.690153 kubelet[3432]: I0904 17:17:28.689735 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0300d24d0cc8b8d3322a352d5c7897c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-160\" (UID: \"d0300d24d0cc8b8d3322a352d5c7897c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-160" Sep 4 17:17:28.690540 kubelet[3432]: I0904 17:17:28.689793 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61052014591654bb039c42745c0d9edd-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-160\" (UID: \"61052014591654bb039c42745c0d9edd\") " pod="kube-system/kube-scheduler-ip-172-31-17-160" Sep 4 17:17:28.690540 kubelet[3432]: I0904 
17:17:28.689836 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd8c6aa91a7b9353fa15d53c4191792c-ca-certs\") pod \"kube-apiserver-ip-172-31-17-160\" (UID: \"cd8c6aa91a7b9353fa15d53c4191792c\") " pod="kube-system/kube-apiserver-ip-172-31-17-160" Sep 4 17:17:28.690540 kubelet[3432]: I0904 17:17:28.689879 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd8c6aa91a7b9353fa15d53c4191792c-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-160\" (UID: \"cd8c6aa91a7b9353fa15d53c4191792c\") " pod="kube-system/kube-apiserver-ip-172-31-17-160" Sep 4 17:17:28.690540 kubelet[3432]: I0904 17:17:28.689931 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0300d24d0cc8b8d3322a352d5c7897c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-160\" (UID: \"d0300d24d0cc8b8d3322a352d5c7897c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-160" Sep 4 17:17:29.154686 sudo[3446]: pam_unix(sudo:session): session closed for user root Sep 4 17:17:29.178515 kubelet[3432]: I0904 17:17:29.178452 3432 apiserver.go:52] "Watching apiserver" Sep 4 17:17:29.254932 kubelet[3432]: I0904 17:17:29.254869 3432 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:17:29.408770 kubelet[3432]: I0904 17:17:29.408633 3432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-160" podStartSLOduration=2.408146577 podCreationTimestamp="2024-09-04 17:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:17:29.387310305 +0000 UTC m=+1.364251208" watchObservedRunningTime="2024-09-04 17:17:29.408146577 +0000 UTC m=+1.385087528" Sep 4 17:17:29.450853 kubelet[3432]: I0904 17:17:29.450799 3432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-160" podStartSLOduration=1.450745306 podCreationTimestamp="2024-09-04 17:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:17:29.414341457 +0000 UTC m=+1.391282360" watchObservedRunningTime="2024-09-04 17:17:29.450745306 +0000 UTC m=+1.427686233" Sep 4 17:17:29.491210 kubelet[3432]: I0904 17:17:29.490813 3432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-160" podStartSLOduration=1.490753018 podCreationTimestamp="2024-09-04 17:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:17:29.451799422 +0000 UTC m=+1.428740337" watchObservedRunningTime="2024-09-04 17:17:29.490753018 +0000 UTC m=+1.467693945" Sep 4 17:17:31.247512 sudo[2358]: pam_unix(sudo:session): session closed for user root Sep 4 17:17:31.272007 sshd[2355]: pam_unix(sshd:session): session closed for user core Sep 4 17:17:31.277799 systemd[1]: sshd@8-172.31.17.160:22-139.178.89.65:50710.service: Deactivated successfully. Sep 4 17:17:31.282388 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 4 17:17:31.282769 systemd[1]: session-9.scope: Consumed 8.733s CPU time, 133.6M memory peak, 0B memory swap peak. Sep 4 17:17:31.286077 systemd-logind[2004]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:17:31.287939 systemd-logind[2004]: Removed session 9. Sep 4 17:17:40.470388 kubelet[3432]: I0904 17:17:40.470331 3432 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:17:40.471645 containerd[2020]: time="2024-09-04T17:17:40.471485456Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:17:40.472432 kubelet[3432]: I0904 17:17:40.472100 3432 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:17:41.359557 kubelet[3432]: I0904 17:17:41.358475 3432 topology_manager.go:215] "Topology Admit Handler" podUID="5e4c530f-23f8-4b06-a55c-446dfdc004e7" podNamespace="kube-system" podName="kube-proxy-sbccd" Sep 4 17:17:41.381863 systemd[1]: Created slice kubepods-besteffort-pod5e4c530f_23f8_4b06_a55c_446dfdc004e7.slice - libcontainer container kubepods-besteffort-pod5e4c530f_23f8_4b06_a55c_446dfdc004e7.slice. Sep 4 17:17:41.413924 kubelet[3432]: I0904 17:17:41.413860 3432 topology_manager.go:215] "Topology Admit Handler" podUID="7251b004-1131-48df-ae96-89ad940c0e77" podNamespace="kube-system" podName="cilium-269sd" Sep 4 17:17:41.436888 systemd[1]: Created slice kubepods-burstable-pod7251b004_1131_48df_ae96_89ad940c0e77.slice - libcontainer container kubepods-burstable-pod7251b004_1131_48df_ae96_89ad940c0e77.slice. Sep 4 17:17:41.474441 kubelet[3432]: I0904 17:17:41.474097 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-cni-path\") pod \"cilium-269sd\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " pod="kube-system/cilium-269sd" Sep 4 17:17:41.474994 kubelet[3432]: I0904 17:17:41.474490 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-bpf-maps\") pod \"cilium-269sd\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " pod="kube-system/cilium-269sd" Sep 4 17:17:41.474994 kubelet[3432]: I0904 17:17:41.474561 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-etc-cni-netd\") pod \"cilium-269sd\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " pod="kube-system/cilium-269sd" Sep 4 17:17:41.474994 kubelet[3432]: I0904 17:17:41.474624 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7251b004-1131-48df-ae96-89ad940c0e77-clustermesh-secrets\") pod \"cilium-269sd\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " pod="kube-system/cilium-269sd" Sep 4 17:17:41.474994 kubelet[3432]: I0904 17:17:41.474681 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e4c530f-23f8-4b06-a55c-446dfdc004e7-kube-proxy\") pod \"kube-proxy-sbccd\" (UID: \"5e4c530f-23f8-4b06-a55c-446dfdc004e7\") " pod="kube-system/kube-proxy-sbccd" Sep 4 17:17:41.474994 kubelet[3432]: I0904 17:17:41.474746 3432 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e4c530f-23f8-4b06-a55c-446dfdc004e7-xtables-lock\") pod \"kube-proxy-sbccd\" (UID: \"5e4c530f-23f8-4b06-a55c-446dfdc004e7\") " pod="kube-system/kube-proxy-sbccd" Sep 4 17:17:41.476114 kubelet[3432]: I0904 17:17:41.474808 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pzsc\" (UniqueName: \"kubernetes.io/projected/5e4c530f-23f8-4b06-a55c-446dfdc004e7-kube-api-access-7pzsc\") pod \"kube-proxy-sbccd\" (UID: \"5e4c530f-23f8-4b06-a55c-446dfdc004e7\") " pod="kube-system/kube-proxy-sbccd" Sep 4 17:17:41.476114 kubelet[3432]: I0904 17:17:41.474863 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-hostproc\") pod \"cilium-269sd\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " pod="kube-system/cilium-269sd" Sep 4 17:17:41.476114 kubelet[3432]: I0904 17:17:41.474916 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7251b004-1131-48df-ae96-89ad940c0e77-hubble-tls\") pod \"cilium-269sd\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " pod="kube-system/cilium-269sd" Sep 4 17:17:41.476114 kubelet[3432]: I0904 17:17:41.474978 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-cilium-cgroup\") pod \"cilium-269sd\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " pod="kube-system/cilium-269sd" Sep 4 17:17:41.476114 kubelet[3432]: I0904 17:17:41.475023 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-lib-modules\") pod \"cilium-269sd\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " pod="kube-system/cilium-269sd" Sep 4 17:17:41.476114 kubelet[3432]: I0904 17:17:41.475109 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wwgq\" (UniqueName: \"kubernetes.io/projected/7251b004-1131-48df-ae96-89ad940c0e77-kube-api-access-2wwgq\") pod \"cilium-269sd\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " pod="kube-system/cilium-269sd" Sep 4 17:17:41.476466 kubelet[3432]: I0904 17:17:41.475746 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e4c530f-23f8-4b06-a55c-446dfdc004e7-lib-modules\") pod \"kube-proxy-sbccd\" (UID: \"5e4c530f-23f8-4b06-a55c-446dfdc004e7\") " pod="kube-system/kube-proxy-sbccd" Sep 4 17:17:41.476466 kubelet[3432]: I0904 17:17:41.475840 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-xtables-lock\") pod \"cilium-269sd\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " pod="kube-system/cilium-269sd" Sep 4 17:17:41.476466 kubelet[3432]: I0904 17:17:41.475889 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/7251b004-1131-48df-ae96-89ad940c0e77-cilium-config-path\") pod \"cilium-269sd\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " pod="kube-system/cilium-269sd" Sep 4 17:17:41.476466 kubelet[3432]: I0904 17:17:41.475934 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-host-proc-sys-net\") pod \"cilium-269sd\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " pod="kube-system/cilium-269sd" Sep 4 17:17:41.476466 kubelet[3432]: I0904 17:17:41.475977 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-host-proc-sys-kernel\") pod \"cilium-269sd\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " pod="kube-system/cilium-269sd" Sep 4 17:17:41.476722 kubelet[3432]: I0904 17:17:41.476021 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-cilium-run\") pod \"cilium-269sd\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " pod="kube-system/cilium-269sd" Sep 4 17:17:41.572585 kubelet[3432]: I0904 17:17:41.572507 3432 topology_manager.go:215] "Topology Admit Handler" podUID="20e2d879-d36a-413e-b9db-f2ac5adddfca" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-rmzzx" Sep 4 17:17:41.596708 systemd[1]: Created slice kubepods-besteffort-pod20e2d879_d36a_413e_b9db_f2ac5adddfca.slice - libcontainer container kubepods-besteffort-pod20e2d879_d36a_413e_b9db_f2ac5adddfca.slice. Sep 4 17:17:41.695833 kubelet[3432]: I0904 17:17:41.695796 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20e2d879-d36a-413e-b9db-f2ac5adddfca-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-rmzzx\" (UID: \"20e2d879-d36a-413e-b9db-f2ac5adddfca\") " pod="kube-system/cilium-operator-6bc8ccdb58-rmzzx" Sep 4 17:17:41.696247 kubelet[3432]: I0904 17:17:41.696212 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlzpp\" (UniqueName: \"kubernetes.io/projected/20e2d879-d36a-413e-b9db-f2ac5adddfca-kube-api-access-wlzpp\") pod \"cilium-operator-6bc8ccdb58-rmzzx\" (UID: \"20e2d879-d36a-413e-b9db-f2ac5adddfca\") " pod="kube-system/cilium-operator-6bc8ccdb58-rmzzx" Sep 4 17:17:41.745329 containerd[2020]: time="2024-09-04T17:17:41.745251491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-269sd,Uid:7251b004-1131-48df-ae96-89ad940c0e77,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:41.787292 containerd[2020]: time="2024-09-04T17:17:41.786743591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:41.787292 containerd[2020]: time="2024-09-04T17:17:41.786940415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:41.787823 containerd[2020]: time="2024-09-04T17:17:41.787721267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:41.788291 containerd[2020]: time="2024-09-04T17:17:41.787950935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:41.835487 systemd[1]: Started cri-containerd-2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5.scope - libcontainer container 2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5. Sep 4 17:17:41.882921 containerd[2020]: time="2024-09-04T17:17:41.882823823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-269sd,Uid:7251b004-1131-48df-ae96-89ad940c0e77,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\"" Sep 4 17:17:41.886874 containerd[2020]: time="2024-09-04T17:17:41.886752563Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 17:17:41.942458 containerd[2020]: time="2024-09-04T17:17:41.941878404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-rmzzx,Uid:20e2d879-d36a-413e-b9db-f2ac5adddfca,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:41.982814 containerd[2020]: time="2024-09-04T17:17:41.982417224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:41.982814 containerd[2020]: time="2024-09-04T17:17:41.982571256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:41.982814 containerd[2020]: time="2024-09-04T17:17:41.982620816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:41.983694 containerd[2020]: time="2024-09-04T17:17:41.983459424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:41.997785 containerd[2020]: time="2024-09-04T17:17:41.997712616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sbccd,Uid:5e4c530f-23f8-4b06-a55c-446dfdc004e7,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:42.022470 systemd[1]: Started cri-containerd-0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802.scope - libcontainer container 0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802. Sep 4 17:17:42.081952 containerd[2020]: time="2024-09-04T17:17:42.080819540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:42.081952 containerd[2020]: time="2024-09-04T17:17:42.080918492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:42.081952 containerd[2020]: time="2024-09-04T17:17:42.080946164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:42.082718 containerd[2020]: time="2024-09-04T17:17:42.082596536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:42.131391 containerd[2020]: time="2024-09-04T17:17:42.131267661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-rmzzx,Uid:20e2d879-d36a-413e-b9db-f2ac5adddfca,Namespace:kube-system,Attempt:0,} returns sandbox id \"0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802\"" Sep 4 17:17:42.136506 systemd[1]: Started cri-containerd-72001274eef241599af39ad2e5fc20d39f3d469751a02bfd0b7113ce93fe191e.scope - libcontainer container 72001274eef241599af39ad2e5fc20d39f3d469751a02bfd0b7113ce93fe191e. Sep 4 17:17:42.187073 containerd[2020]: time="2024-09-04T17:17:42.187008909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sbccd,Uid:5e4c530f-23f8-4b06-a55c-446dfdc004e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"72001274eef241599af39ad2e5fc20d39f3d469751a02bfd0b7113ce93fe191e\"" Sep 4 17:17:42.194783 containerd[2020]: time="2024-09-04T17:17:42.194720889Z" level=info msg="CreateContainer within sandbox \"72001274eef241599af39ad2e5fc20d39f3d469751a02bfd0b7113ce93fe191e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:17:42.222351 containerd[2020]: time="2024-09-04T17:17:42.222277701Z" level=info msg="CreateContainer within sandbox \"72001274eef241599af39ad2e5fc20d39f3d469751a02bfd0b7113ce93fe191e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d1af068f8f6a26104cd23640a9761b40174b7ac551e941914b925ed7cd59d2ae\"" Sep 4 17:17:42.224187 containerd[2020]: time="2024-09-04T17:17:42.224085765Z" level=info msg="StartContainer for \"d1af068f8f6a26104cd23640a9761b40174b7ac551e941914b925ed7cd59d2ae\"" Sep 4 17:17:42.274470 systemd[1]: Started cri-containerd-d1af068f8f6a26104cd23640a9761b40174b7ac551e941914b925ed7cd59d2ae.scope - libcontainer container d1af068f8f6a26104cd23640a9761b40174b7ac551e941914b925ed7cd59d2ae. Sep 4 17:17:42.344849 containerd[2020]: time="2024-09-04T17:17:42.344506594Z" level=info msg="StartContainer for \"d1af068f8f6a26104cd23640a9761b40174b7ac551e941914b925ed7cd59d2ae\" returns successfully" Sep 4 17:17:47.291642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430907396.mount: Deactivated successfully. 
Sep 4 17:17:48.355064 kubelet[3432]: I0904 17:17:48.354985 3432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-sbccd" podStartSLOduration=7.354837711 podCreationTimestamp="2024-09-04 17:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:17:42.476011294 +0000 UTC m=+14.452952221" watchObservedRunningTime="2024-09-04 17:17:48.354837711 +0000 UTC m=+20.331778650" Sep 4 17:17:49.939050 containerd[2020]: time="2024-09-04T17:17:49.938980363Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:49.940817 containerd[2020]: time="2024-09-04T17:17:49.940762015Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651594" Sep 4 17:17:49.941715 containerd[2020]: time="2024-09-04T17:17:49.941627563Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:49.947359 containerd[2020]: time="2024-09-04T17:17:49.947292127Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.060471128s" Sep 4 17:17:49.947359 containerd[2020]: time="2024-09-04T17:17:49.947353483Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 4 17:17:49.949211 containerd[2020]: time="2024-09-04T17:17:49.949054723Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 17:17:49.952533 containerd[2020]: time="2024-09-04T17:17:49.952247167Z" level=info msg="CreateContainer within sandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:17:49.973038 containerd[2020]: time="2024-09-04T17:17:49.972978932Z" level=info msg="CreateContainer within sandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232\"" Sep 4 17:17:49.974575 containerd[2020]: time="2024-09-04T17:17:49.974505260Z" level=info msg="StartContainer for \"b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232\"" Sep 4 17:17:50.034472 systemd[1]: Started cri-containerd-b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232.scope - libcontainer container b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232. 
Sep 4 17:17:50.092157 containerd[2020]: time="2024-09-04T17:17:50.091032700Z" level=info msg="StartContainer for \"b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232\" returns successfully" Sep 4 17:17:50.114934 systemd[1]: cri-containerd-b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232.scope: Deactivated successfully. Sep 4 17:17:50.965242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232-rootfs.mount: Deactivated successfully. Sep 4 17:17:51.329447 containerd[2020]: time="2024-09-04T17:17:51.328837374Z" level=info msg="shim disconnected" id=b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232 namespace=k8s.io Sep 4 17:17:51.329447 containerd[2020]: time="2024-09-04T17:17:51.329055450Z" level=warning msg="cleaning up after shim disconnected" id=b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232 namespace=k8s.io Sep 4 17:17:51.329447 containerd[2020]: time="2024-09-04T17:17:51.329078526Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:17:51.355054 containerd[2020]: time="2024-09-04T17:17:51.354822162Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:17:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 17:17:51.504856 containerd[2020]: time="2024-09-04T17:17:51.504028819Z" level=info msg="CreateContainer within sandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:17:51.536468 containerd[2020]: time="2024-09-04T17:17:51.536062951Z" level=info msg="CreateContainer within sandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b\"" Sep 4 17:17:51.536101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4031890893.mount: Deactivated successfully. Sep 4 17:17:51.546221 containerd[2020]: time="2024-09-04T17:17:51.545985475Z" level=info msg="StartContainer for \"55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b\"" Sep 4 17:17:51.603734 systemd[1]: Started cri-containerd-55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b.scope - libcontainer container 55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b. Sep 4 17:17:51.655215 containerd[2020]: time="2024-09-04T17:17:51.654598748Z" level=info msg="StartContainer for \"55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b\" returns successfully" Sep 4 17:17:51.684668 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:17:51.685282 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:17:51.685413 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:17:51.697658 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:17:51.698696 systemd[1]: cri-containerd-55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b.scope: Deactivated successfully. Sep 4 17:17:51.741976 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 17:17:51.746047 containerd[2020]: time="2024-09-04T17:17:51.745952744Z" level=info msg="shim disconnected" id=55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b namespace=k8s.io Sep 4 17:17:51.746047 containerd[2020]: time="2024-09-04T17:17:51.746030228Z" level=warning msg="cleaning up after shim disconnected" id=55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b namespace=k8s.io Sep 4 17:17:51.746324 containerd[2020]: time="2024-09-04T17:17:51.746052608Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:17:51.966295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b-rootfs.mount: Deactivated successfully. Sep 4 17:17:52.528319 containerd[2020]: time="2024-09-04T17:17:52.528031964Z" level=info msg="CreateContainer within sandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:17:52.571563 containerd[2020]: time="2024-09-04T17:17:52.570469148Z" level=info msg="CreateContainer within sandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f\"" Sep 4 17:17:52.572712 containerd[2020]: time="2024-09-04T17:17:52.572448080Z" level=info msg="StartContainer for \"996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f\"" Sep 4 17:17:52.650790 systemd[1]: Started cri-containerd-996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f.scope - libcontainer container 996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f. Sep 4 17:17:52.727332 containerd[2020]: time="2024-09-04T17:17:52.727050189Z" level=info msg="StartContainer for \"996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f\" returns successfully" Sep 4 17:17:52.744018 systemd[1]: cri-containerd-996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f.scope: Deactivated successfully. Sep 4 17:17:52.919723 containerd[2020]: time="2024-09-04T17:17:52.919078978Z" level=info msg="shim disconnected" id=996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f namespace=k8s.io Sep 4 17:17:52.919723 containerd[2020]: time="2024-09-04T17:17:52.919172698Z" level=warning msg="cleaning up after shim disconnected" id=996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f namespace=k8s.io Sep 4 17:17:52.919723 containerd[2020]: time="2024-09-04T17:17:52.919209118Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:17:52.967611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f-rootfs.mount: Deactivated successfully. 
Sep 4 17:17:53.019013 containerd[2020]: time="2024-09-04T17:17:53.018933211Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:53.020793 containerd[2020]: time="2024-09-04T17:17:53.020737159Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138310" Sep 4 17:17:53.021259 containerd[2020]: time="2024-09-04T17:17:53.021187927Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:53.024599 containerd[2020]: time="2024-09-04T17:17:53.024422731Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.075291856s" Sep 4 17:17:53.024599 containerd[2020]: time="2024-09-04T17:17:53.024482323Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 4 17:17:53.028492 containerd[2020]: time="2024-09-04T17:17:53.028032007Z" level=info msg="CreateContainer within sandbox \"0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 17:17:53.057409 containerd[2020]: time="2024-09-04T17:17:53.057066151Z" level=info msg="CreateContainer within sandbox \"0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24\"" Sep 4 17:17:53.058170 containerd[2020]: time="2024-09-04T17:17:53.057929359Z" level=info msg="StartContainer for \"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24\"" Sep 4 17:17:53.107478 systemd[1]: Started cri-containerd-d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24.scope - libcontainer container d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24. 
Sep 4 17:17:53.162549 containerd[2020]: time="2024-09-04T17:17:53.162374275Z" level=info msg="StartContainer for \"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24\" returns successfully" Sep 4 17:17:53.538835 containerd[2020]: time="2024-09-04T17:17:53.538084077Z" level=info msg="CreateContainer within sandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:17:53.559071 containerd[2020]: time="2024-09-04T17:17:53.559005345Z" level=info msg="CreateContainer within sandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c\"" Sep 4 17:17:53.561266 containerd[2020]: time="2024-09-04T17:17:53.559953585Z" level=info msg="StartContainer for \"1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c\"" Sep 4 17:17:53.635456 systemd[1]: Started cri-containerd-1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c.scope - libcontainer container 1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c. Sep 4 17:17:53.750683 containerd[2020]: time="2024-09-04T17:17:53.750624106Z" level=info msg="StartContainer for \"1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c\" returns successfully" Sep 4 17:17:53.752722 systemd[1]: cri-containerd-1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c.scope: Deactivated successfully. Sep 4 17:17:53.846154 containerd[2020]: time="2024-09-04T17:17:53.845931455Z" level=info msg="shim disconnected" id=1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c namespace=k8s.io Sep 4 17:17:53.846154 containerd[2020]: time="2024-09-04T17:17:53.846009143Z" level=warning msg="cleaning up after shim disconnected" id=1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c namespace=k8s.io Sep 4 17:17:53.846154 containerd[2020]: time="2024-09-04T17:17:53.846034247Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:17:54.557446 containerd[2020]: time="2024-09-04T17:17:54.557367838Z" level=info msg="CreateContainer within sandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:17:54.593035 containerd[2020]: time="2024-09-04T17:17:54.592950274Z" level=info msg="CreateContainer within sandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0\"" Sep 4 17:17:54.593900 containerd[2020]: time="2024-09-04T17:17:54.593843590Z" level=info msg="StartContainer for \"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0\"" Sep 4 17:17:54.678869 systemd[1]: run-containerd-runc-k8s.io-933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0-runc.ERRFvp.mount: Deactivated successfully. Sep 4 17:17:54.691745 systemd[1]: Started cri-containerd-933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0.scope - libcontainer container 933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0. 
Sep 4 17:17:54.706380 kubelet[3432]: I0904 17:17:54.706288 3432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-rmzzx" podStartSLOduration=2.815106565 podCreationTimestamp="2024-09-04 17:17:41 +0000 UTC" firstStartedPulling="2024-09-04 17:17:42.133885773 +0000 UTC m=+14.110826676" lastFinishedPulling="2024-09-04 17:17:53.024982447 +0000 UTC m=+25.001923362" observedRunningTime="2024-09-04 17:17:53.746503918 +0000 UTC m=+25.723444869" watchObservedRunningTime="2024-09-04 17:17:54.706203251 +0000 UTC m=+26.683144190" Sep 4 17:17:54.818205 containerd[2020]: time="2024-09-04T17:17:54.816680520Z" level=info msg="StartContainer for \"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0\" returns successfully" Sep 4 17:17:55.048677 kubelet[3432]: I0904 17:17:55.048525 3432 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Sep 4 17:17:55.085456 kubelet[3432]: I0904 17:17:55.085288 3432 topology_manager.go:215] "Topology Admit Handler" podUID="203fca53-6bdc-4e3b-b279-3d9511e150d6" podNamespace="kube-system" podName="coredns-5dd5756b68-snwzs" Sep 4 17:17:55.093463 kubelet[3432]: I0904 17:17:55.092903 3432 topology_manager.go:215] "Topology Admit Handler" podUID="6f689089-f0f9-4e7e-a740-937e13ca382e" podNamespace="kube-system" podName="coredns-5dd5756b68-zrrq7" Sep 4 17:17:55.109897 systemd[1]: Created slice kubepods-burstable-pod203fca53_6bdc_4e3b_b279_3d9511e150d6.slice - libcontainer container kubepods-burstable-pod203fca53_6bdc_4e3b_b279_3d9511e150d6.slice. Sep 4 17:17:55.127658 systemd[1]: Created slice kubepods-burstable-pod6f689089_f0f9_4e7e_a740_937e13ca382e.slice - libcontainer container kubepods-burstable-pod6f689089_f0f9_4e7e_a740_937e13ca382e.slice. 
Sep 4 17:17:55.211853 kubelet[3432]: I0904 17:17:55.211446 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96ggf\" (UniqueName: \"kubernetes.io/projected/203fca53-6bdc-4e3b-b279-3d9511e150d6-kube-api-access-96ggf\") pod \"coredns-5dd5756b68-snwzs\" (UID: \"203fca53-6bdc-4e3b-b279-3d9511e150d6\") " pod="kube-system/coredns-5dd5756b68-snwzs" Sep 4 17:17:55.211853 kubelet[3432]: I0904 17:17:55.211571 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f689089-f0f9-4e7e-a740-937e13ca382e-config-volume\") pod \"coredns-5dd5756b68-zrrq7\" (UID: \"6f689089-f0f9-4e7e-a740-937e13ca382e\") " pod="kube-system/coredns-5dd5756b68-zrrq7" Sep 4 17:17:55.211853 kubelet[3432]: I0904 17:17:55.211676 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69zrt\" (UniqueName: \"kubernetes.io/projected/6f689089-f0f9-4e7e-a740-937e13ca382e-kube-api-access-69zrt\") pod \"coredns-5dd5756b68-zrrq7\" (UID: \"6f689089-f0f9-4e7e-a740-937e13ca382e\") " pod="kube-system/coredns-5dd5756b68-zrrq7" Sep 4 17:17:55.211853 kubelet[3432]: I0904 17:17:55.211736 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/203fca53-6bdc-4e3b-b279-3d9511e150d6-config-volume\") pod \"coredns-5dd5756b68-snwzs\" (UID: \"203fca53-6bdc-4e3b-b279-3d9511e150d6\") " pod="kube-system/coredns-5dd5756b68-snwzs" Sep 4 17:17:55.421382 containerd[2020]: time="2024-09-04T17:17:55.421213103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-snwzs,Uid:203fca53-6bdc-4e3b-b279-3d9511e150d6,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:55.455611 containerd[2020]: time="2024-09-04T17:17:55.453705539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zrrq7,Uid:6f689089-f0f9-4e7e-a740-937e13ca382e,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:55.624877 kubelet[3432]: I0904 17:17:55.624813 3432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-269sd" podStartSLOduration=6.5620620039999995 podCreationTimestamp="2024-09-04 17:17:41 +0000 UTC" firstStartedPulling="2024-09-04 17:17:41.885195863 +0000 UTC m=+13.862136778" lastFinishedPulling="2024-09-04 17:17:49.947864359 +0000 UTC m=+21.924805286" observedRunningTime="2024-09-04 17:17:55.618661236 +0000 UTC m=+27.595602163" watchObservedRunningTime="2024-09-04 17:17:55.624730512 +0000 UTC m=+27.601671439" Sep 4 17:17:57.857524 systemd-networkd[1943]: cilium_host: Link UP Sep 4 17:17:57.859216 systemd-networkd[1943]: cilium_net: Link UP Sep 4 17:17:57.859461 (udev-worker)[4224]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:17:57.860308 systemd-networkd[1943]: cilium_net: Gained carrier Sep 4 17:17:57.860690 systemd-networkd[1943]: cilium_host: Gained carrier Sep 4 17:17:57.862613 (udev-worker)[4226]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 17:17:58.039615 systemd-networkd[1943]: cilium_vxlan: Link UP Sep 4 17:17:58.039634 systemd-networkd[1943]: cilium_vxlan: Gained carrier Sep 4 17:17:58.245354 systemd-networkd[1943]: cilium_host: Gained IPv6LL Sep 4 17:17:58.413936 systemd-networkd[1943]: cilium_net: Gained IPv6LL Sep 4 17:17:58.523200 kernel: NET: Registered PF_ALG protocol family Sep 4 17:17:59.858420 systemd-networkd[1943]: lxc_health: Link UP Sep 4 17:17:59.868450 systemd-networkd[1943]: lxc_health: Gained carrier Sep 4 17:17:59.887087 systemd-networkd[1943]: cilium_vxlan: Gained IPv6LL Sep 4 17:18:00.505163 systemd-networkd[1943]: lxce63405ccee2b: Link UP Sep 4 17:18:00.510234 kernel: eth0: renamed from tmp7c61f Sep 4 17:18:00.520836 systemd-networkd[1943]: lxce63405ccee2b: Gained carrier Sep 4 17:18:00.585109 systemd-networkd[1943]: lxc95f7f3b8ed79: Link UP Sep 4 17:18:00.593237 kernel: eth0: renamed from tmp6fdfd Sep 4 17:18:00.599816 systemd-networkd[1943]: lxc95f7f3b8ed79: Gained carrier Sep 4 17:18:01.549414 systemd-networkd[1943]: lxc_health: Gained IPv6LL Sep 4 17:18:02.125400 systemd-networkd[1943]: lxc95f7f3b8ed79: Gained IPv6LL Sep 4 17:18:02.573425 systemd-networkd[1943]: lxce63405ccee2b: Gained IPv6LL Sep 4 17:18:05.060807 systemd[1]: Started sshd@9-172.31.17.160:22-139.178.89.65:35066.service - OpenSSH per-connection server daemon (139.178.89.65:35066). Sep 4 17:18:05.249479 sshd[4627]: Accepted publickey for core from 139.178.89.65 port 35066 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:05.253329 sshd[4627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:05.262407 systemd-logind[2004]: New session 10 of user core. Sep 4 17:18:05.272435 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 4 17:18:05.350311 ntpd[1997]: Listen normally on 8 cilium_host 192.168.0.145:123 Sep 4 17:18:05.352053 ntpd[1997]: 4 Sep 17:18:05 ntpd[1997]: Listen normally on 8 cilium_host 192.168.0.145:123 Sep 4 17:18:05.352053 ntpd[1997]: 4 Sep 17:18:05 ntpd[1997]: Listen normally on 9 cilium_net [fe80::d444:b6ff:fee3:e2fd%4]:123 Sep 4 17:18:05.352053 ntpd[1997]: 4 Sep 17:18:05 ntpd[1997]: Listen normally on 10 cilium_host [fe80::e073:b4ff:fe79:cb0a%5]:123 Sep 4 17:18:05.352053 ntpd[1997]: 4 Sep 17:18:05 ntpd[1997]: Listen normally on 11 cilium_vxlan [fe80::f0b3:e6ff:fe1a:90f0%6]:123 Sep 4 17:18:05.352053 ntpd[1997]: 4 Sep 17:18:05 ntpd[1997]: Listen normally on 12 lxc_health [fe80::9820:25ff:feb0:7ec7%8]:123 Sep 4 17:18:05.352053 ntpd[1997]: 4 Sep 17:18:05 ntpd[1997]: Listen normally on 13 lxce63405ccee2b [fe80::1456:e0ff:feec:6b4c%10]:123 Sep 4 17:18:05.352053 ntpd[1997]: 4 Sep 17:18:05 ntpd[1997]: Listen normally on 14 lxc95f7f3b8ed79 [fe80::f036:eff:fe08:93f3%12]:123 Sep 4 17:18:05.351410 ntpd[1997]: Listen normally on 9 cilium_net [fe80::d444:b6ff:fee3:e2fd%4]:123 Sep 4 17:18:05.351499 ntpd[1997]: Listen normally on 10 cilium_host [fe80::e073:b4ff:fe79:cb0a%5]:123 Sep 4 17:18:05.351570 ntpd[1997]: Listen normally on 11 cilium_vxlan [fe80::f0b3:e6ff:fe1a:90f0%6]:123 Sep 4 17:18:05.351640 ntpd[1997]: Listen normally on 12 lxc_health [fe80::9820:25ff:feb0:7ec7%8]:123 Sep 4 17:18:05.351709 ntpd[1997]: Listen normally on 13 lxce63405ccee2b [fe80::1456:e0ff:feec:6b4c%10]:123 Sep 4 17:18:05.351783 ntpd[1997]: Listen normally on 14 lxc95f7f3b8ed79 [fe80::f036:eff:fe08:93f3%12]:123 Sep 4 17:18:05.607240 sshd[4627]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:05.618996 systemd[1]: sshd@9-172.31.17.160:22-139.178.89.65:35066.service: Deactivated successfully. Sep 4 17:18:05.619496 systemd-logind[2004]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:18:05.624855 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:18:05.631845 systemd-logind[2004]: Removed session 10. Sep 4 17:18:08.949507 containerd[2020]: time="2024-09-04T17:18:08.949308302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:08.950706 containerd[2020]: time="2024-09-04T17:18:08.949490930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:08.950706 containerd[2020]: time="2024-09-04T17:18:08.949529930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:08.951226 containerd[2020]: time="2024-09-04T17:18:08.950576702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:08.997637 systemd[1]: Started cri-containerd-6fdfdd62355c1ba880dbb536ed17da0cf25c3cab16e7bd8607e6491017ef7e34.scope - libcontainer container 6fdfdd62355c1ba880dbb536ed17da0cf25c3cab16e7bd8607e6491017ef7e34. Sep 4 17:18:09.072555 containerd[2020]: time="2024-09-04T17:18:09.071897866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:09.072555 containerd[2020]: time="2024-09-04T17:18:09.072029242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:09.072555 containerd[2020]: time="2024-09-04T17:18:09.072069022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:09.073943 containerd[2020]: time="2024-09-04T17:18:09.072495430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:09.147950 containerd[2020]: time="2024-09-04T17:18:09.146459687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zrrq7,Uid:6f689089-f0f9-4e7e-a740-937e13ca382e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fdfdd62355c1ba880dbb536ed17da0cf25c3cab16e7bd8607e6491017ef7e34\"" Sep 4 17:18:09.152437 systemd[1]: Started cri-containerd-7c61f6b83ebad5abd0c94b5c5446b410651e0cfca179b8c8e076794a0e12f07d.scope - libcontainer container 7c61f6b83ebad5abd0c94b5c5446b410651e0cfca179b8c8e076794a0e12f07d. Sep 4 17:18:09.162055 containerd[2020]: time="2024-09-04T17:18:09.161985707Z" level=info msg="CreateContainer within sandbox \"6fdfdd62355c1ba880dbb536ed17da0cf25c3cab16e7bd8607e6491017ef7e34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:18:09.184905 containerd[2020]: time="2024-09-04T17:18:09.184807235Z" level=info msg="CreateContainer within sandbox \"6fdfdd62355c1ba880dbb536ed17da0cf25c3cab16e7bd8607e6491017ef7e34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c5c3c8458899785eb2e1194695d79e79fb6258e58618569014daedc2248c5254\"" Sep 4 17:18:09.189616 containerd[2020]: time="2024-09-04T17:18:09.189543803Z" level=info msg="StartContainer for \"c5c3c8458899785eb2e1194695d79e79fb6258e58618569014daedc2248c5254\"" Sep 4 17:18:09.276456 systemd[1]: Started cri-containerd-c5c3c8458899785eb2e1194695d79e79fb6258e58618569014daedc2248c5254.scope - libcontainer container c5c3c8458899785eb2e1194695d79e79fb6258e58618569014daedc2248c5254. Sep 4 17:18:09.290683 containerd[2020]: time="2024-09-04T17:18:09.290612087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-snwzs,Uid:203fca53-6bdc-4e3b-b279-3d9511e150d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c61f6b83ebad5abd0c94b5c5446b410651e0cfca179b8c8e076794a0e12f07d\"" Sep 4 17:18:09.301571 containerd[2020]: time="2024-09-04T17:18:09.300715476Z" level=info msg="CreateContainer within sandbox \"7c61f6b83ebad5abd0c94b5c5446b410651e0cfca179b8c8e076794a0e12f07d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:18:09.324737 containerd[2020]: time="2024-09-04T17:18:09.324543984Z" level=info msg="CreateContainer within sandbox \"7c61f6b83ebad5abd0c94b5c5446b410651e0cfca179b8c8e076794a0e12f07d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"835799fc0d9e1f4dabb8cf96cd882d4e40046bc3084a458788dc89ea0e9b6441\"" Sep 4 17:18:09.328612 containerd[2020]: time="2024-09-04T17:18:09.326310432Z" level=info msg="StartContainer for \"835799fc0d9e1f4dabb8cf96cd882d4e40046bc3084a458788dc89ea0e9b6441\"" Sep 4 17:18:09.423105 containerd[2020]: time="2024-09-04T17:18:09.422943768Z" level=info msg="StartContainer for \"c5c3c8458899785eb2e1194695d79e79fb6258e58618569014daedc2248c5254\" returns successfully" Sep 4 17:18:09.440449 systemd[1]: Started cri-containerd-835799fc0d9e1f4dabb8cf96cd882d4e40046bc3084a458788dc89ea0e9b6441.scope - libcontainer container 835799fc0d9e1f4dabb8cf96cd882d4e40046bc3084a458788dc89ea0e9b6441. 
Sep 4 17:18:09.525476 containerd[2020]: time="2024-09-04T17:18:09.525395293Z" level=info msg="StartContainer for \"835799fc0d9e1f4dabb8cf96cd882d4e40046bc3084a458788dc89ea0e9b6441\" returns successfully" Sep 4 17:18:09.658920 kubelet[3432]: I0904 17:18:09.658760 3432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-snwzs" podStartSLOduration=28.658660825 podCreationTimestamp="2024-09-04 17:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:09.657394069 +0000 UTC m=+41.634334996" watchObservedRunningTime="2024-09-04 17:18:09.658660825 +0000 UTC m=+41.635601740" Sep 4 17:18:09.686176 kubelet[3432]: I0904 17:18:09.684172 3432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-zrrq7" podStartSLOduration=28.684032365 podCreationTimestamp="2024-09-04 17:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:09.680622889 +0000 UTC m=+41.657563804" watchObservedRunningTime="2024-09-04 17:18:09.684032365 +0000 UTC m=+41.660973292" Sep 4 17:18:10.655826 systemd[1]: Started sshd@10-172.31.17.160:22-139.178.89.65:60158.service - OpenSSH per-connection server daemon (139.178.89.65:60158). Sep 4 17:18:10.844150 sshd[4807]: Accepted publickey for core from 139.178.89.65 port 60158 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:10.846798 sshd[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:10.856266 systemd-logind[2004]: New session 11 of user core. Sep 4 17:18:10.862440 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:18:11.117995 sshd[4807]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:11.125764 systemd[1]: sshd@10-172.31.17.160:22-139.178.89.65:60158.service: Deactivated successfully. Sep 4 17:18:11.130860 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:18:11.134407 systemd-logind[2004]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:18:11.136487 systemd-logind[2004]: Removed session 11. Sep 4 17:18:16.159629 systemd[1]: Started sshd@11-172.31.17.160:22-139.178.89.65:60160.service - OpenSSH per-connection server daemon (139.178.89.65:60160). Sep 4 17:18:16.338401 sshd[4830]: Accepted publickey for core from 139.178.89.65 port 60160 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:16.340564 sshd[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:16.349889 systemd-logind[2004]: New session 12 of user core. Sep 4 17:18:16.356402 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:18:16.594950 sshd[4830]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:16.599845 systemd[1]: sshd@11-172.31.17.160:22-139.178.89.65:60160.service: Deactivated successfully. Sep 4 17:18:16.603003 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:18:16.606905 systemd-logind[2004]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:18:16.609048 systemd-logind[2004]: Removed session 12. Sep 4 17:18:21.640805 systemd[1]: Started sshd@12-172.31.17.160:22-139.178.89.65:34922.service - OpenSSH per-connection server daemon (139.178.89.65:34922). 
Sep 4 17:18:21.829650 sshd[4846]: Accepted publickey for core from 139.178.89.65 port 34922 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:21.832865 sshd[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:21.843621 systemd-logind[2004]: New session 13 of user core. Sep 4 17:18:21.850494 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:18:22.101329 sshd[4846]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:22.108758 systemd[1]: sshd@12-172.31.17.160:22-139.178.89.65:34922.service: Deactivated successfully. Sep 4 17:18:22.114491 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:18:22.116574 systemd-logind[2004]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:18:22.119737 systemd-logind[2004]: Removed session 13. Sep 4 17:18:27.140668 systemd[1]: Started sshd@13-172.31.17.160:22-139.178.89.65:34928.service - OpenSSH per-connection server daemon (139.178.89.65:34928). Sep 4 17:18:27.320819 sshd[4859]: Accepted publickey for core from 139.178.89.65 port 34928 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:27.323883 sshd[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:27.333303 systemd-logind[2004]: New session 14 of user core. Sep 4 17:18:27.339414 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:18:27.588735 sshd[4859]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:27.598730 systemd[1]: sshd@13-172.31.17.160:22-139.178.89.65:34928.service: Deactivated successfully. Sep 4 17:18:27.605449 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:18:27.607442 systemd-logind[2004]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:18:27.610357 systemd-logind[2004]: Removed session 14. Sep 4 17:18:27.632686 systemd[1]: Started sshd@14-172.31.17.160:22-139.178.89.65:51820.service - OpenSSH per-connection server daemon (139.178.89.65:51820). Sep 4 17:18:27.816276 sshd[4873]: Accepted publickey for core from 139.178.89.65 port 51820 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:27.818978 sshd[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:27.827331 systemd-logind[2004]: New session 15 of user core. Sep 4 17:18:27.832418 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:18:29.421828 sshd[4873]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:29.434956 systemd[1]: sshd@14-172.31.17.160:22-139.178.89.65:51820.service: Deactivated successfully. Sep 4 17:18:29.445582 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:18:29.449730 systemd-logind[2004]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:18:29.473663 systemd[1]: Started sshd@15-172.31.17.160:22-139.178.89.65:51826.service - OpenSSH per-connection server daemon (139.178.89.65:51826). Sep 4 17:18:29.475565 systemd-logind[2004]: Removed session 15. Sep 4 17:18:29.667303 sshd[4886]: Accepted publickey for core from 139.178.89.65 port 51826 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:29.669949 sshd[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:29.678652 systemd-logind[2004]: New session 16 of user core. Sep 4 17:18:29.687428 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 4 17:18:29.934228 sshd[4886]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:29.938694 systemd[1]: sshd@15-172.31.17.160:22-139.178.89.65:51826.service: Deactivated successfully. Sep 4 17:18:29.942457 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:18:29.945651 systemd-logind[2004]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:18:29.948650 systemd-logind[2004]: Removed session 16. Sep 4 17:18:34.979671 systemd[1]: Started sshd@16-172.31.17.160:22-139.178.89.65:51832.service - OpenSSH per-connection server daemon (139.178.89.65:51832). Sep 4 17:18:35.159331 sshd[4902]: Accepted publickey for core from 139.178.89.65 port 51832 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:35.161994 sshd[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:35.170593 systemd-logind[2004]: New session 17 of user core. Sep 4 17:18:35.178437 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:18:35.420560 sshd[4902]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:35.430594 systemd[1]: sshd@16-172.31.17.160:22-139.178.89.65:51832.service: Deactivated successfully. Sep 4 17:18:35.435278 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:18:35.437774 systemd-logind[2004]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:18:35.442494 systemd-logind[2004]: Removed session 17. Sep 4 17:18:40.461655 systemd[1]: Started sshd@17-172.31.17.160:22-139.178.89.65:36968.service - OpenSSH per-connection server daemon (139.178.89.65:36968). Sep 4 17:18:40.643512 sshd[4915]: Accepted publickey for core from 139.178.89.65 port 36968 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:40.646172 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:40.656426 systemd-logind[2004]: New session 18 of user core. Sep 4 17:18:40.661409 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:18:40.895631 sshd[4915]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:40.902000 systemd[1]: sshd@17-172.31.17.160:22-139.178.89.65:36968.service: Deactivated successfully. Sep 4 17:18:40.906533 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:18:40.909793 systemd-logind[2004]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:18:40.911665 systemd-logind[2004]: Removed session 18. Sep 4 17:18:45.937695 systemd[1]: Started sshd@18-172.31.17.160:22-139.178.89.65:36972.service - OpenSSH per-connection server daemon (139.178.89.65:36972). Sep 4 17:18:46.120227 sshd[4931]: Accepted publickey for core from 139.178.89.65 port 36972 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:46.122868 sshd[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:46.131107 systemd-logind[2004]: New session 19 of user core. Sep 4 17:18:46.138380 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:18:46.386592 sshd[4931]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:46.392701 systemd[1]: sshd@18-172.31.17.160:22-139.178.89.65:36972.service: Deactivated successfully. Sep 4 17:18:46.397369 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:18:46.402534 systemd-logind[2004]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:18:46.404634 systemd-logind[2004]: Removed session 19. 
Sep 4 17:18:46.428657 systemd[1]: Started sshd@19-172.31.17.160:22-139.178.89.65:36974.service - OpenSSH per-connection server daemon (139.178.89.65:36974). Sep 4 17:18:46.607716 sshd[4944]: Accepted publickey for core from 139.178.89.65 port 36974 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:46.610565 sshd[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:46.618580 systemd-logind[2004]: New session 20 of user core. Sep 4 17:18:46.627580 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:18:46.923318 sshd[4944]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:46.929698 systemd[1]: sshd@19-172.31.17.160:22-139.178.89.65:36974.service: Deactivated successfully. Sep 4 17:18:46.934111 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:18:46.935900 systemd-logind[2004]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:18:46.937546 systemd-logind[2004]: Removed session 20. Sep 4 17:18:46.956972 systemd[1]: Started sshd@20-172.31.17.160:22-139.178.89.65:36984.service - OpenSSH per-connection server daemon (139.178.89.65:36984). Sep 4 17:18:47.143292 sshd[4954]: Accepted publickey for core from 139.178.89.65 port 36984 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:47.145829 sshd[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:47.154561 systemd-logind[2004]: New session 21 of user core. Sep 4 17:18:47.161385 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:18:48.565460 sshd[4954]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:48.573552 systemd-logind[2004]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:18:48.577096 systemd[1]: sshd@20-172.31.17.160:22-139.178.89.65:36984.service: Deactivated successfully. Sep 4 17:18:48.584528 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:18:48.604889 systemd[1]: Started sshd@21-172.31.17.160:22-139.178.89.65:54550.service - OpenSSH per-connection server daemon (139.178.89.65:54550). Sep 4 17:18:48.608791 systemd-logind[2004]: Removed session 21. Sep 4 17:18:48.791618 sshd[4972]: Accepted publickey for core from 139.178.89.65 port 54550 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:48.794301 sshd[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:48.803038 systemd-logind[2004]: New session 22 of user core. Sep 4 17:18:48.809408 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:18:49.429101 sshd[4972]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:49.436090 systemd[1]: sshd@21-172.31.17.160:22-139.178.89.65:54550.service: Deactivated successfully. Sep 4 17:18:49.442995 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:18:49.445115 systemd-logind[2004]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:18:49.448227 systemd-logind[2004]: Removed session 22. Sep 4 17:18:49.468669 systemd[1]: Started sshd@22-172.31.17.160:22-139.178.89.65:54566.service - OpenSSH per-connection server daemon (139.178.89.65:54566). 
Sep 4 17:18:49.653055 sshd[4983]: Accepted publickey for core from 139.178.89.65 port 54566 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:49.655925 sshd[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:49.663848 systemd-logind[2004]: New session 23 of user core. Sep 4 17:18:49.675421 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:18:49.912354 sshd[4983]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:49.917486 systemd[1]: sshd@22-172.31.17.160:22-139.178.89.65:54566.service: Deactivated successfully. Sep 4 17:18:49.921175 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:18:49.925051 systemd-logind[2004]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:18:49.927597 systemd-logind[2004]: Removed session 23. Sep 4 17:18:54.951662 systemd[1]: Started sshd@23-172.31.17.160:22-139.178.89.65:54580.service - OpenSSH per-connection server daemon (139.178.89.65:54580). Sep 4 17:18:55.139652 sshd[4996]: Accepted publickey for core from 139.178.89.65 port 54580 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:55.142829 sshd[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:55.152023 systemd-logind[2004]: New session 24 of user core. Sep 4 17:18:55.157393 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:18:55.398493 sshd[4996]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:55.405424 systemd[1]: sshd@23-172.31.17.160:22-139.178.89.65:54580.service: Deactivated successfully. Sep 4 17:18:55.410910 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:18:55.413530 systemd-logind[2004]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:18:55.415969 systemd-logind[2004]: Removed session 24. Sep 4 17:19:00.438650 systemd[1]: Started sshd@24-172.31.17.160:22-139.178.89.65:32974.service - OpenSSH per-connection server daemon (139.178.89.65:32974). Sep 4 17:19:00.625078 sshd[5012]: Accepted publickey for core from 139.178.89.65 port 32974 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:00.628504 sshd[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:00.641745 systemd-logind[2004]: New session 25 of user core. Sep 4 17:19:00.649493 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:19:00.909165 sshd[5012]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:00.915108 systemd[1]: sshd@24-172.31.17.160:22-139.178.89.65:32974.service: Deactivated successfully. Sep 4 17:19:00.919767 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:19:00.922711 systemd-logind[2004]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:19:00.925046 systemd-logind[2004]: Removed session 25. Sep 4 17:19:05.955707 systemd[1]: Started sshd@25-172.31.17.160:22-139.178.89.65:32980.service - OpenSSH per-connection server daemon (139.178.89.65:32980). Sep 4 17:19:06.139520 sshd[5025]: Accepted publickey for core from 139.178.89.65 port 32980 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:06.142281 sshd[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:06.151439 systemd-logind[2004]: New session 26 of user core. Sep 4 17:19:06.159414 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 4 17:19:06.403728 sshd[5025]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:06.408952 systemd-logind[2004]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:19:06.409708 systemd[1]: sshd@25-172.31.17.160:22-139.178.89.65:32980.service: Deactivated successfully. Sep 4 17:19:06.414089 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:19:06.418334 systemd-logind[2004]: Removed session 26. Sep 4 17:19:11.443649 systemd[1]: Started sshd@26-172.31.17.160:22-139.178.89.65:35608.service - OpenSSH per-connection server daemon (139.178.89.65:35608). Sep 4 17:19:11.631966 sshd[5038]: Accepted publickey for core from 139.178.89.65 port 35608 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:11.634627 sshd[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:11.643097 systemd-logind[2004]: New session 27 of user core. Sep 4 17:19:11.648729 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 17:19:11.881900 sshd[5038]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:11.887815 systemd[1]: sshd@26-172.31.17.160:22-139.178.89.65:35608.service: Deactivated successfully. Sep 4 17:19:11.894022 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 17:19:11.897619 systemd-logind[2004]: Session 27 logged out. Waiting for processes to exit. Sep 4 17:19:11.899648 systemd-logind[2004]: Removed session 27. Sep 4 17:19:11.924649 systemd[1]: Started sshd@27-172.31.17.160:22-139.178.89.65:35614.service - OpenSSH per-connection server daemon (139.178.89.65:35614). Sep 4 17:19:12.107786 sshd[5051]: Accepted publickey for core from 139.178.89.65 port 35614 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:12.110572 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:12.118604 systemd-logind[2004]: New session 28 of user core. Sep 4 17:19:12.126387 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 4 17:19:14.924867 systemd[1]: run-containerd-runc-k8s.io-933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0-runc.FUcAXD.mount: Deactivated successfully. Sep 4 17:19:14.930211 containerd[2020]: time="2024-09-04T17:19:14.929052077Z" level=info msg="StopContainer for \"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24\" with timeout 30 (s)" Sep 4 17:19:14.931887 containerd[2020]: time="2024-09-04T17:19:14.931328586Z" level=info msg="Stop container \"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24\" with signal terminated" Sep 4 17:19:14.952103 containerd[2020]: time="2024-09-04T17:19:14.952024926Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:19:14.955396 systemd[1]: cri-containerd-d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24.scope: Deactivated successfully. 
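Here the kubelet asks containerd to stop the cilium-operator container with a 30-second timeout, and containerd delivers SIGTERM ("Stop container ... with signal terminated"); the CNI reload error is a side effect of /etc/cni/net.d/05-cilium.conf being removed during teardown. The TERM-then-KILL pattern can be reproduced against containerd's Go client directly; a rough sketch, assuming the default socket path /run/containerd/containerd.sock and the k8s.io namespace that the CRI plugin uses, with the container ID copied from the log.

    package main

    import (
        "context"
        "log"
        "syscall"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed containers live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        const id = "d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24"

        container, err := client.LoadContainer(ctx, id)
        if err != nil {
            log.Fatal(err)
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }

        exitCh, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }

        // Ask politely first, like "Stop container ... with signal terminated".
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            log.Fatal(err)
        }

        select {
        case status := <-exitCh:
            log.Printf("exited with status %d", status.ExitCode())
        case <-time.After(30 * time.Second):
            // Grace period expired: escalate to SIGKILL, mirroring the kubelet.
            _ = task.Kill(ctx, syscall.SIGKILL)
            <-exitCh
        }
    }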
Sep 4 17:19:14.979559 containerd[2020]: time="2024-09-04T17:19:14.979394598Z" level=info msg="StopContainer for \"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0\" with timeout 2 (s)" Sep 4 17:19:14.980329 containerd[2020]: time="2024-09-04T17:19:14.980249730Z" level=info msg="Stop container \"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0\" with signal terminated" Sep 4 17:19:14.998413 systemd-networkd[1943]: lxc_health: Link DOWN Sep 4 17:19:14.998427 systemd-networkd[1943]: lxc_health: Lost carrier Sep 4 17:19:15.022255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24-rootfs.mount: Deactivated successfully. Sep 4 17:19:15.039726 containerd[2020]: time="2024-09-04T17:19:15.038312294Z" level=info msg="shim disconnected" id=d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24 namespace=k8s.io Sep 4 17:19:15.039726 containerd[2020]: time="2024-09-04T17:19:15.038395538Z" level=warning msg="cleaning up after shim disconnected" id=d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24 namespace=k8s.io Sep 4 17:19:15.039726 containerd[2020]: time="2024-09-04T17:19:15.038415794Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:19:15.039072 systemd[1]: cri-containerd-933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0.scope: Deactivated successfully. Sep 4 17:19:15.039741 systemd[1]: cri-containerd-933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0.scope: Consumed 14.296s CPU time. Sep 4 17:19:15.077025 containerd[2020]: time="2024-09-04T17:19:15.076948574Z" level=info msg="StopContainer for \"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24\" returns successfully" Sep 4 17:19:15.079175 containerd[2020]: time="2024-09-04T17:19:15.079100138Z" level=info msg="StopPodSandbox for \"0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802\"" Sep 4 17:19:15.079322 containerd[2020]: time="2024-09-04T17:19:15.079190030Z" level=info msg="Container to stop \"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:19:15.085572 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802-shm.mount: Deactivated successfully. Sep 4 17:19:15.100772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0-rootfs.mount: Deactivated successfully. Sep 4 17:19:15.104149 systemd[1]: cri-containerd-0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802.scope: Deactivated successfully. 
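systemd-networkd reports the cilium health veth (lxc_health) losing its carrier as the endpoint is deleted, while containerd tears down the stopped containers' shims and rootfs mounts. The same link transitions can be observed with a small rtnetlink subscriber; a sketch using the github.com/vishvananda/netlink package, which is an assumed dependency chosen only for illustration.

    package main

    import (
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        updates := make(chan netlink.LinkUpdate)
        done := make(chan struct{})
        defer close(done)

        // Subscribe to rtnetlink link notifications, the same event source
        // behind systemd-networkd's "Link DOWN" / "Lost carrier" messages.
        if err := netlink.LinkSubscribe(updates, done); err != nil {
            log.Fatal(err)
        }

        for update := range updates {
            attrs := update.Link.Attrs()
            if attrs.Name != "lxc_health" {
                continue
            }
            log.Printf("lxc_health: operstate=%s flags=%s", attrs.OperState, attrs.Flags)
        }
    }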
Sep 4 17:19:15.110144 containerd[2020]: time="2024-09-04T17:19:15.109799510Z" level=info msg="shim disconnected" id=933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0 namespace=k8s.io Sep 4 17:19:15.110144 containerd[2020]: time="2024-09-04T17:19:15.109876994Z" level=warning msg="cleaning up after shim disconnected" id=933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0 namespace=k8s.io Sep 4 17:19:15.110144 containerd[2020]: time="2024-09-04T17:19:15.109898114Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:19:15.148993 containerd[2020]: time="2024-09-04T17:19:15.147200751Z" level=info msg="StopContainer for \"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0\" returns successfully" Sep 4 17:19:15.148993 containerd[2020]: time="2024-09-04T17:19:15.148338915Z" level=info msg="StopPodSandbox for \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\"" Sep 4 17:19:15.148993 containerd[2020]: time="2024-09-04T17:19:15.148396503Z" level=info msg="Container to stop \"55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:19:15.148993 containerd[2020]: time="2024-09-04T17:19:15.148421379Z" level=info msg="Container to stop \"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:19:15.148993 containerd[2020]: time="2024-09-04T17:19:15.148444263Z" level=info msg="Container to stop \"996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:19:15.148993 containerd[2020]: time="2024-09-04T17:19:15.148466643Z" level=info msg="Container to stop \"1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:19:15.148993 containerd[2020]: time="2024-09-04T17:19:15.148506351Z" level=info msg="Container to stop \"b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:19:15.163688 containerd[2020]: time="2024-09-04T17:19:15.163486227Z" level=info msg="shim disconnected" id=0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802 namespace=k8s.io Sep 4 17:19:15.163688 containerd[2020]: time="2024-09-04T17:19:15.163567731Z" level=warning msg="cleaning up after shim disconnected" id=0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802 namespace=k8s.io Sep 4 17:19:15.163688 containerd[2020]: time="2024-09-04T17:19:15.163591239Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:19:15.164285 systemd[1]: cri-containerd-2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5.scope: Deactivated successfully. 
Sep 4 17:19:15.193028 containerd[2020]: time="2024-09-04T17:19:15.192950019Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:19:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 17:19:15.195083 containerd[2020]: time="2024-09-04T17:19:15.195017019Z" level=info msg="TearDown network for sandbox \"0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802\" successfully" Sep 4 17:19:15.195395 containerd[2020]: time="2024-09-04T17:19:15.195366507Z" level=info msg="StopPodSandbox for \"0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802\" returns successfully" Sep 4 17:19:15.223781 containerd[2020]: time="2024-09-04T17:19:15.223702227Z" level=info msg="shim disconnected" id=2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5 namespace=k8s.io Sep 4 17:19:15.224090 containerd[2020]: time="2024-09-04T17:19:15.224056155Z" level=warning msg="cleaning up after shim disconnected" id=2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5 namespace=k8s.io Sep 4 17:19:15.224233 containerd[2020]: time="2024-09-04T17:19:15.224204475Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:19:15.249770 containerd[2020]: time="2024-09-04T17:19:15.249678099Z" level=info msg="TearDown network for sandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" successfully" Sep 4 17:19:15.250109 containerd[2020]: time="2024-09-04T17:19:15.249950307Z" level=info msg="StopPodSandbox for \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" returns successfully" Sep 4 17:19:15.289735 kubelet[3432]: I0904 17:19:15.289669 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlzpp\" (UniqueName: \"kubernetes.io/projected/20e2d879-d36a-413e-b9db-f2ac5adddfca-kube-api-access-wlzpp\") pod \"20e2d879-d36a-413e-b9db-f2ac5adddfca\" (UID: \"20e2d879-d36a-413e-b9db-f2ac5adddfca\") " Sep 4 17:19:15.290365 kubelet[3432]: I0904 17:19:15.289765 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20e2d879-d36a-413e-b9db-f2ac5adddfca-cilium-config-path\") pod \"20e2d879-d36a-413e-b9db-f2ac5adddfca\" (UID: \"20e2d879-d36a-413e-b9db-f2ac5adddfca\") " Sep 4 17:19:15.296101 kubelet[3432]: I0904 17:19:15.295950 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e2d879-d36a-413e-b9db-f2ac5adddfca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "20e2d879-d36a-413e-b9db-f2ac5adddfca" (UID: "20e2d879-d36a-413e-b9db-f2ac5adddfca"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:19:15.296367 kubelet[3432]: I0904 17:19:15.296231 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e2d879-d36a-413e-b9db-f2ac5adddfca-kube-api-access-wlzpp" (OuterVolumeSpecName: "kube-api-access-wlzpp") pod "20e2d879-d36a-413e-b9db-f2ac5adddfca" (UID: "20e2d879-d36a-413e-b9db-f2ac5adddfca"). InnerVolumeSpecName "kube-api-access-wlzpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:19:15.390120 kubelet[3432]: I0904 17:19:15.390067 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-cni-path\") pod \"7251b004-1131-48df-ae96-89ad940c0e77\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " Sep 4 17:19:15.390120 kubelet[3432]: I0904 17:19:15.390084 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-cni-path" (OuterVolumeSpecName: "cni-path") pod "7251b004-1131-48df-ae96-89ad940c0e77" (UID: "7251b004-1131-48df-ae96-89ad940c0e77"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:19:15.390120 kubelet[3432]: I0904 17:19:15.390227 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-xtables-lock\") pod \"7251b004-1131-48df-ae96-89ad940c0e77\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " Sep 4 17:19:15.390120 kubelet[3432]: I0904 17:19:15.390274 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-host-proc-sys-net\") pod \"7251b004-1131-48df-ae96-89ad940c0e77\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " Sep 4 17:19:15.390120 kubelet[3432]: I0904 17:19:15.390314 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-bpf-maps\") pod \"7251b004-1131-48df-ae96-89ad940c0e77\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " Sep 4 17:19:15.390120 kubelet[3432]: I0904 17:19:15.390347 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7251b004-1131-48df-ae96-89ad940c0e77" (UID: "7251b004-1131-48df-ae96-89ad940c0e77"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:19:15.391426 kubelet[3432]: I0904 17:19:15.390362 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wwgq\" (UniqueName: \"kubernetes.io/projected/7251b004-1131-48df-ae96-89ad940c0e77-kube-api-access-2wwgq\") pod \"7251b004-1131-48df-ae96-89ad940c0e77\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " Sep 4 17:19:15.391426 kubelet[3432]: I0904 17:19:15.390393 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7251b004-1131-48df-ae96-89ad940c0e77" (UID: "7251b004-1131-48df-ae96-89ad940c0e77"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:19:15.391426 kubelet[3432]: I0904 17:19:15.390408 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7251b004-1131-48df-ae96-89ad940c0e77-hubble-tls\") pod \"7251b004-1131-48df-ae96-89ad940c0e77\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " Sep 4 17:19:15.391426 kubelet[3432]: I0904 17:19:15.390434 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7251b004-1131-48df-ae96-89ad940c0e77" (UID: "7251b004-1131-48df-ae96-89ad940c0e77"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:19:15.391426 kubelet[3432]: I0904 17:19:15.390445 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-lib-modules\") pod \"7251b004-1131-48df-ae96-89ad940c0e77\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " Sep 4 17:19:15.391788 kubelet[3432]: I0904 17:19:15.390472 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7251b004-1131-48df-ae96-89ad940c0e77" (UID: "7251b004-1131-48df-ae96-89ad940c0e77"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:19:15.391788 kubelet[3432]: I0904 17:19:15.390516 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-cilium-cgroup\") pod \"7251b004-1131-48df-ae96-89ad940c0e77\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " Sep 4 17:19:15.391788 kubelet[3432]: I0904 17:19:15.390558 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-host-proc-sys-kernel\") pod \"7251b004-1131-48df-ae96-89ad940c0e77\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " Sep 4 17:19:15.391788 kubelet[3432]: I0904 17:19:15.390603 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-etc-cni-netd\") pod \"7251b004-1131-48df-ae96-89ad940c0e77\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " Sep 4 17:19:15.391788 kubelet[3432]: I0904 17:19:15.390640 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-hostproc\") pod \"7251b004-1131-48df-ae96-89ad940c0e77\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " Sep 4 17:19:15.391788 kubelet[3432]: I0904 17:19:15.390678 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-cilium-run\") pod \"7251b004-1131-48df-ae96-89ad940c0e77\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " Sep 4 17:19:15.392109 kubelet[3432]: I0904 17:19:15.390723 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/7251b004-1131-48df-ae96-89ad940c0e77-clustermesh-secrets\") pod \"7251b004-1131-48df-ae96-89ad940c0e77\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " Sep 4 17:19:15.392109 kubelet[3432]: I0904 17:19:15.390771 3432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7251b004-1131-48df-ae96-89ad940c0e77-cilium-config-path\") pod \"7251b004-1131-48df-ae96-89ad940c0e77\" (UID: \"7251b004-1131-48df-ae96-89ad940c0e77\") " Sep 4 17:19:15.392109 kubelet[3432]: I0904 17:19:15.390831 3432 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-lib-modules\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.392109 kubelet[3432]: I0904 17:19:15.390921 3432 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wlzpp\" (UniqueName: \"kubernetes.io/projected/20e2d879-d36a-413e-b9db-f2ac5adddfca-kube-api-access-wlzpp\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.392109 kubelet[3432]: I0904 17:19:15.390948 3432 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20e2d879-d36a-413e-b9db-f2ac5adddfca-cilium-config-path\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.392109 kubelet[3432]: I0904 17:19:15.390974 3432 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-cni-path\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.392109 kubelet[3432]: I0904 17:19:15.390998 3432 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-xtables-lock\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.392586 kubelet[3432]: I0904 17:19:15.391022 3432 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-host-proc-sys-net\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.392586 kubelet[3432]: I0904 17:19:15.391049 3432 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-bpf-maps\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.396360 kubelet[3432]: I0904 17:19:15.395349 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7251b004-1131-48df-ae96-89ad940c0e77" (UID: "7251b004-1131-48df-ae96-89ad940c0e77"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:19:15.396360 kubelet[3432]: I0904 17:19:15.395434 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7251b004-1131-48df-ae96-89ad940c0e77" (UID: "7251b004-1131-48df-ae96-89ad940c0e77"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:19:15.396360 kubelet[3432]: I0904 17:19:15.395478 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7251b004-1131-48df-ae96-89ad940c0e77" (UID: "7251b004-1131-48df-ae96-89ad940c0e77"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:19:15.396360 kubelet[3432]: I0904 17:19:15.395518 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-hostproc" (OuterVolumeSpecName: "hostproc") pod "7251b004-1131-48df-ae96-89ad940c0e77" (UID: "7251b004-1131-48df-ae96-89ad940c0e77"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:19:15.396360 kubelet[3432]: I0904 17:19:15.395555 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7251b004-1131-48df-ae96-89ad940c0e77" (UID: "7251b004-1131-48df-ae96-89ad940c0e77"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:19:15.399817 kubelet[3432]: I0904 17:19:15.399481 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7251b004-1131-48df-ae96-89ad940c0e77-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7251b004-1131-48df-ae96-89ad940c0e77" (UID: "7251b004-1131-48df-ae96-89ad940c0e77"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:19:15.400703 kubelet[3432]: I0904 17:19:15.400478 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7251b004-1131-48df-ae96-89ad940c0e77-kube-api-access-2wwgq" (OuterVolumeSpecName: "kube-api-access-2wwgq") pod "7251b004-1131-48df-ae96-89ad940c0e77" (UID: "7251b004-1131-48df-ae96-89ad940c0e77"). InnerVolumeSpecName "kube-api-access-2wwgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:19:15.402775 kubelet[3432]: I0904 17:19:15.402643 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7251b004-1131-48df-ae96-89ad940c0e77-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7251b004-1131-48df-ae96-89ad940c0e77" (UID: "7251b004-1131-48df-ae96-89ad940c0e77"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 17:19:15.404471 kubelet[3432]: I0904 17:19:15.404416 3432 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7251b004-1131-48df-ae96-89ad940c0e77-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7251b004-1131-48df-ae96-89ad940c0e77" (UID: "7251b004-1131-48df-ae96-89ad940c0e77"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:19:15.491491 kubelet[3432]: I0904 17:19:15.491281 3432 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2wwgq\" (UniqueName: \"kubernetes.io/projected/7251b004-1131-48df-ae96-89ad940c0e77-kube-api-access-2wwgq\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.491491 kubelet[3432]: I0904 17:19:15.491332 3432 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7251b004-1131-48df-ae96-89ad940c0e77-hubble-tls\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.491491 kubelet[3432]: I0904 17:19:15.491361 3432 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-host-proc-sys-kernel\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.491491 kubelet[3432]: I0904 17:19:15.491386 3432 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-etc-cni-netd\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.491491 kubelet[3432]: I0904 17:19:15.491410 3432 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-cilium-cgroup\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.491491 kubelet[3432]: I0904 17:19:15.491456 3432 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-hostproc\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.491886 kubelet[3432]: I0904 17:19:15.491746 3432 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7251b004-1131-48df-ae96-89ad940c0e77-clustermesh-secrets\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.491886 kubelet[3432]: I0904 17:19:15.491777 3432 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7251b004-1131-48df-ae96-89ad940c0e77-cilium-run\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.491886 kubelet[3432]: I0904 17:19:15.491802 3432 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7251b004-1131-48df-ae96-89ad940c0e77-cilium-config-path\") on node \"ip-172-31-17-160\" DevicePath \"\"" Sep 4 17:19:15.813542 kubelet[3432]: I0904 17:19:15.811896 3432 scope.go:117] "RemoveContainer" containerID="d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24" Sep 4 17:19:15.819913 containerd[2020]: time="2024-09-04T17:19:15.818720214Z" level=info msg="RemoveContainer for \"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24\"" Sep 4 17:19:15.833329 systemd[1]: Removed slice kubepods-besteffort-pod20e2d879_d36a_413e_b9db_f2ac5adddfca.slice - libcontainer container kubepods-besteffort-pod20e2d879_d36a_413e_b9db_f2ac5adddfca.slice. 
Sep 4 17:19:15.841511 containerd[2020]: time="2024-09-04T17:19:15.840678438Z" level=info msg="RemoveContainer for \"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24\" returns successfully" Sep 4 17:19:15.841974 kubelet[3432]: I0904 17:19:15.841714 3432 scope.go:117] "RemoveContainer" containerID="d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24" Sep 4 17:19:15.843973 containerd[2020]: time="2024-09-04T17:19:15.843880374Z" level=error msg="ContainerStatus for \"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24\": not found" Sep 4 17:19:15.844901 kubelet[3432]: E0904 17:19:15.844423 3432 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24\": not found" containerID="d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24" Sep 4 17:19:15.844901 kubelet[3432]: I0904 17:19:15.844568 3432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24"} err="failed to get container status \"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24\": not found" Sep 4 17:19:15.844901 kubelet[3432]: I0904 17:19:15.844594 3432 scope.go:117] "RemoveContainer" containerID="933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0" Sep 4 17:19:15.853975 containerd[2020]: time="2024-09-04T17:19:15.851397006Z" level=info msg="RemoveContainer for \"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0\"" Sep 4 17:19:15.859344 containerd[2020]: time="2024-09-04T17:19:15.858810138Z" level=info msg="RemoveContainer for \"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0\" returns successfully" Sep 4 17:19:15.861105 kubelet[3432]: I0904 17:19:15.861055 3432 scope.go:117] "RemoveContainer" containerID="1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c" Sep 4 17:19:15.864002 containerd[2020]: time="2024-09-04T17:19:15.863940918Z" level=info msg="RemoveContainer for \"1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c\"" Sep 4 17:19:15.866342 systemd[1]: Removed slice kubepods-burstable-pod7251b004_1131_48df_ae96_89ad940c0e77.slice - libcontainer container kubepods-burstable-pod7251b004_1131_48df_ae96_89ad940c0e77.slice. Sep 4 17:19:15.866716 systemd[1]: kubepods-burstable-pod7251b004_1131_48df_ae96_89ad940c0e77.slice: Consumed 14.446s CPU time. 
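After RemoveContainer succeeds, a follow-up ContainerStatus call for the same ID comes back as a gRPC NotFound error, which the kubelet logs and then ignores - removal is idempotent. Code driving a CRI endpoint or the containerd API usually wants the same tolerance; a minimal sketch of that handling with the standard gRPC status package, where the lookup function is only a stand-in for a real ContainerStatus RPC.

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // lookup stands in for a real ContainerStatus RPC; here it always
    // returns the kind of error seen in the log above.
    func lookup(id string) error {
        return status.Errorf(codes.NotFound,
            "an error occurred when try to find container %q: not found", id)
    }

    func main() {
        err := lookup("d9cf5992114d84d0af6d2c8750caefce8785a347c1684d1ad9443326213abf24")

        if status.Code(err) == codes.NotFound {
            // Already gone: deletion is idempotent, so treat this as success.
            fmt.Println("container already removed, nothing to do")
            return
        }
        if err != nil {
            fmt.Println("unexpected error:", err)
        }
    }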
Sep 4 17:19:15.871706 containerd[2020]: time="2024-09-04T17:19:15.871597362Z" level=info msg="RemoveContainer for \"1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c\" returns successfully" Sep 4 17:19:15.872754 kubelet[3432]: I0904 17:19:15.872597 3432 scope.go:117] "RemoveContainer" containerID="996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f" Sep 4 17:19:15.877344 containerd[2020]: time="2024-09-04T17:19:15.877271394Z" level=info msg="RemoveContainer for \"996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f\"" Sep 4 17:19:15.889277 containerd[2020]: time="2024-09-04T17:19:15.889209906Z" level=info msg="RemoveContainer for \"996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f\" returns successfully" Sep 4 17:19:15.889586 kubelet[3432]: I0904 17:19:15.889562 3432 scope.go:117] "RemoveContainer" containerID="55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b" Sep 4 17:19:15.891697 containerd[2020]: time="2024-09-04T17:19:15.891643038Z" level=info msg="RemoveContainer for \"55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b\"" Sep 4 17:19:15.902107 containerd[2020]: time="2024-09-04T17:19:15.902054550Z" level=info msg="RemoveContainer for \"55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b\" returns successfully" Sep 4 17:19:15.902810 kubelet[3432]: I0904 17:19:15.902763 3432 scope.go:117] "RemoveContainer" containerID="b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232" Sep 4 17:19:15.909204 containerd[2020]: time="2024-09-04T17:19:15.906880650Z" level=info msg="RemoveContainer for \"b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232\"" Sep 4 17:19:15.907982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802-rootfs.mount: Deactivated successfully. Sep 4 17:19:15.908183 systemd[1]: var-lib-kubelet-pods-20e2d879\x2dd36a\x2d413e\x2db9db\x2df2ac5adddfca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwlzpp.mount: Deactivated successfully. Sep 4 17:19:15.908335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5-rootfs.mount: Deactivated successfully. Sep 4 17:19:15.908466 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5-shm.mount: Deactivated successfully. Sep 4 17:19:15.908603 systemd[1]: var-lib-kubelet-pods-7251b004\x2d1131\x2d48df\x2dae96\x2d89ad940c0e77-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2wwgq.mount: Deactivated successfully. Sep 4 17:19:15.908739 systemd[1]: var-lib-kubelet-pods-7251b004\x2d1131\x2d48df\x2dae96\x2d89ad940c0e77-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 17:19:15.908877 systemd[1]: var-lib-kubelet-pods-7251b004\x2d1131\x2d48df\x2dae96\x2d89ad940c0e77-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
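Alongside the container removal, systemd deactivates the related .mount units: the sandbox shm mounts, the shims' rootfs mounts and the projected/secret volume mounts under /var/lib/kubelet. A quick way to verify that nothing is still mounted for a removed pod is to scan /proc/self/mountinfo; a stdlib sketch, with the pod UID again taken from the log.

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        podUID := "7251b004-1131-48df-ae96-89ad940c0e77" // pod UID from the log above
        prefix := "/var/lib/kubelet/pods/" + podUID + "/"

        f, err := os.Open("/proc/self/mountinfo")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        found := false
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            // Field 5 (index 4) of each mountinfo line is the mount point.
            fields := strings.Fields(scanner.Text())
            if len(fields) > 4 && strings.HasPrefix(fields[4], prefix) {
                fmt.Println("still mounted:", fields[4])
                found = true
            }
        }
        if err := scanner.Err(); err != nil {
            log.Fatal(err)
        }
        if !found {
            fmt.Println("no leftover mounts for pod", podUID)
        }
    }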
Sep 4 17:19:15.916850 containerd[2020]: time="2024-09-04T17:19:15.916792014Z" level=info msg="RemoveContainer for \"b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232\" returns successfully" Sep 4 17:19:15.917257 kubelet[3432]: I0904 17:19:15.917155 3432 scope.go:117] "RemoveContainer" containerID="933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0" Sep 4 17:19:15.918018 containerd[2020]: time="2024-09-04T17:19:15.917901642Z" level=error msg="ContainerStatus for \"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0\": not found" Sep 4 17:19:15.920201 kubelet[3432]: E0904 17:19:15.918343 3432 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0\": not found" containerID="933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0" Sep 4 17:19:15.920201 kubelet[3432]: I0904 17:19:15.918408 3432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0"} err="failed to get container status \"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0\": rpc error: code = NotFound desc = an error occurred when try to find container \"933140a7015267e00f6ad3e3f66b2cc56b81b36541c922c17b05a1685757ccf0\": not found" Sep 4 17:19:15.920201 kubelet[3432]: I0904 17:19:15.918436 3432 scope.go:117] "RemoveContainer" containerID="1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c" Sep 4 17:19:15.920201 kubelet[3432]: E0904 17:19:15.919848 3432 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c\": not found" containerID="1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c" Sep 4 17:19:15.920201 kubelet[3432]: I0904 17:19:15.919907 3432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c"} err="failed to get container status \"1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c\": not found" Sep 4 17:19:15.920201 kubelet[3432]: I0904 17:19:15.919931 3432 scope.go:117] "RemoveContainer" containerID="996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f" Sep 4 17:19:15.920620 containerd[2020]: time="2024-09-04T17:19:15.919448298Z" level=error msg="ContainerStatus for \"1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c097151c0560298f0280dc37b1916efbcb63910f3e4013218e3054685f90e2c\": not found" Sep 4 17:19:15.921837 containerd[2020]: time="2024-09-04T17:19:15.921757950Z" level=error msg="ContainerStatus for \"996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f\": not 
found" Sep 4 17:19:15.922176 kubelet[3432]: E0904 17:19:15.922110 3432 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f\": not found" containerID="996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f" Sep 4 17:19:15.922285 kubelet[3432]: I0904 17:19:15.922268 3432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f"} err="failed to get container status \"996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f\": rpc error: code = NotFound desc = an error occurred when try to find container \"996e70db2e80ff26081a5d4bc8bf24776d365a7fc77ba12b5754d7de6077683f\": not found" Sep 4 17:19:15.922410 kubelet[3432]: I0904 17:19:15.922300 3432 scope.go:117] "RemoveContainer" containerID="55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b" Sep 4 17:19:15.922865 containerd[2020]: time="2024-09-04T17:19:15.922721910Z" level=error msg="ContainerStatus for \"55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b\": not found" Sep 4 17:19:15.923258 kubelet[3432]: E0904 17:19:15.923214 3432 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b\": not found" containerID="55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b" Sep 4 17:19:15.923378 kubelet[3432]: I0904 17:19:15.923325 3432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b"} err="failed to get container status \"55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b\": rpc error: code = NotFound desc = an error occurred when try to find container \"55e0bde373ff3b4038926a67d850c594c753e36b8007989010ae1ec0ba7b343b\": not found" Sep 4 17:19:15.923506 kubelet[3432]: I0904 17:19:15.923354 3432 scope.go:117] "RemoveContainer" containerID="b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232" Sep 4 17:19:15.924314 containerd[2020]: time="2024-09-04T17:19:15.924191562Z" level=error msg="ContainerStatus for \"b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232\": not found" Sep 4 17:19:15.924573 kubelet[3432]: E0904 17:19:15.924505 3432 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232\": not found" containerID="b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232" Sep 4 17:19:15.924680 kubelet[3432]: I0904 17:19:15.924613 3432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232"} err="failed to get container status \"b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"b029e6c5d32b8a0df4edc8f48a10390aca6ebf69bff55917d10545f42e91d232\": not found" Sep 4 17:19:16.324347 kubelet[3432]: I0904 17:19:16.323944 3432 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="20e2d879-d36a-413e-b9db-f2ac5adddfca" path="/var/lib/kubelet/pods/20e2d879-d36a-413e-b9db-f2ac5adddfca/volumes" Sep 4 17:19:16.325092 kubelet[3432]: I0904 17:19:16.325053 3432 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7251b004-1131-48df-ae96-89ad940c0e77" path="/var/lib/kubelet/pods/7251b004-1131-48df-ae96-89ad940c0e77/volumes" Sep 4 17:19:16.848498 sshd[5051]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:16.853738 systemd-logind[2004]: Session 28 logged out. Waiting for processes to exit. Sep 4 17:19:16.855271 systemd[1]: sshd@27-172.31.17.160:22-139.178.89.65:35614.service: Deactivated successfully. Sep 4 17:19:16.861321 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 17:19:16.861821 systemd[1]: session-28.scope: Consumed 2.032s CPU time. Sep 4 17:19:16.865092 systemd-logind[2004]: Removed session 28. Sep 4 17:19:16.891679 systemd[1]: Started sshd@28-172.31.17.160:22-139.178.89.65:35620.service - OpenSSH per-connection server daemon (139.178.89.65:35620). Sep 4 17:19:17.073849 sshd[5211]: Accepted publickey for core from 139.178.89.65 port 35620 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:17.076598 sshd[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:17.084169 systemd-logind[2004]: New session 29 of user core. Sep 4 17:19:17.093403 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 4 17:19:17.350277 ntpd[1997]: Deleting interface #12 lxc_health, fe80::9820:25ff:feb0:7ec7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=72 secs Sep 4 17:19:17.350777 ntpd[1997]: 4 Sep 17:19:17 ntpd[1997]: Deleting interface #12 lxc_health, fe80::9820:25ff:feb0:7ec7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=72 secs Sep 4 17:19:18.610248 kubelet[3432]: E0904 17:19:18.610197 3432 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 17:19:18.943887 sshd[5211]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:18.953828 systemd[1]: sshd@28-172.31.17.160:22-139.178.89.65:35620.service: Deactivated successfully. Sep 4 17:19:18.960055 systemd[1]: session-29.scope: Deactivated successfully. Sep 4 17:19:18.965331 systemd[1]: session-29.scope: Consumed 1.639s CPU time. Sep 4 17:19:18.969587 systemd-logind[2004]: Session 29 logged out. Waiting for processes to exit. 
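With the cilium agent gone, the kubelet keeps reporting the runtime network as not ready because containerd's CRI plugin finds no config left in /etc/cni/net.d (05-cilium.conf was removed earlier). Listing what the node currently has there is only a few lines; a stdlib sketch, where the extensions checked are the ones libcni normally loads.

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        const cniDir = "/etc/cni/net.d"

        entries, err := os.ReadDir(cniDir)
        if err != nil {
            log.Fatal(err)
        }

        var configs []string
        for _, e := range entries {
            // containerd's CRI plugin loads *.conf, *.conflist and *.json files here.
            switch strings.ToLower(filepath.Ext(e.Name())) {
            case ".conf", ".conflist", ".json":
                configs = append(configs, e.Name())
            }
        }

        if len(configs) == 0 {
            fmt.Println("no CNI config present - node will stay NotReady")
            return
        }
        fmt.Println("CNI configs:", configs)
    }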
Sep 4 17:19:18.981884 kubelet[3432]: I0904 17:19:18.980738 3432 topology_manager.go:215] "Topology Admit Handler" podUID="e6f64ba0-7136-4dae-8ee9-516f124250c2" podNamespace="kube-system" podName="cilium-vllpv" Sep 4 17:19:18.981884 kubelet[3432]: E0904 17:19:18.980830 3432 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7251b004-1131-48df-ae96-89ad940c0e77" containerName="mount-bpf-fs" Sep 4 17:19:18.981884 kubelet[3432]: E0904 17:19:18.980853 3432 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7251b004-1131-48df-ae96-89ad940c0e77" containerName="apply-sysctl-overwrites" Sep 4 17:19:18.981884 kubelet[3432]: E0904 17:19:18.980872 3432 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7251b004-1131-48df-ae96-89ad940c0e77" containerName="mount-cgroup" Sep 4 17:19:18.981884 kubelet[3432]: E0904 17:19:18.980890 3432 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20e2d879-d36a-413e-b9db-f2ac5adddfca" containerName="cilium-operator" Sep 4 17:19:18.981884 kubelet[3432]: E0904 17:19:18.980907 3432 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7251b004-1131-48df-ae96-89ad940c0e77" containerName="clean-cilium-state" Sep 4 17:19:18.981884 kubelet[3432]: E0904 17:19:18.980925 3432 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7251b004-1131-48df-ae96-89ad940c0e77" containerName="cilium-agent" Sep 4 17:19:18.981884 kubelet[3432]: I0904 17:19:18.980970 3432 memory_manager.go:346] "RemoveStaleState removing state" podUID="7251b004-1131-48df-ae96-89ad940c0e77" containerName="cilium-agent" Sep 4 17:19:18.981884 kubelet[3432]: I0904 17:19:18.980988 3432 memory_manager.go:346] "RemoveStaleState removing state" podUID="20e2d879-d36a-413e-b9db-f2ac5adddfca" containerName="cilium-operator" Sep 4 17:19:18.999680 systemd-logind[2004]: Removed session 29. Sep 4 17:19:19.008656 systemd[1]: Started sshd@29-172.31.17.160:22-139.178.89.65:50176.service - OpenSSH per-connection server daemon (139.178.89.65:50176). Sep 4 17:19:19.033524 systemd[1]: Created slice kubepods-burstable-pode6f64ba0_7136_4dae_8ee9_516f124250c2.slice - libcontainer container kubepods-burstable-pode6f64ba0_7136_4dae_8ee9_516f124250c2.slice. 
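A replacement cilium-vllpv pod is now admitted; the RemoveStaleState lines just purge CPU- and memory-manager bookkeeping left over from the deleted pods, and systemd creates the new pod's burstable slice. From the API side the pod can be inspected with client-go; a sketch assuming in-cluster credentials (or a kubeconfig via clientcmd) and the pod name from the log.

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // In-cluster credentials; use clientcmd with a kubeconfig when running off-node.
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        pod, err := clientset.CoreV1().Pods("kube-system").Get(
            context.TODO(), "cilium-vllpv", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }

        fmt.Printf("pod %s on node %s, phase %s\n",
            pod.Name, pod.Spec.NodeName, pod.Status.Phase)
        for _, cs := range pod.Status.InitContainerStatuses {
            fmt.Printf("  init container %s ready=%v\n", cs.Name, cs.Ready)
        }
    }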
Sep 4 17:19:19.112738 kubelet[3432]: I0904 17:19:19.112679 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6f64ba0-7136-4dae-8ee9-516f124250c2-cilium-cgroup\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.112874 kubelet[3432]: I0904 17:19:19.112758 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6f64ba0-7136-4dae-8ee9-516f124250c2-host-proc-sys-kernel\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.112874 kubelet[3432]: I0904 17:19:19.112811 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6f64ba0-7136-4dae-8ee9-516f124250c2-etc-cni-netd\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.112874 kubelet[3432]: I0904 17:19:19.112858 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6f64ba0-7136-4dae-8ee9-516f124250c2-cni-path\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.113071 kubelet[3432]: I0904 17:19:19.112903 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6f64ba0-7136-4dae-8ee9-516f124250c2-clustermesh-secrets\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.113071 kubelet[3432]: I0904 17:19:19.112946 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6f64ba0-7136-4dae-8ee9-516f124250c2-host-proc-sys-net\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.113071 kubelet[3432]: I0904 17:19:19.112990 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6f64ba0-7136-4dae-8ee9-516f124250c2-bpf-maps\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.113071 kubelet[3432]: I0904 17:19:19.113032 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e6f64ba0-7136-4dae-8ee9-516f124250c2-cilium-ipsec-secrets\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.113311 kubelet[3432]: I0904 17:19:19.113074 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6f64ba0-7136-4dae-8ee9-516f124250c2-cilium-config-path\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.113311 kubelet[3432]: I0904 17:19:19.113115 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/e6f64ba0-7136-4dae-8ee9-516f124250c2-hubble-tls\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.114328 kubelet[3432]: I0904 17:19:19.114283 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6f64ba0-7136-4dae-8ee9-516f124250c2-cilium-run\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.114461 kubelet[3432]: I0904 17:19:19.114356 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6f64ba0-7136-4dae-8ee9-516f124250c2-lib-modules\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.114461 kubelet[3432]: I0904 17:19:19.114403 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6f64ba0-7136-4dae-8ee9-516f124250c2-xtables-lock\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.114461 kubelet[3432]: I0904 17:19:19.114457 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6f64ba0-7136-4dae-8ee9-516f124250c2-hostproc\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.114664 kubelet[3432]: I0904 17:19:19.114503 3432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dcvk\" (UniqueName: \"kubernetes.io/projected/e6f64ba0-7136-4dae-8ee9-516f124250c2-kube-api-access-7dcvk\") pod \"cilium-vllpv\" (UID: \"e6f64ba0-7136-4dae-8ee9-516f124250c2\") " pod="kube-system/cilium-vllpv" Sep 4 17:19:19.253313 sshd[5223]: Accepted publickey for core from 139.178.89.65 port 50176 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:19.258626 sshd[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:19.298431 systemd-logind[2004]: New session 30 of user core. Sep 4 17:19:19.307423 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 4 17:19:19.350088 containerd[2020]: time="2024-09-04T17:19:19.349991623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vllpv,Uid:e6f64ba0-7136-4dae-8ee9-516f124250c2,Namespace:kube-system,Attempt:0,}" Sep 4 17:19:19.387498 containerd[2020]: time="2024-09-04T17:19:19.387235388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:19.388481 containerd[2020]: time="2024-09-04T17:19:19.387536336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:19.388481 containerd[2020]: time="2024-09-04T17:19:19.388352396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:19.388809 containerd[2020]: time="2024-09-04T17:19:19.388711352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:19.423444 systemd[1]: Started cri-containerd-8275f4e7f67ce2237118639c57fba89220c03c8a7de469d7609e77da3b328d5b.scope - libcontainer container 8275f4e7f67ce2237118639c57fba89220c03c8a7de469d7609e77da3b328d5b. Sep 4 17:19:19.434762 sshd[5223]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:19.440978 systemd[1]: session-30.scope: Deactivated successfully. Sep 4 17:19:19.442365 systemd[1]: sshd@29-172.31.17.160:22-139.178.89.65:50176.service: Deactivated successfully. Sep 4 17:19:19.449309 systemd-logind[2004]: Session 30 logged out. Waiting for processes to exit. Sep 4 17:19:19.455061 systemd-logind[2004]: Removed session 30. Sep 4 17:19:19.480709 systemd[1]: Started sshd@30-172.31.17.160:22-139.178.89.65:50184.service - OpenSSH per-connection server daemon (139.178.89.65:50184). Sep 4 17:19:19.487976 containerd[2020]: time="2024-09-04T17:19:19.487886168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vllpv,Uid:e6f64ba0-7136-4dae-8ee9-516f124250c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8275f4e7f67ce2237118639c57fba89220c03c8a7de469d7609e77da3b328d5b\"" Sep 4 17:19:19.494451 containerd[2020]: time="2024-09-04T17:19:19.493478768Z" level=info msg="CreateContainer within sandbox \"8275f4e7f67ce2237118639c57fba89220c03c8a7de469d7609e77da3b328d5b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:19:19.512649 containerd[2020]: time="2024-09-04T17:19:19.512488244Z" level=info msg="CreateContainer within sandbox \"8275f4e7f67ce2237118639c57fba89220c03c8a7de469d7609e77da3b328d5b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5d87605e930c8547c30c065886df456546bd6f27644e9c5deff75dbebe18c4bb\"" Sep 4 17:19:19.515316 containerd[2020]: time="2024-09-04T17:19:19.514677476Z" level=info msg="StartContainer for \"5d87605e930c8547c30c065886df456546bd6f27644e9c5deff75dbebe18c4bb\"" Sep 4 17:19:19.556829 systemd[1]: Started cri-containerd-5d87605e930c8547c30c065886df456546bd6f27644e9c5deff75dbebe18c4bb.scope - libcontainer container 5d87605e930c8547c30c065886df456546bd6f27644e9c5deff75dbebe18c4bb. Sep 4 17:19:19.612226 containerd[2020]: time="2024-09-04T17:19:19.612165165Z" level=info msg="StartContainer for \"5d87605e930c8547c30c065886df456546bd6f27644e9c5deff75dbebe18c4bb\" returns successfully" Sep 4 17:19:19.632103 systemd[1]: cri-containerd-5d87605e930c8547c30c065886df456546bd6f27644e9c5deff75dbebe18c4bb.scope: Deactivated successfully. Sep 4 17:19:19.677027 sshd[5273]: Accepted publickey for core from 139.178.89.65 port 50184 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:19.683634 sshd[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:19.688210 containerd[2020]: time="2024-09-04T17:19:19.688024485Z" level=info msg="shim disconnected" id=5d87605e930c8547c30c065886df456546bd6f27644e9c5deff75dbebe18c4bb namespace=k8s.io Sep 4 17:19:19.688210 containerd[2020]: time="2024-09-04T17:19:19.688105041Z" level=warning msg="cleaning up after shim disconnected" id=5d87605e930c8547c30c065886df456546bd6f27644e9c5deff75dbebe18c4bb namespace=k8s.io Sep 4 17:19:19.688210 containerd[2020]: time="2024-09-04T17:19:19.688160133Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:19:19.694244 systemd-logind[2004]: New session 31 of user core. Sep 4 17:19:19.701464 systemd[1]: Started session-31.scope - Session 31 of User core. 
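The sandbox for cilium-vllpv comes up, and the first init container (mount-cgroup) is created, started, runs to completion and has its shim cleaned up - the normal lifecycle of a short-lived init step, interleaved here with more SSH session handling. The create/start/wait sequence maps directly onto containerd's Go client; a condensed sketch in which the busybox image, the "example" namespace and the process arguments are placeholders, not what the cilium chart actually runs.

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "example")

        // Placeholder image; a real init container would use the cilium image.
        image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        container, err := client.NewContainer(ctx, "init-step",
            containerd.WithNewSnapshot("init-step-snapshot", image),
            containerd.WithNewSpec(oci.WithImageConfig(image), oci.WithProcessArgs("true")))
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        // Equivalent of the shim being created for the container.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        exitCh, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Start(ctx); err != nil { // "StartContainer ... returns successfully"
            log.Fatal(err)
        }

        status := <-exitCh // the init step runs to completion, like mount-cgroup above
        code, _, _ := status.Result()
        log.Printf("init step exited with code %d", code)
    }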
Sep 4 17:19:19.869809 containerd[2020]: time="2024-09-04T17:19:19.869665654Z" level=info msg="CreateContainer within sandbox \"8275f4e7f67ce2237118639c57fba89220c03c8a7de469d7609e77da3b328d5b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:19:19.894363 containerd[2020]: time="2024-09-04T17:19:19.894303058Z" level=info msg="CreateContainer within sandbox \"8275f4e7f67ce2237118639c57fba89220c03c8a7de469d7609e77da3b328d5b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b80e2afa9e122fd178323017a17a43b1dd4fba862513bc582f9f24e035e41e20\"" Sep 4 17:19:19.896655 containerd[2020]: time="2024-09-04T17:19:19.896372518Z" level=info msg="StartContainer for \"b80e2afa9e122fd178323017a17a43b1dd4fba862513bc582f9f24e035e41e20\"" Sep 4 17:19:19.976417 systemd[1]: Started cri-containerd-b80e2afa9e122fd178323017a17a43b1dd4fba862513bc582f9f24e035e41e20.scope - libcontainer container b80e2afa9e122fd178323017a17a43b1dd4fba862513bc582f9f24e035e41e20. Sep 4 17:19:20.031699 containerd[2020]: time="2024-09-04T17:19:20.031610731Z" level=info msg="StartContainer for \"b80e2afa9e122fd178323017a17a43b1dd4fba862513bc582f9f24e035e41e20\" returns successfully" Sep 4 17:19:20.044949 systemd[1]: cri-containerd-b80e2afa9e122fd178323017a17a43b1dd4fba862513bc582f9f24e035e41e20.scope: Deactivated successfully. Sep 4 17:19:20.100277 containerd[2020]: time="2024-09-04T17:19:20.100045087Z" level=info msg="shim disconnected" id=b80e2afa9e122fd178323017a17a43b1dd4fba862513bc582f9f24e035e41e20 namespace=k8s.io Sep 4 17:19:20.100277 containerd[2020]: time="2024-09-04T17:19:20.100200679Z" level=warning msg="cleaning up after shim disconnected" id=b80e2afa9e122fd178323017a17a43b1dd4fba862513bc582f9f24e035e41e20 namespace=k8s.io Sep 4 17:19:20.100277 containerd[2020]: time="2024-09-04T17:19:20.100221631Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:19:20.287904 kubelet[3432]: I0904 17:19:20.287755 3432 setters.go:552] "Node became not ready" node="ip-172-31-17-160" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-04T17:19:20Z","lastTransitionTime":"2024-09-04T17:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 17:19:20.872403 containerd[2020]: time="2024-09-04T17:19:20.871907351Z" level=info msg="CreateContainer within sandbox \"8275f4e7f67ce2237118639c57fba89220c03c8a7de469d7609e77da3b328d5b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:19:20.897337 containerd[2020]: time="2024-09-04T17:19:20.895977731Z" level=info msg="CreateContainer within sandbox \"8275f4e7f67ce2237118639c57fba89220c03c8a7de469d7609e77da3b328d5b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5453104ac605873720edf70a47485de154e784502b243cb25b28e37157ac1f00\"" Sep 4 17:19:20.897972 containerd[2020]: time="2024-09-04T17:19:20.897912059Z" level=info msg="StartContainer for \"5453104ac605873720edf70a47485de154e784502b243cb25b28e37157ac1f00\"" Sep 4 17:19:20.963443 systemd[1]: Started cri-containerd-5453104ac605873720edf70a47485de154e784502b243cb25b28e37157ac1f00.scope - libcontainer container 5453104ac605873720edf70a47485de154e784502b243cb25b28e37157ac1f00. 
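While the init chain (apply-sysctl-overwrites here, mount-bpf-fs next) is still executing, the kubelet flips the node's Ready condition to False with the same "cni plugin not initialized" reason. Reading that condition back from the API server takes only a few lines of client-go; a sketch using the node name from the log and the same credential assumptions as the pod-lookup example above.

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // or clientcmd with a kubeconfig
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        node, err := clientset.CoreV1().Nodes().Get(
            context.TODO(), "ip-172-31-17-160", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }

        // The kubelet's "Node became not ready" setter updates exactly this condition.
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s message=%q\n",
                    cond.Status, cond.Reason, cond.Message)
            }
        }
    }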
Sep 4 17:19:21.015063 containerd[2020]: time="2024-09-04T17:19:21.014635928Z" level=info msg="StartContainer for \"5453104ac605873720edf70a47485de154e784502b243cb25b28e37157ac1f00\" returns successfully"
Sep 4 17:19:21.021093 systemd[1]: cri-containerd-5453104ac605873720edf70a47485de154e784502b243cb25b28e37157ac1f00.scope: Deactivated successfully.
Sep 4 17:19:21.062832 containerd[2020]: time="2024-09-04T17:19:21.062669000Z" level=info msg="shim disconnected" id=5453104ac605873720edf70a47485de154e784502b243cb25b28e37157ac1f00 namespace=k8s.io
Sep 4 17:19:21.063421 containerd[2020]: time="2024-09-04T17:19:21.062780696Z" level=warning msg="cleaning up after shim disconnected" id=5453104ac605873720edf70a47485de154e784502b243cb25b28e37157ac1f00 namespace=k8s.io
Sep 4 17:19:21.063421 containerd[2020]: time="2024-09-04T17:19:21.063166916Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:19:21.230458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5453104ac605873720edf70a47485de154e784502b243cb25b28e37157ac1f00-rootfs.mount: Deactivated successfully.
Sep 4 17:19:21.881174 containerd[2020]: time="2024-09-04T17:19:21.880936632Z" level=info msg="CreateContainer within sandbox \"8275f4e7f67ce2237118639c57fba89220c03c8a7de469d7609e77da3b328d5b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 17:19:21.902340 containerd[2020]: time="2024-09-04T17:19:21.901776936Z" level=info msg="CreateContainer within sandbox \"8275f4e7f67ce2237118639c57fba89220c03c8a7de469d7609e77da3b328d5b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eafe4cba02bbe6e1d7148c806ad4b87f8205b344e6672deb3a4bdf5bc1e02d7f\""
Sep 4 17:19:21.905100 containerd[2020]: time="2024-09-04T17:19:21.903646692Z" level=info msg="StartContainer for \"eafe4cba02bbe6e1d7148c806ad4b87f8205b344e6672deb3a4bdf5bc1e02d7f\""
Sep 4 17:19:21.964715 systemd[1]: Started cri-containerd-eafe4cba02bbe6e1d7148c806ad4b87f8205b344e6672deb3a4bdf5bc1e02d7f.scope - libcontainer container eafe4cba02bbe6e1d7148c806ad4b87f8205b344e6672deb3a4bdf5bc1e02d7f.
Sep 4 17:19:22.012505 systemd[1]: cri-containerd-eafe4cba02bbe6e1d7148c806ad4b87f8205b344e6672deb3a4bdf5bc1e02d7f.scope: Deactivated successfully.
Sep 4 17:19:22.014086 containerd[2020]: time="2024-09-04T17:19:22.013391661Z" level=info msg="StartContainer for \"eafe4cba02bbe6e1d7148c806ad4b87f8205b344e6672deb3a4bdf5bc1e02d7f\" returns successfully"
Sep 4 17:19:22.065592 containerd[2020]: time="2024-09-04T17:19:22.065474373Z" level=info msg="shim disconnected" id=eafe4cba02bbe6e1d7148c806ad4b87f8205b344e6672deb3a4bdf5bc1e02d7f namespace=k8s.io
Sep 4 17:19:22.065592 containerd[2020]: time="2024-09-04T17:19:22.065555997Z" level=warning msg="cleaning up after shim disconnected" id=eafe4cba02bbe6e1d7148c806ad4b87f8205b344e6672deb3a4bdf5bc1e02d7f namespace=k8s.io
Sep 4 17:19:22.065592 containerd[2020]: time="2024-09-04T17:19:22.065577813Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:19:22.230186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eafe4cba02bbe6e1d7148c806ad4b87f8205b344e6672deb3a4bdf5bc1e02d7f-rootfs.mount: Deactivated successfully.
Sep 4 17:19:22.887152 containerd[2020]: time="2024-09-04T17:19:22.887050849Z" level=info msg="CreateContainer within sandbox \"8275f4e7f67ce2237118639c57fba89220c03c8a7de469d7609e77da3b328d5b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 17:19:22.922206 containerd[2020]: time="2024-09-04T17:19:22.922105621Z" level=info msg="CreateContainer within sandbox \"8275f4e7f67ce2237118639c57fba89220c03c8a7de469d7609e77da3b328d5b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ecd0348f94bd500e566a76c76625d4f6c40be9c83778806fbdb4add2e8c9c2f5\""
Sep 4 17:19:22.923869 containerd[2020]: time="2024-09-04T17:19:22.923763349Z" level=info msg="StartContainer for \"ecd0348f94bd500e566a76c76625d4f6c40be9c83778806fbdb4add2e8c9c2f5\""
Sep 4 17:19:22.986466 systemd[1]: Started cri-containerd-ecd0348f94bd500e566a76c76625d4f6c40be9c83778806fbdb4add2e8c9c2f5.scope - libcontainer container ecd0348f94bd500e566a76c76625d4f6c40be9c83778806fbdb4add2e8c9c2f5.
Sep 4 17:19:23.038335 containerd[2020]: time="2024-09-04T17:19:23.038063590Z" level=info msg="StartContainer for \"ecd0348f94bd500e566a76c76625d4f6c40be9c83778806fbdb4add2e8c9c2f5\" returns successfully"
Sep 4 17:19:23.793166 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 4 17:19:28.022382 systemd-networkd[1943]: lxc_health: Link UP
Sep 4 17:19:28.033503 systemd-networkd[1943]: lxc_health: Gained carrier
Sep 4 17:19:28.037884 (udev-worker)[6077]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:19:28.303692 containerd[2020]: time="2024-09-04T17:19:28.303531448Z" level=info msg="StopPodSandbox for \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\""
Sep 4 17:19:28.304894 containerd[2020]: time="2024-09-04T17:19:28.304277740Z" level=info msg="TearDown network for sandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" successfully"
Sep 4 17:19:28.304894 containerd[2020]: time="2024-09-04T17:19:28.304318828Z" level=info msg="StopPodSandbox for \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" returns successfully"
Sep 4 17:19:28.308187 containerd[2020]: time="2024-09-04T17:19:28.305347720Z" level=info msg="RemovePodSandbox for \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\""
Sep 4 17:19:28.308187 containerd[2020]: time="2024-09-04T17:19:28.305409496Z" level=info msg="Forcibly stopping sandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\""
Sep 4 17:19:28.308187 containerd[2020]: time="2024-09-04T17:19:28.305516236Z" level=info msg="TearDown network for sandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" successfully"
Sep 4 17:19:28.311684 containerd[2020]: time="2024-09-04T17:19:28.311621392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 17:19:28.311997 containerd[2020]: time="2024-09-04T17:19:28.311960884Z" level=info msg="RemovePodSandbox \"2d2f75d65da3e7fa97cb9de8ed58c73ba6c25bf79a006d87111d3c0133e0b0d5\" returns successfully"
Sep 4 17:19:28.313158 containerd[2020]: time="2024-09-04T17:19:28.313091200Z" level=info msg="StopPodSandbox for \"0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802\""
Sep 4 17:19:28.313306 containerd[2020]: time="2024-09-04T17:19:28.313256236Z" level=info msg="TearDown network for sandbox \"0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802\" successfully"
Sep 4 17:19:28.313306 containerd[2020]: time="2024-09-04T17:19:28.313281220Z" level=info msg="StopPodSandbox for \"0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802\" returns successfully"
Sep 4 17:19:28.315269 containerd[2020]: time="2024-09-04T17:19:28.314533528Z" level=info msg="RemovePodSandbox for \"0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802\""
Sep 4 17:19:28.315269 containerd[2020]: time="2024-09-04T17:19:28.315038248Z" level=info msg="Forcibly stopping sandbox \"0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802\""
Sep 4 17:19:28.315269 containerd[2020]: time="2024-09-04T17:19:28.315184336Z" level=info msg="TearDown network for sandbox \"0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802\" successfully"
Sep 4 17:19:28.324748 containerd[2020]: time="2024-09-04T17:19:28.323804020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 17:19:28.324748 containerd[2020]: time="2024-09-04T17:19:28.323911384Z" level=info msg="RemovePodSandbox \"0638aded5a5a78e753b83abfec05225e53333b762890863af5c9fffa0a80c802\" returns successfully"
Sep 4 17:19:28.692338 systemd[1]: run-containerd-runc-k8s.io-ecd0348f94bd500e566a76c76625d4f6c40be9c83778806fbdb4add2e8c9c2f5-runc.mqFtAl.mount: Deactivated successfully.
Sep 4 17:19:29.254669 systemd[1]: Started sshd@31-172.31.17.160:22-64.62.197.99:17945.service - OpenSSH per-connection server daemon (64.62.197.99:17945).
Sep 4 17:19:29.391301 kubelet[3432]: I0904 17:19:29.391237 3432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vllpv" podStartSLOduration=11.391180649 podCreationTimestamp="2024-09-04 17:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:19:23.928427006 +0000 UTC m=+115.905368077" watchObservedRunningTime="2024-09-04 17:19:29.391180649 +0000 UTC m=+121.368121576"
Sep 4 17:19:29.421638 sshd[6124]: Invalid user from 64.62.197.99 port 17945
Sep 4 17:19:29.933368 systemd-networkd[1943]: lxc_health: Gained IPv6LL
Sep 4 17:19:32.350362 ntpd[1997]: Listen normally on 15 lxc_health [fe80::e839:7aff:fed3:1f55%14]:123
Sep 4 17:19:32.351187 ntpd[1997]: 4 Sep 17:19:32 ntpd[1997]: Listen normally on 15 lxc_health [fe80::e839:7aff:fed3:1f55%14]:123
Sep 4 17:19:33.247333 sshd[6124]: Connection closed by invalid user 64.62.197.99 port 17945 [preauth]
Sep 4 17:19:33.248560 systemd[1]: sshd@31-172.31.17.160:22-64.62.197.99:17945.service: Deactivated successfully.
Sep 4 17:19:33.431631 systemd[1]: run-containerd-runc-k8s.io-ecd0348f94bd500e566a76c76625d4f6c40be9c83778806fbdb4add2e8c9c2f5-runc.NoadzS.mount: Deactivated successfully.
Sep 4 17:19:35.857519 sshd[5273]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:35.863192 systemd[1]: sshd@30-172.31.17.160:22-139.178.89.65:50184.service: Deactivated successfully.
Sep 4 17:19:35.868816 systemd[1]: session-31.scope: Deactivated successfully.
Sep 4 17:19:35.873530 systemd-logind[2004]: Session 31 logged out. Waiting for processes to exit.
Sep 4 17:19:35.876502 systemd-logind[2004]: Removed session 31.
Sep 4 17:19:42.988290 update_engine[2005]: I0904 17:19:42.988222 2005 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 4 17:19:42.988290 update_engine[2005]: I0904 17:19:42.988286 2005 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 4 17:19:42.989002 update_engine[2005]: I0904 17:19:42.988571 2005 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 4 17:19:42.989867 update_engine[2005]: I0904 17:19:42.989411 2005 omaha_request_params.cc:62] Current group set to beta
Sep 4 17:19:42.989867 update_engine[2005]: I0904 17:19:42.989642 2005 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 4 17:19:42.989867 update_engine[2005]: I0904 17:19:42.989656 2005 update_attempter.cc:643] Scheduling an action processor start.
Sep 4 17:19:42.989867 update_engine[2005]: I0904 17:19:42.989696 2005 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 4 17:19:42.989867 update_engine[2005]: I0904 17:19:42.989753 2005 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 4 17:19:42.990216 update_engine[2005]: I0904 17:19:42.990187 2005 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 4 17:19:42.990216 update_engine[2005]: I0904 17:19:42.990206 2005 omaha_request_action.cc:272] Request:
Sep 4 17:19:42.990216 update_engine[2005]:
Sep 4 17:19:42.990216 update_engine[2005]:
Sep 4 17:19:42.990216 update_engine[2005]:
Sep 4 17:19:42.990216 update_engine[2005]:
Sep 4 17:19:42.990216 update_engine[2005]:
Sep 4 17:19:42.990216 update_engine[2005]:
Sep 4 17:19:42.990216 update_engine[2005]:
Sep 4 17:19:42.990216 update_engine[2005]:
Sep 4 17:19:42.990216 update_engine[2005]: I0904 17:19:42.990216 2005 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 4 17:19:42.990776 locksmithd[2066]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 4 17:19:42.992481 update_engine[2005]: I0904 17:19:42.992424 2005 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 4 17:19:42.992931 update_engine[2005]: I0904 17:19:42.992887 2005 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 4 17:19:43.015612 update_engine[2005]: E0904 17:19:43.015558 2005 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 4 17:19:43.015740 update_engine[2005]: I0904 17:19:43.015659 2005 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 4 17:19:49.690607 systemd[1]: cri-containerd-d08573d28ba007652ee08b6df0cfbc87330ad5aafff2011a78760de23932ab8a.scope: Deactivated successfully.
Sep 4 17:19:49.691215 systemd[1]: cri-containerd-d08573d28ba007652ee08b6df0cfbc87330ad5aafff2011a78760de23932ab8a.scope: Consumed 5.463s CPU time, 21.7M memory peak, 0B memory swap peak.
Sep 4 17:19:49.734687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d08573d28ba007652ee08b6df0cfbc87330ad5aafff2011a78760de23932ab8a-rootfs.mount: Deactivated successfully.
Sep 4 17:19:49.737992 containerd[2020]: time="2024-09-04T17:19:49.737538026Z" level=info msg="shim disconnected" id=d08573d28ba007652ee08b6df0cfbc87330ad5aafff2011a78760de23932ab8a namespace=k8s.io
Sep 4 17:19:49.737992 containerd[2020]: time="2024-09-04T17:19:49.737649314Z" level=warning msg="cleaning up after shim disconnected" id=d08573d28ba007652ee08b6df0cfbc87330ad5aafff2011a78760de23932ab8a namespace=k8s.io
Sep 4 17:19:49.737992 containerd[2020]: time="2024-09-04T17:19:49.737699966Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:19:49.973094 kubelet[3432]: I0904 17:19:49.972948 3432 scope.go:117] "RemoveContainer" containerID="d08573d28ba007652ee08b6df0cfbc87330ad5aafff2011a78760de23932ab8a"
Sep 4 17:19:49.977460 containerd[2020]: time="2024-09-04T17:19:49.977349112Z" level=info msg="CreateContainer within sandbox \"59a44922ee1dd3eb3b571a0276e31178ab136b9e04455dbd4a295985839a3bf4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 17:19:49.995739 containerd[2020]: time="2024-09-04T17:19:49.995662336Z" level=info msg="CreateContainer within sandbox \"59a44922ee1dd3eb3b571a0276e31178ab136b9e04455dbd4a295985839a3bf4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"004401b6bb4b30d98e74965df2082f0d370b8c6ce101b208c76dc20369635b07\""
Sep 4 17:19:49.997154 containerd[2020]: time="2024-09-04T17:19:49.996522508Z" level=info msg="StartContainer for \"004401b6bb4b30d98e74965df2082f0d370b8c6ce101b208c76dc20369635b07\""
Sep 4 17:19:50.052527 systemd[1]: Started cri-containerd-004401b6bb4b30d98e74965df2082f0d370b8c6ce101b208c76dc20369635b07.scope - libcontainer container 004401b6bb4b30d98e74965df2082f0d370b8c6ce101b208c76dc20369635b07.
Sep 4 17:19:50.122015 containerd[2020]: time="2024-09-04T17:19:50.121699872Z" level=info msg="StartContainer for \"004401b6bb4b30d98e74965df2082f0d370b8c6ce101b208c76dc20369635b07\" returns successfully"
Sep 4 17:19:50.507883 kubelet[3432]: E0904 17:19:50.507430 3432 controller.go:193] "Failed to update lease" err="Put \"https://172.31.17.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-160?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 4 17:19:52.983404 update_engine[2005]: I0904 17:19:52.983340 2005 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 4 17:19:52.984012 update_engine[2005]: I0904 17:19:52.983685 2005 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 4 17:19:52.984012 update_engine[2005]: I0904 17:19:52.983984 2005 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 4 17:19:52.985255 update_engine[2005]: E0904 17:19:52.985210 2005 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 4 17:19:52.985379 update_engine[2005]: I0904 17:19:52.985296 2005 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 4 17:19:54.301897 systemd[1]: cri-containerd-f5fddc45a8411d97ffdde6e33ddcf07442179b778076ee882e02a47c96dd694e.scope: Deactivated successfully.
Sep 4 17:19:54.303335 systemd[1]: cri-containerd-f5fddc45a8411d97ffdde6e33ddcf07442179b778076ee882e02a47c96dd694e.scope: Consumed 3.881s CPU time, 16.1M memory peak, 0B memory swap peak.
Sep 4 17:19:54.343073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5fddc45a8411d97ffdde6e33ddcf07442179b778076ee882e02a47c96dd694e-rootfs.mount: Deactivated successfully.
Sep 4 17:19:54.351182 containerd[2020]: time="2024-09-04T17:19:54.351016709Z" level=info msg="shim disconnected" id=f5fddc45a8411d97ffdde6e33ddcf07442179b778076ee882e02a47c96dd694e namespace=k8s.io
Sep 4 17:19:54.351182 containerd[2020]: time="2024-09-04T17:19:54.351178769Z" level=warning msg="cleaning up after shim disconnected" id=f5fddc45a8411d97ffdde6e33ddcf07442179b778076ee882e02a47c96dd694e namespace=k8s.io
Sep 4 17:19:54.351986 containerd[2020]: time="2024-09-04T17:19:54.351204353Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:19:54.371520 containerd[2020]: time="2024-09-04T17:19:54.371444897Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:19:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 17:19:54.995109 kubelet[3432]: I0904 17:19:54.995067 3432 scope.go:117] "RemoveContainer" containerID="f5fddc45a8411d97ffdde6e33ddcf07442179b778076ee882e02a47c96dd694e"
Sep 4 17:19:54.998830 containerd[2020]: time="2024-09-04T17:19:54.998768385Z" level=info msg="CreateContainer within sandbox \"3b37f15693dcb6fd5b19d04ec3aaa497d33dab856c3a1e26ffd61bc775b3d48a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 17:19:55.021829 containerd[2020]: time="2024-09-04T17:19:55.021750749Z" level=info msg="CreateContainer within sandbox \"3b37f15693dcb6fd5b19d04ec3aaa497d33dab856c3a1e26ffd61bc775b3d48a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"00dfb8e1989a4b7596820f932c8e855c9335576c81081f7e83690b43fa0b60bc\""
Sep 4 17:19:55.023483 containerd[2020]: time="2024-09-04T17:19:55.022501637Z" level=info msg="StartContainer for \"00dfb8e1989a4b7596820f932c8e855c9335576c81081f7e83690b43fa0b60bc\""
Sep 4 17:19:55.081471 systemd[1]: Started cri-containerd-00dfb8e1989a4b7596820f932c8e855c9335576c81081f7e83690b43fa0b60bc.scope - libcontainer container 00dfb8e1989a4b7596820f932c8e855c9335576c81081f7e83690b43fa0b60bc.
Sep 4 17:19:55.143778 containerd[2020]: time="2024-09-04T17:19:55.143545685Z" level=info msg="StartContainer for \"00dfb8e1989a4b7596820f932c8e855c9335576c81081f7e83690b43fa0b60bc\" returns successfully"