Sep 4 17:10:51.208007 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 4 17:10:51.208056 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Wed Sep 4 15:52:28 -00 2024
Sep 4 17:10:51.208085 kernel: KASLR disabled due to lack of seed
Sep 4 17:10:51.208140 kernel: efi: EFI v2.7 by EDK II
Sep 4 17:10:51.208161 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Sep 4 17:10:51.208178 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:10:51.208196 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 4 17:10:51.208212 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 4 17:10:51.208229 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 4 17:10:51.208245 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 4 17:10:51.208269 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 4 17:10:51.208285 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 4 17:10:51.208301 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 4 17:10:51.208317 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 4 17:10:51.208336 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 4 17:10:51.208356 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 4 17:10:51.208374 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 4 17:10:51.208390 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 4 17:10:51.208407 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 4 17:10:51.208424 kernel: printk: bootconsole [uart0] enabled
Sep 4 17:10:51.208442 kernel: NUMA: Failed to initialise from firmware
Sep 4 17:10:51.208460 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 17:10:51.208483 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 4 17:10:51.208500 kernel: Zone ranges:
Sep 4 17:10:51.208517 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 4 17:10:51.208534 kernel: DMA32 empty
Sep 4 17:10:51.208556 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 4 17:10:51.208576 kernel: Movable zone start for each node
Sep 4 17:10:51.208594 kernel: Early memory node ranges
Sep 4 17:10:51.208611 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 4 17:10:51.208628 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 4 17:10:51.208645 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 4 17:10:51.208662 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 4 17:10:51.208679 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 4 17:10:51.208696 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 4 17:10:51.208713 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 4 17:10:51.208730 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 4 17:10:51.208747 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 17:10:51.208767 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 4 17:10:51.208785 kernel: psci: probing for conduit method from ACPI.
Sep 4 17:10:51.208809 kernel: psci: PSCIv1.0 detected in firmware.
Sep 4 17:10:51.208826 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 17:10:51.208844 kernel: psci: Trusted OS migration not required
Sep 4 17:10:51.208866 kernel: psci: SMC Calling Convention v1.1
Sep 4 17:10:51.208883 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 17:10:51.208901 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 17:10:51.208919 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 17:10:51.208936 kernel: Detected PIPT I-cache on CPU0
Sep 4 17:10:51.208954 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 17:10:51.208971 kernel: CPU features: detected: Spectre-v2
Sep 4 17:10:51.208989 kernel: CPU features: detected: Spectre-v3a
Sep 4 17:10:51.209006 kernel: CPU features: detected: Spectre-BHB
Sep 4 17:10:51.209024 kernel: CPU features: detected: ARM erratum 1742098
Sep 4 17:10:51.209041 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 4 17:10:51.209063 kernel: alternatives: applying boot alternatives
Sep 4 17:10:51.209083 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep 4 17:10:51.210586 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:10:51.210628 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:10:51.210647 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:10:51.210666 kernel: Fallback order for Node 0: 0
Sep 4 17:10:51.210684 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 4 17:10:51.210701 kernel: Policy zone: Normal
Sep 4 17:10:51.210719 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:10:51.210736 kernel: software IO TLB: area num 2.
Sep 4 17:10:51.210754 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 4 17:10:51.210781 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8076K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved)
Sep 4 17:10:51.210800 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:10:51.210818 kernel: trace event string verifier disabled
Sep 4 17:10:51.210835 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:10:51.210854 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:10:51.210872 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:10:51.210890 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:10:51.210908 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:10:51.210926 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:10:51.210944 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:10:51.210962 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 17:10:51.210984 kernel: GICv3: 96 SPIs implemented
Sep 4 17:10:51.211002 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 17:10:51.211019 kernel: Root IRQ handler: gic_handle_irq
Sep 4 17:10:51.211037 kernel: GICv3: GICv3 features: 16 PPIs
Sep 4 17:10:51.211054 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 4 17:10:51.211072 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 4 17:10:51.211090 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 17:10:51.211133 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 4 17:10:51.211153 kernel: GICv3: using LPI property table @0x00000004000e0000
Sep 4 17:10:51.211171 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 4 17:10:51.211189 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Sep 4 17:10:51.211207 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:10:51.211231 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 4 17:10:51.211249 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 4 17:10:51.211267 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 4 17:10:51.211285 kernel: Console: colour dummy device 80x25
Sep 4 17:10:51.211303 kernel: printk: console [tty1] enabled
Sep 4 17:10:51.211321 kernel: ACPI: Core revision 20230628
Sep 4 17:10:51.211340 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 4 17:10:51.211358 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:10:51.211376 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 17:10:51.211394 kernel: SELinux: Initializing.
Sep 4 17:10:51.211416 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:10:51.211435 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:10:51.211453 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:10:51.211471 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:10:51.211489 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:10:51.211507 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:10:51.211525 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 4 17:10:51.211543 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 4 17:10:51.211561 kernel: Remapping and enabling EFI services.
Sep 4 17:10:51.211583 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:10:51.211601 kernel: Detected PIPT I-cache on CPU1
Sep 4 17:10:51.211619 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 4 17:10:51.211637 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Sep 4 17:10:51.211655 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 4 17:10:51.211673 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:10:51.211691 kernel: SMP: Total of 2 processors activated.
Sep 4 17:10:51.211709 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 17:10:51.211726 kernel: CPU features: detected: 32-bit EL1 Support
Sep 4 17:10:51.211748 kernel: CPU features: detected: CRC32 instructions
Sep 4 17:10:51.211767 kernel: CPU: All CPU(s) started at EL1
Sep 4 17:10:51.211795 kernel: alternatives: applying system-wide alternatives
Sep 4 17:10:51.211818 kernel: devtmpfs: initialized
Sep 4 17:10:51.211837 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:10:51.211856 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:10:51.211874 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:10:51.211893 kernel: SMBIOS 3.0.0 present.
Sep 4 17:10:51.211912 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 4 17:10:51.211935 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:10:51.211953 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 17:10:51.211972 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 17:10:51.211991 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 17:10:51.212010 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:10:51.212029 kernel: audit: type=2000 audit(0.293:1): state=initialized audit_enabled=0 res=1
Sep 4 17:10:51.212048 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:10:51.212070 kernel: cpuidle: using governor menu
Sep 4 17:10:51.212089 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 17:10:51.212225 kernel: ASID allocator initialised with 65536 entries
Sep 4 17:10:51.212286 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:10:51.212309 kernel: Serial: AMBA PL011 UART driver
Sep 4 17:10:51.212328 kernel: Modules: 17600 pages in range for non-PLT usage
Sep 4 17:10:51.212347 kernel: Modules: 509120 pages in range for PLT usage
Sep 4 17:10:51.212366 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:10:51.212386 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:10:51.212412 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 17:10:51.212431 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 17:10:51.212451 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:10:51.212470 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:10:51.212488 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 17:10:51.212507 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 17:10:51.212526 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:10:51.212544 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:10:51.212563 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:10:51.212586 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:10:51.212605 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:10:51.212623 kernel: ACPI: Interpreter enabled
Sep 4 17:10:51.212642 kernel: ACPI: Using GIC for interrupt routing
Sep 4 17:10:51.212661 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 17:10:51.212680 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 4 17:10:51.212983 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:10:51.213238 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 17:10:51.213450 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 17:10:51.213666 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 4 17:10:51.213903 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 4 17:10:51.213930 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 4 17:10:51.213950 kernel: acpiphp: Slot [1] registered
Sep 4 17:10:51.213969 kernel: acpiphp: Slot [2] registered
Sep 4 17:10:51.213988 kernel: acpiphp: Slot [3] registered
Sep 4 17:10:51.214006 kernel: acpiphp: Slot [4] registered
Sep 4 17:10:51.214025 kernel: acpiphp: Slot [5] registered
Sep 4 17:10:51.214049 kernel: acpiphp: Slot [6] registered
Sep 4 17:10:51.214068 kernel: acpiphp: Slot [7] registered
Sep 4 17:10:51.214087 kernel: acpiphp: Slot [8] registered
Sep 4 17:10:51.214132 kernel: acpiphp: Slot [9] registered
Sep 4 17:10:51.214155 kernel: acpiphp: Slot [10] registered
Sep 4 17:10:51.214174 kernel: acpiphp: Slot [11] registered
Sep 4 17:10:51.214192 kernel: acpiphp: Slot [12] registered
Sep 4 17:10:51.214211 kernel: acpiphp: Slot [13] registered
Sep 4 17:10:51.214229 kernel: acpiphp: Slot [14] registered
Sep 4 17:10:51.214272 kernel: acpiphp: Slot [15] registered
Sep 4 17:10:51.214293 kernel: acpiphp: Slot [16] registered
Sep 4 17:10:51.214311 kernel: acpiphp: Slot [17] registered
Sep 4 17:10:51.214330 kernel: acpiphp: Slot [18] registered
Sep 4 17:10:51.214348 kernel: acpiphp: Slot [19] registered
Sep 4 17:10:51.214367 kernel: acpiphp: Slot [20] registered
Sep 4 17:10:51.214385 kernel: acpiphp: Slot [21] registered
Sep 4 17:10:51.214404 kernel: acpiphp: Slot [22] registered
Sep 4 17:10:51.214422 kernel: acpiphp: Slot [23] registered
Sep 4 17:10:51.214440 kernel: acpiphp: Slot [24] registered
Sep 4 17:10:51.214465 kernel: acpiphp: Slot [25] registered
Sep 4 17:10:51.214484 kernel: acpiphp: Slot [26] registered
Sep 4 17:10:51.214503 kernel: acpiphp: Slot [27] registered
Sep 4 17:10:51.214521 kernel: acpiphp: Slot [28] registered
Sep 4 17:10:51.214540 kernel: acpiphp: Slot [29] registered
Sep 4 17:10:51.214558 kernel: acpiphp: Slot [30] registered
Sep 4 17:10:51.214577 kernel: acpiphp: Slot [31] registered
Sep 4 17:10:51.214595 kernel: PCI host bridge to bus 0000:00
Sep 4 17:10:51.214803 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 4 17:10:51.214993 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 17:10:51.215207 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 4 17:10:51.215393 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 4 17:10:51.215630 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 4 17:10:51.215859 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 4 17:10:51.216067 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 4 17:10:51.216336 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 4 17:10:51.216542 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 4 17:10:51.216744 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 17:10:51.216959 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 4 17:10:51.217195 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 4 17:10:51.217402 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 4 17:10:51.217602 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 4 17:10:51.217811 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 17:10:51.218015 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 4 17:10:51.220333 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 4 17:10:51.220570 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 4 17:10:51.220773 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 4 17:10:51.220977 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 4 17:10:51.223282 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 4 17:10:51.223507 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 17:10:51.223686 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 4 17:10:51.223713 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 17:10:51.223733 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 17:10:51.223752 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 17:10:51.223771 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 17:10:51.223790 kernel: iommu: Default domain type: Translated
Sep 4 17:10:51.223809 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 17:10:51.223833 kernel: efivars: Registered efivars operations
Sep 4 17:10:51.223852 kernel: vgaarb: loaded
Sep 4 17:10:51.223871 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 17:10:51.223889 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:10:51.223908 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:10:51.223927 kernel: pnp: PnP ACPI init
Sep 4 17:10:51.224177 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 4 17:10:51.224208 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 17:10:51.224235 kernel: NET: Registered PF_INET protocol family
Sep 4 17:10:51.224255 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:10:51.224275 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 17:10:51.224294 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:10:51.224313 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:10:51.224332 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 17:10:51.224351 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 17:10:51.224370 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:10:51.224390 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:10:51.224414 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:10:51.224434 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:10:51.224453 kernel: kvm [1]: HYP mode not available
Sep 4 17:10:51.224471 kernel: Initialise system trusted keyrings
Sep 4 17:10:51.224490 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 17:10:51.224510 kernel: Key type asymmetric registered
Sep 4 17:10:51.224529 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:10:51.224549 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 17:10:51.224567 kernel: io scheduler mq-deadline registered
Sep 4 17:10:51.224590 kernel: io scheduler kyber registered
Sep 4 17:10:51.224610 kernel: io scheduler bfq registered
Sep 4 17:10:51.224830 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 4 17:10:51.224858 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 4 17:10:51.224877 kernel: ACPI: button: Power Button [PWRB]
Sep 4 17:10:51.224896 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 4 17:10:51.224914 kernel: ACPI: button: Sleep Button [SLPB]
Sep 4 17:10:51.224933 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:10:51.224958 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 4 17:10:51.225213 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 4 17:10:51.225241 kernel: printk: console [ttyS0] disabled
Sep 4 17:10:51.225261 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 4 17:10:51.225280 kernel: printk: console [ttyS0] enabled
Sep 4 17:10:51.225299 kernel: printk: bootconsole [uart0] disabled
Sep 4 17:10:51.225318 kernel: thunder_xcv, ver 1.0
Sep 4 17:10:51.225336 kernel: thunder_bgx, ver 1.0
Sep 4 17:10:51.225355 kernel: nicpf, ver 1.0
Sep 4 17:10:51.225373 kernel: nicvf, ver 1.0
Sep 4 17:10:51.225595 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 17:10:51.225786 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:10:50 UTC (1725469850)
Sep 4 17:10:51.225812 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 17:10:51.225831 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 4 17:10:51.225851 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 17:10:51.225869 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 17:10:51.225888 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:10:51.225907 kernel: Segment Routing with IPv6
Sep 4 17:10:51.225931 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:10:51.225950 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:10:51.225969 kernel: Key type dns_resolver registered
Sep 4 17:10:51.225987 kernel: registered taskstats version 1
Sep 4 17:10:51.226006 kernel: Loading compiled-in X.509 certificates
Sep 4 17:10:51.226030 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 1f5b9f288f9cae6ec9698678cdc0f614482066f7'
Sep 4 17:10:51.226050 kernel: Key type .fscrypt registered
Sep 4 17:10:51.226068 kernel: Key type fscrypt-provisioning registered
Sep 4 17:10:51.226087 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:10:51.228632 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:10:51.228783 kernel: ima: No architecture policies found
Sep 4 17:10:51.228804 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 17:10:51.228823 kernel: clk: Disabling unused clocks
Sep 4 17:10:51.228843 kernel: Freeing unused kernel memory: 39040K
Sep 4 17:10:51.228861 kernel: Run /init as init process
Sep 4 17:10:51.228880 kernel: with arguments:
Sep 4 17:10:51.228900 kernel: /init
Sep 4 17:10:51.228919 kernel: with environment:
Sep 4 17:10:51.228949 kernel: HOME=/
Sep 4 17:10:51.228968 kernel: TERM=linux
Sep 4 17:10:51.228987 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:10:51.229012 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:10:51.229036 systemd[1]: Detected virtualization amazon.
Sep 4 17:10:51.229057 systemd[1]: Detected architecture arm64.
Sep 4 17:10:51.229077 systemd[1]: Running in initrd.
Sep 4 17:10:51.229097 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:10:51.229154 systemd[1]: Hostname set to .
Sep 4 17:10:51.229177 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:10:51.229197 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:10:51.229218 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:10:51.229240 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:10:51.229261 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:10:51.229282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:10:51.229309 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:10:51.229330 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:10:51.229354 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:10:51.229374 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:10:51.229395 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:10:51.229415 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:10:51.229436 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:10:51.229461 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:10:51.229481 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:10:51.229501 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:10:51.229522 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:10:51.229542 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:10:51.229564 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:10:51.229584 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:10:51.229606 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:10:51.229626 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:10:51.229652 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:10:51.229673 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:10:51.229693 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:10:51.229714 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:10:51.229735 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:10:51.229755 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:10:51.229776 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:10:51.229796 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:10:51.229820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:10:51.229842 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:10:51.229862 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:10:51.229883 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:10:51.229905 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:10:51.229974 systemd-journald[250]: Collecting audit messages is disabled.
Sep 4 17:10:51.230021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:10:51.230042 systemd-journald[250]: Journal started
Sep 4 17:10:51.230084 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2251f3230d4672acaa6a8de470bbba) is 8.0M, max 75.3M, 67.3M free.
Sep 4 17:10:51.208543 systemd-modules-load[251]: Inserted module 'overlay'
Sep 4 17:10:51.244742 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:10:51.249598 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:10:51.250529 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:10:51.262092 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:10:51.262164 kernel: Bridge firewalling registered
Sep 4 17:10:51.262083 systemd-modules-load[251]: Inserted module 'br_netfilter'
Sep 4 17:10:51.266439 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:10:51.280498 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:10:51.286395 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:10:51.292416 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:10:51.336307 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:10:51.342850 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:10:51.354562 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:10:51.377608 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:10:51.382458 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:10:51.408361 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:10:51.447245 dracut-cmdline[289]: dracut-dracut-053
Sep 4 17:10:51.455810 systemd-resolved[286]: Positive Trust Anchors:
Sep 4 17:10:51.457118 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:10:51.457186 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:10:51.490598 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep 4 17:10:51.630145 kernel: SCSI subsystem initialized
Sep 4 17:10:51.638227 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:10:51.651242 kernel: iscsi: registered transport (tcp)
Sep 4 17:10:51.674226 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:10:51.674314 kernel: QLogic iSCSI HBA Driver
Sep 4 17:10:51.720170 kernel: random: crng init done
Sep 4 17:10:51.719561 systemd-resolved[286]: Defaulting to hostname 'linux'.
Sep 4 17:10:51.725545 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:10:51.730451 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:10:51.749192 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:10:51.761492 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:10:51.795463 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:10:51.795552 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:10:51.795580 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:10:51.864163 kernel: raid6: neonx8 gen() 6657 MB/s
Sep 4 17:10:51.881157 kernel: raid6: neonx4 gen() 6439 MB/s
Sep 4 17:10:51.898159 kernel: raid6: neonx2 gen() 5367 MB/s
Sep 4 17:10:51.915147 kernel: raid6: neonx1 gen() 3921 MB/s
Sep 4 17:10:51.932157 kernel: raid6: int64x8 gen() 3793 MB/s
Sep 4 17:10:51.949153 kernel: raid6: int64x4 gen() 3678 MB/s
Sep 4 17:10:51.966155 kernel: raid6: int64x2 gen() 3559 MB/s
Sep 4 17:10:51.983966 kernel: raid6: int64x1 gen() 2758 MB/s
Sep 4 17:10:51.984054 kernel: raid6: using algorithm neonx8 gen() 6657 MB/s
Sep 4 17:10:52.001948 kernel: raid6: .... xor() 4864 MB/s, rmw enabled
Sep 4 17:10:52.002030 kernel: raid6: using neon recovery algorithm
Sep 4 17:10:52.010154 kernel: xor: measuring software checksum speed
Sep 4 17:10:52.012144 kernel: 8regs : 11033 MB/sec
Sep 4 17:10:52.014149 kernel: 32regs : 11958 MB/sec
Sep 4 17:10:52.016013 kernel: arm64_neon : 9301 MB/sec
Sep 4 17:10:52.016061 kernel: xor: using function: 32regs (11958 MB/sec)
Sep 4 17:10:52.103545 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:10:52.122854 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:10:52.136447 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:10:52.171142 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Sep 4 17:10:52.179189 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:10:52.199507 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:10:52.229599 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation
Sep 4 17:10:52.287884 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:10:52.313518 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:10:52.430165 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:10:52.450095 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:10:52.495298 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:10:52.507968 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:10:52.519271 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:10:52.528767 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:10:52.545627 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:10:52.594636 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:10:52.656847 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 4 17:10:52.656911 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 4 17:10:52.667448 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:10:52.673907 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 17:10:52.674320 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 17:10:52.674156 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:10:52.688145 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:e9:85:04:d9:4d
Sep 4 17:10:52.688586 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:10:52.701652 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 4 17:10:52.701697 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 4 17:10:52.700350 (udev-worker)[531]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:10:52.702353 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:10:52.722381 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 4 17:10:52.707597 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:10:52.712635 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:10:52.731694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:10:52.745071 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:10:52.745160 kernel: GPT:9289727 != 16777215
Sep 4 17:10:52.745197 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:10:52.749039 kernel: GPT:9289727 != 16777215
Sep 4 17:10:52.750139 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:10:52.750202 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:10:52.760605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:10:52.770477 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:10:52.806433 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:10:52.896721 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 4 17:10:52.909463 kernel: BTRFS: device fsid 2be47701-3393-455e-86fc-33755ceb9c20 devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (517)
Sep 4 17:10:52.909505 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (520)
Sep 4 17:10:52.994928 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 4 17:10:53.028960 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:10:53.046512 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 4 17:10:53.049576 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 4 17:10:53.066523 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:10:53.084169 disk-uuid[660]: Primary Header is updated.
Sep 4 17:10:53.084169 disk-uuid[660]: Secondary Entries is updated.
Sep 4 17:10:53.084169 disk-uuid[660]: Secondary Header is updated.
Sep 4 17:10:53.110145 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:10:53.117143 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:10:53.127143 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:10:54.127444 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:10:54.130902 disk-uuid[661]: The operation has completed successfully.
Sep 4 17:10:54.296480 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:10:54.299143 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:10:54.361537 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:10:54.368402 sh[1004]: Success
Sep 4 17:10:54.397222 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 17:10:54.522389 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:10:54.529264 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:10:54.541948 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:10:54.564275 kernel: BTRFS info (device dm-0): first mount of filesystem 2be47701-3393-455e-86fc-33755ceb9c20
Sep 4 17:10:54.564337 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:10:54.564364 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:10:54.565623 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:10:54.566714 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:10:54.628137 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 17:10:54.693267 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:10:54.698764 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:10:54.711400 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:10:54.715726 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:10:54.753830 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:10:54.753912 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:10:54.755672 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:10:54.760178 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:10:54.777615 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:10:54.780972 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:10:54.801793 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:10:54.813746 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:10:54.901736 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:10:54.916426 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:10:54.972247 systemd-networkd[1208]: lo: Link UP
Sep 4 17:10:54.972714 systemd-networkd[1208]: lo: Gained carrier
Sep 4 17:10:54.975488 systemd-networkd[1208]: Enumeration completed
Sep 4 17:10:54.975631 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:10:54.977059 systemd-networkd[1208]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:10:54.977067 systemd-networkd[1208]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:10:54.981879 systemd[1]: Reached target network.target - Network.
Sep 4 17:10:54.985887 systemd-networkd[1208]: eth0: Link UP
Sep 4 17:10:54.985894 systemd-networkd[1208]: eth0: Gained carrier
Sep 4 17:10:54.985911 systemd-networkd[1208]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:10:55.020228 systemd-networkd[1208]: eth0: DHCPv4 address 172.31.29.45/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:10:55.195772 ignition[1138]: Ignition 2.18.0
Sep 4 17:10:55.196345 ignition[1138]: Stage: fetch-offline
Sep 4 17:10:55.196897 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:10:55.196922 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:10:55.197359 ignition[1138]: Ignition finished successfully
Sep 4 17:10:55.209704 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:10:55.222603 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:10:55.244461 ignition[1220]: Ignition 2.18.0
Sep 4 17:10:55.244492 ignition[1220]: Stage: fetch
Sep 4 17:10:55.245346 ignition[1220]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:10:55.245379 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:10:55.245527 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:10:55.258414 ignition[1220]: PUT result: OK
Sep 4 17:10:55.261559 ignition[1220]: parsed url from cmdline: ""
Sep 4 17:10:55.261574 ignition[1220]: no config URL provided
Sep 4 17:10:55.261589 ignition[1220]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:10:55.261615 ignition[1220]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:10:55.261646 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:10:55.266411 ignition[1220]: PUT result: OK
Sep 4 17:10:55.266497 ignition[1220]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 4 17:10:55.275607 ignition[1220]: GET result: OK
Sep 4 17:10:55.275782 ignition[1220]: parsing config with SHA512: d439519e25965cb16a878ff04f0d09785d0e9f09db1fbe87fc8334f884242757f7a2ee385dc07db1e4a0751624983a9fda3bed962446fc6b2c36eb0ffd478546
Sep 4 17:10:55.285448 unknown[1220]: fetched base config from "system"
Sep 4 17:10:55.285681 unknown[1220]: fetched base config from "system"
Sep 4 17:10:55.285696 unknown[1220]: fetched user config from "aws"
Sep 4 17:10:55.288212 ignition[1220]: fetch: fetch complete
Sep 4 17:10:55.288225 ignition[1220]: fetch: fetch passed
Sep 4 17:10:55.292767 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:10:55.288337 ignition[1220]: Ignition finished successfully
Sep 4 17:10:55.306496 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:10:55.336262 ignition[1228]: Ignition 2.18.0
Sep 4 17:10:55.336778 ignition[1228]: Stage: kargs
Sep 4 17:10:55.337444 ignition[1228]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:10:55.337498 ignition[1228]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:10:55.337636 ignition[1228]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:10:55.342386 ignition[1228]: PUT result: OK
Sep 4 17:10:55.351885 ignition[1228]: kargs: kargs passed
Sep 4 17:10:55.352052 ignition[1228]: Ignition finished successfully
Sep 4 17:10:55.356397 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:10:55.378512 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:10:55.402660 ignition[1235]: Ignition 2.18.0
Sep 4 17:10:55.402681 ignition[1235]: Stage: disks
Sep 4 17:10:55.403349 ignition[1235]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:10:55.403374 ignition[1235]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:10:55.403513 ignition[1235]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:10:55.413874 ignition[1235]: PUT result: OK
Sep 4 17:10:55.418222 ignition[1235]: disks: disks passed
Sep 4 17:10:55.418510 ignition[1235]: Ignition finished successfully
Sep 4 17:10:55.420235 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:10:55.430489 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:10:55.433245 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:10:55.441030 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:10:55.443368 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:10:55.445827 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:10:55.467495 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:10:55.503795 systemd-fsck[1244]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:10:55.514002 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:10:55.540359 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:10:55.621150 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f2f4f3ba-c5a3-49c0-ace4-444935e9934b r/w with ordered data mode. Quota mode: none.
Sep 4 17:10:55.623487 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:10:55.628057 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:10:55.649289 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:10:55.658181 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:10:55.663091 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:10:55.678066 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1263)
Sep 4 17:10:55.679162 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:10:55.679208 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:10:55.679235 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:10:55.663220 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:10:55.663301 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:10:55.696675 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:10:55.703380 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:10:55.712263 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:10:55.721392 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:10:56.036671 initrd-setup-root[1287]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:10:56.056963 initrd-setup-root[1294]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:10:56.079444 initrd-setup-root[1301]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:10:56.089323 initrd-setup-root[1308]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:10:56.345720 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:10:56.356312 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:10:56.366500 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:10:56.385883 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:10:56.389159 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:10:56.421170 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:10:56.437151 ignition[1377]: INFO : Ignition 2.18.0
Sep 4 17:10:56.437151 ignition[1377]: INFO : Stage: mount
Sep 4 17:10:56.437151 ignition[1377]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:10:56.437151 ignition[1377]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:10:56.437151 ignition[1377]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:10:56.437151 ignition[1377]: INFO : PUT result: OK
Sep 4 17:10:56.454814 ignition[1377]: INFO : mount: mount passed
Sep 4 17:10:56.454814 ignition[1377]: INFO : Ignition finished successfully
Sep 4 17:10:56.459664 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:10:56.478460 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:10:56.630570 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:10:56.661148 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1391)
Sep 4 17:10:56.664750 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:10:56.664791 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:10:56.664819 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:10:56.670153 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:10:56.673797 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:10:56.707881 ignition[1408]: INFO : Ignition 2.18.0
Sep 4 17:10:56.707881 ignition[1408]: INFO : Stage: files
Sep 4 17:10:56.712339 ignition[1408]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:10:56.712339 ignition[1408]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:10:56.712339 ignition[1408]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:10:56.721434 ignition[1408]: INFO : PUT result: OK
Sep 4 17:10:56.726646 ignition[1408]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:10:56.729660 ignition[1408]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:10:56.729660 ignition[1408]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:10:56.749332 ignition[1408]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:10:56.752839 ignition[1408]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:10:56.756536 unknown[1408]: wrote ssh authorized keys file for user: core
Sep 4 17:10:56.760175 ignition[1408]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:10:56.767560 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:10:56.767560 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 4 17:10:56.824093 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:10:56.903786 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:10:56.903786 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 17:10:56.914271 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 4 17:10:56.983392 systemd-networkd[1208]: eth0: Gained IPv6LL
Sep 4 17:10:57.231617 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 17:10:57.372009 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 17:10:57.375734 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:10:57.375734 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:10:57.375734 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:10:57.375734 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:10:57.375734 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:10:57.375734 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:10:57.375734 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:10:57.375734 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:10:57.375734 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:10:57.375734 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:10:57.375734 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Sep 4 17:10:57.375734 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Sep 4 17:10:57.375734 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Sep 4 17:10:57.375734 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Sep 4 17:10:57.668555 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 17:10:58.022240 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Sep 4 17:10:58.022240 ignition[1408]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 17:10:58.038434 ignition[1408]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:10:58.038434 ignition[1408]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:10:58.038434 ignition[1408]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 17:10:58.038434 ignition[1408]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:10:58.038434 ignition[1408]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:10:58.038434 ignition[1408]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:10:58.038434 ignition[1408]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:10:58.038434 ignition[1408]: INFO : files: files passed
Sep 4 17:10:58.038434 ignition[1408]: INFO : Ignition finished successfully
Sep 4 17:10:58.070618 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:10:58.091432 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:10:58.098099 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:10:58.104736 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:10:58.105056 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:10:58.143610 initrd-setup-root-after-ignition[1437]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:10:58.143610 initrd-setup-root-after-ignition[1437]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:10:58.153487 initrd-setup-root-after-ignition[1441]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:10:58.159734 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:10:58.164313 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:10:58.179392 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:10:58.240834 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:10:58.242220 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:10:58.247989 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:10:58.254844 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:10:58.257073 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:10:58.267390 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:10:58.298479 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:10:58.319500 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:10:58.346765 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:10:58.355388 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:10:58.363426 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:10:58.367296 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:10:58.367605 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:10:58.375995 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:10:58.381302 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:10:58.383662 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:10:58.392746 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:10:58.395652 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:10:58.398380 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:10:58.401047 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:10:58.411473 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:10:58.414215 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:10:58.423725 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:10:58.426260 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:10:58.426516 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:10:58.435101 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:10:58.438002 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:10:58.445807 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:10:58.450499 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:10:58.453193 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:10:58.453426 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:10:58.455947 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:10:58.456197 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:10:58.459439 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:10:58.459652 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:10:58.484620 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:10:58.499480 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:10:58.503008 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:10:58.503458 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:10:58.515031 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:10:58.515384 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:10:58.530285 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:10:58.532194 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:10:58.551603 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:10:58.558672 ignition[1461]: INFO : Ignition 2.18.0 Sep 4 17:10:58.558672 ignition[1461]: INFO : Stage: umount Sep 4 17:10:58.563091 ignition[1461]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:10:58.563091 ignition[1461]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:10:58.563091 ignition[1461]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:10:58.580244 ignition[1461]: INFO : PUT result: OK Sep 4 17:10:58.572074 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:10:58.574451 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Sep 4 17:10:58.586699 ignition[1461]: INFO : umount: umount passed Sep 4 17:10:58.588594 ignition[1461]: INFO : Ignition finished successfully Sep 4 17:10:58.593229 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:10:58.593955 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:10:58.601392 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:10:58.601486 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:10:58.604057 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:10:58.604517 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:10:58.615167 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:10:58.615255 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:10:58.617636 systemd[1]: Stopped target network.target - Network. Sep 4 17:10:58.619621 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:10:58.619701 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:10:58.622489 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:10:58.624559 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:10:58.637259 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:10:58.642404 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:10:58.649909 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:10:58.654249 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:10:58.654341 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:10:58.656330 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:10:58.656401 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:10:58.658387 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:10:58.658472 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:10:58.660413 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:10:58.660488 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:10:58.662550 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:10:58.662624 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:10:58.665119 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:10:58.675183 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:10:58.698261 systemd-networkd[1208]: eth0: DHCPv6 lease lost Sep 4 17:10:58.698955 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:10:58.699271 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:10:58.711798 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:10:58.714507 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:10:58.719021 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:10:58.719182 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:10:58.734484 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:10:58.737295 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:10:58.737406 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 4 17:10:58.740676 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:10:58.740756 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:10:58.743580 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:10:58.743667 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:10:58.748389 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:10:58.748475 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:10:58.749147 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:10:58.792999 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:10:58.795515 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:10:58.802831 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:10:58.803040 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:10:58.810290 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:10:58.810429 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:10:58.817970 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:10:58.818058 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:10:58.826033 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:10:58.826155 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:10:58.828993 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:10:58.829080 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:10:58.840517 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:10:58.840614 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:10:58.862486 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:10:58.867317 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:10:58.867455 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:10:58.879039 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 17:10:58.880201 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:10:58.887418 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:10:58.887514 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:10:58.890312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:10:58.890392 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:10:58.893850 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:10:58.894020 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:10:58.896976 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:10:58.913324 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:10:58.933247 systemd[1]: Switching root. Sep 4 17:10:58.972861 systemd-journald[250]: Journal stopped Sep 4 17:11:01.549713 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). 
Sep 4 17:11:01.549859 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:11:01.549909 kernel: SELinux: policy capability open_perms=1 Sep 4 17:11:01.549941 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:11:01.549972 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:11:01.550002 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:11:01.550039 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:11:01.550069 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:11:01.550100 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:11:01.550190 kernel: audit: type=1403 audit(1725469859.958:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:11:01.550234 systemd[1]: Successfully loaded SELinux policy in 56.384ms. Sep 4 17:11:01.550287 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.719ms. Sep 4 17:11:01.550323 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:11:01.550356 systemd[1]: Detected virtualization amazon. Sep 4 17:11:01.550386 systemd[1]: Detected architecture arm64. Sep 4 17:11:01.550423 systemd[1]: Detected first boot. Sep 4 17:11:01.550456 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:11:01.550490 zram_generator::config[1505]: No configuration found. Sep 4 17:11:01.550525 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:11:01.550557 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:11:01.550612 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 17:11:01.550650 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:11:01.550682 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:11:01.550719 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:11:01.550750 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:11:01.550781 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:11:01.550814 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:11:01.550845 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:11:01.550875 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:11:01.550907 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:11:01.550937 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:11:01.550967 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:11:01.551000 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:11:01.551029 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:11:01.551060 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:11:01.551093 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Sep 4 17:11:01.555929 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 17:11:01.555976 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:11:01.556007 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 17:11:01.556038 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:11:01.556067 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:11:01.556121 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:11:01.556158 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:11:01.556190 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:11:01.556220 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:11:01.556250 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:11:01.556279 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:11:01.556311 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:11:01.556342 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:11:01.556377 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:11:01.556406 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:11:01.556437 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:11:01.556466 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:11:01.556499 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:11:01.556530 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:11:01.556817 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:11:01.556856 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:11:01.556887 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:11:01.556924 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:11:01.556956 systemd[1]: Reached target machines.target - Containers. Sep 4 17:11:01.556989 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:11:01.557019 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:11:01.557048 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:11:01.562387 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:11:01.562435 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:11:01.562468 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:11:01.562505 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:11:01.562535 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:11:01.562564 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:11:01.562593 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Sep 4 17:11:01.562622 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:11:01.562664 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:11:01.562693 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:11:01.562721 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 17:11:01.562755 kernel: fuse: init (API version 7.39) Sep 4 17:11:01.562785 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:11:01.562813 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:11:01.562847 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:11:01.562877 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:11:01.562908 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:11:01.562937 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:11:01.562965 systemd[1]: Stopped verity-setup.service. Sep 4 17:11:01.562994 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:11:01.563026 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:11:01.563060 kernel: loop: module loaded Sep 4 17:11:01.563088 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:11:01.563317 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:11:01.563354 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:11:01.563390 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:11:01.563420 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:11:01.563448 kernel: ACPI: bus type drm_connector registered Sep 4 17:11:01.563476 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:11:01.563507 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:11:01.563536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:11:01.563608 systemd-journald[1586]: Collecting audit messages is disabled. Sep 4 17:11:01.563669 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:11:01.563704 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:11:01.563736 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:11:01.563769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:11:01.563799 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:11:01.563829 systemd-journald[1586]: Journal started Sep 4 17:11:01.563874 systemd-journald[1586]: Runtime Journal (/run/log/journal/ec2251f3230d4672acaa6a8de470bbba) is 8.0M, max 75.3M, 67.3M free. Sep 4 17:11:00.939045 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:11:00.997404 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 4 17:11:00.998230 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:11:01.572281 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:11:01.577014 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:11:01.578549 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:11:01.583519 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 4 17:11:01.585243 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:11:01.589799 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:11:01.595138 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:11:01.602257 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:11:01.621680 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:11:01.637909 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:11:01.648354 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:11:01.663387 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:11:01.671261 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:11:01.671340 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:11:01.677609 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:11:01.690888 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:11:01.706476 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:11:01.710931 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:11:01.715493 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:11:01.723837 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:11:01.728713 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:11:01.731420 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:11:01.736674 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:11:01.740458 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:11:01.748624 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:11:01.763437 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:11:01.773492 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:11:01.785838 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:11:01.797728 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:11:01.830666 systemd-journald[1586]: Time spent on flushing to /var/log/journal/ec2251f3230d4672acaa6a8de470bbba is 114.908ms for 912 entries. Sep 4 17:11:01.830666 systemd-journald[1586]: System Journal (/var/log/journal/ec2251f3230d4672acaa6a8de470bbba) is 8.0M, max 195.6M, 187.6M free. Sep 4 17:11:01.975236 systemd-journald[1586]: Received client request to flush runtime journal. Sep 4 17:11:01.975650 kernel: loop0: detected capacity change from 0 to 59688 Sep 4 17:11:01.975688 kernel: block loop0: the capability attribute has been deprecated. 
Sep 4 17:11:01.977203 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:11:01.842721 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:11:01.849725 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:11:01.865434 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:11:01.875145 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:11:01.889440 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:11:01.912879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:11:01.932495 systemd-tmpfiles[1634]: ACLs are not supported, ignoring. Sep 4 17:11:01.932520 systemd-tmpfiles[1634]: ACLs are not supported, ignoring. Sep 4 17:11:01.965204 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:11:01.984488 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:11:01.995180 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:11:02.015031 udevadm[1645]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 4 17:11:02.024215 kernel: loop1: detected capacity change from 0 to 194512 Sep 4 17:11:02.037339 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:11:02.042822 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:11:02.085785 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:11:02.095171 kernel: loop2: detected capacity change from 0 to 51896 Sep 4 17:11:02.101424 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:11:02.172385 systemd-tmpfiles[1657]: ACLs are not supported, ignoring. Sep 4 17:11:02.172454 systemd-tmpfiles[1657]: ACLs are not supported, ignoring. Sep 4 17:11:02.180163 kernel: loop3: detected capacity change from 0 to 113672 Sep 4 17:11:02.189805 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:11:02.297169 kernel: loop4: detected capacity change from 0 to 59688 Sep 4 17:11:02.325169 kernel: loop5: detected capacity change from 0 to 194512 Sep 4 17:11:02.347167 kernel: loop6: detected capacity change from 0 to 51896 Sep 4 17:11:02.367627 kernel: loop7: detected capacity change from 0 to 113672 Sep 4 17:11:02.377644 (sd-merge)[1662]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 4 17:11:02.378627 (sd-merge)[1662]: Merged extensions into '/usr'. Sep 4 17:11:02.390049 systemd[1]: Reloading requested from client PID 1633 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:11:02.390084 systemd[1]: Reloading... Sep 4 17:11:02.577139 zram_generator::config[1689]: No configuration found. Sep 4 17:11:02.900271 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:11:02.909155 ldconfig[1628]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:11:03.020606 systemd[1]: Reloading finished in 627 ms. 
Sep 4 17:11:03.072903 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:11:03.079205 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:11:03.098674 systemd[1]: Starting ensure-sysext.service... Sep 4 17:11:03.117519 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:11:03.123653 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:11:03.137453 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:11:03.144861 systemd[1]: Reloading requested from client PID 1738 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:11:03.144901 systemd[1]: Reloading... Sep 4 17:11:03.168605 systemd-tmpfiles[1739]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:11:03.169805 systemd-tmpfiles[1739]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:11:03.173998 systemd-tmpfiles[1739]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:11:03.174708 systemd-tmpfiles[1739]: ACLs are not supported, ignoring. Sep 4 17:11:03.174842 systemd-tmpfiles[1739]: ACLs are not supported, ignoring. Sep 4 17:11:03.186863 systemd-tmpfiles[1739]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:11:03.186883 systemd-tmpfiles[1739]: Skipping /boot Sep 4 17:11:03.209092 systemd-tmpfiles[1739]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:11:03.209336 systemd-tmpfiles[1739]: Skipping /boot Sep 4 17:11:03.256553 systemd-udevd[1742]: Using default interface naming scheme 'v255'. Sep 4 17:11:03.322152 zram_generator::config[1765]: No configuration found. Sep 4 17:11:03.452136 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1772) Sep 4 17:11:03.510335 (udev-worker)[1771]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:11:03.683132 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1799) Sep 4 17:11:03.684842 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:11:03.827443 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 17:11:03.827657 systemd[1]: Reloading finished in 682 ms. Sep 4 17:11:03.856374 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:11:03.886300 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:11:03.948659 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:11:03.961719 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:11:03.980645 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:11:04.001558 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:11:04.017605 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:11:04.030629 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Sep 4 17:11:04.050284 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:11:04.081158 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:11:04.092856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:11:04.109544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:11:04.137681 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:11:04.143386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:11:04.148658 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:11:04.162233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:11:04.178573 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:11:04.185143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:11:04.185538 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:11:04.192667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:11:04.193023 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:11:04.199650 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:11:04.200073 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:11:04.218394 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:11:04.227129 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:11:04.231048 augenrules[1960]: No rules Sep 4 17:11:04.233483 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:11:04.282049 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 4 17:11:04.286178 systemd[1]: Finished ensure-sysext.service. Sep 4 17:11:04.294551 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:11:04.308702 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:11:04.317326 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:11:04.335602 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:11:04.341652 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:11:04.349668 lvm[1975]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:11:04.358424 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:11:04.369440 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:11:04.373747 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:11:04.376604 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:11:04.381491 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:11:04.390561 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Sep 4 17:11:04.396364 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:11:04.397478 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:11:04.402845 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:11:04.403180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:11:04.421223 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:11:04.428353 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:11:04.440413 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:11:04.448641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:11:04.451229 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:11:04.452043 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:11:04.452342 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:11:04.456094 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:11:04.462272 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:11:04.465252 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:11:04.470799 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:11:04.478356 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:11:04.488411 lvm[1987]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:11:04.516842 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:11:04.525844 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:11:04.620863 systemd-networkd[1938]: lo: Link UP Sep 4 17:11:04.621458 systemd-networkd[1938]: lo: Gained carrier Sep 4 17:11:04.624472 systemd-networkd[1938]: Enumeration completed Sep 4 17:11:04.625349 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:11:04.626142 systemd-networkd[1938]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:11:04.626158 systemd-networkd[1938]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:11:04.631863 systemd-networkd[1938]: eth0: Link UP Sep 4 17:11:04.633155 systemd-networkd[1938]: eth0: Gained carrier Sep 4 17:11:04.633324 systemd-networkd[1938]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:11:04.639847 systemd-resolved[1941]: Positive Trust Anchors: Sep 4 17:11:04.640179 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:11:04.645488 systemd-resolved[1941]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:11:04.645557 systemd-resolved[1941]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:11:04.653909 systemd-resolved[1941]: Defaulting to hostname 'linux'. Sep 4 17:11:04.656989 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:11:04.660267 systemd[1]: Reached target network.target - Network. Sep 4 17:11:04.660345 systemd-networkd[1938]: eth0: DHCPv4 address 172.31.29.45/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 4 17:11:04.664473 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:11:04.667962 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:11:04.670923 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:11:04.674010 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:11:04.677355 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:11:04.680216 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:11:04.683271 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:11:04.686325 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:11:04.686500 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:11:04.688750 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:11:04.692094 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:11:04.696643 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:11:04.715395 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:11:04.719044 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:11:04.722032 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:11:04.724677 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:11:04.727022 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:11:04.727076 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:11:04.735305 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:11:04.741463 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:11:04.756636 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:11:04.762372 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:11:04.768417 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 4 17:11:04.770924 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:11:04.779647 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:11:04.794549 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 17:11:04.806321 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:11:04.820780 jq[2006]: false Sep 4 17:11:04.822343 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 4 17:11:04.831153 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:11:04.844401 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:11:04.856381 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:11:04.861042 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:11:04.861952 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:11:04.871589 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:11:04.878307 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:11:04.887325 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:11:04.887697 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:11:04.954719 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:11:04.955567 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:11:04.966862 dbus-daemon[2005]: [system] SELinux support is enabled Sep 4 17:11:04.967439 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:11:04.977315 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:11:04.977372 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:11:04.983900 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:11:04.996584 jq[2021]: true Sep 4 17:11:04.984214 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:11:05.006813 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:11:05.008225 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 4 17:11:05.033132 tar[2028]: linux-arm64/helm Sep 4 17:11:05.036422 dbus-daemon[2005]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1938 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 17:11:05.045205 extend-filesystems[2007]: Found loop4 Sep 4 17:11:05.045205 extend-filesystems[2007]: Found loop5 Sep 4 17:11:05.045205 extend-filesystems[2007]: Found loop6 Sep 4 17:11:05.045205 extend-filesystems[2007]: Found loop7 Sep 4 17:11:05.045205 extend-filesystems[2007]: Found nvme0n1 Sep 4 17:11:05.045205 extend-filesystems[2007]: Found nvme0n1p2 Sep 4 17:11:05.045205 extend-filesystems[2007]: Found nvme0n1p3 Sep 4 17:11:05.045205 extend-filesystems[2007]: Found usr Sep 4 17:11:05.045205 extend-filesystems[2007]: Found nvme0n1p4 Sep 4 17:11:05.045205 extend-filesystems[2007]: Found nvme0n1p6 Sep 4 17:11:05.045205 extend-filesystems[2007]: Found nvme0n1p7 Sep 4 17:11:05.045205 extend-filesystems[2007]: Found nvme0n1p9 Sep 4 17:11:05.045205 extend-filesystems[2007]: Checking size of /dev/nvme0n1p9 Sep 4 17:11:05.065586 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 4 17:11:05.117328 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:13:39 UTC 2024 (1): Starting Sep 4 17:11:05.117328 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:11:05.117328 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: ---------------------------------------------------- Sep 4 17:11:05.117328 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:11:05.117328 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:11:05.117328 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: corporation. Support and training for ntp-4 are Sep 4 17:11:05.117328 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: available at https://www.nwtime.org/support Sep 4 17:11:05.117328 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: ---------------------------------------------------- Sep 4 17:11:05.115435 ntpd[2009]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:13:39 UTC 2024 (1): Starting Sep 4 17:11:05.094453 (ntainerd)[2042]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:11:05.115483 ntpd[2009]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:11:05.124159 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: proto: precision = 0.096 usec (-23) Sep 4 17:11:05.124159 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: basedate set to 2024-08-23 Sep 4 17:11:05.124159 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: gps base set to 2024-08-25 (week 2329) Sep 4 17:11:05.104529 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:11:05.115504 ntpd[2009]: ---------------------------------------------------- Sep 4 17:11:05.115523 ntpd[2009]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:11:05.115541 ntpd[2009]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:11:05.115559 ntpd[2009]: corporation. 
Support and training for ntp-4 are Sep 4 17:11:05.115577 ntpd[2009]: available at https://www.nwtime.org/support Sep 4 17:11:05.115596 ntpd[2009]: ---------------------------------------------------- Sep 4 17:11:05.119707 ntpd[2009]: proto: precision = 0.096 usec (-23) Sep 4 17:11:05.121352 ntpd[2009]: basedate set to 2024-08-23 Sep 4 17:11:05.121385 ntpd[2009]: gps base set to 2024-08-25 (week 2329) Sep 4 17:11:05.129237 ntpd[2009]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:11:05.131234 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:11:05.131234 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:11:05.131234 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:11:05.131234 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: Listen normally on 3 eth0 172.31.29.45:123 Sep 4 17:11:05.131234 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: Listen normally on 4 lo [::1]:123 Sep 4 17:11:05.131234 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: bind(21) AF_INET6 fe80::4e9:85ff:fe04:d94d%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:11:05.131234 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: unable to create socket on eth0 (5) for fe80::4e9:85ff:fe04:d94d%2#123 Sep 4 17:11:05.131234 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: failed to init interface for address fe80::4e9:85ff:fe04:d94d%2 Sep 4 17:11:05.131234 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: Listening on routing socket on fd #21 for interface updates Sep 4 17:11:05.129769 ntpd[2009]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:11:05.130044 ntpd[2009]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:11:05.130145 ntpd[2009]: Listen normally on 3 eth0 172.31.29.45:123 Sep 4 17:11:05.130228 ntpd[2009]: Listen normally on 4 lo [::1]:123 Sep 4 17:11:05.130318 ntpd[2009]: bind(21) AF_INET6 fe80::4e9:85ff:fe04:d94d%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:11:05.130358 ntpd[2009]: unable to create socket on eth0 (5) for fe80::4e9:85ff:fe04:d94d%2#123 Sep 4 17:11:05.130387 ntpd[2009]: failed to init interface for address fe80::4e9:85ff:fe04:d94d%2 Sep 4 17:11:05.130442 ntpd[2009]: Listening on routing socket on fd #21 for interface updates Sep 4 17:11:05.136532 ntpd[2009]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:11:05.139757 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:11:05.140353 ntpd[2009]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:11:05.140506 ntpd[2009]: 4 Sep 17:11:05 ntpd[2009]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:11:05.148480 jq[2041]: true Sep 4 17:11:05.185344 update_engine[2019]: I0904 17:11:05.184624 2019 main.cc:92] Flatcar Update Engine starting Sep 4 17:11:05.186874 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 4 17:11:05.199650 extend-filesystems[2007]: Resized partition /dev/nvme0n1p9 Sep 4 17:11:05.216286 extend-filesystems[2060]: resize2fs 1.47.0 (5-Feb-2023) Sep 4 17:11:05.221267 update_engine[2019]: I0904 17:11:05.208607 2019 update_check_scheduler.cc:74] Next update check in 10m1s Sep 4 17:11:05.208236 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:11:05.236164 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 4 17:11:05.243672 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 4 17:11:05.281381 coreos-metadata[2004]: Sep 04 17:11:05.281 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:11:05.281381 coreos-metadata[2004]: Sep 04 17:11:05.281 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 4 17:11:05.282211 coreos-metadata[2004]: Sep 04 17:11:05.281 INFO Fetch successful Sep 4 17:11:05.282211 coreos-metadata[2004]: Sep 04 17:11:05.281 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 4 17:11:05.283050 coreos-metadata[2004]: Sep 04 17:11:05.282 INFO Fetch successful Sep 4 17:11:05.283050 coreos-metadata[2004]: Sep 04 17:11:05.283 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 4 17:11:05.286459 coreos-metadata[2004]: Sep 04 17:11:05.286 INFO Fetch successful Sep 4 17:11:05.286459 coreos-metadata[2004]: Sep 04 17:11:05.286 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 4 17:11:05.294643 coreos-metadata[2004]: Sep 04 17:11:05.294 INFO Fetch successful Sep 4 17:11:05.294643 coreos-metadata[2004]: Sep 04 17:11:05.294 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 4 17:11:05.296268 coreos-metadata[2004]: Sep 04 17:11:05.296 INFO Fetch failed with 404: resource not found Sep 4 17:11:05.296268 coreos-metadata[2004]: Sep 04 17:11:05.296 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 4 17:11:05.300697 coreos-metadata[2004]: Sep 04 17:11:05.300 INFO Fetch successful Sep 4 17:11:05.300697 coreos-metadata[2004]: Sep 04 17:11:05.300 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 4 17:11:05.306545 coreos-metadata[2004]: Sep 04 17:11:05.306 INFO Fetch successful Sep 4 17:11:05.306545 coreos-metadata[2004]: Sep 04 17:11:05.306 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 4 17:11:05.308345 coreos-metadata[2004]: Sep 04 17:11:05.308 INFO Fetch successful Sep 4 17:11:05.308345 coreos-metadata[2004]: Sep 04 17:11:05.308 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 4 17:11:05.313242 coreos-metadata[2004]: Sep 04 17:11:05.313 INFO Fetch successful Sep 4 17:11:05.313242 coreos-metadata[2004]: Sep 04 17:11:05.313 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 4 17:11:05.319585 coreos-metadata[2004]: Sep 04 17:11:05.319 INFO Fetch successful Sep 4 17:11:05.331172 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 4 17:11:05.370602 systemd-logind[2016]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 17:11:05.376254 extend-filesystems[2060]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 4 17:11:05.376254 extend-filesystems[2060]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:11:05.376254 extend-filesystems[2060]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 4 17:11:05.374871 systemd-logind[2016]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 4 17:11:05.406907 extend-filesystems[2007]: Resized filesystem in /dev/nvme0n1p9 Sep 4 17:11:05.406907 extend-filesystems[2007]: Found nvme0n1p1 Sep 4 17:11:05.376535 systemd-logind[2016]: New seat seat0. Sep 4 17:11:05.378026 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:11:05.378415 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 4 17:11:05.395874 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:11:05.431163 bash[2081]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:11:05.476137 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1785) Sep 4 17:11:05.518511 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:11:05.524378 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 17:11:05.545230 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:11:05.554590 systemd[1]: Starting sshkeys.service... Sep 4 17:11:05.667234 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 17:11:05.720230 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 17:11:05.728760 dbus-daemon[2005]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 17:11:05.731142 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 4 17:11:05.737990 dbus-daemon[2005]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2046 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 17:11:05.747852 systemd[1]: Starting polkit.service - Authorization Manager... Sep 4 17:11:05.759153 containerd[2042]: time="2024-09-04T17:11:05.756637799Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Sep 4 17:11:05.782001 polkitd[2140]: Started polkitd version 121 Sep 4 17:11:05.791549 polkitd[2140]: Loading rules from directory /etc/polkit-1/rules.d Sep 4 17:11:05.791682 polkitd[2140]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 4 17:11:05.793568 polkitd[2140]: Finished loading, compiling and executing 2 rules Sep 4 17:11:05.795074 dbus-daemon[2005]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 4 17:11:05.795480 systemd[1]: Started polkit.service - Authorization Manager. Sep 4 17:11:05.800183 polkitd[2140]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 4 17:11:05.828843 systemd-hostnamed[2046]: Hostname set to (transient) Sep 4 17:11:05.829020 systemd-resolved[1941]: System hostname changed to 'ip-172-31-29-45'. Sep 4 17:11:05.934345 locksmithd[2061]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:11:05.943320 systemd-networkd[1938]: eth0: Gained IPv6LL Sep 4 17:11:05.959689 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:11:05.967449 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:11:05.980922 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 4 17:11:05.989187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:06.005768 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:11:06.052703 containerd[2042]: time="2024-09-04T17:11:06.052640217Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 4 17:11:06.066217 coreos-metadata[2124]: Sep 04 17:11:06.063 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:11:06.068065 containerd[2042]: time="2024-09-04T17:11:06.063906249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:06.070593 coreos-metadata[2124]: Sep 04 17:11:06.068 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 4 17:11:06.071191 coreos-metadata[2124]: Sep 04 17:11:06.070 INFO Fetch successful Sep 4 17:11:06.071191 coreos-metadata[2124]: Sep 04 17:11:06.070 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 4 17:11:06.073290 coreos-metadata[2124]: Sep 04 17:11:06.071 INFO Fetch successful Sep 4 17:11:06.078711 unknown[2124]: wrote ssh authorized keys file for user: core Sep 4 17:11:06.110373 containerd[2042]: time="2024-09-04T17:11:06.109462965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:06.111354 containerd[2042]: time="2024-09-04T17:11:06.111290901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:06.117280 containerd[2042]: time="2024-09-04T17:11:06.117176733Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:06.117280 containerd[2042]: time="2024-09-04T17:11:06.117270333Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:11:06.121179 containerd[2042]: time="2024-09-04T17:11:06.119366001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:06.124811 containerd[2042]: time="2024-09-04T17:11:06.122257701Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:06.128313 containerd[2042]: time="2024-09-04T17:11:06.128237541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:06.128613 containerd[2042]: time="2024-09-04T17:11:06.128545245Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:06.137682 containerd[2042]: time="2024-09-04T17:11:06.137199297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:06.141401 containerd[2042]: time="2024-09-04T17:11:06.137302917Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 4 17:11:06.141401 containerd[2042]: time="2024-09-04T17:11:06.140181477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:06.152184 containerd[2042]: time="2024-09-04T17:11:06.146825085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:06.155017 containerd[2042]: time="2024-09-04T17:11:06.151182309Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:11:06.158678 containerd[2042]: time="2024-09-04T17:11:06.158603793Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 4 17:11:06.158678 containerd[2042]: time="2024-09-04T17:11:06.158666901Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:11:06.161558 amazon-ssm-agent[2189]: Initializing new seelog logger Sep 4 17:11:06.162340 amazon-ssm-agent[2189]: New Seelog Logger Creation Complete Sep 4 17:11:06.162584 amazon-ssm-agent[2189]: 2024/09/04 17:11:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:11:06.162672 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:11:06.163520 amazon-ssm-agent[2189]: 2024/09/04 17:11:06 processing appconfig overrides Sep 4 17:11:06.165849 amazon-ssm-agent[2189]: 2024/09/04 17:11:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:11:06.171309 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:11:06.171309 amazon-ssm-agent[2189]: 2024/09/04 17:11:06 processing appconfig overrides Sep 4 17:11:06.171309 amazon-ssm-agent[2189]: 2024/09/04 17:11:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:11:06.171309 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:11:06.171309 amazon-ssm-agent[2189]: 2024/09/04 17:11:06 processing appconfig overrides Sep 4 17:11:06.174584 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO Proxy environment variables: Sep 4 17:11:06.175507 update-ssh-keys[2204]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:11:06.179250 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 17:11:06.189164 systemd[1]: Finished sshkeys.service. Sep 4 17:11:06.195182 containerd[2042]: time="2024-09-04T17:11:06.194289009Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:11:06.195182 containerd[2042]: time="2024-09-04T17:11:06.194355633Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:11:06.195182 containerd[2042]: time="2024-09-04T17:11:06.194386497Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:11:06.196075 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:11:06.202738 amazon-ssm-agent[2189]: 2024/09/04 17:11:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:11:06.202738 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:11:06.202738 amazon-ssm-agent[2189]: 2024/09/04 17:11:06 processing appconfig overrides Sep 4 17:11:06.202883 containerd[2042]: time="2024-09-04T17:11:06.197006877Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:11:06.202883 containerd[2042]: time="2024-09-04T17:11:06.197090781Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Sep 4 17:11:06.202883 containerd[2042]: time="2024-09-04T17:11:06.199207641Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:11:06.202883 containerd[2042]: time="2024-09-04T17:11:06.199242045Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:11:06.202883 containerd[2042]: time="2024-09-04T17:11:06.199496769Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:11:06.202883 containerd[2042]: time="2024-09-04T17:11:06.199536345Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:11:06.202883 containerd[2042]: time="2024-09-04T17:11:06.199569297Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:11:06.202883 containerd[2042]: time="2024-09-04T17:11:06.199601553Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:11:06.202883 containerd[2042]: time="2024-09-04T17:11:06.199636005Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:11:06.202883 containerd[2042]: time="2024-09-04T17:11:06.199676817Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:11:06.202883 containerd[2042]: time="2024-09-04T17:11:06.199707957Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:11:06.202883 containerd[2042]: time="2024-09-04T17:11:06.199737897Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:11:06.202883 containerd[2042]: time="2024-09-04T17:11:06.199769253Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:11:06.202883 containerd[2042]: time="2024-09-04T17:11:06.199799121Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:11:06.203533 containerd[2042]: time="2024-09-04T17:11:06.199827657Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:11:06.203533 containerd[2042]: time="2024-09-04T17:11:06.199854177Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:11:06.206615 containerd[2042]: time="2024-09-04T17:11:06.206403922Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:11:06.208570 containerd[2042]: time="2024-09-04T17:11:06.206849098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:11:06.208570 containerd[2042]: time="2024-09-04T17:11:06.206912794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.208570 containerd[2042]: time="2024-09-04T17:11:06.206949382Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:11:06.208570 containerd[2042]: time="2024-09-04T17:11:06.206998558Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Sep 4 17:11:06.212216 containerd[2042]: time="2024-09-04T17:11:06.211023718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.212216 containerd[2042]: time="2024-09-04T17:11:06.211088854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.212216 containerd[2042]: time="2024-09-04T17:11:06.211141294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.212216 containerd[2042]: time="2024-09-04T17:11:06.211177858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.212216 containerd[2042]: time="2024-09-04T17:11:06.211209946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.212216 containerd[2042]: time="2024-09-04T17:11:06.211239682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.212216 containerd[2042]: time="2024-09-04T17:11:06.211268506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.212216 containerd[2042]: time="2024-09-04T17:11:06.211300342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.212216 containerd[2042]: time="2024-09-04T17:11:06.211333450Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:11:06.212216 containerd[2042]: time="2024-09-04T17:11:06.211664206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.216129 containerd[2042]: time="2024-09-04T17:11:06.211710442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.216129 containerd[2042]: time="2024-09-04T17:11:06.213227530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.216129 containerd[2042]: time="2024-09-04T17:11:06.213264262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.216129 containerd[2042]: time="2024-09-04T17:11:06.213296098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.216129 containerd[2042]: time="2024-09-04T17:11:06.213336274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.216129 containerd[2042]: time="2024-09-04T17:11:06.213365782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:11:06.216129 containerd[2042]: time="2024-09-04T17:11:06.213393286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 17:11:06.216521 containerd[2042]: time="2024-09-04T17:11:06.213838810Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:11:06.216521 containerd[2042]: time="2024-09-04T17:11:06.213947266Z" level=info msg="Connect containerd service" Sep 4 17:11:06.216521 containerd[2042]: time="2024-09-04T17:11:06.214008706Z" level=info msg="using legacy CRI server" Sep 4 17:11:06.216521 containerd[2042]: time="2024-09-04T17:11:06.214026274Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:11:06.222518 containerd[2042]: time="2024-09-04T17:11:06.219291118Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:11:06.222518 containerd[2042]: time="2024-09-04T17:11:06.221493382Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:11:06.222518 
containerd[2042]: time="2024-09-04T17:11:06.221599054Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:11:06.222518 containerd[2042]: time="2024-09-04T17:11:06.221642734Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:11:06.222518 containerd[2042]: time="2024-09-04T17:11:06.221669290Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:11:06.222518 containerd[2042]: time="2024-09-04T17:11:06.221698918Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:11:06.222518 containerd[2042]: time="2024-09-04T17:11:06.221814658Z" level=info msg="Start subscribing containerd event" Sep 4 17:11:06.222518 containerd[2042]: time="2024-09-04T17:11:06.221910586Z" level=info msg="Start recovering state" Sep 4 17:11:06.222518 containerd[2042]: time="2024-09-04T17:11:06.222039634Z" level=info msg="Start event monitor" Sep 4 17:11:06.222518 containerd[2042]: time="2024-09-04T17:11:06.222063598Z" level=info msg="Start snapshots syncer" Sep 4 17:11:06.222518 containerd[2042]: time="2024-09-04T17:11:06.222085618Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:11:06.222518 containerd[2042]: time="2024-09-04T17:11:06.222136234Z" level=info msg="Start streaming server" Sep 4 17:11:06.238134 containerd[2042]: time="2024-09-04T17:11:06.227165938Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:11:06.238134 containerd[2042]: time="2024-09-04T17:11:06.227308222Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:11:06.238134 containerd[2042]: time="2024-09-04T17:11:06.227730310Z" level=info msg="containerd successfully booted in 0.486828s" Sep 4 17:11:06.227557 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:11:06.279388 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO https_proxy: Sep 4 17:11:06.384995 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO http_proxy: Sep 4 17:11:06.484708 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO no_proxy: Sep 4 17:11:06.585216 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO Checking if agent identity type OnPrem can be assumed Sep 4 17:11:06.683801 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO Checking if agent identity type EC2 can be assumed Sep 4 17:11:06.784311 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO Agent will take identity from EC2 Sep 4 17:11:06.844047 sshd_keygen[2037]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:11:06.884143 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:11:06.938521 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:11:06.955629 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:11:06.964275 systemd[1]: Started sshd@0-172.31.29.45:22-139.178.89.65:59732.service - OpenSSH per-connection server daemon (139.178.89.65:59732). Sep 4 17:11:06.986293 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:11:06.999593 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:11:07.003131 systemd[1]: Finished issuegen.service - Generate /run/issue. 
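The only error in the containerd start-up above is the CNI loader finding no network config in /etc/cni/net.d, which is normal before a pod network has been installed; the CRI plugin keeps retrying and picks up a conflist as soon as one appears under the NetworkPluginConfDir it printed. Purely as a hypothetical illustration (this boot leaves it to the cluster's network add-on), a basic bridge conflist that would satisfy the loader, assuming the standard bridge/host-local/portmap plugins exist under /opt/cni/bin:

    sudo mkdir -p /etc/cni/net.d
    cat <<'EOF' | sudo tee /etc/cni/net.d/10-containerd-net.conflist
    {
      "cniVersion": "0.4.0",
      "name": "containerd-net",
      "plugins": [
        { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local",
                    "ranges": [[{ "subnet": "10.88.0.0/16" }]],
                    "routes": [{ "dst": "0.0.0.0/0" }] } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF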
Sep 4 17:11:07.019621 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:11:07.085713 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:11:07.089712 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:11:07.114791 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:11:07.127670 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:11:07.132987 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 4 17:11:07.132965 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:11:07.134264 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 4 17:11:07.134413 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO [amazon-ssm-agent] Starting Core Agent Sep 4 17:11:07.134530 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 4 17:11:07.134643 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO [Registrar] Starting registrar module Sep 4 17:11:07.134785 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 4 17:11:07.135040 amazon-ssm-agent[2189]: 2024-09-04 17:11:07 INFO [EC2Identity] EC2 registration was successful. Sep 4 17:11:07.136952 amazon-ssm-agent[2189]: 2024-09-04 17:11:07 INFO [CredentialRefresher] credentialRefresher has started Sep 4 17:11:07.136952 amazon-ssm-agent[2189]: 2024-09-04 17:11:07 INFO [CredentialRefresher] Starting credentials refresher loop Sep 4 17:11:07.136952 amazon-ssm-agent[2189]: 2024-09-04 17:11:07 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 4 17:11:07.185245 amazon-ssm-agent[2189]: 2024-09-04 17:11:07 INFO [CredentialRefresher] Next credential rotation will be in 30.191596645366666 minutes Sep 4 17:11:07.208083 sshd[2238]: Accepted publickey for core from 139.178.89.65 port 59732 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:07.210323 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:07.236835 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:11:07.249800 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:11:07.263604 systemd-logind[2016]: New session 1 of user core. Sep 4 17:11:07.303176 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:11:07.322824 tar[2028]: linux-arm64/LICENSE Sep 4 17:11:07.322824 tar[2028]: linux-arm64/README.md Sep 4 17:11:07.323677 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:11:07.349014 (systemd)[2249]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:07.371830 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:11:07.583940 systemd[2249]: Queued start job for default target default.target. Sep 4 17:11:07.600171 systemd[2249]: Created slice app.slice - User Application Slice. Sep 4 17:11:07.600223 systemd[2249]: Reached target paths.target - Paths. Sep 4 17:11:07.600256 systemd[2249]: Reached target timers.target - Timers. Sep 4 17:11:07.604364 systemd[2249]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:11:07.633405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:11:07.634550 systemd[2249]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:11:07.639241 systemd[2249]: Reached target sockets.target - Sockets. Sep 4 17:11:07.639468 systemd[2249]: Reached target basic.target - Basic System. Sep 4 17:11:07.639672 systemd[2249]: Reached target default.target - Main User Target. Sep 4 17:11:07.639849 systemd[2249]: Startup finished in 275ms. Sep 4 17:11:07.640096 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:11:07.643615 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:11:07.654624 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:11:07.655931 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:11:07.663726 systemd[1]: Startup finished in 1.157s (kernel) + 9.154s (initrd) + 7.760s (userspace) = 18.071s. Sep 4 17:11:07.824698 systemd[1]: Started sshd@1-172.31.29.45:22-139.178.89.65:37160.service - OpenSSH per-connection server daemon (139.178.89.65:37160). Sep 4 17:11:08.016244 sshd[2273]: Accepted publickey for core from 139.178.89.65 port 37160 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:08.018794 sshd[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:08.027138 systemd-logind[2016]: New session 2 of user core. Sep 4 17:11:08.034403 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:11:08.116254 ntpd[2009]: Listen normally on 6 eth0 [fe80::4e9:85ff:fe04:d94d%2]:123 Sep 4 17:11:08.116949 ntpd[2009]: 4 Sep 17:11:08 ntpd[2009]: Listen normally on 6 eth0 [fe80::4e9:85ff:fe04:d94d%2]:123 Sep 4 17:11:08.168264 amazon-ssm-agent[2189]: 2024-09-04 17:11:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 4 17:11:08.167470 sshd[2273]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:08.178805 systemd[1]: sshd@1-172.31.29.45:22-139.178.89.65:37160.service: Deactivated successfully. Sep 4 17:11:08.184736 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:11:08.188703 systemd-logind[2016]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:11:08.216705 systemd[1]: Started sshd@2-172.31.29.45:22-139.178.89.65:37170.service - OpenSSH per-connection server daemon (139.178.89.65:37170). Sep 4 17:11:08.219210 systemd-logind[2016]: Removed session 2. Sep 4 17:11:08.269542 amazon-ssm-agent[2189]: 2024-09-04 17:11:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2282) started Sep 4 17:11:08.368287 amazon-ssm-agent[2189]: 2024-09-04 17:11:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 4 17:11:08.436507 sshd[2287]: Accepted publickey for core from 139.178.89.65 port 37170 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:08.439051 sshd[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:08.452149 systemd-logind[2016]: New session 3 of user core. Sep 4 17:11:08.457489 systemd[1]: Started session-3.scope - Session 3 of User core. 
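ntpd can now bind the link-local address it failed on earlier (eth0 gained its IPv6 address between the two attempts), but the kernel reported TIME_ERROR / clock unsynchronized at start-up. Whether a time source has since been selected can be checked from a shell on the node; a sketch using the stock ntp and systemd tooling:

    # list ntpd's peers; an asterisk marks the currently selected source
    ntpq -p
    # the kernel's view of sync state, i.e. whether the unsynchronized flag has cleared
    timedatectl show -p NTPSynchronized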
Sep 4 17:11:08.544812 kubelet[2264]: E0904 17:11:08.544519 2264 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:11:08.549755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:11:08.551186 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:11:08.553262 systemd[1]: kubelet.service: Consumed 1.315s CPU time. Sep 4 17:11:08.578355 sshd[2287]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:08.583597 systemd[1]: sshd@2-172.31.29.45:22-139.178.89.65:37170.service: Deactivated successfully. Sep 4 17:11:08.586626 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:11:08.590742 systemd-logind[2016]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:11:08.592338 systemd-logind[2016]: Removed session 3. Sep 4 17:11:08.619623 systemd[1]: Started sshd@3-172.31.29.45:22-139.178.89.65:37184.service - OpenSSH per-connection server daemon (139.178.89.65:37184). Sep 4 17:11:08.810973 sshd[2304]: Accepted publickey for core from 139.178.89.65 port 37184 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:08.813432 sshd[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:08.820948 systemd-logind[2016]: New session 4 of user core. Sep 4 17:11:08.831398 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:11:08.962565 sshd[2304]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:08.968489 systemd[1]: sshd@3-172.31.29.45:22-139.178.89.65:37184.service: Deactivated successfully. Sep 4 17:11:08.971555 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:11:08.972823 systemd-logind[2016]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:11:08.974822 systemd-logind[2016]: Removed session 4. Sep 4 17:11:08.999302 systemd[1]: Started sshd@4-172.31.29.45:22-139.178.89.65:37190.service - OpenSSH per-connection server daemon (139.178.89.65:37190). Sep 4 17:11:09.180365 sshd[2311]: Accepted publickey for core from 139.178.89.65 port 37190 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:09.182844 sshd[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:09.191222 systemd-logind[2016]: New session 5 of user core. Sep 4 17:11:09.198411 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:11:09.316274 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:11:09.316919 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:11:09.333184 sudo[2314]: pam_unix(sudo:session): session closed for user root Sep 4 17:11:09.356806 sshd[2311]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:09.362735 systemd[1]: sshd@4-172.31.29.45:22-139.178.89.65:37190.service: Deactivated successfully. Sep 4 17:11:09.366186 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:11:09.370054 systemd-logind[2016]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:11:09.372042 systemd-logind[2016]: Removed session 5. Sep 4 17:11:09.404624 systemd[1]: Started sshd@5-172.31.29.45:22-139.178.89.65:37206.service - OpenSSH per-connection server daemon (139.178.89.65:37206). 
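The kubelet exit above is the expected pre-bootstrap failure: the unit starts before anything has written /var/lib/kubelet/config.yaml, so it dies and systemd schedules a restart. Normally kubeadm init/join generates that file; purely as a hypothetical sketch of the shape it takes (matching the SystemdCgroup=true runc setting containerd printed earlier):

    # illustration only - kubeadm writes the real file during bootstrap
    sudo mkdir -p /var/lib/kubelet
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF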
Sep 4 17:11:09.589300 sshd[2319]: Accepted publickey for core from 139.178.89.65 port 37206 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:09.592007 sshd[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:09.600870 systemd-logind[2016]: New session 6 of user core. Sep 4 17:11:09.614441 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:11:09.720550 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:11:09.721062 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:11:09.727280 sudo[2323]: pam_unix(sudo:session): session closed for user root Sep 4 17:11:09.737406 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:11:09.737996 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:11:09.765598 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:11:09.768266 auditctl[2326]: No rules Sep 4 17:11:09.768968 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:11:09.769400 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:11:09.779742 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:11:09.825201 augenrules[2344]: No rules Sep 4 17:11:09.829181 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:11:09.832435 sudo[2322]: pam_unix(sudo:session): session closed for user root Sep 4 17:11:09.856924 sshd[2319]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:09.864381 systemd[1]: sshd@5-172.31.29.45:22-139.178.89.65:37206.service: Deactivated successfully. Sep 4 17:11:09.868774 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:11:09.874184 systemd-logind[2016]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:11:09.876092 systemd-logind[2016]: Removed session 6. Sep 4 17:11:09.898693 systemd[1]: Started sshd@6-172.31.29.45:22-139.178.89.65:37210.service - OpenSSH per-connection server daemon (139.178.89.65:37210). Sep 4 17:11:10.080638 sshd[2352]: Accepted publickey for core from 139.178.89.65 port 37210 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:10.083079 sshd[2352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:10.092489 systemd-logind[2016]: New session 7 of user core. Sep 4 17:11:10.105423 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:11:10.212269 sudo[2355]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:11:10.212811 sudo[2355]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:11:10.398592 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:11:10.410632 (dockerd)[2364]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:11:10.753405 dockerd[2364]: time="2024-09-04T17:11:10.753304012Z" level=info msg="Starting up" Sep 4 17:11:11.084002 dockerd[2364]: time="2024-09-04T17:11:11.083838734Z" level=info msg="Loading containers: start." 
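Session 6 above resets the audit configuration: the two shipped rule files are removed, audit-rules is restarted, and auditctl/augenrules both report an empty rule set. The same sequence as plain shell, mirroring the commands sudo logged:

    # drop the packaged rule fragments and reload an (empty) rule set
    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules
    # confirm the kernel now has no audit rules loaded
    sudo auditctl -l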
Sep 4 17:11:11.234150 kernel: Initializing XFRM netlink socket Sep 4 17:11:11.267371 (udev-worker)[2378]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:11:11.344639 systemd-networkd[1938]: docker0: Link UP Sep 4 17:11:11.371793 dockerd[2364]: time="2024-09-04T17:11:11.371658279Z" level=info msg="Loading containers: done." Sep 4 17:11:11.462578 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck857484284-merged.mount: Deactivated successfully. Sep 4 17:11:11.466412 dockerd[2364]: time="2024-09-04T17:11:11.466314892Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:11:11.466673 dockerd[2364]: time="2024-09-04T17:11:11.466623976Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Sep 4 17:11:11.466867 dockerd[2364]: time="2024-09-04T17:11:11.466832872Z" level=info msg="Daemon has completed initialization" Sep 4 17:11:11.525350 dockerd[2364]: time="2024-09-04T17:11:11.525242020Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:11:11.525801 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:11:11.962911 systemd-resolved[1941]: Clock change detected. Flushing caches. Sep 4 17:11:12.356567 containerd[2042]: time="2024-09-04T17:11:12.356404901Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\"" Sep 4 17:11:12.999421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3554553634.mount: Deactivated successfully. Sep 4 17:11:14.923656 containerd[2042]: time="2024-09-04T17:11:14.923578510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:14.925136 containerd[2042]: time="2024-09-04T17:11:14.925063630Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=32283562" Sep 4 17:11:14.929860 containerd[2042]: time="2024-09-04T17:11:14.929790394Z" level=info msg="ImageCreate event name:\"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:14.936132 containerd[2042]: time="2024-09-04T17:11:14.936044998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:14.938537 containerd[2042]: time="2024-09-04T17:11:14.938275090Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"32280362\" in 2.581789057s" Sep 4 17:11:14.938537 containerd[2042]: time="2024-09-04T17:11:14.938342458Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\"" Sep 4 17:11:14.979173 containerd[2042]: time="2024-09-04T17:11:14.978842446Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\"" Sep 4 17:11:16.932941 containerd[2042]: time="2024-09-04T17:11:16.932862000Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:16.936768 containerd[2042]: time="2024-09-04T17:11:16.934984848Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=29368210" Sep 4 17:11:16.940624 containerd[2042]: time="2024-09-04T17:11:16.940551912Z" level=info msg="ImageCreate event name:\"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:16.946456 containerd[2042]: time="2024-09-04T17:11:16.946396536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:16.948894 containerd[2042]: time="2024-09-04T17:11:16.948834252Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"30855477\" in 1.969928758s" Sep 4 17:11:16.949092 containerd[2042]: time="2024-09-04T17:11:16.949059840Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\"" Sep 4 17:11:16.989671 containerd[2042]: time="2024-09-04T17:11:16.989540304Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\"" Sep 4 17:11:18.210673 containerd[2042]: time="2024-09-04T17:11:18.210579311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:18.212886 containerd[2042]: time="2024-09-04T17:11:18.212813087Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=15751073" Sep 4 17:11:18.214246 containerd[2042]: time="2024-09-04T17:11:18.214120367Z" level=info msg="ImageCreate event name:\"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:18.220180 containerd[2042]: time="2024-09-04T17:11:18.220124267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:18.223845 containerd[2042]: time="2024-09-04T17:11:18.223604723Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"17238358\" in 1.233667171s" Sep 4 17:11:18.223845 containerd[2042]: time="2024-09-04T17:11:18.223691135Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\"" Sep 4 17:11:18.264798 containerd[2042]: time="2024-09-04T17:11:18.264473135Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\"" Sep 4 
17:11:18.647111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:11:18.660397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:19.206111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:19.218358 (kubelet)[2587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:11:19.322814 kubelet[2587]: E0904 17:11:19.322147 2587 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:11:19.332595 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:11:19.332987 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:11:19.888004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1754993684.mount: Deactivated successfully. Sep 4 17:11:20.456812 containerd[2042]: time="2024-09-04T17:11:20.456157778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:20.465663 containerd[2042]: time="2024-09-04T17:11:20.465574370Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=25251883" Sep 4 17:11:20.487156 containerd[2042]: time="2024-09-04T17:11:20.487067582Z" level=info msg="ImageCreate event name:\"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:20.506669 containerd[2042]: time="2024-09-04T17:11:20.506568614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:20.508778 containerd[2042]: time="2024-09-04T17:11:20.508238426Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"25250902\" in 2.243705399s" Sep 4 17:11:20.508778 containerd[2042]: time="2024-09-04T17:11:20.508297562Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\"" Sep 4 17:11:20.551539 containerd[2042]: time="2024-09-04T17:11:20.551222462Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:11:21.124683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843564107.mount: Deactivated successfully. 
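The PullImage/ImageCreate lines are emitted by containerd's CRI plugin on behalf of whatever client requested the images (kubelet itself is still crash-looping on the missing config file at this point). The same images can be listed or pulled directly against the containerd socket; a sketch assuming the k8s.io namespace the CRI plugin uses:

    # list images already present in the CRI (k8s.io) namespace
    sudo ctr -n k8s.io images ls -q
    # pull one of the control-plane images by hand, mirroring the log above
    sudo ctr -n k8s.io images pull registry.k8s.io/kube-proxy:v1.29.8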
Sep 4 17:11:22.260357 containerd[2042]: time="2024-09-04T17:11:22.259926903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:22.263129 containerd[2042]: time="2024-09-04T17:11:22.262943895Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Sep 4 17:11:22.263129 containerd[2042]: time="2024-09-04T17:11:22.263043243Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:22.269354 containerd[2042]: time="2024-09-04T17:11:22.269295783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:22.272131 containerd[2042]: time="2024-09-04T17:11:22.271901739Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.720615257s" Sep 4 17:11:22.272131 containerd[2042]: time="2024-09-04T17:11:22.271972707Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Sep 4 17:11:22.315202 containerd[2042]: time="2024-09-04T17:11:22.315119727Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:11:22.828482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount776707544.mount: Deactivated successfully. 
Sep 4 17:11:22.836372 containerd[2042]: time="2024-09-04T17:11:22.836284626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:22.838825 containerd[2042]: time="2024-09-04T17:11:22.838757658Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Sep 4 17:11:22.840979 containerd[2042]: time="2024-09-04T17:11:22.840857550Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:22.845285 containerd[2042]: time="2024-09-04T17:11:22.845193114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:22.847793 containerd[2042]: time="2024-09-04T17:11:22.847202466Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 532.008435ms" Sep 4 17:11:22.847793 containerd[2042]: time="2024-09-04T17:11:22.847273530Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Sep 4 17:11:22.887319 containerd[2042]: time="2024-09-04T17:11:22.887271846Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:11:23.476601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount411322010.mount: Deactivated successfully. Sep 4 17:11:26.500031 containerd[2042]: time="2024-09-04T17:11:26.499966808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:26.503849 containerd[2042]: time="2024-09-04T17:11:26.503767052Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Sep 4 17:11:26.506215 containerd[2042]: time="2024-09-04T17:11:26.505103300Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:26.513787 containerd[2042]: time="2024-09-04T17:11:26.513689468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:26.517235 containerd[2042]: time="2024-09-04T17:11:26.517155092Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.629612814s" Sep 4 17:11:26.517455 containerd[2042]: time="2024-09-04T17:11:26.517420592Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Sep 4 17:11:29.583433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 4 17:11:29.593302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:30.036994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:30.053591 (kubelet)[2774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:11:30.146007 kubelet[2774]: E0904 17:11:30.145941 2774 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:11:30.150846 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:11:30.152103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:11:34.938923 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:34.948275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:34.994182 systemd[1]: Reloading requested from client PID 2790 ('systemctl') (unit session-7.scope)... Sep 4 17:11:34.994216 systemd[1]: Reloading... Sep 4 17:11:35.204779 zram_generator::config[2832]: No configuration found. Sep 4 17:11:35.433000 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:11:35.603373 systemd[1]: Reloading finished in 608 ms. Sep 4 17:11:35.686022 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:11:35.686211 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:11:35.686650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:35.695383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:35.710195 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 4 17:11:36.047226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:36.048871 (kubelet)[2894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:11:36.143780 kubelet[2894]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:11:36.143780 kubelet[2894]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:11:36.143780 kubelet[2894]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
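Two housekeeping items surface during the reload: docker.socket still lists the legacy /var/run/docker.sock path (systemd rewrites it to /run/docker.sock at runtime), and the restarted kubelet warns that several of its flags belong in the config file instead. For the first, a hypothetical local override that makes the unit match what systemd already does, done as a drop-in because the vendor unit lives on the read-only /usr:

    sudo mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' | sudo tee /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    # clear the inherited ListenStream=, then set the non-legacy path
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    sudo systemctl daemon-reload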
Sep 4 17:11:36.145400 kubelet[2894]: I0904 17:11:36.145310 2894 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:11:37.217688 kubelet[2894]: I0904 17:11:37.217629 2894 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:11:37.217688 kubelet[2894]: I0904 17:11:37.217681 2894 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:11:37.218608 kubelet[2894]: I0904 17:11:37.218056 2894 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:11:37.248213 kubelet[2894]: I0904 17:11:37.248003 2894 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:11:37.248805 kubelet[2894]: E0904 17:11:37.248758 2894 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:37.267958 kubelet[2894]: I0904 17:11:37.267861 2894 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:11:37.268327 kubelet[2894]: I0904 17:11:37.268299 2894 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:11:37.268654 kubelet[2894]: I0904 17:11:37.268617 2894 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:11:37.268848 kubelet[2894]: I0904 17:11:37.268665 2894 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:11:37.268848 kubelet[2894]: I0904 17:11:37.268687 2894 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:11:37.271064 kubelet[2894]: I0904 17:11:37.271009 2894 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:11:37.275395 kubelet[2894]: I0904 17:11:37.275343 2894 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:11:37.275395 kubelet[2894]: I0904 
17:11:37.275396 2894 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:11:37.275775 kubelet[2894]: I0904 17:11:37.275452 2894 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:11:37.275775 kubelet[2894]: I0904 17:11:37.275484 2894 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:11:37.278587 kubelet[2894]: W0904 17:11:37.278507 2894 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:37.278770 kubelet[2894]: E0904 17:11:37.278612 2894 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:37.279784 kubelet[2894]: W0904 17:11:37.279151 2894 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-45&limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:37.279784 kubelet[2894]: E0904 17:11:37.279215 2894 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-45&limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:37.279784 kubelet[2894]: I0904 17:11:37.279524 2894 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:11:37.280772 kubelet[2894]: I0904 17:11:37.280053 2894 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:11:37.281168 kubelet[2894]: W0904 17:11:37.281124 2894 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
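Every `connection refused` to `https://172.31.29.45:6443` above is a client-go reflector retrying before the static kube-apiserver pod has started; the loop resolves itself once that pod is up. A minimal illustration of the same wait-and-retry pattern, as a plain TCP probe with capped backoff (not kubelet code):

```python
#!/usr/bin/env python3
"""Illustrative wait-for-apiserver loop mirroring the retries in the log."""
import socket
import time

def wait_for_apiserver(host: str = "172.31.29.45", port: int = 6443,
                       timeout_s: float = 120.0) -> bool:
    """Retry a TCP connect with capped exponential backoff until it succeeds."""
    deadline = time.monotonic() + timeout_s
    delay = 0.5
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True                      # port is accepting connections
        except OSError as err:                   # e.g. ECONNREFUSED, as in the log
            print(f"connect {host}:{port}: {err}; retrying in {delay:.1f}s")
            time.sleep(delay)
            delay = min(delay * 2, 10.0)
    return False

if __name__ == "__main__":
    print("apiserver reachable:", wait_for_apiserver())
```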
Sep 4 17:11:37.282891 kubelet[2894]: I0904 17:11:37.282838 2894 server.go:1256] "Started kubelet" Sep 4 17:11:37.292926 kubelet[2894]: E0904 17:11:37.292873 2894 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.45:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.45:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-45.17f219baca2b9c35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-45,UID:ip-172-31-29-45,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-45,},FirstTimestamp:2024-09-04 17:11:37.282788405 +0000 UTC m=+1.223966719,LastTimestamp:2024-09-04 17:11:37.282788405 +0000 UTC m=+1.223966719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-45,}" Sep 4 17:11:37.293559 kubelet[2894]: I0904 17:11:37.293522 2894 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:11:37.297509 kubelet[2894]: I0904 17:11:37.297448 2894 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:11:37.298960 kubelet[2894]: I0904 17:11:37.298913 2894 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:11:37.300804 kubelet[2894]: I0904 17:11:37.300702 2894 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:11:37.301423 kubelet[2894]: I0904 17:11:37.301041 2894 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:11:37.302928 kubelet[2894]: I0904 17:11:37.302884 2894 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:11:37.303670 kubelet[2894]: I0904 17:11:37.303508 2894 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:11:37.303670 kubelet[2894]: I0904 17:11:37.303622 2894 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:11:37.304783 kubelet[2894]: W0904 17:11:37.304147 2894 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:37.304783 kubelet[2894]: E0904 17:11:37.304239 2894 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:37.304783 kubelet[2894]: E0904 17:11:37.304366 2894 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-45?timeout=10s\": dial tcp 172.31.29.45:6443: connect: connection refused" interval="200ms" Sep 4 17:11:37.306596 kubelet[2894]: E0904 17:11:37.306471 2894 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:11:37.308304 kubelet[2894]: I0904 17:11:37.308253 2894 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:11:37.308461 kubelet[2894]: I0904 17:11:37.308419 2894 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:11:37.312781 kubelet[2894]: I0904 17:11:37.311838 2894 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:11:37.347565 kubelet[2894]: I0904 17:11:37.347490 2894 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:11:37.351547 kubelet[2894]: I0904 17:11:37.351497 2894 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:11:37.352357 kubelet[2894]: I0904 17:11:37.351801 2894 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:11:37.352357 kubelet[2894]: I0904 17:11:37.351847 2894 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:11:37.352357 kubelet[2894]: E0904 17:11:37.351936 2894 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:11:37.360118 kubelet[2894]: W0904 17:11:37.359942 2894 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:37.360118 kubelet[2894]: E0904 17:11:37.360010 2894 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:37.360816 kubelet[2894]: I0904 17:11:37.360774 2894 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:11:37.360816 kubelet[2894]: I0904 17:11:37.360815 2894 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:11:37.360955 kubelet[2894]: I0904 17:11:37.360845 2894 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:11:37.365183 kubelet[2894]: I0904 17:11:37.365036 2894 policy_none.go:49] "None policy: Start" Sep 4 17:11:37.366855 kubelet[2894]: I0904 17:11:37.366378 2894 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:11:37.366855 kubelet[2894]: I0904 17:11:37.366456 2894 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:11:37.378264 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:11:37.391323 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:11:37.399122 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
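The container manager config dumped earlier in the log carries the default hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%. The sketch below only re-states those numbers as a check against hypothetical node stats; the evaluation logic is an illustration, not the kubelet's eviction manager.

```python
#!/usr/bin/env python3
"""Which of the HardEvictionThresholds from the log would fire for given stats."""

MIB = 1024 * 1024

def eviction_signals(memory_available_bytes: int,
                     nodefs_available_frac: float,
                     nodefs_inodes_free_frac: float,
                     imagefs_available_frac: float) -> list[str]:
    """Return the hard-eviction signals that would fire for the given stats."""
    fired = []
    if memory_available_bytes < 100 * MIB:
        fired.append("memory.available")
    if nodefs_available_frac < 0.10:
        fired.append("nodefs.available")
    if nodefs_inodes_free_frac < 0.05:
        fired.append("nodefs.inodesFree")
    if imagefs_available_frac < 0.15:
        fired.append("imagefs.available")
    return fired

if __name__ == "__main__":
    # Hypothetical stats: plenty of memory, image filesystem running low.
    print(eviction_signals(2048 * MIB, 0.40, 0.50, 0.12))  # -> ['imagefs.available']
```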
Sep 4 17:11:37.407046 kubelet[2894]: I0904 17:11:37.406810 2894 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:11:37.407243 kubelet[2894]: I0904 17:11:37.407214 2894 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:11:37.409798 kubelet[2894]: I0904 17:11:37.409432 2894 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-45" Sep 4 17:11:37.410427 kubelet[2894]: E0904 17:11:37.410384 2894 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.45:6443/api/v1/nodes\": dial tcp 172.31.29.45:6443: connect: connection refused" node="ip-172-31-29-45" Sep 4 17:11:37.412260 kubelet[2894]: E0904 17:11:37.412216 2894 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-45\" not found" Sep 4 17:11:37.453284 kubelet[2894]: I0904 17:11:37.452939 2894 topology_manager.go:215] "Topology Admit Handler" podUID="7fa0fece7bba66d4e895b6f8594341e5" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-45" Sep 4 17:11:37.455930 kubelet[2894]: I0904 17:11:37.455359 2894 topology_manager.go:215] "Topology Admit Handler" podUID="5765e44dc114dc4ea25997fdacfdd636" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-45" Sep 4 17:11:37.458035 kubelet[2894]: I0904 17:11:37.457707 2894 topology_manager.go:215] "Topology Admit Handler" podUID="2f703472f21acf02d4f40e77eb59ca41" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-45" Sep 4 17:11:37.472390 systemd[1]: Created slice kubepods-burstable-pod7fa0fece7bba66d4e895b6f8594341e5.slice - libcontainer container kubepods-burstable-pod7fa0fece7bba66d4e895b6f8594341e5.slice. Sep 4 17:11:37.491099 systemd[1]: Created slice kubepods-burstable-pod5765e44dc114dc4ea25997fdacfdd636.slice - libcontainer container kubepods-burstable-pod5765e44dc114dc4ea25997fdacfdd636.slice. 
Sep 4 17:11:37.505275 kubelet[2894]: I0904 17:11:37.504933 2894 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fa0fece7bba66d4e895b6f8594341e5-ca-certs\") pod \"kube-apiserver-ip-172-31-29-45\" (UID: \"7fa0fece7bba66d4e895b6f8594341e5\") " pod="kube-system/kube-apiserver-ip-172-31-29-45" Sep 4 17:11:37.505275 kubelet[2894]: I0904 17:11:37.505006 2894 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fa0fece7bba66d4e895b6f8594341e5-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-45\" (UID: \"7fa0fece7bba66d4e895b6f8594341e5\") " pod="kube-system/kube-apiserver-ip-172-31-29-45" Sep 4 17:11:37.505275 kubelet[2894]: I0904 17:11:37.505069 2894 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fa0fece7bba66d4e895b6f8594341e5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-45\" (UID: \"7fa0fece7bba66d4e895b6f8594341e5\") " pod="kube-system/kube-apiserver-ip-172-31-29-45" Sep 4 17:11:37.505275 kubelet[2894]: I0904 17:11:37.505157 2894 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5765e44dc114dc4ea25997fdacfdd636-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-45\" (UID: \"5765e44dc114dc4ea25997fdacfdd636\") " pod="kube-system/kube-controller-manager-ip-172-31-29-45" Sep 4 17:11:37.505275 kubelet[2894]: I0904 17:11:37.505231 2894 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5765e44dc114dc4ea25997fdacfdd636-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-45\" (UID: \"5765e44dc114dc4ea25997fdacfdd636\") " pod="kube-system/kube-controller-manager-ip-172-31-29-45" Sep 4 17:11:37.505625 kubelet[2894]: I0904 17:11:37.505291 2894 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5765e44dc114dc4ea25997fdacfdd636-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-45\" (UID: \"5765e44dc114dc4ea25997fdacfdd636\") " pod="kube-system/kube-controller-manager-ip-172-31-29-45" Sep 4 17:11:37.505625 kubelet[2894]: I0904 17:11:37.505340 2894 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5765e44dc114dc4ea25997fdacfdd636-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-45\" (UID: \"5765e44dc114dc4ea25997fdacfdd636\") " pod="kube-system/kube-controller-manager-ip-172-31-29-45" Sep 4 17:11:37.505625 kubelet[2894]: I0904 17:11:37.505405 2894 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5765e44dc114dc4ea25997fdacfdd636-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-45\" (UID: \"5765e44dc114dc4ea25997fdacfdd636\") " pod="kube-system/kube-controller-manager-ip-172-31-29-45" Sep 4 17:11:37.505625 kubelet[2894]: I0904 17:11:37.505451 2894 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/2f703472f21acf02d4f40e77eb59ca41-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-45\" (UID: \"2f703472f21acf02d4f40e77eb59ca41\") " pod="kube-system/kube-scheduler-ip-172-31-29-45" Sep 4 17:11:37.507365 kubelet[2894]: E0904 17:11:37.506535 2894 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-45?timeout=10s\": dial tcp 172.31.29.45:6443: connect: connection refused" interval="400ms" Sep 4 17:11:37.508581 systemd[1]: Created slice kubepods-burstable-pod2f703472f21acf02d4f40e77eb59ca41.slice - libcontainer container kubepods-burstable-pod2f703472f21acf02d4f40e77eb59ca41.slice. Sep 4 17:11:37.613004 kubelet[2894]: I0904 17:11:37.612940 2894 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-45" Sep 4 17:11:37.613805 kubelet[2894]: E0904 17:11:37.613750 2894 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.45:6443/api/v1/nodes\": dial tcp 172.31.29.45:6443: connect: connection refused" node="ip-172-31-29-45" Sep 4 17:11:37.786840 containerd[2042]: time="2024-09-04T17:11:37.786637604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-45,Uid:7fa0fece7bba66d4e895b6f8594341e5,Namespace:kube-system,Attempt:0,}" Sep 4 17:11:37.804383 containerd[2042]: time="2024-09-04T17:11:37.804030020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-45,Uid:5765e44dc114dc4ea25997fdacfdd636,Namespace:kube-system,Attempt:0,}" Sep 4 17:11:37.815098 containerd[2042]: time="2024-09-04T17:11:37.815016500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-45,Uid:2f703472f21acf02d4f40e77eb59ca41,Namespace:kube-system,Attempt:0,}" Sep 4 17:11:37.907302 kubelet[2894]: E0904 17:11:37.907237 2894 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-45?timeout=10s\": dial tcp 172.31.29.45:6443: connect: connection refused" interval="800ms" Sep 4 17:11:38.016940 kubelet[2894]: I0904 17:11:38.016878 2894 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-45" Sep 4 17:11:38.017507 kubelet[2894]: E0904 17:11:38.017474 2894 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.45:6443/api/v1/nodes\": dial tcp 172.31.29.45:6443: connect: connection refused" node="ip-172-31-29-45" Sep 4 17:11:38.348528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3321518303.mount: Deactivated successfully. 
Sep 4 17:11:38.358297 containerd[2042]: time="2024-09-04T17:11:38.358213267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:11:38.360019 containerd[2042]: time="2024-09-04T17:11:38.359960935Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:11:38.362229 containerd[2042]: time="2024-09-04T17:11:38.362168107Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:11:38.362229 containerd[2042]: time="2024-09-04T17:11:38.362647087Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 4 17:11:38.363844 kubelet[2894]: W0904 17:11:38.363667 2894 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-45&limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:38.363844 kubelet[2894]: E0904 17:11:38.363779 2894 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-45&limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:38.364889 containerd[2042]: time="2024-09-04T17:11:38.364829551Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:11:38.367789 containerd[2042]: time="2024-09-04T17:11:38.367607455Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:11:38.368788 containerd[2042]: time="2024-09-04T17:11:38.368702659Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:11:38.376544 containerd[2042]: time="2024-09-04T17:11:38.376412803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:11:38.378580 containerd[2042]: time="2024-09-04T17:11:38.378293935Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.139159ms" Sep 4 17:11:38.380471 containerd[2042]: time="2024-09-04T17:11:38.380403967Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 576.182919ms" Sep 4 17:11:38.383257 containerd[2042]: 
time="2024-09-04T17:11:38.383165407Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 596.365179ms" Sep 4 17:11:38.594803 containerd[2042]: time="2024-09-04T17:11:38.593525804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:38.594803 containerd[2042]: time="2024-09-04T17:11:38.593658008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:38.594803 containerd[2042]: time="2024-09-04T17:11:38.593702948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:38.594803 containerd[2042]: time="2024-09-04T17:11:38.593771324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:38.599464 containerd[2042]: time="2024-09-04T17:11:38.599039204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:38.600643 containerd[2042]: time="2024-09-04T17:11:38.600355772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:38.600643 containerd[2042]: time="2024-09-04T17:11:38.600447932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:38.600643 containerd[2042]: time="2024-09-04T17:11:38.600486404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:38.603662 containerd[2042]: time="2024-09-04T17:11:38.602488256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:38.603662 containerd[2042]: time="2024-09-04T17:11:38.602688596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:38.604344 containerd[2042]: time="2024-09-04T17:11:38.603833828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:38.604344 containerd[2042]: time="2024-09-04T17:11:38.603888896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:38.653232 systemd[1]: Started cri-containerd-9054e155ef05c9c64ae948b0ba183d663efd632d9148242b90119c24fe3e9950.scope - libcontainer container 9054e155ef05c9c64ae948b0ba183d663efd632d9148242b90119c24fe3e9950. Sep 4 17:11:38.660340 systemd[1]: Started cri-containerd-b7eba1390fcd94681f91528afc0b583a526027918ce850aef899029374b26f46.scope - libcontainer container b7eba1390fcd94681f91528afc0b583a526027918ce850aef899029374b26f46. 
Sep 4 17:11:38.667850 kubelet[2894]: W0904 17:11:38.665850 2894 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:38.667850 kubelet[2894]: E0904 17:11:38.667678 2894 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:38.672413 systemd[1]: Started cri-containerd-962dc377521dd6388798da052270961d0829eea89937c9e2432b77cc2bc8f45d.scope - libcontainer container 962dc377521dd6388798da052270961d0829eea89937c9e2432b77cc2bc8f45d. Sep 4 17:11:38.692706 kubelet[2894]: W0904 17:11:38.692614 2894 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:38.692706 kubelet[2894]: E0904 17:11:38.692710 2894 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:38.709284 kubelet[2894]: E0904 17:11:38.709229 2894 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-45?timeout=10s\": dial tcp 172.31.29.45:6443: connect: connection refused" interval="1.6s" Sep 4 17:11:38.756818 containerd[2042]: time="2024-09-04T17:11:38.756648345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-45,Uid:5765e44dc114dc4ea25997fdacfdd636,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7eba1390fcd94681f91528afc0b583a526027918ce850aef899029374b26f46\"" Sep 4 17:11:38.775385 containerd[2042]: time="2024-09-04T17:11:38.775117869Z" level=info msg="CreateContainer within sandbox \"b7eba1390fcd94681f91528afc0b583a526027918ce850aef899029374b26f46\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:11:38.807211 containerd[2042]: time="2024-09-04T17:11:38.807003825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-45,Uid:2f703472f21acf02d4f40e77eb59ca41,Namespace:kube-system,Attempt:0,} returns sandbox id \"9054e155ef05c9c64ae948b0ba183d663efd632d9148242b90119c24fe3e9950\"" Sep 4 17:11:38.816143 containerd[2042]: time="2024-09-04T17:11:38.815324565Z" level=info msg="CreateContainer within sandbox \"9054e155ef05c9c64ae948b0ba183d663efd632d9148242b90119c24fe3e9950\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:11:38.816143 containerd[2042]: time="2024-09-04T17:11:38.815861961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-45,Uid:7fa0fece7bba66d4e895b6f8594341e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"962dc377521dd6388798da052270961d0829eea89937c9e2432b77cc2bc8f45d\"" Sep 4 17:11:38.821894 kubelet[2894]: I0904 17:11:38.821836 2894 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-45" Sep 4 17:11:38.822421 kubelet[2894]: E0904 17:11:38.822376 2894 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.45:6443/api/v1/nodes\": dial tcp 172.31.29.45:6443: connect: connection refused" node="ip-172-31-29-45" Sep 4 17:11:38.826083 containerd[2042]: time="2024-09-04T17:11:38.826026261Z" level=info msg="CreateContainer within sandbox \"962dc377521dd6388798da052270961d0829eea89937c9e2432b77cc2bc8f45d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:11:38.835405 containerd[2042]: time="2024-09-04T17:11:38.835325325Z" level=info msg="CreateContainer within sandbox \"b7eba1390fcd94681f91528afc0b583a526027918ce850aef899029374b26f46\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8509ab4c74586351cbf2b10ea981bb6f44f328ae52a2962d88b51cc0435cc79e\"" Sep 4 17:11:38.836425 containerd[2042]: time="2024-09-04T17:11:38.836265057Z" level=info msg="StartContainer for \"8509ab4c74586351cbf2b10ea981bb6f44f328ae52a2962d88b51cc0435cc79e\"" Sep 4 17:11:38.860956 containerd[2042]: time="2024-09-04T17:11:38.860635449Z" level=info msg="CreateContainer within sandbox \"9054e155ef05c9c64ae948b0ba183d663efd632d9148242b90119c24fe3e9950\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eacdf63cd8ac9498504a29647d5f3b33c17be01a9a8bb6e8d4860565744c7897\"" Sep 4 17:11:38.863775 containerd[2042]: time="2024-09-04T17:11:38.863282109Z" level=info msg="StartContainer for \"eacdf63cd8ac9498504a29647d5f3b33c17be01a9a8bb6e8d4860565744c7897\"" Sep 4 17:11:38.871104 containerd[2042]: time="2024-09-04T17:11:38.871032513Z" level=info msg="CreateContainer within sandbox \"962dc377521dd6388798da052270961d0829eea89937c9e2432b77cc2bc8f45d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b31d92721aac50a903effbef33a63991aa8354e085ab099cd5515185492d9f6e\"" Sep 4 17:11:38.871866 containerd[2042]: time="2024-09-04T17:11:38.871818873Z" level=info msg="StartContainer for \"b31d92721aac50a903effbef33a63991aa8354e085ab099cd5515185492d9f6e\"" Sep 4 17:11:38.886113 systemd[1]: Started cri-containerd-8509ab4c74586351cbf2b10ea981bb6f44f328ae52a2962d88b51cc0435cc79e.scope - libcontainer container 8509ab4c74586351cbf2b10ea981bb6f44f328ae52a2962d88b51cc0435cc79e. Sep 4 17:11:38.908016 kubelet[2894]: W0904 17:11:38.907969 2894 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:38.908408 kubelet[2894]: E0904 17:11:38.908275 2894 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.45:6443: connect: connection refused Sep 4 17:11:38.967160 systemd[1]: Started cri-containerd-b31d92721aac50a903effbef33a63991aa8354e085ab099cd5515185492d9f6e.scope - libcontainer container b31d92721aac50a903effbef33a63991aa8354e085ab099cd5515185492d9f6e. Sep 4 17:11:38.977076 systemd[1]: Started cri-containerd-eacdf63cd8ac9498504a29647d5f3b33c17be01a9a8bb6e8d4860565744c7897.scope - libcontainer container eacdf63cd8ac9498504a29647d5f3b33c17be01a9a8bb6e8d4860565744c7897. 
Sep 4 17:11:39.002090 containerd[2042]: time="2024-09-04T17:11:39.002007030Z" level=info msg="StartContainer for \"8509ab4c74586351cbf2b10ea981bb6f44f328ae52a2962d88b51cc0435cc79e\" returns successfully" Sep 4 17:11:39.105873 containerd[2042]: time="2024-09-04T17:11:39.105795930Z" level=info msg="StartContainer for \"b31d92721aac50a903effbef33a63991aa8354e085ab099cd5515185492d9f6e\" returns successfully" Sep 4 17:11:39.125367 containerd[2042]: time="2024-09-04T17:11:39.124876590Z" level=info msg="StartContainer for \"eacdf63cd8ac9498504a29647d5f3b33c17be01a9a8bb6e8d4860565744c7897\" returns successfully" Sep 4 17:11:40.425790 kubelet[2894]: I0904 17:11:40.424971 2894 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-45" Sep 4 17:11:43.083545 kubelet[2894]: E0904 17:11:43.083481 2894 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-45\" not found" node="ip-172-31-29-45" Sep 4 17:11:43.113298 kubelet[2894]: I0904 17:11:43.113109 2894 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-45" Sep 4 17:11:43.229695 kubelet[2894]: E0904 17:11:43.228908 2894 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-29-45\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-29-45" Sep 4 17:11:43.281357 kubelet[2894]: I0904 17:11:43.281310 2894 apiserver.go:52] "Watching apiserver" Sep 4 17:11:43.304300 kubelet[2894]: I0904 17:11:43.304252 2894 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:11:46.249313 systemd[1]: Reloading requested from client PID 3172 ('systemctl') (unit session-7.scope)... Sep 4 17:11:46.249931 systemd[1]: Reloading... Sep 4 17:11:46.431776 zram_generator::config[3210]: No configuration found. Sep 4 17:11:46.666746 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:11:46.869958 systemd[1]: Reloading finished in 619 ms. Sep 4 17:11:46.948221 kubelet[2894]: I0904 17:11:46.947934 2894 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:11:46.948327 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:46.966435 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:11:46.966956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:46.967055 systemd[1]: kubelet.service: Consumed 1.930s CPU time, 113.7M memory peak, 0B memory swap peak. Sep 4 17:11:46.977425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:47.421139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:47.430534 (kubelet)[3270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:11:47.547815 kubelet[3270]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:11:47.547815 kubelet[3270]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Sep 4 17:11:47.547815 kubelet[3270]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:11:47.547815 kubelet[3270]: I0904 17:11:47.547425 3270 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:11:47.559121 sudo[3282]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 17:11:47.559670 sudo[3282]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 4 17:11:47.566801 kubelet[3270]: I0904 17:11:47.566700 3270 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:11:47.567035 kubelet[3270]: I0904 17:11:47.567014 3270 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:11:47.567841 kubelet[3270]: I0904 17:11:47.567811 3270 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:11:47.572481 kubelet[3270]: I0904 17:11:47.571770 3270 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:11:47.575528 kubelet[3270]: I0904 17:11:47.575466 3270 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:11:47.585837 kubelet[3270]: I0904 17:11:47.585799 3270 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:11:47.586528 kubelet[3270]: I0904 17:11:47.586500 3270 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:11:47.586973 kubelet[3270]: I0904 17:11:47.586941 3270 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:11:47.587687 kubelet[3270]: I0904 17:11:47.587178 3270 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:11:47.587687 kubelet[3270]: I0904 17:11:47.587208 3270 
container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:11:47.587687 kubelet[3270]: I0904 17:11:47.587273 3270 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:11:47.587687 kubelet[3270]: I0904 17:11:47.587481 3270 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:11:47.587687 kubelet[3270]: I0904 17:11:47.587513 3270 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:11:47.587687 kubelet[3270]: I0904 17:11:47.587559 3270 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:11:47.587687 kubelet[3270]: I0904 17:11:47.587584 3270 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:11:47.594778 kubelet[3270]: I0904 17:11:47.593285 3270 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:11:47.594778 kubelet[3270]: I0904 17:11:47.593619 3270 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:11:47.594778 kubelet[3270]: I0904 17:11:47.594288 3270 server.go:1256] "Started kubelet" Sep 4 17:11:47.608927 kubelet[3270]: I0904 17:11:47.608877 3270 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:11:47.623771 kubelet[3270]: I0904 17:11:47.623155 3270 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:11:47.638348 kubelet[3270]: I0904 17:11:47.636769 3270 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:11:47.645453 kubelet[3270]: I0904 17:11:47.645412 3270 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:11:47.648941 kubelet[3270]: I0904 17:11:47.648905 3270 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:11:47.656258 kubelet[3270]: I0904 17:11:47.656218 3270 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:11:47.661765 kubelet[3270]: I0904 17:11:47.661696 3270 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:11:47.662203 kubelet[3270]: I0904 17:11:47.662181 3270 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:11:47.693016 kubelet[3270]: I0904 17:11:47.692887 3270 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:11:47.701958 kubelet[3270]: I0904 17:11:47.701920 3270 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:11:47.702176 kubelet[3270]: I0904 17:11:47.702153 3270 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:11:47.702308 kubelet[3270]: I0904 17:11:47.702289 3270 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:11:47.702511 kubelet[3270]: E0904 17:11:47.702487 3270 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:11:47.718186 kubelet[3270]: I0904 17:11:47.718132 3270 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:11:47.718718 kubelet[3270]: I0904 17:11:47.718634 3270 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:11:47.723779 kubelet[3270]: E0904 17:11:47.723513 3270 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:11:47.726690 kubelet[3270]: I0904 17:11:47.725109 3270 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:11:47.770680 kubelet[3270]: I0904 17:11:47.768467 3270 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-45" Sep 4 17:11:47.804190 kubelet[3270]: E0904 17:11:47.802658 3270 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:11:47.804190 kubelet[3270]: I0904 17:11:47.802875 3270 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-29-45" Sep 4 17:11:47.804190 kubelet[3270]: I0904 17:11:47.802977 3270 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-45" Sep 4 17:11:47.870340 kubelet[3270]: I0904 17:11:47.870280 3270 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:11:47.870493 kubelet[3270]: I0904 17:11:47.870356 3270 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:11:47.870493 kubelet[3270]: I0904 17:11:47.870393 3270 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:11:47.871155 kubelet[3270]: I0904 17:11:47.870630 3270 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:11:47.871155 kubelet[3270]: I0904 17:11:47.870692 3270 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:11:47.871155 kubelet[3270]: I0904 17:11:47.870711 3270 policy_none.go:49] "None policy: Start" Sep 4 17:11:47.873759 kubelet[3270]: I0904 17:11:47.872403 3270 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:11:47.873759 kubelet[3270]: I0904 17:11:47.872473 3270 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:11:47.873759 kubelet[3270]: I0904 17:11:47.872706 3270 state_mem.go:75] "Updated machine memory state" Sep 4 17:11:47.885176 kubelet[3270]: I0904 17:11:47.885113 3270 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:11:47.890051 kubelet[3270]: I0904 17:11:47.888659 3270 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:11:48.003576 kubelet[3270]: I0904 17:11:48.003212 3270 topology_manager.go:215] "Topology Admit Handler" podUID="5765e44dc114dc4ea25997fdacfdd636" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-45" Sep 4 17:11:48.003576 kubelet[3270]: I0904 17:11:48.003340 3270 topology_manager.go:215] "Topology Admit Handler" podUID="2f703472f21acf02d4f40e77eb59ca41" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-45" Sep 4 17:11:48.003576 kubelet[3270]: I0904 17:11:48.003430 3270 topology_manager.go:215] "Topology Admit Handler" podUID="7fa0fece7bba66d4e895b6f8594341e5" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-45" Sep 4 17:11:48.067738 kubelet[3270]: I0904 17:11:48.067023 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fa0fece7bba66d4e895b6f8594341e5-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-45\" (UID: \"7fa0fece7bba66d4e895b6f8594341e5\") " pod="kube-system/kube-apiserver-ip-172-31-29-45" Sep 4 17:11:48.067738 kubelet[3270]: I0904 17:11:48.067251 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5765e44dc114dc4ea25997fdacfdd636-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-45\" (UID: \"5765e44dc114dc4ea25997fdacfdd636\") " pod="kube-system/kube-controller-manager-ip-172-31-29-45" Sep 4 17:11:48.067738 kubelet[3270]: I0904 17:11:48.067387 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5765e44dc114dc4ea25997fdacfdd636-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-45\" (UID: \"5765e44dc114dc4ea25997fdacfdd636\") " pod="kube-system/kube-controller-manager-ip-172-31-29-45" Sep 4 17:11:48.067738 kubelet[3270]: I0904 17:11:48.067499 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5765e44dc114dc4ea25997fdacfdd636-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-45\" (UID: \"5765e44dc114dc4ea25997fdacfdd636\") " pod="kube-system/kube-controller-manager-ip-172-31-29-45" Sep 4 17:11:48.067738 kubelet[3270]: I0904 17:11:48.067663 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f703472f21acf02d4f40e77eb59ca41-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-45\" (UID: \"2f703472f21acf02d4f40e77eb59ca41\") " pod="kube-system/kube-scheduler-ip-172-31-29-45" Sep 4 17:11:48.068111 kubelet[3270]: I0904 17:11:48.067853 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fa0fece7bba66d4e895b6f8594341e5-ca-certs\") pod \"kube-apiserver-ip-172-31-29-45\" (UID: \"7fa0fece7bba66d4e895b6f8594341e5\") " pod="kube-system/kube-apiserver-ip-172-31-29-45" Sep 4 17:11:48.068237 kubelet[3270]: I0904 17:11:48.068145 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fa0fece7bba66d4e895b6f8594341e5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-45\" (UID: \"7fa0fece7bba66d4e895b6f8594341e5\") " pod="kube-system/kube-apiserver-ip-172-31-29-45" Sep 4 17:11:48.068326 kubelet[3270]: I0904 17:11:48.068295 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5765e44dc114dc4ea25997fdacfdd636-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-45\" (UID: \"5765e44dc114dc4ea25997fdacfdd636\") " pod="kube-system/kube-controller-manager-ip-172-31-29-45" Sep 4 17:11:48.068925 kubelet[3270]: I0904 17:11:48.068491 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5765e44dc114dc4ea25997fdacfdd636-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-45\" (UID: \"5765e44dc114dc4ea25997fdacfdd636\") " pod="kube-system/kube-controller-manager-ip-172-31-29-45" Sep 4 17:11:48.446253 sudo[3282]: pam_unix(sudo:session): session closed for user root Sep 4 17:11:48.590690 kubelet[3270]: I0904 17:11:48.590414 3270 apiserver.go:52] "Watching apiserver" Sep 4 17:11:48.662889 kubelet[3270]: I0904 17:11:48.662828 3270 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:11:48.804825 kubelet[3270]: E0904 17:11:48.804677 3270 kubelet.go:1921] "Failed creating 
a mirror pod for" err="pods \"kube-scheduler-ip-172-31-29-45\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-45" Sep 4 17:11:48.826615 kubelet[3270]: I0904 17:11:48.826556 3270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-45" podStartSLOduration=0.826498255 podStartE2EDuration="826.498255ms" podCreationTimestamp="2024-09-04 17:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:48.825891187 +0000 UTC m=+1.384778888" watchObservedRunningTime="2024-09-04 17:11:48.826498255 +0000 UTC m=+1.385385968" Sep 4 17:11:48.853151 kubelet[3270]: I0904 17:11:48.851311 3270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-45" podStartSLOduration=0.851257663 podStartE2EDuration="851.257663ms" podCreationTimestamp="2024-09-04 17:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:48.838416295 +0000 UTC m=+1.397320460" watchObservedRunningTime="2024-09-04 17:11:48.851257663 +0000 UTC m=+1.410145364" Sep 4 17:11:50.441046 update_engine[2019]: I0904 17:11:50.440477 2019 update_attempter.cc:509] Updating boot flags... Sep 4 17:11:50.556782 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3341) Sep 4 17:11:50.830131 sudo[2355]: pam_unix(sudo:session): session closed for user root Sep 4 17:11:50.864092 sshd[2352]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:50.886164 systemd[1]: sshd@6-172.31.29.45:22-139.178.89.65:37210.service: Deactivated successfully. Sep 4 17:11:50.895619 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:11:50.898056 systemd[1]: session-7.scope: Consumed 11.768s CPU time, 133.9M memory peak, 0B memory swap peak. Sep 4 17:11:50.900223 systemd-logind[2016]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:11:50.905372 systemd-logind[2016]: Removed session 7. Sep 4 17:11:50.930806 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3341) Sep 4 17:11:52.194670 kubelet[3270]: I0904 17:11:52.194324 3270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-45" podStartSLOduration=4.194237167 podStartE2EDuration="4.194237167s" podCreationTimestamp="2024-09-04 17:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:48.853781695 +0000 UTC m=+1.412669408" watchObservedRunningTime="2024-09-04 17:11:52.194237167 +0000 UTC m=+4.753124892" Sep 4 17:12:01.162229 kubelet[3270]: I0904 17:12:01.161970 3270 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:12:01.163452 containerd[2042]: time="2024-09-04T17:12:01.163302592Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
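The PodCIDR handed to the runtime at 17:12:01 is `192.168.0.0/24`, i.e. one /24 for this node (consistent with the default node CIDR mask), which containerd applies once a CNI config is dropped in. A two-line check of what that range provides:

```python
import ipaddress

pod_cidr = ipaddress.ip_network("192.168.0.0/24")   # PodCIDR from the log
print(pod_cidr.num_addresses - 2)                   # 254 usable pod addresses
```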
Sep 4 17:12:01.164451 kubelet[3270]: I0904 17:12:01.164377 3270 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:12:02.033028 kubelet[3270]: I0904 17:12:02.032947 3270 topology_manager.go:215] "Topology Admit Handler" podUID="a2ea7c8e-dc75-41ec-b2d0-f8c4967b2f79" podNamespace="kube-system" podName="kube-proxy-zcq2w" Sep 4 17:12:02.054626 systemd[1]: Created slice kubepods-besteffort-poda2ea7c8e_dc75_41ec_b2d0_f8c4967b2f79.slice - libcontainer container kubepods-besteffort-poda2ea7c8e_dc75_41ec_b2d0_f8c4967b2f79.slice. Sep 4 17:12:02.062864 kubelet[3270]: I0904 17:12:02.061615 3270 topology_manager.go:215] "Topology Admit Handler" podUID="5609af2a-ee77-4d03-863c-d1fb6c9489df" podNamespace="kube-system" podName="cilium-bnvxw" Sep 4 17:12:02.068197 kubelet[3270]: I0904 17:12:02.068130 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a2ea7c8e-dc75-41ec-b2d0-f8c4967b2f79-kube-proxy\") pod \"kube-proxy-zcq2w\" (UID: \"a2ea7c8e-dc75-41ec-b2d0-f8c4967b2f79\") " pod="kube-system/kube-proxy-zcq2w" Sep 4 17:12:02.068377 kubelet[3270]: I0904 17:12:02.068210 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2ea7c8e-dc75-41ec-b2d0-f8c4967b2f79-lib-modules\") pod \"kube-proxy-zcq2w\" (UID: \"a2ea7c8e-dc75-41ec-b2d0-f8c4967b2f79\") " pod="kube-system/kube-proxy-zcq2w" Sep 4 17:12:02.068377 kubelet[3270]: I0904 17:12:02.068259 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2ea7c8e-dc75-41ec-b2d0-f8c4967b2f79-xtables-lock\") pod \"kube-proxy-zcq2w\" (UID: \"a2ea7c8e-dc75-41ec-b2d0-f8c4967b2f79\") " pod="kube-system/kube-proxy-zcq2w" Sep 4 17:12:02.068377 kubelet[3270]: I0904 17:12:02.068324 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djmq8\" (UniqueName: \"kubernetes.io/projected/a2ea7c8e-dc75-41ec-b2d0-f8c4967b2f79-kube-api-access-djmq8\") pod \"kube-proxy-zcq2w\" (UID: \"a2ea7c8e-dc75-41ec-b2d0-f8c4967b2f79\") " pod="kube-system/kube-proxy-zcq2w" Sep 4 17:12:02.085639 systemd[1]: Created slice kubepods-burstable-pod5609af2a_ee77_4d03_863c_d1fb6c9489df.slice - libcontainer container kubepods-burstable-pod5609af2a_ee77_4d03_863c_d1fb6c9489df.slice. 
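The slice names above follow the kubelet's systemd cgroup-driver convention, visible directly in the log: QoS class plus the pod UID with dashes mapped to underscores, so UID `5609af2a-ee77-4d03-863c-d1fb6c9489df` becomes `kubepods-burstable-pod5609af2a_ee77_4d03_863c_d1fb6c9489df.slice`. A small reproduction of that mapping (the Guaranteed-QoS case is an assumption here, since no such pod appears in this log):

```python
def kubepods_slice(pod_uid: str, qos_class: str = "burstable") -> str:
    """Systemd slice name for a pod cgroup, as created in the log above.

    Burstable/BestEffort pods nest under a QoS slice; Guaranteed pods are
    assumed to drop the QoS segment (not shown in this log).
    """
    escaped_uid = pod_uid.replace("-", "_")   # systemd-escaped pod UID
    if qos_class.lower() == "guaranteed":
        return f"kubepods-pod{escaped_uid}.slice"
    return f"kubepods-{qos_class.lower()}-pod{escaped_uid}.slice"

# Matches the unit created at 17:12:02 for the cilium-bnvxw pod:
print(kubepods_slice("5609af2a-ee77-4d03-863c-d1fb6c9489df"))
# -> kubepods-burstable-pod5609af2a_ee77_4d03_863c_d1fb6c9489df.slice
```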
Sep 4 17:12:02.170805 kubelet[3270]: I0904 17:12:02.169542 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-etc-cni-netd\") pod \"cilium-bnvxw\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " pod="kube-system/cilium-bnvxw" Sep 4 17:12:02.170805 kubelet[3270]: I0904 17:12:02.169614 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5609af2a-ee77-4d03-863c-d1fb6c9489df-hubble-tls\") pod \"cilium-bnvxw\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " pod="kube-system/cilium-bnvxw" Sep 4 17:12:02.170805 kubelet[3270]: I0904 17:12:02.169686 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-bpf-maps\") pod \"cilium-bnvxw\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " pod="kube-system/cilium-bnvxw" Sep 4 17:12:02.170805 kubelet[3270]: I0904 17:12:02.169749 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-hostproc\") pod \"cilium-bnvxw\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " pod="kube-system/cilium-bnvxw" Sep 4 17:12:02.170805 kubelet[3270]: I0904 17:12:02.169800 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-xtables-lock\") pod \"cilium-bnvxw\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " pod="kube-system/cilium-bnvxw" Sep 4 17:12:02.170805 kubelet[3270]: I0904 17:12:02.169847 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5609af2a-ee77-4d03-863c-d1fb6c9489df-clustermesh-secrets\") pod \"cilium-bnvxw\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " pod="kube-system/cilium-bnvxw" Sep 4 17:12:02.172715 kubelet[3270]: I0904 17:12:02.169892 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-cilium-run\") pod \"cilium-bnvxw\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " pod="kube-system/cilium-bnvxw" Sep 4 17:12:02.172715 kubelet[3270]: I0904 17:12:02.169939 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-cilium-cgroup\") pod \"cilium-bnvxw\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " pod="kube-system/cilium-bnvxw" Sep 4 17:12:02.172715 kubelet[3270]: I0904 17:12:02.169984 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-lib-modules\") pod \"cilium-bnvxw\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " pod="kube-system/cilium-bnvxw" Sep 4 17:12:02.172715 kubelet[3270]: I0904 17:12:02.170032 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/5609af2a-ee77-4d03-863c-d1fb6c9489df-cilium-config-path\") pod \"cilium-bnvxw\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " pod="kube-system/cilium-bnvxw" Sep 4 17:12:02.172715 kubelet[3270]: I0904 17:12:02.170079 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-host-proc-sys-kernel\") pod \"cilium-bnvxw\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " pod="kube-system/cilium-bnvxw" Sep 4 17:12:02.172715 kubelet[3270]: I0904 17:12:02.170126 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-cni-path\") pod \"cilium-bnvxw\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " pod="kube-system/cilium-bnvxw" Sep 4 17:12:02.173185 kubelet[3270]: I0904 17:12:02.170171 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88jdd\" (UniqueName: \"kubernetes.io/projected/5609af2a-ee77-4d03-863c-d1fb6c9489df-kube-api-access-88jdd\") pod \"cilium-bnvxw\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " pod="kube-system/cilium-bnvxw" Sep 4 17:12:02.176928 kubelet[3270]: I0904 17:12:02.175215 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-host-proc-sys-net\") pod \"cilium-bnvxw\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " pod="kube-system/cilium-bnvxw" Sep 4 17:12:02.178508 kubelet[3270]: I0904 17:12:02.176718 3270 topology_manager.go:215] "Topology Admit Handler" podUID="ae03a0f3-8501-4b20-b922-a1d3dc9e796e" podNamespace="kube-system" podName="cilium-operator-5cc964979-sqllz" Sep 4 17:12:02.196635 systemd[1]: Created slice kubepods-besteffort-podae03a0f3_8501_4b20_b922_a1d3dc9e796e.slice - libcontainer container kubepods-besteffort-podae03a0f3_8501_4b20_b922_a1d3dc9e796e.slice. 
Sep 4 17:12:02.278107 kubelet[3270]: I0904 17:12:02.277043 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae03a0f3-8501-4b20-b922-a1d3dc9e796e-cilium-config-path\") pod \"cilium-operator-5cc964979-sqllz\" (UID: \"ae03a0f3-8501-4b20-b922-a1d3dc9e796e\") " pod="kube-system/cilium-operator-5cc964979-sqllz" Sep 4 17:12:02.278107 kubelet[3270]: I0904 17:12:02.277335 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d96z5\" (UniqueName: \"kubernetes.io/projected/ae03a0f3-8501-4b20-b922-a1d3dc9e796e-kube-api-access-d96z5\") pod \"cilium-operator-5cc964979-sqllz\" (UID: \"ae03a0f3-8501-4b20-b922-a1d3dc9e796e\") " pod="kube-system/cilium-operator-5cc964979-sqllz" Sep 4 17:12:02.375834 containerd[2042]: time="2024-09-04T17:12:02.375767310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zcq2w,Uid:a2ea7c8e-dc75-41ec-b2d0-f8c4967b2f79,Namespace:kube-system,Attempt:0,}" Sep 4 17:12:02.398968 containerd[2042]: time="2024-09-04T17:12:02.398890158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bnvxw,Uid:5609af2a-ee77-4d03-863c-d1fb6c9489df,Namespace:kube-system,Attempt:0,}" Sep 4 17:12:02.455299 containerd[2042]: time="2024-09-04T17:12:02.454994682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:02.455299 containerd[2042]: time="2024-09-04T17:12:02.455165154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:02.455887 containerd[2042]: time="2024-09-04T17:12:02.455282622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:02.455887 containerd[2042]: time="2024-09-04T17:12:02.455383086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:02.465304 containerd[2042]: time="2024-09-04T17:12:02.464773206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:02.465304 containerd[2042]: time="2024-09-04T17:12:02.464872734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:02.465304 containerd[2042]: time="2024-09-04T17:12:02.464922126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:02.465304 containerd[2042]: time="2024-09-04T17:12:02.464952150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:02.495256 systemd[1]: Started cri-containerd-cd8b953ecf8ab076dd119a40960b519ea9988bc04b0ee9b438949ac13ec6d05d.scope - libcontainer container cd8b953ecf8ab076dd119a40960b519ea9988bc04b0ee9b438949ac13ec6d05d. 
Sep 4 17:12:02.511496 containerd[2042]: time="2024-09-04T17:12:02.510852571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-sqllz,Uid:ae03a0f3-8501-4b20-b922-a1d3dc9e796e,Namespace:kube-system,Attempt:0,}" Sep 4 17:12:02.513790 systemd[1]: Started cri-containerd-16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41.scope - libcontainer container 16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41. Sep 4 17:12:02.593162 containerd[2042]: time="2024-09-04T17:12:02.592906663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zcq2w,Uid:a2ea7c8e-dc75-41ec-b2d0-f8c4967b2f79,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd8b953ecf8ab076dd119a40960b519ea9988bc04b0ee9b438949ac13ec6d05d\"" Sep 4 17:12:02.596259 containerd[2042]: time="2024-09-04T17:12:02.594881815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:02.599876 containerd[2042]: time="2024-09-04T17:12:02.598118107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:02.599876 containerd[2042]: time="2024-09-04T17:12:02.598206655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:02.599876 containerd[2042]: time="2024-09-04T17:12:02.598234639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:02.605313 containerd[2042]: time="2024-09-04T17:12:02.605261203Z" level=info msg="CreateContainer within sandbox \"cd8b953ecf8ab076dd119a40960b519ea9988bc04b0ee9b438949ac13ec6d05d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:12:02.605842 containerd[2042]: time="2024-09-04T17:12:02.605468059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bnvxw,Uid:5609af2a-ee77-4d03-863c-d1fb6c9489df,Namespace:kube-system,Attempt:0,} returns sandbox id \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\"" Sep 4 17:12:02.612225 containerd[2042]: time="2024-09-04T17:12:02.611153167Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 17:12:02.637622 containerd[2042]: time="2024-09-04T17:12:02.635783563Z" level=info msg="CreateContainer within sandbox \"cd8b953ecf8ab076dd119a40960b519ea9988bc04b0ee9b438949ac13ec6d05d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"76c561c6349876375a6512fc1af31dabaeaf592482dfe0bccc74b7c842be9af2\"" Sep 4 17:12:02.638927 containerd[2042]: time="2024-09-04T17:12:02.638209255Z" level=info msg="StartContainer for \"76c561c6349876375a6512fc1af31dabaeaf592482dfe0bccc74b7c842be9af2\"" Sep 4 17:12:02.651356 systemd[1]: Started cri-containerd-9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92.scope - libcontainer container 9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92. Sep 4 17:12:02.707813 systemd[1]: Started cri-containerd-76c561c6349876375a6512fc1af31dabaeaf592482dfe0bccc74b7c842be9af2.scope - libcontainer container 76c561c6349876375a6512fc1af31dabaeaf592482dfe0bccc74b7c842be9af2. 
Sep 4 17:12:02.761676 containerd[2042]: time="2024-09-04T17:12:02.761456120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-sqllz,Uid:ae03a0f3-8501-4b20-b922-a1d3dc9e796e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92\"" Sep 4 17:12:02.790354 containerd[2042]: time="2024-09-04T17:12:02.790273340Z" level=info msg="StartContainer for \"76c561c6349876375a6512fc1af31dabaeaf592482dfe0bccc74b7c842be9af2\" returns successfully" Sep 4 17:12:02.872238 kubelet[3270]: I0904 17:12:02.872179 3270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zcq2w" podStartSLOduration=0.872087888 podStartE2EDuration="872.087888ms" podCreationTimestamp="2024-09-04 17:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:02.869967608 +0000 UTC m=+15.428855321" watchObservedRunningTime="2024-09-04 17:12:02.872087888 +0000 UTC m=+15.430975601" Sep 4 17:12:07.859093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761213815.mount: Deactivated successfully. Sep 4 17:12:10.433749 containerd[2042]: time="2024-09-04T17:12:10.433671050Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:10.437149 containerd[2042]: time="2024-09-04T17:12:10.434889950Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651522" Sep 4 17:12:10.438481 containerd[2042]: time="2024-09-04T17:12:10.438399470Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:10.442265 containerd[2042]: time="2024-09-04T17:12:10.442068050Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.830850635s" Sep 4 17:12:10.442265 containerd[2042]: time="2024-09-04T17:12:10.442130990Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 4 17:12:10.445044 containerd[2042]: time="2024-09-04T17:12:10.444868454Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 17:12:10.447860 containerd[2042]: time="2024-09-04T17:12:10.447789242Z" level=info msg="CreateContainer within sandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:12:10.471313 containerd[2042]: time="2024-09-04T17:12:10.471234398Z" level=info msg="CreateContainer within sandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3\"" Sep 4 17:12:10.472647 containerd[2042]: time="2024-09-04T17:12:10.472524914Z" level=info msg="StartContainer for \"0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3\"" Sep 4 17:12:10.528061 systemd[1]: Started cri-containerd-0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3.scope - libcontainer container 0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3. Sep 4 17:12:10.576793 containerd[2042]: time="2024-09-04T17:12:10.574393515Z" level=info msg="StartContainer for \"0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3\" returns successfully" Sep 4 17:12:10.599050 systemd[1]: cri-containerd-0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3.scope: Deactivated successfully. Sep 4 17:12:11.462606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3-rootfs.mount: Deactivated successfully. Sep 4 17:12:11.945958 containerd[2042]: time="2024-09-04T17:12:11.945871817Z" level=info msg="shim disconnected" id=0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3 namespace=k8s.io Sep 4 17:12:11.945958 containerd[2042]: time="2024-09-04T17:12:11.945951953Z" level=warning msg="cleaning up after shim disconnected" id=0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3 namespace=k8s.io Sep 4 17:12:11.948283 containerd[2042]: time="2024-09-04T17:12:11.945974393Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:12:12.277834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427431782.mount: Deactivated successfully. Sep 4 17:12:12.901692 containerd[2042]: time="2024-09-04T17:12:12.901615938Z" level=info msg="CreateContainer within sandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:12:12.936133 containerd[2042]: time="2024-09-04T17:12:12.936056874Z" level=info msg="CreateContainer within sandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb\"" Sep 4 17:12:12.937747 containerd[2042]: time="2024-09-04T17:12:12.937460430Z" level=info msg="StartContainer for \"43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb\"" Sep 4 17:12:13.030085 systemd[1]: run-containerd-runc-k8s.io-43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb-runc.7zzfC2.mount: Deactivated successfully. Sep 4 17:12:13.046338 systemd[1]: Started cri-containerd-43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb.scope - libcontainer container 43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb. Sep 4 17:12:13.125462 containerd[2042]: time="2024-09-04T17:12:13.125135595Z" level=info msg="StartContainer for \"43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb\" returns successfully" Sep 4 17:12:13.148614 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:12:13.149989 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:12:13.150107 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:12:13.159895 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 4 17:12:13.160386 systemd[1]: cri-containerd-43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb.scope: Deactivated successfully. Sep 4 17:12:13.226438 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:12:13.377957 containerd[2042]: time="2024-09-04T17:12:13.377866565Z" level=info msg="shim disconnected" id=43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb namespace=k8s.io Sep 4 17:12:13.377957 containerd[2042]: time="2024-09-04T17:12:13.377946689Z" level=warning msg="cleaning up after shim disconnected" id=43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb namespace=k8s.io Sep 4 17:12:13.378603 containerd[2042]: time="2024-09-04T17:12:13.377970593Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:12:13.411372 containerd[2042]: time="2024-09-04T17:12:13.411199421Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:13.413012 containerd[2042]: time="2024-09-04T17:12:13.412948193Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138290" Sep 4 17:12:13.414221 containerd[2042]: time="2024-09-04T17:12:13.414106913Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:13.417610 containerd[2042]: time="2024-09-04T17:12:13.417401549Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.972467407s" Sep 4 17:12:13.417610 containerd[2042]: time="2024-09-04T17:12:13.417490745Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 4 17:12:13.422573 containerd[2042]: time="2024-09-04T17:12:13.421743905Z" level=info msg="CreateContainer within sandbox \"9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 17:12:13.445658 containerd[2042]: time="2024-09-04T17:12:13.445576793Z" level=info msg="CreateContainer within sandbox \"9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47\"" Sep 4 17:12:13.446885 containerd[2042]: time="2024-09-04T17:12:13.446601377Z" level=info msg="StartContainer for \"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47\"" Sep 4 17:12:13.492047 systemd[1]: Started cri-containerd-2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47.scope - libcontainer container 2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47. 
Sep 4 17:12:13.544369 containerd[2042]: time="2024-09-04T17:12:13.544168277Z" level=info msg="StartContainer for \"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47\" returns successfully" Sep 4 17:12:13.918892 containerd[2042]: time="2024-09-04T17:12:13.918808579Z" level=info msg="CreateContainer within sandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:12:13.933224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb-rootfs.mount: Deactivated successfully. Sep 4 17:12:13.936814 kubelet[3270]: I0904 17:12:13.936605 3270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-sqllz" podStartSLOduration=1.283492818 podStartE2EDuration="11.936546511s" podCreationTimestamp="2024-09-04 17:12:02 +0000 UTC" firstStartedPulling="2024-09-04 17:12:02.765260816 +0000 UTC m=+15.324148517" lastFinishedPulling="2024-09-04 17:12:13.418314509 +0000 UTC m=+25.977202210" observedRunningTime="2024-09-04 17:12:13.936435547 +0000 UTC m=+26.495323272" watchObservedRunningTime="2024-09-04 17:12:13.936546511 +0000 UTC m=+26.495434212" Sep 4 17:12:13.977755 containerd[2042]: time="2024-09-04T17:12:13.973100240Z" level=info msg="CreateContainer within sandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056\"" Sep 4 17:12:13.977755 containerd[2042]: time="2024-09-04T17:12:13.976701548Z" level=info msg="StartContainer for \"d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056\"" Sep 4 17:12:14.052263 systemd[1]: Started cri-containerd-d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056.scope - libcontainer container d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056. Sep 4 17:12:14.155389 containerd[2042]: time="2024-09-04T17:12:14.155325076Z" level=info msg="StartContainer for \"d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056\" returns successfully" Sep 4 17:12:14.181133 systemd[1]: cri-containerd-d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056.scope: Deactivated successfully. Sep 4 17:12:14.257192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056-rootfs.mount: Deactivated successfully. 
Sep 4 17:12:14.269947 containerd[2042]: time="2024-09-04T17:12:14.269861093Z" level=info msg="shim disconnected" id=d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056 namespace=k8s.io Sep 4 17:12:14.269947 containerd[2042]: time="2024-09-04T17:12:14.269936813Z" level=warning msg="cleaning up after shim disconnected" id=d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056 namespace=k8s.io Sep 4 17:12:14.274024 containerd[2042]: time="2024-09-04T17:12:14.269959133Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:12:14.923128 containerd[2042]: time="2024-09-04T17:12:14.923060684Z" level=info msg="CreateContainer within sandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:12:14.946753 containerd[2042]: time="2024-09-04T17:12:14.944716472Z" level=info msg="CreateContainer within sandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae\"" Sep 4 17:12:14.950365 containerd[2042]: time="2024-09-04T17:12:14.949174592Z" level=info msg="StartContainer for \"57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae\"" Sep 4 17:12:15.034412 systemd[1]: Started cri-containerd-57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae.scope - libcontainer container 57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae. Sep 4 17:12:15.107718 containerd[2042]: time="2024-09-04T17:12:15.107636645Z" level=info msg="StartContainer for \"57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae\" returns successfully" Sep 4 17:12:15.120672 systemd[1]: cri-containerd-57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae.scope: Deactivated successfully. Sep 4 17:12:15.187320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae-rootfs.mount: Deactivated successfully. Sep 4 17:12:15.190403 containerd[2042]: time="2024-09-04T17:12:15.190298166Z" level=info msg="shim disconnected" id=57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae namespace=k8s.io Sep 4 17:12:15.191035 containerd[2042]: time="2024-09-04T17:12:15.190398810Z" level=warning msg="cleaning up after shim disconnected" id=57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae namespace=k8s.io Sep 4 17:12:15.191035 containerd[2042]: time="2024-09-04T17:12:15.190444134Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:12:15.939387 containerd[2042]: time="2024-09-04T17:12:15.939284637Z" level=info msg="CreateContainer within sandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:12:15.992552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1845097167.mount: Deactivated successfully. 
Sep 4 17:12:15.994255 containerd[2042]: time="2024-09-04T17:12:15.994177990Z" level=info msg="CreateContainer within sandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32\"" Sep 4 17:12:15.995159 containerd[2042]: time="2024-09-04T17:12:15.995082106Z" level=info msg="StartContainer for \"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32\"" Sep 4 17:12:16.048856 systemd[1]: run-containerd-runc-k8s.io-ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32-runc.kxeTna.mount: Deactivated successfully. Sep 4 17:12:16.061113 systemd[1]: Started cri-containerd-ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32.scope - libcontainer container ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32. Sep 4 17:12:16.121340 containerd[2042]: time="2024-09-04T17:12:16.121251522Z" level=info msg="StartContainer for \"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32\" returns successfully" Sep 4 17:12:16.277972 kubelet[3270]: I0904 17:12:16.276582 3270 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:12:16.325846 kubelet[3270]: I0904 17:12:16.323433 3270 topology_manager.go:215] "Topology Admit Handler" podUID="7350034f-bb15-4e2d-bb5d-fb0d71fdf227" podNamespace="kube-system" podName="coredns-76f75df574-rgsfv" Sep 4 17:12:16.333775 kubelet[3270]: I0904 17:12:16.333264 3270 topology_manager.go:215] "Topology Admit Handler" podUID="32ae8cea-6409-4eb8-8d5c-7d6905f1c3d6" podNamespace="kube-system" podName="coredns-76f75df574-b8x9w" Sep 4 17:12:16.344320 systemd[1]: Created slice kubepods-burstable-pod7350034f_bb15_4e2d_bb5d_fb0d71fdf227.slice - libcontainer container kubepods-burstable-pod7350034f_bb15_4e2d_bb5d_fb0d71fdf227.slice. Sep 4 17:12:16.367177 systemd[1]: Created slice kubepods-burstable-pod32ae8cea_6409_4eb8_8d5c_7d6905f1c3d6.slice - libcontainer container kubepods-burstable-pod32ae8cea_6409_4eb8_8d5c_7d6905f1c3d6.slice. 
Sep 4 17:12:16.390117 kubelet[3270]: I0904 17:12:16.389790 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32ae8cea-6409-4eb8-8d5c-7d6905f1c3d6-config-volume\") pod \"coredns-76f75df574-b8x9w\" (UID: \"32ae8cea-6409-4eb8-8d5c-7d6905f1c3d6\") " pod="kube-system/coredns-76f75df574-b8x9w" Sep 4 17:12:16.390117 kubelet[3270]: I0904 17:12:16.389874 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7350034f-bb15-4e2d-bb5d-fb0d71fdf227-config-volume\") pod \"coredns-76f75df574-rgsfv\" (UID: \"7350034f-bb15-4e2d-bb5d-fb0d71fdf227\") " pod="kube-system/coredns-76f75df574-rgsfv" Sep 4 17:12:16.390117 kubelet[3270]: I0904 17:12:16.389924 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gts5\" (UniqueName: \"kubernetes.io/projected/7350034f-bb15-4e2d-bb5d-fb0d71fdf227-kube-api-access-5gts5\") pod \"coredns-76f75df574-rgsfv\" (UID: \"7350034f-bb15-4e2d-bb5d-fb0d71fdf227\") " pod="kube-system/coredns-76f75df574-rgsfv" Sep 4 17:12:16.390117 kubelet[3270]: I0904 17:12:16.389974 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g75gx\" (UniqueName: \"kubernetes.io/projected/32ae8cea-6409-4eb8-8d5c-7d6905f1c3d6-kube-api-access-g75gx\") pod \"coredns-76f75df574-b8x9w\" (UID: \"32ae8cea-6409-4eb8-8d5c-7d6905f1c3d6\") " pod="kube-system/coredns-76f75df574-b8x9w" Sep 4 17:12:16.654608 containerd[2042]: time="2024-09-04T17:12:16.653908437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rgsfv,Uid:7350034f-bb15-4e2d-bb5d-fb0d71fdf227,Namespace:kube-system,Attempt:0,}" Sep 4 17:12:16.676054 containerd[2042]: time="2024-09-04T17:12:16.675440313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b8x9w,Uid:32ae8cea-6409-4eb8-8d5c-7d6905f1c3d6,Namespace:kube-system,Attempt:0,}" Sep 4 17:12:18.956106 (udev-worker)[4245]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:12:18.956290 (udev-worker)[4247]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:12:18.960588 systemd-networkd[1938]: cilium_host: Link UP Sep 4 17:12:18.962016 systemd-networkd[1938]: cilium_net: Link UP Sep 4 17:12:18.965576 systemd-networkd[1938]: cilium_net: Gained carrier Sep 4 17:12:18.966255 systemd-networkd[1938]: cilium_host: Gained carrier Sep 4 17:12:19.131934 (udev-worker)[4289]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 17:12:19.142329 systemd-networkd[1938]: cilium_vxlan: Link UP Sep 4 17:12:19.142836 systemd-networkd[1938]: cilium_vxlan: Gained carrier Sep 4 17:12:19.374078 systemd-networkd[1938]: cilium_net: Gained IPv6LL Sep 4 17:12:19.630776 kernel: NET: Registered PF_ALG protocol family Sep 4 17:12:19.838692 systemd-networkd[1938]: cilium_host: Gained IPv6LL Sep 4 17:12:20.980635 systemd-networkd[1938]: lxc_health: Link UP Sep 4 17:12:20.991650 systemd-networkd[1938]: lxc_health: Gained carrier Sep 4 17:12:21.117963 systemd-networkd[1938]: cilium_vxlan: Gained IPv6LL Sep 4 17:12:21.275561 systemd-networkd[1938]: lxc02997efc120c: Link UP Sep 4 17:12:21.283827 kernel: eth0: renamed from tmpf3a93 Sep 4 17:12:21.289490 systemd-networkd[1938]: lxc02997efc120c: Gained carrier Sep 4 17:12:21.754071 systemd-networkd[1938]: lxc6381abd2fe20: Link UP Sep 4 17:12:21.764778 kernel: eth0: renamed from tmp74bcb Sep 4 17:12:21.770816 systemd-networkd[1938]: lxc6381abd2fe20: Gained carrier Sep 4 17:12:22.424702 kubelet[3270]: I0904 17:12:22.424628 3270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bnvxw" podStartSLOduration=12.591151014 podStartE2EDuration="20.424567441s" podCreationTimestamp="2024-09-04 17:12:02 +0000 UTC" firstStartedPulling="2024-09-04 17:12:02.609201931 +0000 UTC m=+15.168089644" lastFinishedPulling="2024-09-04 17:12:10.44261837 +0000 UTC m=+23.001506071" observedRunningTime="2024-09-04 17:12:16.987493198 +0000 UTC m=+29.546380899" watchObservedRunningTime="2024-09-04 17:12:22.424567441 +0000 UTC m=+34.983455166" Sep 4 17:12:22.717977 systemd-networkd[1938]: lxc_health: Gained IPv6LL Sep 4 17:12:23.038535 systemd-networkd[1938]: lxc6381abd2fe20: Gained IPv6LL Sep 4 17:12:23.229961 systemd-networkd[1938]: lxc02997efc120c: Gained IPv6LL Sep 4 17:12:23.960619 systemd[1]: Started sshd@7-172.31.29.45:22-139.178.89.65:40544.service - OpenSSH per-connection server daemon (139.178.89.65:40544). Sep 4 17:12:24.153769 sshd[4644]: Accepted publickey for core from 139.178.89.65 port 40544 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:24.155094 sshd[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:24.163653 systemd-logind[2016]: New session 8 of user core. Sep 4 17:12:24.173247 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:12:24.489071 sshd[4644]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:24.495599 systemd-logind[2016]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:12:24.496263 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:12:24.499538 systemd[1]: sshd@7-172.31.29.45:22-139.178.89.65:40544.service: Deactivated successfully. Sep 4 17:12:24.511145 systemd-logind[2016]: Removed session 8. 
Sep 4 17:12:25.962846 ntpd[2009]: Listen normally on 7 cilium_host 192.168.0.123:123 Sep 4 17:12:25.964277 ntpd[2009]: 4 Sep 17:12:25 ntpd[2009]: Listen normally on 7 cilium_host 192.168.0.123:123 Sep 4 17:12:25.964277 ntpd[2009]: 4 Sep 17:12:25 ntpd[2009]: Listen normally on 8 cilium_net [fe80::20e0:7aff:fe1c:7bc8%4]:123 Sep 4 17:12:25.964277 ntpd[2009]: 4 Sep 17:12:25 ntpd[2009]: Listen normally on 9 cilium_host [fe80::44b:7aff:fe2f:c818%5]:123 Sep 4 17:12:25.964277 ntpd[2009]: 4 Sep 17:12:25 ntpd[2009]: Listen normally on 10 cilium_vxlan [fe80::2c73:20ff:fe9b:e6b7%6]:123 Sep 4 17:12:25.964277 ntpd[2009]: 4 Sep 17:12:25 ntpd[2009]: Listen normally on 11 lxc_health [fe80::64f7:c5ff:fe58:7a4b%8]:123 Sep 4 17:12:25.964277 ntpd[2009]: 4 Sep 17:12:25 ntpd[2009]: Listen normally on 12 lxc02997efc120c [fe80::c0fe:86ff:fef1:a9f%10]:123 Sep 4 17:12:25.964277 ntpd[2009]: 4 Sep 17:12:25 ntpd[2009]: Listen normally on 13 lxc6381abd2fe20 [fe80::6c63:71ff:fe5e:596e%12]:123 Sep 4 17:12:25.962982 ntpd[2009]: Listen normally on 8 cilium_net [fe80::20e0:7aff:fe1c:7bc8%4]:123 Sep 4 17:12:25.963075 ntpd[2009]: Listen normally on 9 cilium_host [fe80::44b:7aff:fe2f:c818%5]:123 Sep 4 17:12:25.963148 ntpd[2009]: Listen normally on 10 cilium_vxlan [fe80::2c73:20ff:fe9b:e6b7%6]:123 Sep 4 17:12:25.963216 ntpd[2009]: Listen normally on 11 lxc_health [fe80::64f7:c5ff:fe58:7a4b%8]:123 Sep 4 17:12:25.963292 ntpd[2009]: Listen normally on 12 lxc02997efc120c [fe80::c0fe:86ff:fef1:a9f%10]:123 Sep 4 17:12:25.963378 ntpd[2009]: Listen normally on 13 lxc6381abd2fe20 [fe80::6c63:71ff:fe5e:596e%12]:123 Sep 4 17:12:29.526314 systemd[1]: Started sshd@8-172.31.29.45:22-139.178.89.65:34466.service - OpenSSH per-connection server daemon (139.178.89.65:34466). Sep 4 17:12:29.714147 sshd[4663]: Accepted publickey for core from 139.178.89.65 port 34466 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:29.716235 sshd[4663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:29.730068 systemd-logind[2016]: New session 9 of user core. Sep 4 17:12:29.735035 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:12:30.018113 sshd[4663]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:30.026097 systemd[1]: sshd@8-172.31.29.45:22-139.178.89.65:34466.service: Deactivated successfully. Sep 4 17:12:30.032258 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:12:30.037584 systemd-logind[2016]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:12:30.043466 systemd-logind[2016]: Removed session 9. Sep 4 17:12:30.375889 containerd[2042]: time="2024-09-04T17:12:30.375689325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:30.376948 containerd[2042]: time="2024-09-04T17:12:30.376257081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:30.376948 containerd[2042]: time="2024-09-04T17:12:30.376335957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:30.377760 containerd[2042]: time="2024-09-04T17:12:30.376364265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:30.431556 systemd[1]: Started cri-containerd-74bcb9b0e95043bf09fe5d0030928159144f8ee6edeeec3511851e540fc814dc.scope - libcontainer container 74bcb9b0e95043bf09fe5d0030928159144f8ee6edeeec3511851e540fc814dc. Sep 4 17:12:30.456361 containerd[2042]: time="2024-09-04T17:12:30.454624197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:30.456361 containerd[2042]: time="2024-09-04T17:12:30.456290745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:30.459929 containerd[2042]: time="2024-09-04T17:12:30.456329481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:30.459929 containerd[2042]: time="2024-09-04T17:12:30.456354141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:30.527319 systemd[1]: Started cri-containerd-f3a93c3e639646b3d2451b6f8a4eae6a6cff90f91a67127a53b05022280c2ed6.scope - libcontainer container f3a93c3e639646b3d2451b6f8a4eae6a6cff90f91a67127a53b05022280c2ed6. Sep 4 17:12:30.565040 containerd[2042]: time="2024-09-04T17:12:30.564955210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rgsfv,Uid:7350034f-bb15-4e2d-bb5d-fb0d71fdf227,Namespace:kube-system,Attempt:0,} returns sandbox id \"74bcb9b0e95043bf09fe5d0030928159144f8ee6edeeec3511851e540fc814dc\"" Sep 4 17:12:30.574126 containerd[2042]: time="2024-09-04T17:12:30.573998950Z" level=info msg="CreateContainer within sandbox \"74bcb9b0e95043bf09fe5d0030928159144f8ee6edeeec3511851e540fc814dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:12:30.614599 containerd[2042]: time="2024-09-04T17:12:30.613549918Z" level=info msg="CreateContainer within sandbox \"74bcb9b0e95043bf09fe5d0030928159144f8ee6edeeec3511851e540fc814dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"485a0e3628468caada3e3ccd6bd7d70eda34803eef6efe5480baaff512a2046d\"" Sep 4 17:12:30.616875 containerd[2042]: time="2024-09-04T17:12:30.616448650Z" level=info msg="StartContainer for \"485a0e3628468caada3e3ccd6bd7d70eda34803eef6efe5480baaff512a2046d\"" Sep 4 17:12:30.693701 systemd[1]: Started cri-containerd-485a0e3628468caada3e3ccd6bd7d70eda34803eef6efe5480baaff512a2046d.scope - libcontainer container 485a0e3628468caada3e3ccd6bd7d70eda34803eef6efe5480baaff512a2046d. 
Sep 4 17:12:30.703452 containerd[2042]: time="2024-09-04T17:12:30.703073315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b8x9w,Uid:32ae8cea-6409-4eb8-8d5c-7d6905f1c3d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3a93c3e639646b3d2451b6f8a4eae6a6cff90f91a67127a53b05022280c2ed6\"" Sep 4 17:12:30.715331 containerd[2042]: time="2024-09-04T17:12:30.715273703Z" level=info msg="CreateContainer within sandbox \"f3a93c3e639646b3d2451b6f8a4eae6a6cff90f91a67127a53b05022280c2ed6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:12:30.749606 containerd[2042]: time="2024-09-04T17:12:30.749526803Z" level=info msg="CreateContainer within sandbox \"f3a93c3e639646b3d2451b6f8a4eae6a6cff90f91a67127a53b05022280c2ed6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"500aecd437af16374e284c751c772089ced009dea7d0a874cb7075c3c03a1a07\"" Sep 4 17:12:30.751988 containerd[2042]: time="2024-09-04T17:12:30.751633259Z" level=info msg="StartContainer for \"500aecd437af16374e284c751c772089ced009dea7d0a874cb7075c3c03a1a07\"" Sep 4 17:12:30.808863 containerd[2042]: time="2024-09-04T17:12:30.808693319Z" level=info msg="StartContainer for \"485a0e3628468caada3e3ccd6bd7d70eda34803eef6efe5480baaff512a2046d\" returns successfully" Sep 4 17:12:30.853225 systemd[1]: Started cri-containerd-500aecd437af16374e284c751c772089ced009dea7d0a874cb7075c3c03a1a07.scope - libcontainer container 500aecd437af16374e284c751c772089ced009dea7d0a874cb7075c3c03a1a07. Sep 4 17:12:30.938038 containerd[2042]: time="2024-09-04T17:12:30.937873764Z" level=info msg="StartContainer for \"500aecd437af16374e284c751c772089ced009dea7d0a874cb7075c3c03a1a07\" returns successfully" Sep 4 17:12:31.018614 kubelet[3270]: I0904 17:12:31.018437 3270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-b8x9w" podStartSLOduration=29.018377756 podStartE2EDuration="29.018377756s" podCreationTimestamp="2024-09-04 17:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:31.017398076 +0000 UTC m=+43.576285777" watchObservedRunningTime="2024-09-04 17:12:31.018377756 +0000 UTC m=+43.577265469" Sep 4 17:12:31.050945 kubelet[3270]: I0904 17:12:31.050127 3270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rgsfv" podStartSLOduration=29.050067632 podStartE2EDuration="29.050067632s" podCreationTimestamp="2024-09-04 17:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:31.048521576 +0000 UTC m=+43.607409361" watchObservedRunningTime="2024-09-04 17:12:31.050067632 +0000 UTC m=+43.608955333" Sep 4 17:12:35.069236 systemd[1]: Started sshd@9-172.31.29.45:22-139.178.89.65:34480.service - OpenSSH per-connection server daemon (139.178.89.65:34480). Sep 4 17:12:35.242192 sshd[4855]: Accepted publickey for core from 139.178.89.65 port 34480 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:35.245004 sshd[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:35.253128 systemd-logind[2016]: New session 10 of user core. Sep 4 17:12:35.260033 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 4 17:12:35.504788 sshd[4855]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:35.512499 systemd[1]: sshd@9-172.31.29.45:22-139.178.89.65:34480.service: Deactivated successfully. Sep 4 17:12:35.517666 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:12:35.519558 systemd-logind[2016]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:12:35.524221 systemd-logind[2016]: Removed session 10. Sep 4 17:12:40.544275 systemd[1]: Started sshd@10-172.31.29.45:22-139.178.89.65:36312.service - OpenSSH per-connection server daemon (139.178.89.65:36312). Sep 4 17:12:40.721787 sshd[4871]: Accepted publickey for core from 139.178.89.65 port 36312 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:40.724394 sshd[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:40.733852 systemd-logind[2016]: New session 11 of user core. Sep 4 17:12:40.743006 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:12:41.006103 sshd[4871]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:41.013027 systemd[1]: sshd@10-172.31.29.45:22-139.178.89.65:36312.service: Deactivated successfully. Sep 4 17:12:41.017901 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:12:41.021321 systemd-logind[2016]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:12:41.026427 systemd-logind[2016]: Removed session 11. Sep 4 17:12:41.049250 systemd[1]: Started sshd@11-172.31.29.45:22-139.178.89.65:36320.service - OpenSSH per-connection server daemon (139.178.89.65:36320). Sep 4 17:12:41.223045 sshd[4884]: Accepted publickey for core from 139.178.89.65 port 36320 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:41.226036 sshd[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:41.235708 systemd-logind[2016]: New session 12 of user core. Sep 4 17:12:41.244084 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:12:41.555429 sshd[4884]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:41.566906 systemd[1]: sshd@11-172.31.29.45:22-139.178.89.65:36320.service: Deactivated successfully. Sep 4 17:12:41.572835 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:12:41.579679 systemd-logind[2016]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:12:41.601437 systemd[1]: Started sshd@12-172.31.29.45:22-139.178.89.65:36328.service - OpenSSH per-connection server daemon (139.178.89.65:36328). Sep 4 17:12:41.606492 systemd-logind[2016]: Removed session 12. Sep 4 17:12:41.783529 sshd[4894]: Accepted publickey for core from 139.178.89.65 port 36328 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:41.786871 sshd[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:41.794619 systemd-logind[2016]: New session 13 of user core. Sep 4 17:12:41.802997 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:12:42.052154 sshd[4894]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:42.066154 systemd[1]: sshd@12-172.31.29.45:22-139.178.89.65:36328.service: Deactivated successfully. Sep 4 17:12:42.071164 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:12:42.072415 systemd-logind[2016]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:12:42.074947 systemd-logind[2016]: Removed session 13. 
Sep 4 17:12:47.090261 systemd[1]: Started sshd@13-172.31.29.45:22-139.178.89.65:36330.service - OpenSSH per-connection server daemon (139.178.89.65:36330). Sep 4 17:12:47.269493 sshd[4908]: Accepted publickey for core from 139.178.89.65 port 36330 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:47.272238 sshd[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:47.281113 systemd-logind[2016]: New session 14 of user core. Sep 4 17:12:47.288018 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:12:47.528198 sshd[4908]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:47.534926 systemd[1]: sshd@13-172.31.29.45:22-139.178.89.65:36330.service: Deactivated successfully. Sep 4 17:12:47.539245 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:12:47.540749 systemd-logind[2016]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:12:47.542558 systemd-logind[2016]: Removed session 14. Sep 4 17:12:52.571197 systemd[1]: Started sshd@14-172.31.29.45:22-139.178.89.65:33514.service - OpenSSH per-connection server daemon (139.178.89.65:33514). Sep 4 17:12:52.737753 sshd[4923]: Accepted publickey for core from 139.178.89.65 port 33514 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:52.740304 sshd[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:52.749272 systemd-logind[2016]: New session 15 of user core. Sep 4 17:12:52.755009 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:12:52.996635 sshd[4923]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:53.003170 systemd[1]: sshd@14-172.31.29.45:22-139.178.89.65:33514.service: Deactivated successfully. Sep 4 17:12:53.007393 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:12:53.009360 systemd-logind[2016]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:12:53.011919 systemd-logind[2016]: Removed session 15. Sep 4 17:12:58.037260 systemd[1]: Started sshd@15-172.31.29.45:22-139.178.89.65:53742.service - OpenSSH per-connection server daemon (139.178.89.65:53742). Sep 4 17:12:58.214800 sshd[4936]: Accepted publickey for core from 139.178.89.65 port 53742 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:58.217333 sshd[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:58.224473 systemd-logind[2016]: New session 16 of user core. Sep 4 17:12:58.232016 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:12:58.480202 sshd[4936]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:58.487351 systemd[1]: sshd@15-172.31.29.45:22-139.178.89.65:53742.service: Deactivated successfully. Sep 4 17:12:58.494491 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:12:58.497189 systemd-logind[2016]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:12:58.500635 systemd-logind[2016]: Removed session 16. Sep 4 17:12:58.523557 systemd[1]: Started sshd@16-172.31.29.45:22-139.178.89.65:53744.service - OpenSSH per-connection server daemon (139.178.89.65:53744). Sep 4 17:12:58.713479 sshd[4949]: Accepted publickey for core from 139.178.89.65 port 53744 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:58.716992 sshd[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:58.728986 systemd-logind[2016]: New session 17 of user core. 
Sep 4 17:12:58.740065 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:12:59.052810 sshd[4949]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:59.059034 systemd-logind[2016]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:12:59.059446 systemd[1]: sshd@16-172.31.29.45:22-139.178.89.65:53744.service: Deactivated successfully. Sep 4 17:12:59.064104 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:12:59.068435 systemd-logind[2016]: Removed session 17. Sep 4 17:12:59.091285 systemd[1]: Started sshd@17-172.31.29.45:22-139.178.89.65:53756.service - OpenSSH per-connection server daemon (139.178.89.65:53756). Sep 4 17:12:59.268237 sshd[4960]: Accepted publickey for core from 139.178.89.65 port 53756 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:59.271029 sshd[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:59.280036 systemd-logind[2016]: New session 18 of user core. Sep 4 17:12:59.290000 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:13:01.680710 sshd[4960]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:01.690664 systemd[1]: sshd@17-172.31.29.45:22-139.178.89.65:53756.service: Deactivated successfully. Sep 4 17:13:01.699036 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:13:01.704633 systemd-logind[2016]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:13:01.731255 systemd[1]: Started sshd@18-172.31.29.45:22-139.178.89.65:53758.service - OpenSSH per-connection server daemon (139.178.89.65:53758). Sep 4 17:13:01.734401 systemd-logind[2016]: Removed session 18. Sep 4 17:13:01.910347 sshd[4979]: Accepted publickey for core from 139.178.89.65 port 53758 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:01.913657 sshd[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:01.921970 systemd-logind[2016]: New session 19 of user core. Sep 4 17:13:01.930042 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:13:02.408130 sshd[4979]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:02.414498 systemd[1]: sshd@18-172.31.29.45:22-139.178.89.65:53758.service: Deactivated successfully. Sep 4 17:13:02.418472 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:13:02.420240 systemd-logind[2016]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:13:02.422584 systemd-logind[2016]: Removed session 19. Sep 4 17:13:02.444269 systemd[1]: Started sshd@19-172.31.29.45:22-139.178.89.65:53772.service - OpenSSH per-connection server daemon (139.178.89.65:53772). Sep 4 17:13:02.613256 sshd[4990]: Accepted publickey for core from 139.178.89.65 port 53772 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:02.615888 sshd[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:02.625126 systemd-logind[2016]: New session 20 of user core. Sep 4 17:13:02.631015 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:13:02.866307 sshd[4990]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:02.873154 systemd[1]: sshd@19-172.31.29.45:22-139.178.89.65:53772.service: Deactivated successfully. Sep 4 17:13:02.877534 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:13:02.879627 systemd-logind[2016]: Session 20 logged out. Waiting for processes to exit. 
Sep 4 17:13:02.882812 systemd-logind[2016]: Removed session 20. Sep 4 17:13:07.026291 systemd[1]: Started sshd@20-172.31.29.45:22-18.175.239.228:21002.service - OpenSSH per-connection server daemon (18.175.239.228:21002). Sep 4 17:13:07.055165 sshd[5005]: banner exchange: Connection from 18.175.239.228 port 21002: invalid format Sep 4 17:13:07.057638 systemd[1]: sshd@20-172.31.29.45:22-18.175.239.228:21002.service: Deactivated successfully. Sep 4 17:13:07.913236 systemd[1]: Started sshd@21-172.31.29.45:22-139.178.89.65:40758.service - OpenSSH per-connection server daemon (139.178.89.65:40758). Sep 4 17:13:08.101541 sshd[5009]: Accepted publickey for core from 139.178.89.65 port 40758 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:08.106291 sshd[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:08.114826 systemd-logind[2016]: New session 21 of user core. Sep 4 17:13:08.120009 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:13:08.362563 sshd[5009]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:08.368777 systemd[1]: sshd@21-172.31.29.45:22-139.178.89.65:40758.service: Deactivated successfully. Sep 4 17:13:08.372529 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:13:08.373803 systemd-logind[2016]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:13:08.375788 systemd-logind[2016]: Removed session 21. Sep 4 17:13:13.400271 systemd[1]: Started sshd@22-172.31.29.45:22-139.178.89.65:40760.service - OpenSSH per-connection server daemon (139.178.89.65:40760). Sep 4 17:13:13.576143 sshd[5027]: Accepted publickey for core from 139.178.89.65 port 40760 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:13.578752 sshd[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:13.587469 systemd-logind[2016]: New session 22 of user core. Sep 4 17:13:13.591546 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:13:13.843385 sshd[5027]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:13.850010 systemd[1]: sshd@22-172.31.29.45:22-139.178.89.65:40760.service: Deactivated successfully. Sep 4 17:13:13.853896 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:13:13.855899 systemd-logind[2016]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:13:13.857816 systemd-logind[2016]: Removed session 22. Sep 4 17:13:18.888281 systemd[1]: Started sshd@23-172.31.29.45:22-139.178.89.65:41926.service - OpenSSH per-connection server daemon (139.178.89.65:41926). Sep 4 17:13:19.064701 sshd[5040]: Accepted publickey for core from 139.178.89.65 port 41926 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:19.067234 sshd[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:19.076014 systemd-logind[2016]: New session 23 of user core. Sep 4 17:13:19.081978 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:13:19.317695 sshd[5040]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:19.324490 systemd[1]: sshd@23-172.31.29.45:22-139.178.89.65:41926.service: Deactivated successfully. Sep 4 17:13:19.328394 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:13:19.329714 systemd-logind[2016]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:13:19.332081 systemd-logind[2016]: Removed session 23. 
Sep 4 17:13:24.358267 systemd[1]: Started sshd@24-172.31.29.45:22-139.178.89.65:41942.service - OpenSSH per-connection server daemon (139.178.89.65:41942). Sep 4 17:13:24.538641 sshd[5053]: Accepted publickey for core from 139.178.89.65 port 41942 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:24.541215 sshd[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:24.549896 systemd-logind[2016]: New session 24 of user core. Sep 4 17:13:24.559097 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:13:24.796198 sshd[5053]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:24.801416 systemd[1]: sshd@24-172.31.29.45:22-139.178.89.65:41942.service: Deactivated successfully. Sep 4 17:13:24.805434 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:13:24.810325 systemd-logind[2016]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:13:24.813185 systemd-logind[2016]: Removed session 24. Sep 4 17:13:24.833240 systemd[1]: Started sshd@25-172.31.29.45:22-139.178.89.65:41950.service - OpenSSH per-connection server daemon (139.178.89.65:41950). Sep 4 17:13:25.011201 sshd[5066]: Accepted publickey for core from 139.178.89.65 port 41950 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:25.013769 sshd[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:25.021342 systemd-logind[2016]: New session 25 of user core. Sep 4 17:13:25.031995 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:13:27.175154 containerd[2042]: time="2024-09-04T17:13:27.175081407Z" level=info msg="StopContainer for \"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47\" with timeout 30 (s)" Sep 4 17:13:27.177276 containerd[2042]: time="2024-09-04T17:13:27.176553291Z" level=info msg="Stop container \"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47\" with signal terminated" Sep 4 17:13:27.226557 containerd[2042]: time="2024-09-04T17:13:27.226476687Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:13:27.231017 systemd[1]: cri-containerd-2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47.scope: Deactivated successfully. Sep 4 17:13:27.243241 containerd[2042]: time="2024-09-04T17:13:27.243030543Z" level=info msg="StopContainer for \"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32\" with timeout 2 (s)" Sep 4 17:13:27.243827 containerd[2042]: time="2024-09-04T17:13:27.243777039Z" level=info msg="Stop container \"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32\" with signal terminated" Sep 4 17:13:27.267438 systemd-networkd[1938]: lxc_health: Link DOWN Sep 4 17:13:27.267451 systemd-networkd[1938]: lxc_health: Lost carrier Sep 4 17:13:27.293634 systemd[1]: cri-containerd-ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32.scope: Deactivated successfully. Sep 4 17:13:27.294119 systemd[1]: cri-containerd-ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32.scope: Consumed 14.782s CPU time. Sep 4 17:13:27.320038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47-rootfs.mount: Deactivated successfully. 
Sep 4 17:13:27.339788 containerd[2042]: time="2024-09-04T17:13:27.339609652Z" level=info msg="shim disconnected" id=2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47 namespace=k8s.io Sep 4 17:13:27.340042 containerd[2042]: time="2024-09-04T17:13:27.339815248Z" level=warning msg="cleaning up after shim disconnected" id=2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47 namespace=k8s.io Sep 4 17:13:27.340042 containerd[2042]: time="2024-09-04T17:13:27.339841852Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:13:27.348350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32-rootfs.mount: Deactivated successfully. Sep 4 17:13:27.363289 containerd[2042]: time="2024-09-04T17:13:27.363074932Z" level=info msg="shim disconnected" id=ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32 namespace=k8s.io Sep 4 17:13:27.363845 containerd[2042]: time="2024-09-04T17:13:27.363396124Z" level=warning msg="cleaning up after shim disconnected" id=ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32 namespace=k8s.io Sep 4 17:13:27.363845 containerd[2042]: time="2024-09-04T17:13:27.363421888Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:13:27.374302 containerd[2042]: time="2024-09-04T17:13:27.374123332Z" level=info msg="StopContainer for \"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47\" returns successfully" Sep 4 17:13:27.375967 containerd[2042]: time="2024-09-04T17:13:27.375088228Z" level=info msg="StopPodSandbox for \"9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92\"" Sep 4 17:13:27.375967 containerd[2042]: time="2024-09-04T17:13:27.375156760Z" level=info msg="Container to stop \"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:13:27.381319 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92-shm.mount: Deactivated successfully. Sep 4 17:13:27.397388 systemd[1]: cri-containerd-9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92.scope: Deactivated successfully. 
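
Everything containerd prints here is logfmt-style: space-separated key=value pairs whose free-text values are quoted (time=…, level=…, msg="shim disconnected", id=…, namespace=k8s.io). A small sketch of splitting such a line into fields, e.g. to group the shim-disconnected/cleanup messages above by container id; it only handles the simple quoting seen in this log:

    import re

    # key=value pairs; quoted values may contain backslash-escaped characters.
    PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def parse_containerd(line: str) -> dict:
        fields = {}
        for key, raw in PAIR.findall(line):
            fields[key] = raw[1:-1].replace('\\"', '"') if raw.startswith('"') else raw
        return fields

    # parse_containerd('time="2024-09-04T17:13:27Z" level=info msg="shim disconnected" namespace=k8s.io')
    # -> {'time': '2024-09-04T17:13:27Z', 'level': 'info',
    #     'msg': 'shim disconnected', 'namespace': 'k8s.io'}
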
Sep 4 17:13:27.407125 containerd[2042]: time="2024-09-04T17:13:27.406536052Z" level=info msg="StopContainer for \"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32\" returns successfully" Sep 4 17:13:27.409148 containerd[2042]: time="2024-09-04T17:13:27.408680584Z" level=info msg="StopPodSandbox for \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\"" Sep 4 17:13:27.409148 containerd[2042]: time="2024-09-04T17:13:27.408809428Z" level=info msg="Container to stop \"0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:13:27.409148 containerd[2042]: time="2024-09-04T17:13:27.408866464Z" level=info msg="Container to stop \"d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:13:27.409148 containerd[2042]: time="2024-09-04T17:13:27.408897832Z" level=info msg="Container to stop \"57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:13:27.409148 containerd[2042]: time="2024-09-04T17:13:27.408926296Z" level=info msg="Container to stop \"43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:13:27.409148 containerd[2042]: time="2024-09-04T17:13:27.408949888Z" level=info msg="Container to stop \"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:13:27.421637 systemd[1]: cri-containerd-16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41.scope: Deactivated successfully. 
Sep 4 17:13:27.467095 containerd[2042]: time="2024-09-04T17:13:27.466623953Z" level=info msg="shim disconnected" id=9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92 namespace=k8s.io Sep 4 17:13:27.467095 containerd[2042]: time="2024-09-04T17:13:27.466706477Z" level=warning msg="cleaning up after shim disconnected" id=9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92 namespace=k8s.io Sep 4 17:13:27.467095 containerd[2042]: time="2024-09-04T17:13:27.466900649Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:13:27.468385 containerd[2042]: time="2024-09-04T17:13:27.467872241Z" level=info msg="shim disconnected" id=16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41 namespace=k8s.io Sep 4 17:13:27.468385 containerd[2042]: time="2024-09-04T17:13:27.468061853Z" level=warning msg="cleaning up after shim disconnected" id=16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41 namespace=k8s.io Sep 4 17:13:27.468385 containerd[2042]: time="2024-09-04T17:13:27.468084041Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:13:27.500679 containerd[2042]: time="2024-09-04T17:13:27.500299901Z" level=info msg="TearDown network for sandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" successfully" Sep 4 17:13:27.500679 containerd[2042]: time="2024-09-04T17:13:27.500356577Z" level=info msg="StopPodSandbox for \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" returns successfully" Sep 4 17:13:27.505381 containerd[2042]: time="2024-09-04T17:13:27.505104161Z" level=info msg="TearDown network for sandbox \"9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92\" successfully" Sep 4 17:13:27.505381 containerd[2042]: time="2024-09-04T17:13:27.505152317Z" level=info msg="StopPodSandbox for \"9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92\" returns successfully" Sep 4 17:13:27.705125 kubelet[3270]: I0904 17:13:27.704680 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-xtables-lock\") pod \"5609af2a-ee77-4d03-863c-d1fb6c9489df\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " Sep 4 17:13:27.705125 kubelet[3270]: I0904 17:13:27.704758 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-lib-modules\") pod \"5609af2a-ee77-4d03-863c-d1fb6c9489df\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " Sep 4 17:13:27.705125 kubelet[3270]: I0904 17:13:27.704805 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-host-proc-sys-kernel\") pod \"5609af2a-ee77-4d03-863c-d1fb6c9489df\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " Sep 4 17:13:27.705125 kubelet[3270]: I0904 17:13:27.704846 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-host-proc-sys-net\") pod \"5609af2a-ee77-4d03-863c-d1fb6c9489df\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " Sep 4 17:13:27.705125 kubelet[3270]: I0904 17:13:27.704889 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-cni-path\") pod \"5609af2a-ee77-4d03-863c-d1fb6c9489df\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " Sep 4 17:13:27.705125 kubelet[3270]: I0904 17:13:27.704930 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-cilium-cgroup\") pod \"5609af2a-ee77-4d03-863c-d1fb6c9489df\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " Sep 4 17:13:27.707128 kubelet[3270]: I0904 17:13:27.704978 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88jdd\" (UniqueName: \"kubernetes.io/projected/5609af2a-ee77-4d03-863c-d1fb6c9489df-kube-api-access-88jdd\") pod \"5609af2a-ee77-4d03-863c-d1fb6c9489df\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " Sep 4 17:13:27.707128 kubelet[3270]: I0904 17:13:27.705025 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d96z5\" (UniqueName: \"kubernetes.io/projected/ae03a0f3-8501-4b20-b922-a1d3dc9e796e-kube-api-access-d96z5\") pod \"ae03a0f3-8501-4b20-b922-a1d3dc9e796e\" (UID: \"ae03a0f3-8501-4b20-b922-a1d3dc9e796e\") " Sep 4 17:13:27.707128 kubelet[3270]: I0904 17:13:27.705577 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5609af2a-ee77-4d03-863c-d1fb6c9489df-clustermesh-secrets\") pod \"5609af2a-ee77-4d03-863c-d1fb6c9489df\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " Sep 4 17:13:27.707128 kubelet[3270]: I0904 17:13:27.705642 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5609af2a-ee77-4d03-863c-d1fb6c9489df-cilium-config-path\") pod \"5609af2a-ee77-4d03-863c-d1fb6c9489df\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " Sep 4 17:13:27.707128 kubelet[3270]: I0904 17:13:27.705686 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-cilium-run\") pod \"5609af2a-ee77-4d03-863c-d1fb6c9489df\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " Sep 4 17:13:27.707128 kubelet[3270]: I0904 17:13:27.705770 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-hostproc\") pod \"5609af2a-ee77-4d03-863c-d1fb6c9489df\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " Sep 4 17:13:27.707451 kubelet[3270]: I0904 17:13:27.705814 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-etc-cni-netd\") pod \"5609af2a-ee77-4d03-863c-d1fb6c9489df\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " Sep 4 17:13:27.707451 kubelet[3270]: I0904 17:13:27.705887 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5609af2a-ee77-4d03-863c-d1fb6c9489df-hubble-tls\") pod \"5609af2a-ee77-4d03-863c-d1fb6c9489df\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " Sep 4 17:13:27.707451 kubelet[3270]: I0904 17:13:27.705926 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-bpf-maps\") pod \"5609af2a-ee77-4d03-863c-d1fb6c9489df\" (UID: \"5609af2a-ee77-4d03-863c-d1fb6c9489df\") " Sep 4 17:13:27.707451 kubelet[3270]: I0904 17:13:27.705972 3270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae03a0f3-8501-4b20-b922-a1d3dc9e796e-cilium-config-path\") pod \"ae03a0f3-8501-4b20-b922-a1d3dc9e796e\" (UID: \"ae03a0f3-8501-4b20-b922-a1d3dc9e796e\") " Sep 4 17:13:27.711903 kubelet[3270]: I0904 17:13:27.710815 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5609af2a-ee77-4d03-863c-d1fb6c9489df" (UID: "5609af2a-ee77-4d03-863c-d1fb6c9489df"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:13:27.711903 kubelet[3270]: I0904 17:13:27.710909 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5609af2a-ee77-4d03-863c-d1fb6c9489df" (UID: "5609af2a-ee77-4d03-863c-d1fb6c9489df"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:13:27.711903 kubelet[3270]: I0904 17:13:27.710951 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5609af2a-ee77-4d03-863c-d1fb6c9489df" (UID: "5609af2a-ee77-4d03-863c-d1fb6c9489df"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:13:27.711903 kubelet[3270]: I0904 17:13:27.710995 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5609af2a-ee77-4d03-863c-d1fb6c9489df" (UID: "5609af2a-ee77-4d03-863c-d1fb6c9489df"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:13:27.711903 kubelet[3270]: I0904 17:13:27.711035 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-cni-path" (OuterVolumeSpecName: "cni-path") pod "5609af2a-ee77-4d03-863c-d1fb6c9489df" (UID: "5609af2a-ee77-4d03-863c-d1fb6c9489df"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:13:27.712406 kubelet[3270]: I0904 17:13:27.711075 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5609af2a-ee77-4d03-863c-d1fb6c9489df" (UID: "5609af2a-ee77-4d03-863c-d1fb6c9489df"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:13:27.713107 kubelet[3270]: I0904 17:13:27.713060 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae03a0f3-8501-4b20-b922-a1d3dc9e796e-kube-api-access-d96z5" (OuterVolumeSpecName: "kube-api-access-d96z5") pod "ae03a0f3-8501-4b20-b922-a1d3dc9e796e" (UID: "ae03a0f3-8501-4b20-b922-a1d3dc9e796e"). InnerVolumeSpecName "kube-api-access-d96z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:13:27.716469 kubelet[3270]: I0904 17:13:27.716370 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5609af2a-ee77-4d03-863c-d1fb6c9489df" (UID: "5609af2a-ee77-4d03-863c-d1fb6c9489df"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:13:27.717425 kubelet[3270]: I0904 17:13:27.716952 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5609af2a-ee77-4d03-863c-d1fb6c9489df" (UID: "5609af2a-ee77-4d03-863c-d1fb6c9489df"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:13:27.717425 kubelet[3270]: I0904 17:13:27.717128 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-hostproc" (OuterVolumeSpecName: "hostproc") pod "5609af2a-ee77-4d03-863c-d1fb6c9489df" (UID: "5609af2a-ee77-4d03-863c-d1fb6c9489df"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:13:27.718522 kubelet[3270]: I0904 17:13:27.718220 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5609af2a-ee77-4d03-863c-d1fb6c9489df" (UID: "5609af2a-ee77-4d03-863c-d1fb6c9489df"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:13:27.720152 kubelet[3270]: I0904 17:13:27.720090 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae03a0f3-8501-4b20-b922-a1d3dc9e796e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae03a0f3-8501-4b20-b922-a1d3dc9e796e" (UID: "ae03a0f3-8501-4b20-b922-a1d3dc9e796e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:13:27.720389 kubelet[3270]: I0904 17:13:27.720257 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5609af2a-ee77-4d03-863c-d1fb6c9489df-kube-api-access-88jdd" (OuterVolumeSpecName: "kube-api-access-88jdd") pod "5609af2a-ee77-4d03-863c-d1fb6c9489df" (UID: "5609af2a-ee77-4d03-863c-d1fb6c9489df"). InnerVolumeSpecName "kube-api-access-88jdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:13:27.725565 kubelet[3270]: I0904 17:13:27.725459 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5609af2a-ee77-4d03-863c-d1fb6c9489df-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5609af2a-ee77-4d03-863c-d1fb6c9489df" (UID: "5609af2a-ee77-4d03-863c-d1fb6c9489df"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 17:13:27.729309 kubelet[3270]: I0904 17:13:27.729257 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5609af2a-ee77-4d03-863c-d1fb6c9489df-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5609af2a-ee77-4d03-863c-d1fb6c9489df" (UID: "5609af2a-ee77-4d03-863c-d1fb6c9489df"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:13:27.729785 kubelet[3270]: I0904 17:13:27.729621 3270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5609af2a-ee77-4d03-863c-d1fb6c9489df-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5609af2a-ee77-4d03-863c-d1fb6c9489df" (UID: "5609af2a-ee77-4d03-863c-d1fb6c9489df"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:13:27.739038 systemd[1]: Removed slice kubepods-besteffort-podae03a0f3_8501_4b20_b922_a1d3dc9e796e.slice - libcontainer container kubepods-besteffort-podae03a0f3_8501_4b20_b922_a1d3dc9e796e.slice. Sep 4 17:13:27.806416 kubelet[3270]: I0904 17:13:27.806352 3270 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-cilium-run\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.806416 kubelet[3270]: I0904 17:13:27.806414 3270 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-hostproc\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.806633 kubelet[3270]: I0904 17:13:27.806442 3270 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5609af2a-ee77-4d03-863c-d1fb6c9489df-hubble-tls\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.806633 kubelet[3270]: I0904 17:13:27.806467 3270 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-etc-cni-netd\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.806633 kubelet[3270]: I0904 17:13:27.806493 3270 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-bpf-maps\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.806633 kubelet[3270]: I0904 17:13:27.806518 3270 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae03a0f3-8501-4b20-b922-a1d3dc9e796e-cilium-config-path\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.806633 kubelet[3270]: I0904 17:13:27.806542 3270 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-cilium-cgroup\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.806633 kubelet[3270]: I0904 17:13:27.806566 3270 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-xtables-lock\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.806633 kubelet[3270]: I0904 17:13:27.806588 3270 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-lib-modules\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.806633 kubelet[3270]: I0904 17:13:27.806613 3270 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-host-proc-sys-kernel\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.807084 kubelet[3270]: I0904 17:13:27.806636 3270 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-host-proc-sys-net\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.807084 kubelet[3270]: I0904 17:13:27.806664 3270 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5609af2a-ee77-4d03-863c-d1fb6c9489df-cni-path\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.807084 kubelet[3270]: I0904 17:13:27.806689 3270 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-88jdd\" (UniqueName: \"kubernetes.io/projected/5609af2a-ee77-4d03-863c-d1fb6c9489df-kube-api-access-88jdd\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.807084 kubelet[3270]: I0904 17:13:27.806740 3270 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d96z5\" (UniqueName: \"kubernetes.io/projected/ae03a0f3-8501-4b20-b922-a1d3dc9e796e-kube-api-access-d96z5\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.807084 kubelet[3270]: I0904 17:13:27.806770 3270 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5609af2a-ee77-4d03-863c-d1fb6c9489df-clustermesh-secrets\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.807084 kubelet[3270]: I0904 17:13:27.806796 3270 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5609af2a-ee77-4d03-863c-d1fb6c9489df-cilium-config-path\") on node \"ip-172-31-29-45\" DevicePath \"\"" Sep 4 17:13:27.920491 kubelet[3270]: E0904 17:13:27.920387 3270 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 17:13:28.142193 kubelet[3270]: I0904 17:13:28.142104 3270 scope.go:117] "RemoveContainer" containerID="2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47" Sep 4 17:13:28.146204 containerd[2042]: time="2024-09-04T17:13:28.144865108Z" level=info msg="RemoveContainer for \"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47\"" Sep 4 17:13:28.157271 containerd[2042]: time="2024-09-04T17:13:28.157202380Z" level=info msg="RemoveContainer for \"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47\" returns successfully" Sep 4 17:13:28.158483 kubelet[3270]: I0904 17:13:28.158444 3270 scope.go:117] "RemoveContainer" containerID="2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47" Sep 4 17:13:28.160107 containerd[2042]: time="2024-09-04T17:13:28.159333196Z" level=error msg="ContainerStatus for \"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47\": not found" Sep 4 17:13:28.160558 kubelet[3270]: E0904 17:13:28.160526 3270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47\": not found" containerID="2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47" Sep 4 17:13:28.160845 kubelet[3270]: I0904 17:13:28.160818 3270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47"} err="failed to get container status 
\"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b29d3122450f64d2cc45e65386da922e5b3ba07baca872f1e7d10f115c78f47\": not found" Sep 4 17:13:28.161352 kubelet[3270]: I0904 17:13:28.161125 3270 scope.go:117] "RemoveContainer" containerID="ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32" Sep 4 17:13:28.167227 containerd[2042]: time="2024-09-04T17:13:28.166239292Z" level=info msg="RemoveContainer for \"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32\"" Sep 4 17:13:28.176572 containerd[2042]: time="2024-09-04T17:13:28.175454596Z" level=info msg="RemoveContainer for \"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32\" returns successfully" Sep 4 17:13:28.176697 systemd[1]: Removed slice kubepods-burstable-pod5609af2a_ee77_4d03_863c_d1fb6c9489df.slice - libcontainer container kubepods-burstable-pod5609af2a_ee77_4d03_863c_d1fb6c9489df.slice. Sep 4 17:13:28.177287 systemd[1]: kubepods-burstable-pod5609af2a_ee77_4d03_863c_d1fb6c9489df.slice: Consumed 14.930s CPU time. Sep 4 17:13:28.179292 kubelet[3270]: I0904 17:13:28.179109 3270 scope.go:117] "RemoveContainer" containerID="57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae" Sep 4 17:13:28.183607 containerd[2042]: time="2024-09-04T17:13:28.182806348Z" level=info msg="RemoveContainer for \"57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae\"" Sep 4 17:13:28.186557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92-rootfs.mount: Deactivated successfully. Sep 4 17:13:28.186883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41-rootfs.mount: Deactivated successfully. Sep 4 17:13:28.187057 systemd[1]: var-lib-kubelet-pods-ae03a0f3\x2d8501\x2d4b20\x2db922\x2da1d3dc9e796e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd96z5.mount: Deactivated successfully. Sep 4 17:13:28.187213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41-shm.mount: Deactivated successfully. Sep 4 17:13:28.187361 systemd[1]: var-lib-kubelet-pods-5609af2a\x2dee77\x2d4d03\x2d863c\x2dd1fb6c9489df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d88jdd.mount: Deactivated successfully. Sep 4 17:13:28.187518 systemd[1]: var-lib-kubelet-pods-5609af2a\x2dee77\x2d4d03\x2d863c\x2dd1fb6c9489df-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 17:13:28.187691 systemd[1]: var-lib-kubelet-pods-5609af2a\x2dee77\x2d4d03\x2d863c\x2dd1fb6c9489df-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 4 17:13:28.197931 containerd[2042]: time="2024-09-04T17:13:28.195708976Z" level=info msg="RemoveContainer for \"57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae\" returns successfully" Sep 4 17:13:28.198511 kubelet[3270]: I0904 17:13:28.197059 3270 scope.go:117] "RemoveContainer" containerID="d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056" Sep 4 17:13:28.202146 containerd[2042]: time="2024-09-04T17:13:28.201796024Z" level=info msg="RemoveContainer for \"d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056\"" Sep 4 17:13:28.207900 containerd[2042]: time="2024-09-04T17:13:28.207688300Z" level=info msg="RemoveContainer for \"d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056\" returns successfully" Sep 4 17:13:28.209290 kubelet[3270]: I0904 17:13:28.208902 3270 scope.go:117] "RemoveContainer" containerID="43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb" Sep 4 17:13:28.212480 containerd[2042]: time="2024-09-04T17:13:28.212322124Z" level=info msg="RemoveContainer for \"43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb\"" Sep 4 17:13:28.217071 containerd[2042]: time="2024-09-04T17:13:28.217011832Z" level=info msg="RemoveContainer for \"43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb\" returns successfully" Sep 4 17:13:28.217583 kubelet[3270]: I0904 17:13:28.217382 3270 scope.go:117] "RemoveContainer" containerID="0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3" Sep 4 17:13:28.219437 containerd[2042]: time="2024-09-04T17:13:28.219392596Z" level=info msg="RemoveContainer for \"0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3\"" Sep 4 17:13:28.224523 containerd[2042]: time="2024-09-04T17:13:28.224409736Z" level=info msg="RemoveContainer for \"0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3\" returns successfully" Sep 4 17:13:28.224820 kubelet[3270]: I0904 17:13:28.224773 3270 scope.go:117] "RemoveContainer" containerID="ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32" Sep 4 17:13:28.225505 containerd[2042]: time="2024-09-04T17:13:28.225349216Z" level=error msg="ContainerStatus for \"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32\": not found" Sep 4 17:13:28.225666 kubelet[3270]: E0904 17:13:28.225624 3270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32\": not found" containerID="ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32" Sep 4 17:13:28.225768 kubelet[3270]: I0904 17:13:28.225684 3270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32"} err="failed to get container status \"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec40372f820773c17e1bb0ff6c40dbf93c508e924a47bead69e21366d5bd0f32\": not found" Sep 4 17:13:28.225768 kubelet[3270]: I0904 17:13:28.225707 3270 scope.go:117] "RemoveContainer" containerID="57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae" Sep 4 17:13:28.226244 containerd[2042]: 
time="2024-09-04T17:13:28.226081816Z" level=error msg="ContainerStatus for \"57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae\": not found" Sep 4 17:13:28.226450 kubelet[3270]: E0904 17:13:28.226396 3270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae\": not found" containerID="57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae" Sep 4 17:13:28.226450 kubelet[3270]: I0904 17:13:28.226445 3270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae"} err="failed to get container status \"57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae\": rpc error: code = NotFound desc = an error occurred when try to find container \"57d9052d5eea222815ebe53b209e2740023c4b8e0adc5803616dcfb76b09ceae\": not found" Sep 4 17:13:28.226954 kubelet[3270]: I0904 17:13:28.226472 3270 scope.go:117] "RemoveContainer" containerID="d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056" Sep 4 17:13:28.227121 containerd[2042]: time="2024-09-04T17:13:28.226851208Z" level=error msg="ContainerStatus for \"d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056\": not found" Sep 4 17:13:28.227309 kubelet[3270]: E0904 17:13:28.227067 3270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056\": not found" containerID="d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056" Sep 4 17:13:28.227309 kubelet[3270]: I0904 17:13:28.227113 3270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056"} err="failed to get container status \"d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1940f93ddb0a3e7f5571862e4f27ecfc5383b25cb1ed1ca22095c6a40153056\": not found" Sep 4 17:13:28.227309 kubelet[3270]: I0904 17:13:28.227134 3270 scope.go:117] "RemoveContainer" containerID="43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb" Sep 4 17:13:28.227800 containerd[2042]: time="2024-09-04T17:13:28.227697880Z" level=error msg="ContainerStatus for \"43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb\": not found" Sep 4 17:13:28.227993 kubelet[3270]: E0904 17:13:28.227928 3270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb\": not found" containerID="43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb" Sep 4 17:13:28.227993 kubelet[3270]: 
I0904 17:13:28.227980 3270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb"} err="failed to get container status \"43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"43eb1c350a686e995be4845f217683d57e9f532de81e5a07b6a621a69fc2d1fb\": not found" Sep 4 17:13:28.228326 kubelet[3270]: I0904 17:13:28.228009 3270 scope.go:117] "RemoveContainer" containerID="0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3" Sep 4 17:13:28.228520 containerd[2042]: time="2024-09-04T17:13:28.228297268Z" level=error msg="ContainerStatus for \"0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3\": not found" Sep 4 17:13:28.228684 kubelet[3270]: E0904 17:13:28.228642 3270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3\": not found" containerID="0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3" Sep 4 17:13:28.228797 kubelet[3270]: I0904 17:13:28.228689 3270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3"} err="failed to get container status \"0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a6fc06e5ef68aa171d702cbb53de97a414609ec73cabedabb9498274fae2ae3\": not found" Sep 4 17:13:29.117083 sshd[5066]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:29.124622 systemd[1]: sshd@25-172.31.29.45:22-139.178.89.65:41950.service: Deactivated successfully. Sep 4 17:13:29.129393 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:13:29.129695 systemd[1]: session-25.scope: Consumed 1.395s CPU time. Sep 4 17:13:29.131670 systemd-logind[2016]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:13:29.134195 systemd-logind[2016]: Removed session 25. Sep 4 17:13:29.155269 systemd[1]: Started sshd@26-172.31.29.45:22-139.178.89.65:60876.service - OpenSSH per-connection server daemon (139.178.89.65:60876). Sep 4 17:13:29.332466 sshd[5226]: Accepted publickey for core from 139.178.89.65 port 60876 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:29.335092 sshd[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:29.343177 systemd-logind[2016]: New session 26 of user core. Sep 4 17:13:29.353030 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 4 17:13:29.666166 kubelet[3270]: I0904 17:13:29.664094 3270 setters.go:568] "Node became not ready" node="ip-172-31-29-45" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-04T17:13:29Z","lastTransitionTime":"2024-09-04T17:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 17:13:29.709516 kubelet[3270]: I0904 17:13:29.709479 3270 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5609af2a-ee77-4d03-863c-d1fb6c9489df" path="/var/lib/kubelet/pods/5609af2a-ee77-4d03-863c-d1fb6c9489df/volumes" Sep 4 17:13:29.712006 kubelet[3270]: I0904 17:13:29.711794 3270 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ae03a0f3-8501-4b20-b922-a1d3dc9e796e" path="/var/lib/kubelet/pods/ae03a0f3-8501-4b20-b922-a1d3dc9e796e/volumes" Sep 4 17:13:29.963463 ntpd[2009]: Deleting interface #11 lxc_health, fe80::64f7:c5ff:fe58:7a4b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=64 secs Sep 4 17:13:29.964142 ntpd[2009]: 4 Sep 17:13:29 ntpd[2009]: Deleting interface #11 lxc_health, fe80::64f7:c5ff:fe58:7a4b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=64 secs Sep 4 17:13:30.805453 sshd[5226]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:30.813839 kubelet[3270]: I0904 17:13:30.812015 3270 topology_manager.go:215] "Topology Admit Handler" podUID="05b6e69a-0dac-494c-9d2a-f5c5108d72ee" podNamespace="kube-system" podName="cilium-75shd" Sep 4 17:13:30.813839 kubelet[3270]: E0904 17:13:30.812104 3270 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5609af2a-ee77-4d03-863c-d1fb6c9489df" containerName="cilium-agent" Sep 4 17:13:30.813839 kubelet[3270]: E0904 17:13:30.812158 3270 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5609af2a-ee77-4d03-863c-d1fb6c9489df" containerName="mount-cgroup" Sep 4 17:13:30.813839 kubelet[3270]: E0904 17:13:30.812178 3270 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5609af2a-ee77-4d03-863c-d1fb6c9489df" containerName="mount-bpf-fs" Sep 4 17:13:30.813839 kubelet[3270]: E0904 17:13:30.812197 3270 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5609af2a-ee77-4d03-863c-d1fb6c9489df" containerName="apply-sysctl-overwrites" Sep 4 17:13:30.813839 kubelet[3270]: E0904 17:13:30.812216 3270 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae03a0f3-8501-4b20-b922-a1d3dc9e796e" containerName="cilium-operator" Sep 4 17:13:30.813839 kubelet[3270]: E0904 17:13:30.812235 3270 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5609af2a-ee77-4d03-863c-d1fb6c9489df" containerName="clean-cilium-state" Sep 4 17:13:30.813839 kubelet[3270]: I0904 17:13:30.812285 3270 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae03a0f3-8501-4b20-b922-a1d3dc9e796e" containerName="cilium-operator" Sep 4 17:13:30.813839 kubelet[3270]: I0904 17:13:30.812303 3270 memory_manager.go:354] "RemoveStaleState removing state" podUID="5609af2a-ee77-4d03-863c-d1fb6c9489df" containerName="cilium-agent" Sep 4 17:13:30.816391 systemd[1]: sshd@26-172.31.29.45:22-139.178.89.65:60876.service: Deactivated successfully. 
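
Removing the old Cilium agent leaves the node without a working CNI, so the kubelet flips the node's Ready condition to False ("cni plugin not initialized") and, in the same stretch of log, the Topology Admit Handler admits the replacement pod cilium-75shd. The condition is embedded in the entry as plain JSON; a tiny sketch of pulling it out and checking readiness, with field names taken from the entry above:

    import json, re

    COND = re.compile(r'condition=(\{.*?\})(?=\s|$)')

    def ready_condition(line: str):
        """Extract the embedded Ready condition object, if the line carries one."""
        m = COND.search(line)
        return json.loads(m.group(1)) if m else None

    def is_ready(cond) -> bool:
        return bool(cond) and cond.get("type") == "Ready" and cond.get("status") == "True"

    # For the "Node became not ready" entry above, ready_condition(...) returns a
    # dict whose reason is "KubeletNotReady", and is_ready(...) stays False until
    # a CNI config is back in /etc/cni/net.d.
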
Sep 4 17:13:30.829932 kubelet[3270]: W0904 17:13:30.829553 3270 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-29-45" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-45' and this object Sep 4 17:13:30.829932 kubelet[3270]: W0904 17:13:30.829553 3270 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-29-45" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-45' and this object Sep 4 17:13:30.829932 kubelet[3270]: E0904 17:13:30.829619 3270 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-29-45" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-45' and this object Sep 4 17:13:30.829932 kubelet[3270]: E0904 17:13:30.829640 3270 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-29-45" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-45' and this object Sep 4 17:13:30.829932 kubelet[3270]: W0904 17:13:30.829699 3270 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-29-45" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-45' and this object Sep 4 17:13:30.830273 kubelet[3270]: W0904 17:13:30.829716 3270 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-29-45" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-45' and this object Sep 4 17:13:30.830273 kubelet[3270]: E0904 17:13:30.829753 3270 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-29-45" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-45' and this object Sep 4 17:13:30.830273 kubelet[3270]: E0904 17:13:30.829765 3270 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-29-45" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-45' and this object Sep 4 17:13:30.830195 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:13:30.830497 systemd[1]: session-26.scope: Consumed 1.278s CPU time. Sep 4 17:13:30.836858 systemd-logind[2016]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:13:30.869408 systemd[1]: Started sshd@27-172.31.29.45:22-139.178.89.65:60886.service - OpenSSH per-connection server daemon (139.178.89.65:60886). 
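
The reflector warnings above all share one shape: the kubelet, acting as system:node:ip-172-31-29-45, is refused a list on a secret or configmap because the node authorizer sees "no relationship found between node ... and this object", presumably because the references from the just-admitted cilium-75shd pod have not propagated yet (the same volumes do mount a couple of seconds later). A small sketch of tallying which objects are blocked from lines like these:

    import re

    FORBIDDEN = re.compile(
        r'(?P<kind>secrets|configmaps) "(?P<name>[^"]+)" is forbidden: '
        r'User "(?P<user>[^"]+)" cannot list resource')

    def blocked_objects(lines):
        seen = set()
        for line in lines:
            for m in FORBIDDEN.finditer(line):
                seen.add((m.group("kind"), m.group("name")))
        return sorted(seen)

    # On the warnings above this returns:
    # [('configmaps', 'cilium-config'), ('secrets', 'cilium-clustermesh'),
    #  ('secrets', 'cilium-ipsec-keys'), ('secrets', 'hubble-server-certs')]
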
Sep 4 17:13:30.873487 systemd-logind[2016]: Removed session 26. Sep 4 17:13:30.898077 systemd[1]: Created slice kubepods-burstable-pod05b6e69a_0dac_494c_9d2a_f5c5108d72ee.slice - libcontainer container kubepods-burstable-pod05b6e69a_0dac_494c_9d2a_f5c5108d72ee.slice. Sep 4 17:13:30.923258 kubelet[3270]: I0904 17:13:30.923211 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-cni-path\") pod \"cilium-75shd\" (UID: \"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:30.923768 kubelet[3270]: I0904 17:13:30.923624 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-hostproc\") pod \"cilium-75shd\" (UID: \"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:30.924041 kubelet[3270]: I0904 17:13:30.923902 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-hubble-tls\") pod \"cilium-75shd\" (UID: \"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:30.924268 kubelet[3270]: I0904 17:13:30.924203 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-bpf-maps\") pod \"cilium-75shd\" (UID: \"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:30.924921 kubelet[3270]: I0904 17:13:30.924505 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-cilium-cgroup\") pod \"cilium-75shd\" (UID: \"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:30.925384 kubelet[3270]: I0904 17:13:30.925184 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-lib-modules\") pod \"cilium-75shd\" (UID: \"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:30.926228 kubelet[3270]: I0904 17:13:30.926034 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-host-proc-sys-kernel\") pod \"cilium-75shd\" (UID: \"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:30.926228 kubelet[3270]: I0904 17:13:30.926151 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-clustermesh-secrets\") pod \"cilium-75shd\" (UID: \"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:30.927300 kubelet[3270]: I0904 17:13:30.927033 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-cilium-config-path\") pod \"cilium-75shd\" (UID: 
\"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:30.927300 kubelet[3270]: I0904 17:13:30.927127 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-xtables-lock\") pod \"cilium-75shd\" (UID: \"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:30.927300 kubelet[3270]: I0904 17:13:30.927251 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-host-proc-sys-net\") pod \"cilium-75shd\" (UID: \"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:30.928218 kubelet[3270]: I0904 17:13:30.927875 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssnn7\" (UniqueName: \"kubernetes.io/projected/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-kube-api-access-ssnn7\") pod \"cilium-75shd\" (UID: \"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:30.929586 kubelet[3270]: I0904 17:13:30.929111 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-etc-cni-netd\") pod \"cilium-75shd\" (UID: \"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:30.929586 kubelet[3270]: I0904 17:13:30.929201 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-cilium-run\") pod \"cilium-75shd\" (UID: \"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:30.929586 kubelet[3270]: I0904 17:13:30.929277 3270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-cilium-ipsec-secrets\") pod \"cilium-75shd\" (UID: \"05b6e69a-0dac-494c-9d2a-f5c5108d72ee\") " pod="kube-system/cilium-75shd" Sep 4 17:13:31.080874 sshd[5238]: Accepted publickey for core from 139.178.89.65 port 60886 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:31.085976 sshd[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:31.096092 systemd-logind[2016]: New session 27 of user core. Sep 4 17:13:31.106890 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 17:13:31.232863 sshd[5238]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:31.239982 systemd[1]: sshd@27-172.31.29.45:22-139.178.89.65:60886.service: Deactivated successfully. Sep 4 17:13:31.243152 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 17:13:31.245717 systemd-logind[2016]: Session 27 logged out. Waiting for processes to exit. Sep 4 17:13:31.247945 systemd-logind[2016]: Removed session 27. Sep 4 17:13:31.275286 systemd[1]: Started sshd@28-172.31.29.45:22-139.178.89.65:60894.service - OpenSSH per-connection server daemon (139.178.89.65:60894). 
Sep 4 17:13:31.458197 sshd[5247]: Accepted publickey for core from 139.178.89.65 port 60894 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:31.461318 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:31.468793 systemd-logind[2016]: New session 28 of user core. Sep 4 17:13:31.475005 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 4 17:13:32.032259 kubelet[3270]: E0904 17:13:32.031885 3270 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Sep 4 17:13:32.032259 kubelet[3270]: E0904 17:13:32.032037 3270 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-cilium-ipsec-secrets podName:05b6e69a-0dac-494c-9d2a-f5c5108d72ee nodeName:}" failed. No retries permitted until 2024-09-04 17:13:32.531998187 +0000 UTC m=+105.090885888 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-cilium-ipsec-secrets") pod "cilium-75shd" (UID: "05b6e69a-0dac-494c-9d2a-f5c5108d72ee") : failed to sync secret cache: timed out waiting for the condition Sep 4 17:13:32.032259 kubelet[3270]: E0904 17:13:32.032218 3270 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 4 17:13:32.033439 kubelet[3270]: E0904 17:13:32.032278 3270 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-cilium-config-path podName:05b6e69a-0dac-494c-9d2a-f5c5108d72ee nodeName:}" failed. No retries permitted until 2024-09-04 17:13:32.532260423 +0000 UTC m=+105.091148124 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-cilium-config-path") pod "cilium-75shd" (UID: "05b6e69a-0dac-494c-9d2a-f5c5108d72ee") : failed to sync configmap cache: timed out waiting for the condition Sep 4 17:13:32.033439 kubelet[3270]: E0904 17:13:32.032309 3270 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Sep 4 17:13:32.033439 kubelet[3270]: E0904 17:13:32.032356 3270 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-clustermesh-secrets podName:05b6e69a-0dac-494c-9d2a-f5c5108d72ee nodeName:}" failed. No retries permitted until 2024-09-04 17:13:32.532339791 +0000 UTC m=+105.091227492 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-clustermesh-secrets") pod "cilium-75shd" (UID: "05b6e69a-0dac-494c-9d2a-f5c5108d72ee") : failed to sync secret cache: timed out waiting for the condition Sep 4 17:13:32.034520 kubelet[3270]: E0904 17:13:32.034229 3270 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Sep 4 17:13:32.034520 kubelet[3270]: E0904 17:13:32.034306 3270 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-75shd: failed to sync secret cache: timed out waiting for the condition Sep 4 17:13:32.034520 kubelet[3270]: E0904 17:13:32.034418 3270 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-hubble-tls podName:05b6e69a-0dac-494c-9d2a-f5c5108d72ee nodeName:}" failed. No retries permitted until 2024-09-04 17:13:32.534389859 +0000 UTC m=+105.093277560 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/05b6e69a-0dac-494c-9d2a-f5c5108d72ee-hubble-tls") pod "cilium-75shd" (UID: "05b6e69a-0dac-494c-9d2a-f5c5108d72ee") : failed to sync secret cache: timed out waiting for the condition Sep 4 17:13:32.710084 containerd[2042]: time="2024-09-04T17:13:32.710006135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-75shd,Uid:05b6e69a-0dac-494c-9d2a-f5c5108d72ee,Namespace:kube-system,Attempt:0,}" Sep 4 17:13:32.751047 containerd[2042]: time="2024-09-04T17:13:32.750839555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:13:32.751047 containerd[2042]: time="2024-09-04T17:13:32.750973307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:13:32.752139 containerd[2042]: time="2024-09-04T17:13:32.751944107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:13:32.752139 containerd[2042]: time="2024-09-04T17:13:32.752005643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:13:32.794057 systemd[1]: Started cri-containerd-a6fc03a9f43a9caca45a8470449ec3a5f6aeff9a5f685922be4a9bf0750859af.scope - libcontainer container a6fc03a9f43a9caca45a8470449ec3a5f6aeff9a5f685922be4a9bf0750859af. 
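
Each MountVolume.SetUp failure above is not fatal: the kubelet records "No retries permitted until …" 500 ms in the future and tries again, and by 17:13:32.710 the caches have synced and RunPodSandbox for cilium-75shd goes ahead. A rough sketch of that wait-and-retry shape; the exponential growth and cap here are assumptions for illustration, not kubelet's exact backoff parameters:

    import time

    def retry_until(op, base_delay=0.5, max_delay=16.0, give_up_after=120.0):
        """Keep retrying op(), sleeping between attempts, until it succeeds or we give up."""
        start, delay = time.monotonic(), base_delay
        while True:
            try:
                return op()
            except Exception as exc:
                if time.monotonic() - start > give_up_after:
                    raise TimeoutError("timed out waiting for the condition") from exc
                time.sleep(delay)                   # "No retries permitted until ..."
                delay = min(delay * 2, max_delay)   # assumed growth; 0.5 s is the first step seen above

    # Usage sketch (mount_secret is a hypothetical helper that raises until the
    # secret cache has synced): retry_until(lambda: mount_secret("cilium-ipsec-secrets"))
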
Sep 4 17:13:32.834520 containerd[2042]: time="2024-09-04T17:13:32.834458099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-75shd,Uid:05b6e69a-0dac-494c-9d2a-f5c5108d72ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6fc03a9f43a9caca45a8470449ec3a5f6aeff9a5f685922be4a9bf0750859af\""
Sep 4 17:13:32.840285 containerd[2042]: time="2024-09-04T17:13:32.840028031Z" level=info msg="CreateContainer within sandbox \"a6fc03a9f43a9caca45a8470449ec3a5f6aeff9a5f685922be4a9bf0750859af\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 17:13:32.863947 containerd[2042]: time="2024-09-04T17:13:32.863886299Z" level=info msg="CreateContainer within sandbox \"a6fc03a9f43a9caca45a8470449ec3a5f6aeff9a5f685922be4a9bf0750859af\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"16eae938cbccbf31622de2966c1720fe6a5d7fe39776ec08f1066a58e1997899\""
Sep 4 17:13:32.865054 containerd[2042]: time="2024-09-04T17:13:32.865003499Z" level=info msg="StartContainer for \"16eae938cbccbf31622de2966c1720fe6a5d7fe39776ec08f1066a58e1997899\""
Sep 4 17:13:32.908039 systemd[1]: Started cri-containerd-16eae938cbccbf31622de2966c1720fe6a5d7fe39776ec08f1066a58e1997899.scope - libcontainer container 16eae938cbccbf31622de2966c1720fe6a5d7fe39776ec08f1066a58e1997899.
Sep 4 17:13:32.922859 kubelet[3270]: E0904 17:13:32.922689 3270 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 17:13:32.952394 containerd[2042]: time="2024-09-04T17:13:32.952268172Z" level=info msg="StartContainer for \"16eae938cbccbf31622de2966c1720fe6a5d7fe39776ec08f1066a58e1997899\" returns successfully"
Sep 4 17:13:32.969862 systemd[1]: cri-containerd-16eae938cbccbf31622de2966c1720fe6a5d7fe39776ec08f1066a58e1997899.scope: Deactivated successfully.
Sep 4 17:13:33.024713 containerd[2042]: time="2024-09-04T17:13:33.024403316Z" level=info msg="shim disconnected" id=16eae938cbccbf31622de2966c1720fe6a5d7fe39776ec08f1066a58e1997899 namespace=k8s.io
Sep 4 17:13:33.024713 containerd[2042]: time="2024-09-04T17:13:33.024481616Z" level=warning msg="cleaning up after shim disconnected" id=16eae938cbccbf31622de2966c1720fe6a5d7fe39776ec08f1066a58e1997899 namespace=k8s.io
Sep 4 17:13:33.024713 containerd[2042]: time="2024-09-04T17:13:33.024503768Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:13:33.184058 containerd[2042]: time="2024-09-04T17:13:33.183295329Z" level=info msg="CreateContainer within sandbox \"a6fc03a9f43a9caca45a8470449ec3a5f6aeff9a5f685922be4a9bf0750859af\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 17:13:33.203337 containerd[2042]: time="2024-09-04T17:13:33.203132901Z" level=info msg="CreateContainer within sandbox \"a6fc03a9f43a9caca45a8470449ec3a5f6aeff9a5f685922be4a9bf0750859af\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"00232c55d3d08824d0456e7482f5b5b36690dd154fdfe0890e42352fc32b043b\""
Sep 4 17:13:33.208944 containerd[2042]: time="2024-09-04T17:13:33.206495805Z" level=info msg="StartContainer for \"00232c55d3d08824d0456e7482f5b5b36690dd154fdfe0890e42352fc32b043b\""
Sep 4 17:13:33.256163 systemd[1]: Started cri-containerd-00232c55d3d08824d0456e7482f5b5b36690dd154fdfe0890e42352fc32b043b.scope - libcontainer container 00232c55d3d08824d0456e7482f5b5b36690dd154fdfe0890e42352fc32b043b.
Sep 4 17:13:33.301104 containerd[2042]: time="2024-09-04T17:13:33.301036366Z" level=info msg="StartContainer for \"00232c55d3d08824d0456e7482f5b5b36690dd154fdfe0890e42352fc32b043b\" returns successfully"
Sep 4 17:13:33.314264 systemd[1]: cri-containerd-00232c55d3d08824d0456e7482f5b5b36690dd154fdfe0890e42352fc32b043b.scope: Deactivated successfully.
Sep 4 17:13:33.358145 containerd[2042]: time="2024-09-04T17:13:33.358074250Z" level=info msg="shim disconnected" id=00232c55d3d08824d0456e7482f5b5b36690dd154fdfe0890e42352fc32b043b namespace=k8s.io
Sep 4 17:13:33.358493 containerd[2042]: time="2024-09-04T17:13:33.358448350Z" level=warning msg="cleaning up after shim disconnected" id=00232c55d3d08824d0456e7482f5b5b36690dd154fdfe0890e42352fc32b043b namespace=k8s.io
Sep 4 17:13:33.358659 containerd[2042]: time="2024-09-04T17:13:33.358582774Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:13:33.703857 kubelet[3270]: E0904 17:13:33.703418 3270 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-rgsfv" podUID="7350034f-bb15-4e2d-bb5d-fb0d71fdf227"
Sep 4 17:13:34.187538 containerd[2042]: time="2024-09-04T17:13:34.187439842Z" level=info msg="CreateContainer within sandbox \"a6fc03a9f43a9caca45a8470449ec3a5f6aeff9a5f685922be4a9bf0750859af\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 17:13:34.218677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3328587966.mount: Deactivated successfully.
Sep 4 17:13:34.220672 containerd[2042]: time="2024-09-04T17:13:34.220146082Z" level=info msg="CreateContainer within sandbox \"a6fc03a9f43a9caca45a8470449ec3a5f6aeff9a5f685922be4a9bf0750859af\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a2d22a83c605a332c4fe6a9947769680a8a8c19dc98875ddc7ce262c0f924939\""
Sep 4 17:13:34.222755 containerd[2042]: time="2024-09-04T17:13:34.221061898Z" level=info msg="StartContainer for \"a2d22a83c605a332c4fe6a9947769680a8a8c19dc98875ddc7ce262c0f924939\""
Sep 4 17:13:34.285030 systemd[1]: Started cri-containerd-a2d22a83c605a332c4fe6a9947769680a8a8c19dc98875ddc7ce262c0f924939.scope - libcontainer container a2d22a83c605a332c4fe6a9947769680a8a8c19dc98875ddc7ce262c0f924939.
Sep 4 17:13:34.331984 containerd[2042]: time="2024-09-04T17:13:34.331399763Z" level=info msg="StartContainer for \"a2d22a83c605a332c4fe6a9947769680a8a8c19dc98875ddc7ce262c0f924939\" returns successfully"
Sep 4 17:13:34.338523 systemd[1]: cri-containerd-a2d22a83c605a332c4fe6a9947769680a8a8c19dc98875ddc7ce262c0f924939.scope: Deactivated successfully.
Sep 4 17:13:34.388078 containerd[2042]: time="2024-09-04T17:13:34.388004675Z" level=info msg="shim disconnected" id=a2d22a83c605a332c4fe6a9947769680a8a8c19dc98875ddc7ce262c0f924939 namespace=k8s.io
Sep 4 17:13:34.388526 containerd[2042]: time="2024-09-04T17:13:34.388448747Z" level=warning msg="cleaning up after shim disconnected" id=a2d22a83c605a332c4fe6a9947769680a8a8c19dc98875ddc7ce262c0f924939 namespace=k8s.io
Sep 4 17:13:34.388661 containerd[2042]: time="2024-09-04T17:13:34.388634651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:13:34.564159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2d22a83c605a332c4fe6a9947769680a8a8c19dc98875ddc7ce262c0f924939-rootfs.mount: Deactivated successfully.
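The repeating pattern above (one RunPodSandbox, then CreateContainer/StartContainer for mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state just below, each followed by "shim disconnected" as it exits) is the sandbox-then-init-containers ordering that the kubelet drives through the CRI. A rough Go sketch of that ordering, using a hypothetical runtime interface rather than the real CRI gRPC API:

```go
// A rough sketch of the ordering above; the runtime interface is hypothetical,
// not the real CRI API. One sandbox, then each init container in sequence.
package main

import "fmt"

// runtime stands in for the CRI runtime service the kubelet talks to.
type runtime interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

func runInitContainers(r runtime, pod string, names []string) error {
	sandboxID, err := r.RunPodSandbox(pod)
	if err != nil {
		return err
	}
	for _, name := range names {
		id, err := r.CreateContainer(sandboxID, name)
		if err != nil {
			return fmt.Errorf("create %s: %w", name, err)
		}
		if err := r.StartContainer(id); err != nil {
			return fmt.Errorf("start %s: %w", name, err)
		}
		// Each init container must run to completion before the next one is
		// created, which is why "shim disconnected" appears between steps.
	}
	return nil
}

// fakeRuntime lets the sketch run standalone.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) { return "sandbox-for-" + pod, nil }
func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	f.n++
	return fmt.Sprintf("ctr-%d-%s", f.n, name), nil
}
func (f *fakeRuntime) StartContainer(id string) error { fmt.Println("started", id); return nil }

func main() {
	names := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}
	if err := runInitContainers(&fakeRuntime{}, "cilium-75shd", names); err != nil {
		fmt.Println("error:", err)
	}
}
```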
Sep 4 17:13:35.193024 containerd[2042]: time="2024-09-04T17:13:35.192961679Z" level=info msg="CreateContainer within sandbox \"a6fc03a9f43a9caca45a8470449ec3a5f6aeff9a5f685922be4a9bf0750859af\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 17:13:35.224713 containerd[2042]: time="2024-09-04T17:13:35.224608859Z" level=info msg="CreateContainer within sandbox \"a6fc03a9f43a9caca45a8470449ec3a5f6aeff9a5f685922be4a9bf0750859af\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a9f2b11f3b93814d2d0b95149802e088bbbc02367085310728432c4698186a58\""
Sep 4 17:13:35.228596 containerd[2042]: time="2024-09-04T17:13:35.227402903Z" level=info msg="StartContainer for \"a9f2b11f3b93814d2d0b95149802e088bbbc02367085310728432c4698186a58\""
Sep 4 17:13:35.286011 systemd[1]: Started cri-containerd-a9f2b11f3b93814d2d0b95149802e088bbbc02367085310728432c4698186a58.scope - libcontainer container a9f2b11f3b93814d2d0b95149802e088bbbc02367085310728432c4698186a58.
Sep 4 17:13:35.329506 systemd[1]: cri-containerd-a9f2b11f3b93814d2d0b95149802e088bbbc02367085310728432c4698186a58.scope: Deactivated successfully.
Sep 4 17:13:35.338440 containerd[2042]: time="2024-09-04T17:13:35.338285112Z" level=info msg="StartContainer for \"a9f2b11f3b93814d2d0b95149802e088bbbc02367085310728432c4698186a58\" returns successfully"
Sep 4 17:13:35.386776 containerd[2042]: time="2024-09-04T17:13:35.386608296Z" level=info msg="shim disconnected" id=a9f2b11f3b93814d2d0b95149802e088bbbc02367085310728432c4698186a58 namespace=k8s.io
Sep 4 17:13:35.386776 containerd[2042]: time="2024-09-04T17:13:35.386684856Z" level=warning msg="cleaning up after shim disconnected" id=a9f2b11f3b93814d2d0b95149802e088bbbc02367085310728432c4698186a58 namespace=k8s.io
Sep 4 17:13:35.386776 containerd[2042]: time="2024-09-04T17:13:35.386705604Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:13:35.564285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9f2b11f3b93814d2d0b95149802e088bbbc02367085310728432c4698186a58-rootfs.mount: Deactivated successfully.
Sep 4 17:13:35.704950 kubelet[3270]: E0904 17:13:35.703264 3270 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-rgsfv" podUID="7350034f-bb15-4e2d-bb5d-fb0d71fdf227"
Sep 4 17:13:36.205959 containerd[2042]: time="2024-09-04T17:13:36.205237500Z" level=info msg="CreateContainer within sandbox \"a6fc03a9f43a9caca45a8470449ec3a5f6aeff9a5f685922be4a9bf0750859af\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 17:13:36.242036 containerd[2042]: time="2024-09-04T17:13:36.241870932Z" level=info msg="CreateContainer within sandbox \"a6fc03a9f43a9caca45a8470449ec3a5f6aeff9a5f685922be4a9bf0750859af\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b3a30d8ae8c425b35099c3bbef4ef792a6acd8857a53618f77ed44998ce3e94d\""
Sep 4 17:13:36.244694 containerd[2042]: time="2024-09-04T17:13:36.243055128Z" level=info msg="StartContainer for \"b3a30d8ae8c425b35099c3bbef4ef792a6acd8857a53618f77ed44998ce3e94d\""
Sep 4 17:13:36.314040 systemd[1]: Started cri-containerd-b3a30d8ae8c425b35099c3bbef4ef792a6acd8857a53618f77ed44998ce3e94d.scope - libcontainer container b3a30d8ae8c425b35099c3bbef4ef792a6acd8857a53618f77ed44998ce3e94d.
Sep 4 17:13:36.366258 containerd[2042]: time="2024-09-04T17:13:36.366188845Z" level=info msg="StartContainer for \"b3a30d8ae8c425b35099c3bbef4ef792a6acd8857a53618f77ed44998ce3e94d\" returns successfully"
Sep 4 17:13:36.566327 systemd[1]: run-containerd-runc-k8s.io-b3a30d8ae8c425b35099c3bbef4ef792a6acd8857a53618f77ed44998ce3e94d-runc.dFnZ19.mount: Deactivated successfully.
Sep 4 17:13:37.128798 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 4 17:13:37.703925 kubelet[3270]: E0904 17:13:37.703804 3270 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-rgsfv" podUID="7350034f-bb15-4e2d-bb5d-fb0d71fdf227"
Sep 4 17:13:41.261227 systemd-networkd[1938]: lxc_health: Link UP
Sep 4 17:13:41.271709 systemd-networkd[1938]: lxc_health: Gained carrier
Sep 4 17:13:41.278331 (udev-worker)[6079]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:13:42.525912 systemd-networkd[1938]: lxc_health: Gained IPv6LL
Sep 4 17:13:42.791999 kubelet[3270]: I0904 17:13:42.791839 3270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-75shd" podStartSLOduration=12.791779137 podStartE2EDuration="12.791779137s" podCreationTimestamp="2024-09-04 17:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:13:37.238051045 +0000 UTC m=+109.796938782" watchObservedRunningTime="2024-09-04 17:13:42.791779137 +0000 UTC m=+115.350666850"
Sep 4 17:13:44.751832 systemd[1]: run-containerd-runc-k8s.io-b3a30d8ae8c425b35099c3bbef4ef792a6acd8857a53618f77ed44998ce3e94d-runc.QWT0LE.mount: Deactivated successfully.
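One small cross-check on the pod_startup_latency_tracker entry above: the reported podStartSLOduration=12.791779137 is exactly watchObservedRunningTime minus podCreationTimestamp, and with both image-pull timestamps at the zero value there is no pull time to subtract. A quick check in Go:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps printed by the kubelet above
	// (Go's default time.Time formatting).
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2024-09-04 17:13:30 +0000 UTC")
	if err != nil {
		panic(err)
	}
	watched, err := time.Parse(layout, "2024-09-04 17:13:42.791779137 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println(watched.Sub(created)) // prints 12.791779137s
}
```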
Sep 4 17:13:44.962928 ntpd[2009]: Listen normally on 14 lxc_health [fe80::6884:51ff:fe9f:a03b%14]:123
Sep 4 17:13:44.964633 ntpd[2009]: 4 Sep 17:13:44 ntpd[2009]: Listen normally on 14 lxc_health [fe80::6884:51ff:fe9f:a03b%14]:123
Sep 4 17:13:47.730480 containerd[2042]: time="2024-09-04T17:13:47.730324825Z" level=info msg="StopPodSandbox for \"9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92\""
Sep 4 17:13:47.731073 containerd[2042]: time="2024-09-04T17:13:47.730574269Z" level=info msg="TearDown network for sandbox \"9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92\" successfully"
Sep 4 17:13:47.731073 containerd[2042]: time="2024-09-04T17:13:47.730640713Z" level=info msg="StopPodSandbox for \"9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92\" returns successfully"
Sep 4 17:13:47.732771 containerd[2042]: time="2024-09-04T17:13:47.731252665Z" level=info msg="RemovePodSandbox for \"9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92\""
Sep 4 17:13:47.732771 containerd[2042]: time="2024-09-04T17:13:47.731320957Z" level=info msg="Forcibly stopping sandbox \"9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92\""
Sep 4 17:13:47.732771 containerd[2042]: time="2024-09-04T17:13:47.731476141Z" level=info msg="TearDown network for sandbox \"9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92\" successfully"
Sep 4 17:13:47.737067 containerd[2042]: time="2024-09-04T17:13:47.736985509Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 17:13:47.737245 containerd[2042]: time="2024-09-04T17:13:47.737096317Z" level=info msg="RemovePodSandbox \"9a9e6dbeee98e3aab096f695db4ee60176f19f8ba068997543fa3c9c31229a92\" returns successfully"
Sep 4 17:13:47.738007 containerd[2042]: time="2024-09-04T17:13:47.737942809Z" level=info msg="StopPodSandbox for \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\""
Sep 4 17:13:47.738350 containerd[2042]: time="2024-09-04T17:13:47.738090265Z" level=info msg="TearDown network for sandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" successfully"
Sep 4 17:13:47.738350 containerd[2042]: time="2024-09-04T17:13:47.738163705Z" level=info msg="StopPodSandbox for \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" returns successfully"
Sep 4 17:13:47.739294 containerd[2042]: time="2024-09-04T17:13:47.739183849Z" level=info msg="RemovePodSandbox for \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\""
Sep 4 17:13:47.739446 containerd[2042]: time="2024-09-04T17:13:47.739278613Z" level=info msg="Forcibly stopping sandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\""
Sep 4 17:13:47.739561 containerd[2042]: time="2024-09-04T17:13:47.739514245Z" level=info msg="TearDown network for sandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" successfully"
Sep 4 17:13:47.745282 containerd[2042]: time="2024-09-04T17:13:47.745157701Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
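The sandbox teardown above (and its completion just below) appears to be routine cleanup of old pod sandboxes, and it is deliberately idempotent: after the normal StopPodSandbox, the sandbox is forcibly stopped and removed, and a sandbox that is already gone only yields the "not found" warning rather than a failure. A small sketch of that treat-not-found-as-success pattern, with hypothetical helpers rather than containerd's actual code:

```go
// Hypothetical helpers illustrating the treat-not-found-as-success pattern;
// this is not containerd's actual implementation.
package cleanup

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("sandbox not found")

// store stands in for whatever tracks sandboxes on the node.
type store interface {
	Stop(id string) error
	Remove(id string) error
}

// forceRemove stops and removes a sandbox, treating "already gone" as success,
// so the caller can always report "RemovePodSandbox ... returns successfully".
func forceRemove(s store, id string) error {
	if err := s.Stop(id); err != nil && !errors.Is(err, errNotFound) {
		return fmt.Errorf("stop sandbox %s: %w", id, err)
	}
	if err := s.Remove(id); err != nil && !errors.Is(err, errNotFound) {
		return fmt.Errorf("remove sandbox %s: %w", id, err)
	}
	return nil
}
```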
Sep 4 17:13:47.745444 containerd[2042]: time="2024-09-04T17:13:47.745308709Z" level=info msg="RemovePodSandbox \"16992916952ef00fbbc69ea84472e12008311f62aa207553a2b6cb0b1667ef41\" returns successfully"
Sep 4 17:13:49.378310 systemd[1]: run-containerd-runc-k8s.io-b3a30d8ae8c425b35099c3bbef4ef792a6acd8857a53618f77ed44998ce3e94d-runc.6zGtZE.mount: Deactivated successfully.
Sep 4 17:13:49.503027 sshd[5247]: pam_unix(sshd:session): session closed for user core
Sep 4 17:13:49.511165 systemd[1]: sshd@28-172.31.29.45:22-139.178.89.65:60894.service: Deactivated successfully.
Sep 4 17:13:49.516611 systemd[1]: session-28.scope: Deactivated successfully.
Sep 4 17:13:49.521137 systemd-logind[2016]: Session 28 logged out. Waiting for processes to exit.
Sep 4 17:13:49.526882 systemd-logind[2016]: Removed session 28.
Sep 4 17:14:03.630012 systemd[1]: cri-containerd-8509ab4c74586351cbf2b10ea981bb6f44f328ae52a2962d88b51cc0435cc79e.scope: Deactivated successfully.
Sep 4 17:14:03.630492 systemd[1]: cri-containerd-8509ab4c74586351cbf2b10ea981bb6f44f328ae52a2962d88b51cc0435cc79e.scope: Consumed 5.554s CPU time, 22.3M memory peak, 0B memory swap peak.
Sep 4 17:14:03.675871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8509ab4c74586351cbf2b10ea981bb6f44f328ae52a2962d88b51cc0435cc79e-rootfs.mount: Deactivated successfully.
Sep 4 17:14:03.689760 containerd[2042]: time="2024-09-04T17:14:03.689644924Z" level=info msg="shim disconnected" id=8509ab4c74586351cbf2b10ea981bb6f44f328ae52a2962d88b51cc0435cc79e namespace=k8s.io
Sep 4 17:14:03.689760 containerd[2042]: time="2024-09-04T17:14:03.689752888Z" level=warning msg="cleaning up after shim disconnected" id=8509ab4c74586351cbf2b10ea981bb6f44f328ae52a2962d88b51cc0435cc79e namespace=k8s.io
Sep 4 17:14:03.691127 containerd[2042]: time="2024-09-04T17:14:03.689777008Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:14:04.288560 kubelet[3270]: I0904 17:14:04.288377 3270 scope.go:117] "RemoveContainer" containerID="8509ab4c74586351cbf2b10ea981bb6f44f328ae52a2962d88b51cc0435cc79e"
Sep 4 17:14:04.293767 containerd[2042]: time="2024-09-04T17:14:04.293504223Z" level=info msg="CreateContainer within sandbox \"b7eba1390fcd94681f91528afc0b583a526027918ce850aef899029374b26f46\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 17:14:04.317702 containerd[2042]: time="2024-09-04T17:14:04.317634076Z" level=info msg="CreateContainer within sandbox \"b7eba1390fcd94681f91528afc0b583a526027918ce850aef899029374b26f46\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d35bfd3c1740e87983cbc8fc5f5686ef5ad545b31ee1419dd9513a4d1cd350c3\""
Sep 4 17:14:04.322311 containerd[2042]: time="2024-09-04T17:14:04.322049008Z" level=info msg="StartContainer for \"d35bfd3c1740e87983cbc8fc5f5686ef5ad545b31ee1419dd9513a4d1cd350c3\""
Sep 4 17:14:04.380048 systemd[1]: Started cri-containerd-d35bfd3c1740e87983cbc8fc5f5686ef5ad545b31ee1419dd9513a4d1cd350c3.scope - libcontainer container d35bfd3c1740e87983cbc8fc5f5686ef5ad545b31ee1419dd9513a4d1cd350c3.
Sep 4 17:14:04.450452 containerd[2042]: time="2024-09-04T17:14:04.450389668Z" level=info msg="StartContainer for \"d35bfd3c1740e87983cbc8fc5f5686ef5ad545b31ee1419dd9513a4d1cd350c3\" returns successfully"
Sep 4 17:14:08.873183 systemd[1]: cri-containerd-eacdf63cd8ac9498504a29647d5f3b33c17be01a9a8bb6e8d4860565744c7897.scope: Deactivated successfully.
Sep 4 17:14:08.874041 systemd[1]: cri-containerd-eacdf63cd8ac9498504a29647d5f3b33c17be01a9a8bb6e8d4860565744c7897.scope: Consumed 3.607s CPU time, 16.1M memory peak, 0B memory swap peak.
Sep 4 17:14:08.913503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eacdf63cd8ac9498504a29647d5f3b33c17be01a9a8bb6e8d4860565744c7897-rootfs.mount: Deactivated successfully.
Sep 4 17:14:08.929897 containerd[2042]: time="2024-09-04T17:14:08.929757995Z" level=info msg="shim disconnected" id=eacdf63cd8ac9498504a29647d5f3b33c17be01a9a8bb6e8d4860565744c7897 namespace=k8s.io
Sep 4 17:14:08.930898 containerd[2042]: time="2024-09-04T17:14:08.929881379Z" level=warning msg="cleaning up after shim disconnected" id=eacdf63cd8ac9498504a29647d5f3b33c17be01a9a8bb6e8d4860565744c7897 namespace=k8s.io
Sep 4 17:14:08.930898 containerd[2042]: time="2024-09-04T17:14:08.929929487Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:14:09.309463 kubelet[3270]: I0904 17:14:09.308798 3270 scope.go:117] "RemoveContainer" containerID="eacdf63cd8ac9498504a29647d5f3b33c17be01a9a8bb6e8d4860565744c7897"
Sep 4 17:14:09.313249 containerd[2042]: time="2024-09-04T17:14:09.312855368Z" level=info msg="CreateContainer within sandbox \"9054e155ef05c9c64ae948b0ba183d663efd632d9148242b90119c24fe3e9950\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 17:14:09.336326 containerd[2042]: time="2024-09-04T17:14:09.336191865Z" level=info msg="CreateContainer within sandbox \"9054e155ef05c9c64ae948b0ba183d663efd632d9148242b90119c24fe3e9950\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c132c6ec83f633ab1f1f3b148250bfe6f1c760afbbdfc15be5589733b4a4cf46\""
Sep 4 17:14:09.336326 containerd[2042]: time="2024-09-04T17:14:09.336951189Z" level=info msg="StartContainer for \"c132c6ec83f633ab1f1f3b148250bfe6f1c760afbbdfc15be5589733b4a4cf46\""
Sep 4 17:14:09.391046 systemd[1]: Started cri-containerd-c132c6ec83f633ab1f1f3b148250bfe6f1c760afbbdfc15be5589733b4a4cf46.scope - libcontainer container c132c6ec83f633ab1f1f3b148250bfe6f1c760afbbdfc15be5589733b4a4cf46.
Sep 4 17:14:09.453325 containerd[2042]: time="2024-09-04T17:14:09.453174273Z" level=info msg="StartContainer for \"c132c6ec83f633ab1f1f3b148250bfe6f1c760afbbdfc15be5589733b4a4cf46\" returns successfully"
Sep 4 17:14:10.133216 kubelet[3270]: E0904 17:14:10.133112 3270 request.go:1116] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
Sep 4 17:14:10.133386 kubelet[3270]: E0904 17:14:10.133277 3270 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)"
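The tail of the log shows the kube-controller-manager and kube-scheduler containers exiting and being restarted in place: the kubelet removes each dead container and creates a replacement inside the same static-pod sandbox with the attempt counter incremented (Attempt:0 becomes Attempt:1), and the final lease-update messages show the kubelet's request to the API server timing out around the same time. A toy sketch of that attempt bookkeeping, with illustrative names rather than kubelet internals:

```go
package main

import "fmt"

// containerAttempt is an illustrative stand-in for the kubelet's bookkeeping
// when it restarts a container inside an existing pod sandbox.
type containerAttempt struct {
	Sandbox string
	Name    string
	Attempt int
	ID      string
}

// restartInPlace mirrors the RemoveContainer -> CreateContainer(Attempt+1)
// sequence in the log: same sandbox, same name, new container ID, attempt +1.
func restartInPlace(old containerAttempt, newID string) containerAttempt {
	return containerAttempt{Sandbox: old.Sandbox, Name: old.Name, Attempt: old.Attempt + 1, ID: newID}
}

func main() {
	kcm := containerAttempt{Sandbox: "b7eba1390fcd...", Name: "kube-controller-manager", Attempt: 0, ID: "8509ab4c7458..."}
	kcm = restartInPlace(kcm, "d35bfd3c1740...")
	fmt.Printf("%s attempt %d in sandbox %s (container %s)\n", kcm.Name, kcm.Attempt, kcm.Sandbox, kcm.ID)
}
```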