Jul 2 08:07:18.178577 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 2 08:07:18.178623 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024 Jul 2 08:07:18.178649 kernel: KASLR disabled due to lack of seed Jul 2 08:07:18.178666 kernel: efi: EFI v2.7 by EDK II Jul 2 08:07:18.178682 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7852ee18 Jul 2 08:07:18.178697 kernel: ACPI: Early table checksum verification disabled Jul 2 08:07:18.178715 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 2 08:07:18.178730 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 2 08:07:18.178746 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 2 08:07:18.178761 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 2 08:07:18.178782 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 2 08:07:18.178798 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 2 08:07:18.178813 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 2 08:07:18.178828 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 2 08:07:18.178847 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 2 08:07:18.178868 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 2 08:07:18.178885 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 2 08:07:18.178901 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 2 08:07:18.178917 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 2 08:07:18.178933 kernel: printk: bootconsole [uart0] enabled Jul 2 08:07:18.178949 kernel: NUMA: Failed to initialise from firmware Jul 2 08:07:18.178966 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 2 08:07:18.178982 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jul 2 08:07:18.178998 kernel: Zone ranges: Jul 2 08:07:18.179014 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 2 08:07:18.179030 kernel: DMA32 empty Jul 2 08:07:18.179051 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 2 08:07:18.179067 kernel: Movable zone start for each node Jul 2 08:07:18.179083 kernel: Early memory node ranges Jul 2 08:07:18.179099 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 2 08:07:18.179115 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 2 08:07:18.179131 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 2 08:07:18.179147 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 2 08:07:18.179163 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 2 08:07:18.179179 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 2 08:07:18.179195 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 2 08:07:18.179211 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jul 2 08:07:18.179227 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jul 2 08:07:18.179248 kernel: On node 0, zone Normal: 8192 pages in 
unavailable ranges Jul 2 08:07:18.179265 kernel: psci: probing for conduit method from ACPI. Jul 2 08:07:18.179289 kernel: psci: PSCIv1.0 detected in firmware. Jul 2 08:07:18.179306 kernel: psci: Using standard PSCI v0.2 function IDs Jul 2 08:07:18.179324 kernel: psci: Trusted OS migration not required Jul 2 08:07:18.179346 kernel: psci: SMC Calling Convention v1.1 Jul 2 08:07:18.179363 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jul 2 08:07:18.179380 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jul 2 08:07:18.179398 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 2 08:07:18.179415 kernel: Detected PIPT I-cache on CPU0 Jul 2 08:07:18.179432 kernel: CPU features: detected: GIC system register CPU interface Jul 2 08:07:18.179449 kernel: CPU features: detected: Spectre-v2 Jul 2 08:07:18.181512 kernel: CPU features: detected: Spectre-v3a Jul 2 08:07:18.181556 kernel: CPU features: detected: Spectre-BHB Jul 2 08:07:18.181574 kernel: CPU features: detected: ARM erratum 1742098 Jul 2 08:07:18.181592 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 2 08:07:18.181621 kernel: alternatives: applying boot alternatives Jul 2 08:07:18.181642 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7 Jul 2 08:07:18.181660 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 08:07:18.181678 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 08:07:18.181695 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 08:07:18.181712 kernel: Fallback order for Node 0: 0 Jul 2 08:07:18.181729 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jul 2 08:07:18.181747 kernel: Policy zone: Normal Jul 2 08:07:18.181764 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 08:07:18.181781 kernel: software IO TLB: area num 2. Jul 2 08:07:18.181798 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jul 2 08:07:18.181821 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved) Jul 2 08:07:18.181838 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 08:07:18.181856 kernel: trace event string verifier disabled Jul 2 08:07:18.181873 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 08:07:18.181891 kernel: rcu: RCU event tracing is enabled. Jul 2 08:07:18.181908 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 08:07:18.181926 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 08:07:18.181943 kernel: Tracing variant of Tasks RCU enabled. Jul 2 08:07:18.181960 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 2 08:07:18.181977 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 08:07:18.181994 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 2 08:07:18.182034 kernel: GICv3: 96 SPIs implemented Jul 2 08:07:18.182054 kernel: GICv3: 0 Extended SPIs implemented Jul 2 08:07:18.182071 kernel: Root IRQ handler: gic_handle_irq Jul 2 08:07:18.182088 kernel: GICv3: GICv3 features: 16 PPIs Jul 2 08:07:18.182106 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 2 08:07:18.182123 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 2 08:07:18.182140 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1) Jul 2 08:07:18.182158 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1) Jul 2 08:07:18.182175 kernel: GICv3: using LPI property table @0x00000004000e0000 Jul 2 08:07:18.182192 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 2 08:07:18.182210 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000 Jul 2 08:07:18.182227 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 08:07:18.182250 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 2 08:07:18.182268 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 2 08:07:18.182285 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 2 08:07:18.182303 kernel: Console: colour dummy device 80x25 Jul 2 08:07:18.182321 kernel: printk: console [tty1] enabled Jul 2 08:07:18.182339 kernel: ACPI: Core revision 20230628 Jul 2 08:07:18.182357 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jul 2 08:07:18.182374 kernel: pid_max: default: 32768 minimum: 301 Jul 2 08:07:18.182392 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 08:07:18.182409 kernel: SELinux: Initializing. Jul 2 08:07:18.182431 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 08:07:18.182449 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 08:07:18.182482 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 08:07:18.182505 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 08:07:18.182523 kernel: rcu: Hierarchical SRCU implementation. Jul 2 08:07:18.182540 kernel: rcu: Max phase no-delay instances is 400. Jul 2 08:07:18.182558 kernel: Platform MSI: ITS@0x10080000 domain created Jul 2 08:07:18.182575 kernel: PCI/MSI: ITS@0x10080000 domain created Jul 2 08:07:18.182593 kernel: Remapping and enabling EFI services. Jul 2 08:07:18.182617 kernel: smp: Bringing up secondary CPUs ... Jul 2 08:07:18.182634 kernel: Detected PIPT I-cache on CPU1 Jul 2 08:07:18.182652 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 2 08:07:18.182669 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000 Jul 2 08:07:18.182686 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 2 08:07:18.182704 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 08:07:18.182721 kernel: SMP: Total of 2 processors activated. 
Jul 2 08:07:18.182738 kernel: CPU features: detected: 32-bit EL0 Support Jul 2 08:07:18.182756 kernel: CPU features: detected: 32-bit EL1 Support Jul 2 08:07:18.182778 kernel: CPU features: detected: CRC32 instructions Jul 2 08:07:18.182796 kernel: CPU: All CPU(s) started at EL1 Jul 2 08:07:18.182826 kernel: alternatives: applying system-wide alternatives Jul 2 08:07:18.182848 kernel: devtmpfs: initialized Jul 2 08:07:18.182867 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 08:07:18.182885 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 08:07:18.182903 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 08:07:18.182921 kernel: SMBIOS 3.0.0 present. Jul 2 08:07:18.182939 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 2 08:07:18.182962 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 08:07:18.182981 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 2 08:07:18.183000 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 2 08:07:18.184696 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 2 08:07:18.184728 kernel: audit: initializing netlink subsys (disabled) Jul 2 08:07:18.184747 kernel: audit: type=2000 audit(0.293:1): state=initialized audit_enabled=0 res=1 Jul 2 08:07:18.184765 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 08:07:18.184795 kernel: cpuidle: using governor menu Jul 2 08:07:18.184814 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 2 08:07:18.184832 kernel: ASID allocator initialised with 65536 entries Jul 2 08:07:18.184850 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 08:07:18.184869 kernel: Serial: AMBA PL011 UART driver Jul 2 08:07:18.184887 kernel: Modules: 17600 pages in range for non-PLT usage Jul 2 08:07:18.184906 kernel: Modules: 509120 pages in range for PLT usage Jul 2 08:07:18.184924 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 08:07:18.184942 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 08:07:18.184965 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 2 08:07:18.184984 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 2 08:07:18.185002 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 08:07:18.185021 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 08:07:18.185039 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 2 08:07:18.185057 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 2 08:07:18.185075 kernel: ACPI: Added _OSI(Module Device) Jul 2 08:07:18.185093 kernel: ACPI: Added _OSI(Processor Device) Jul 2 08:07:18.185112 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 08:07:18.185134 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 08:07:18.185153 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 08:07:18.185171 kernel: ACPI: Interpreter enabled Jul 2 08:07:18.185189 kernel: ACPI: Using GIC for interrupt routing Jul 2 08:07:18.185207 kernel: ACPI: MCFG table detected, 1 entries Jul 2 08:07:18.185226 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 2 08:07:18.185576 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 08:07:18.185797 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] 
Jul 2 08:07:18.186005 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 2 08:07:18.186227 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 2 08:07:18.186428 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 2 08:07:18.186454 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 2 08:07:18.188521 kernel: acpiphp: Slot [1] registered Jul 2 08:07:18.188553 kernel: acpiphp: Slot [2] registered Jul 2 08:07:18.188572 kernel: acpiphp: Slot [3] registered Jul 2 08:07:18.188591 kernel: acpiphp: Slot [4] registered Jul 2 08:07:18.188609 kernel: acpiphp: Slot [5] registered Jul 2 08:07:18.188639 kernel: acpiphp: Slot [6] registered Jul 2 08:07:18.188659 kernel: acpiphp: Slot [7] registered Jul 2 08:07:18.188677 kernel: acpiphp: Slot [8] registered Jul 2 08:07:18.188696 kernel: acpiphp: Slot [9] registered Jul 2 08:07:18.188714 kernel: acpiphp: Slot [10] registered Jul 2 08:07:18.188733 kernel: acpiphp: Slot [11] registered Jul 2 08:07:18.188751 kernel: acpiphp: Slot [12] registered Jul 2 08:07:18.188770 kernel: acpiphp: Slot [13] registered Jul 2 08:07:18.188788 kernel: acpiphp: Slot [14] registered Jul 2 08:07:18.188813 kernel: acpiphp: Slot [15] registered Jul 2 08:07:18.188834 kernel: acpiphp: Slot [16] registered Jul 2 08:07:18.188855 kernel: acpiphp: Slot [17] registered Jul 2 08:07:18.188874 kernel: acpiphp: Slot [18] registered Jul 2 08:07:18.188892 kernel: acpiphp: Slot [19] registered Jul 2 08:07:18.188910 kernel: acpiphp: Slot [20] registered Jul 2 08:07:18.188930 kernel: acpiphp: Slot [21] registered Jul 2 08:07:18.188949 kernel: acpiphp: Slot [22] registered Jul 2 08:07:18.188969 kernel: acpiphp: Slot [23] registered Jul 2 08:07:18.188988 kernel: acpiphp: Slot [24] registered Jul 2 08:07:18.189013 kernel: acpiphp: Slot [25] registered Jul 2 08:07:18.189032 kernel: acpiphp: Slot [26] registered Jul 2 08:07:18.189051 kernel: acpiphp: Slot [27] registered Jul 2 08:07:18.189070 kernel: acpiphp: Slot [28] registered Jul 2 08:07:18.189088 kernel: acpiphp: Slot [29] registered Jul 2 08:07:18.189107 kernel: acpiphp: Slot [30] registered Jul 2 08:07:18.189125 kernel: acpiphp: Slot [31] registered Jul 2 08:07:18.189144 kernel: PCI host bridge to bus 0000:00 Jul 2 08:07:18.189435 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 2 08:07:18.189699 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 2 08:07:18.189893 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 2 08:07:18.190113 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 2 08:07:18.190366 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jul 2 08:07:18.193692 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jul 2 08:07:18.193931 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jul 2 08:07:18.194203 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 2 08:07:18.194421 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jul 2 08:07:18.194683 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 08:07:18.194911 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 2 08:07:18.195123 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jul 2 08:07:18.196566 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jul 2 08:07:18.196830 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] 
Jul 2 08:07:18.197056 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 08:07:18.197263 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jul 2 08:07:18.198519 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jul 2 08:07:18.198767 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jul 2 08:07:18.198974 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jul 2 08:07:18.199183 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jul 2 08:07:18.199371 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 2 08:07:18.202507 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 2 08:07:18.202723 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 2 08:07:18.202750 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 2 08:07:18.202770 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 2 08:07:18.202788 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 2 08:07:18.202807 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 2 08:07:18.202825 kernel: iommu: Default domain type: Translated Jul 2 08:07:18.202844 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 2 08:07:18.202871 kernel: efivars: Registered efivars operations Jul 2 08:07:18.202890 kernel: vgaarb: loaded Jul 2 08:07:18.202908 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 2 08:07:18.202926 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 08:07:18.202945 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 08:07:18.202963 kernel: pnp: PnP ACPI init Jul 2 08:07:18.203171 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 2 08:07:18.203199 kernel: pnp: PnP ACPI: found 1 devices Jul 2 08:07:18.203223 kernel: NET: Registered PF_INET protocol family Jul 2 08:07:18.203242 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 08:07:18.203261 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 08:07:18.203279 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 08:07:18.203297 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 08:07:18.203316 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 08:07:18.203334 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 08:07:18.203352 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 08:07:18.203371 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 08:07:18.203394 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 08:07:18.203412 kernel: PCI: CLS 0 bytes, default 64 Jul 2 08:07:18.203430 kernel: kvm [1]: HYP mode not available Jul 2 08:07:18.203449 kernel: Initialise system trusted keyrings Jul 2 08:07:18.203546 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 08:07:18.203570 kernel: Key type asymmetric registered Jul 2 08:07:18.203589 kernel: Asymmetric key parser 'x509' registered Jul 2 08:07:18.203607 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 2 08:07:18.203625 kernel: io scheduler mq-deadline registered Jul 2 08:07:18.203650 kernel: io scheduler kyber registered Jul 2 08:07:18.203669 kernel: io scheduler bfq registered Jul 2 08:07:18.203889 kernel: pl061_gpio 
ARMH0061:00: PL061 GPIO chip registered Jul 2 08:07:18.203917 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 2 08:07:18.203936 kernel: ACPI: button: Power Button [PWRB] Jul 2 08:07:18.203954 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 2 08:07:18.203972 kernel: ACPI: button: Sleep Button [SLPB] Jul 2 08:07:18.203991 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 08:07:18.204016 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 2 08:07:18.204232 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 2 08:07:18.204259 kernel: printk: console [ttyS0] disabled Jul 2 08:07:18.204278 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 2 08:07:18.204296 kernel: printk: console [ttyS0] enabled Jul 2 08:07:18.204314 kernel: printk: bootconsole [uart0] disabled Jul 2 08:07:18.204332 kernel: thunder_xcv, ver 1.0 Jul 2 08:07:18.204351 kernel: thunder_bgx, ver 1.0 Jul 2 08:07:18.204369 kernel: nicpf, ver 1.0 Jul 2 08:07:18.204386 kernel: nicvf, ver 1.0 Jul 2 08:07:18.206865 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 2 08:07:18.207087 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T08:07:17 UTC (1719907637) Jul 2 08:07:18.207114 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 08:07:18.207133 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jul 2 08:07:18.207152 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 2 08:07:18.207171 kernel: watchdog: Hard watchdog permanently disabled Jul 2 08:07:18.207189 kernel: NET: Registered PF_INET6 protocol family Jul 2 08:07:18.207207 kernel: Segment Routing with IPv6 Jul 2 08:07:18.207237 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 08:07:18.207255 kernel: NET: Registered PF_PACKET protocol family Jul 2 08:07:18.207273 kernel: Key type dns_resolver registered Jul 2 08:07:18.207292 kernel: registered taskstats version 1 Jul 2 08:07:18.207310 kernel: Loading compiled-in X.509 certificates Jul 2 08:07:18.207328 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf' Jul 2 08:07:18.207346 kernel: Key type .fscrypt registered Jul 2 08:07:18.207364 kernel: Key type fscrypt-provisioning registered Jul 2 08:07:18.207382 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 08:07:18.207405 kernel: ima: Allocated hash algorithm: sha1 Jul 2 08:07:18.207423 kernel: ima: No architecture policies found Jul 2 08:07:18.207442 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 2 08:07:18.207460 kernel: clk: Disabling unused clocks Jul 2 08:07:18.207502 kernel: Freeing unused kernel memory: 39040K Jul 2 08:07:18.207521 kernel: Run /init as init process Jul 2 08:07:18.207540 kernel: with arguments: Jul 2 08:07:18.207558 kernel: /init Jul 2 08:07:18.207576 kernel: with environment: Jul 2 08:07:18.207600 kernel: HOME=/ Jul 2 08:07:18.207618 kernel: TERM=linux Jul 2 08:07:18.207636 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 08:07:18.207659 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 08:07:18.207682 systemd[1]: Detected virtualization amazon. 
Jul 2 08:07:18.207702 systemd[1]: Detected architecture arm64. Jul 2 08:07:18.207721 systemd[1]: Running in initrd. Jul 2 08:07:18.207741 systemd[1]: No hostname configured, using default hostname. Jul 2 08:07:18.207766 systemd[1]: Hostname set to . Jul 2 08:07:18.207787 systemd[1]: Initializing machine ID from VM UUID. Jul 2 08:07:18.207806 systemd[1]: Queued start job for default target initrd.target. Jul 2 08:07:18.207826 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:07:18.207846 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:07:18.207867 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 2 08:07:18.207888 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 08:07:18.207913 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 08:07:18.207934 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 2 08:07:18.207957 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 08:07:18.207977 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 08:07:18.207997 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:07:18.208017 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:07:18.208037 systemd[1]: Reached target paths.target - Path Units. Jul 2 08:07:18.208061 systemd[1]: Reached target slices.target - Slice Units. Jul 2 08:07:18.208082 systemd[1]: Reached target swap.target - Swaps. Jul 2 08:07:18.208101 systemd[1]: Reached target timers.target - Timer Units. Jul 2 08:07:18.208121 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 08:07:18.208141 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 08:07:18.208161 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 08:07:18.208182 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 08:07:18.208201 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:07:18.208221 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 08:07:18.208247 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:07:18.208267 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 08:07:18.208287 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 2 08:07:18.208307 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 08:07:18.208327 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 08:07:18.208346 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 08:07:18.208366 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 08:07:18.208386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 08:07:18.208411 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:07:18.210517 systemd-journald[251]: Collecting audit messages is disabled. Jul 2 08:07:18.210594 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jul 2 08:07:18.210616 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:07:18.210646 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 08:07:18.210669 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 08:07:18.210689 systemd-journald[251]: Journal started Jul 2 08:07:18.210732 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2465b28de5cc6923a28acffde86e85) is 8.0M, max 75.3M, 67.3M free. Jul 2 08:07:18.210188 systemd-modules-load[252]: Inserted module 'overlay' Jul 2 08:07:18.229681 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 08:07:18.230620 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:07:18.239868 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 08:07:18.252115 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 08:07:18.252153 kernel: Bridge firewalling registered Jul 2 08:07:18.247608 systemd-modules-load[252]: Inserted module 'br_netfilter' Jul 2 08:07:18.252009 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 08:07:18.267725 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:07:18.283759 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:07:18.297790 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 08:07:18.303631 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 08:07:18.320550 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:07:18.331097 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:07:18.344602 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 08:07:18.360670 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:07:18.376495 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:07:18.390738 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 08:07:18.402966 dracut-cmdline[285]: dracut-dracut-053 Jul 2 08:07:18.410732 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7 Jul 2 08:07:18.490743 systemd-resolved[290]: Positive Trust Anchors: Jul 2 08:07:18.490777 systemd-resolved[290]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:07:18.490838 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 08:07:18.552508 kernel: SCSI subsystem initialized Jul 2 08:07:18.559657 kernel: Loading iSCSI transport class v2.0-870. Jul 2 08:07:18.572559 kernel: iscsi: registered transport (tcp) Jul 2 08:07:18.595799 kernel: iscsi: registered transport (qla4xxx) Jul 2 08:07:18.595869 kernel: QLogic iSCSI HBA Driver Jul 2 08:07:18.703744 kernel: random: crng init done Jul 2 08:07:18.703804 systemd-resolved[290]: Defaulting to hostname 'linux'. Jul 2 08:07:18.709025 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 08:07:18.717743 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:07:18.730526 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 08:07:18.743815 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 08:07:18.778715 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 08:07:18.778791 kernel: device-mapper: uevent: version 1.0.3 Jul 2 08:07:18.780494 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 08:07:18.845524 kernel: raid6: neonx8 gen() 6789 MB/s Jul 2 08:07:18.862498 kernel: raid6: neonx4 gen() 6590 MB/s Jul 2 08:07:18.879497 kernel: raid6: neonx2 gen() 5491 MB/s Jul 2 08:07:18.896497 kernel: raid6: neonx1 gen() 3975 MB/s Jul 2 08:07:18.913497 kernel: raid6: int64x8 gen() 3839 MB/s Jul 2 08:07:18.930497 kernel: raid6: int64x4 gen() 3729 MB/s Jul 2 08:07:18.947497 kernel: raid6: int64x2 gen() 3624 MB/s Jul 2 08:07:18.965131 kernel: raid6: int64x1 gen() 2780 MB/s Jul 2 08:07:18.965163 kernel: raid6: using algorithm neonx8 gen() 6789 MB/s Jul 2 08:07:18.983114 kernel: raid6: .... xor() 4893 MB/s, rmw enabled Jul 2 08:07:18.983152 kernel: raid6: using neon recovery algorithm Jul 2 08:07:18.990502 kernel: xor: measuring software checksum speed Jul 2 08:07:18.992498 kernel: 8regs : 11029 MB/sec Jul 2 08:07:18.994501 kernel: 32regs : 11924 MB/sec Jul 2 08:07:18.996276 kernel: arm64_neon : 9279 MB/sec Jul 2 08:07:18.996315 kernel: xor: using function: 32regs (11924 MB/sec) Jul 2 08:07:19.080512 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 08:07:19.099405 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 08:07:19.113777 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:07:19.146857 systemd-udevd[470]: Using default interface naming scheme 'v255'. Jul 2 08:07:19.155276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:07:19.183051 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 08:07:19.204175 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation Jul 2 08:07:19.259255 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 2 08:07:19.272799 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 08:07:19.394296 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:07:19.412780 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 08:07:19.468916 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 08:07:19.479402 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 08:07:19.489725 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:07:19.495377 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:07:19.512784 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 08:07:19.562612 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 08:07:19.599705 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 2 08:07:19.599782 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 2 08:07:19.643262 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 2 08:07:19.643562 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 2 08:07:19.643805 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:65:ca:a9:1b:bd Jul 2 08:07:19.606454 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:07:19.606714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:07:19.610245 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:07:19.613089 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:07:19.613333 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:07:19.616091 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:07:19.634053 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:07:19.647923 (udev-worker)[516]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:07:19.689921 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 2 08:07:19.689990 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 2 08:07:19.696648 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:07:19.710430 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 2 08:07:19.713659 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 08:07:19.713725 kernel: GPT:9289727 != 16777215 Jul 2 08:07:19.713750 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 08:07:19.714684 kernel: GPT:9289727 != 16777215 Jul 2 08:07:19.716208 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 08:07:19.716279 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:07:19.722702 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:07:19.769934 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:07:19.838552 kernel: BTRFS: device fsid 9b0eb482-485a-4aff-8de4-e09ff146eadf devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (526) Jul 2 08:07:19.844534 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (536) Jul 2 08:07:19.874359 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
Jul 2 08:07:19.935107 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 2 08:07:19.981836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 08:07:20.000947 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 2 08:07:20.003610 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 2 08:07:20.019734 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 08:07:20.037239 disk-uuid[661]: Primary Header is updated. Jul 2 08:07:20.037239 disk-uuid[661]: Secondary Entries is updated. Jul 2 08:07:20.037239 disk-uuid[661]: Secondary Header is updated. Jul 2 08:07:20.046499 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:07:20.054566 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:07:20.073490 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:07:21.077506 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:07:21.077866 disk-uuid[662]: The operation has completed successfully. Jul 2 08:07:21.262779 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 08:07:21.262996 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 08:07:21.322769 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 08:07:21.334099 sh[1005]: Success Jul 2 08:07:21.363500 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 08:07:21.468871 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 08:07:21.475672 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 08:07:21.484590 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 08:07:21.517227 kernel: BTRFS info (device dm-0): first mount of filesystem 9b0eb482-485a-4aff-8de4-e09ff146eadf Jul 2 08:07:21.517288 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:07:21.518896 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 08:07:21.520065 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 08:07:21.521051 kernel: BTRFS info (device dm-0): using free space tree Jul 2 08:07:21.578496 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 2 08:07:21.610665 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 08:07:21.615775 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 08:07:21.628743 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 08:07:21.637713 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 08:07:21.655169 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:07:21.655223 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:07:21.657516 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:07:21.663502 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:07:21.683414 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 2 08:07:21.688545 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:07:21.709435 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 08:07:21.725894 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 08:07:21.813013 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 08:07:21.838765 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 08:07:21.907650 systemd-networkd[1197]: lo: Link UP Jul 2 08:07:21.907674 systemd-networkd[1197]: lo: Gained carrier Jul 2 08:07:21.910090 systemd-networkd[1197]: Enumeration completed Jul 2 08:07:21.910807 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:07:21.910814 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:07:21.913040 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 08:07:21.919171 systemd[1]: Reached target network.target - Network. Jul 2 08:07:21.926755 systemd-networkd[1197]: eth0: Link UP Jul 2 08:07:21.926763 systemd-networkd[1197]: eth0: Gained carrier Jul 2 08:07:21.926782 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:07:21.970565 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.20.19/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 08:07:22.121757 ignition[1120]: Ignition 2.18.0 Jul 2 08:07:22.121788 ignition[1120]: Stage: fetch-offline Jul 2 08:07:22.123692 ignition[1120]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:07:22.123720 ignition[1120]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:07:22.125632 ignition[1120]: Ignition finished successfully Jul 2 08:07:22.133121 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 08:07:22.155789 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 2 08:07:22.179513 ignition[1209]: Ignition 2.18.0 Jul 2 08:07:22.179535 ignition[1209]: Stage: fetch Jul 2 08:07:22.181451 ignition[1209]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:07:22.181509 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:07:22.181677 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:07:22.204130 ignition[1209]: PUT result: OK Jul 2 08:07:22.207581 ignition[1209]: parsed url from cmdline: "" Jul 2 08:07:22.207711 ignition[1209]: no config URL provided Jul 2 08:07:22.207733 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:07:22.207759 ignition[1209]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:07:22.207813 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:07:22.216817 ignition[1209]: PUT result: OK Jul 2 08:07:22.216899 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 2 08:07:22.221152 ignition[1209]: GET result: OK Jul 2 08:07:22.221292 ignition[1209]: parsing config with SHA512: fddf3a1675208b78e1e7c6d2e41f34125658954459e22e8212e1a40b6f9fc1ed0a3137091f64f5aa715136d614691ae36e2d0dd00df08df27c617ef2ce1d3926 Jul 2 08:07:22.230720 unknown[1209]: fetched base config from "system" Jul 2 08:07:22.230737 unknown[1209]: fetched base config from "system" Jul 2 08:07:22.236588 ignition[1209]: fetch: fetch complete Jul 2 08:07:22.230750 unknown[1209]: fetched user config from "aws" Jul 2 08:07:22.236609 ignition[1209]: fetch: fetch passed Jul 2 08:07:22.236714 ignition[1209]: Ignition finished successfully Jul 2 08:07:22.249101 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 08:07:22.272909 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 08:07:22.297868 ignition[1216]: Ignition 2.18.0 Jul 2 08:07:22.298378 ignition[1216]: Stage: kargs Jul 2 08:07:22.299028 ignition[1216]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:07:22.299052 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:07:22.299217 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:07:22.302345 ignition[1216]: PUT result: OK Jul 2 08:07:22.313316 ignition[1216]: kargs: kargs passed Jul 2 08:07:22.313493 ignition[1216]: Ignition finished successfully Jul 2 08:07:22.318626 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 08:07:22.337885 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 08:07:22.362034 ignition[1223]: Ignition 2.18.0 Jul 2 08:07:22.362055 ignition[1223]: Stage: disks Jul 2 08:07:22.363167 ignition[1223]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:07:22.363587 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:07:22.364120 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:07:22.368084 ignition[1223]: PUT result: OK Jul 2 08:07:22.378247 ignition[1223]: disks: disks passed Jul 2 08:07:22.378627 ignition[1223]: Ignition finished successfully Jul 2 08:07:22.383636 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 08:07:22.389052 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 08:07:22.391875 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 08:07:22.396627 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 08:07:22.406050 systemd[1]: Reached target sysinit.target - System Initialization. 
Jul 2 08:07:22.410478 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:07:22.424876 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 08:07:22.477147 systemd-fsck[1232]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 08:07:22.488559 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 08:07:22.500008 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 08:07:22.592560 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9aacfbff-cef8-4758-afb5-6310e7c6c5e6 r/w with ordered data mode. Quota mode: none. Jul 2 08:07:22.593601 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 08:07:22.598796 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 08:07:22.625655 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 08:07:22.633742 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 08:07:22.642308 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 08:07:22.652422 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1251) Jul 2 08:07:22.652462 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:07:22.644034 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 08:07:22.667126 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:07:22.667180 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:07:22.644088 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 08:07:22.670282 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 08:07:22.687414 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 08:07:22.695453 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:07:22.697990 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 08:07:23.047528 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 08:07:23.056296 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory Jul 2 08:07:23.065194 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 08:07:23.074506 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 08:07:23.343924 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 08:07:23.355815 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 08:07:23.357143 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 08:07:23.385127 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 08:07:23.388545 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:07:23.415283 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 2 08:07:23.435499 ignition[1365]: INFO : Ignition 2.18.0 Jul 2 08:07:23.435499 ignition[1365]: INFO : Stage: mount Jul 2 08:07:23.440527 ignition[1365]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:07:23.440527 ignition[1365]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:07:23.440527 ignition[1365]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:07:23.448839 ignition[1365]: INFO : PUT result: OK Jul 2 08:07:23.456729 ignition[1365]: INFO : mount: mount passed Jul 2 08:07:23.458810 ignition[1365]: INFO : Ignition finished successfully Jul 2 08:07:23.463001 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 08:07:23.481758 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 08:07:23.612865 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 08:07:23.631893 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1376) Jul 2 08:07:23.631955 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:07:23.633522 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:07:23.633560 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:07:23.638502 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:07:23.642245 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 08:07:23.675346 ignition[1392]: INFO : Ignition 2.18.0 Jul 2 08:07:23.675346 ignition[1392]: INFO : Stage: files Jul 2 08:07:23.679290 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:07:23.679290 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:07:23.679290 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:07:23.680579 systemd-networkd[1197]: eth0: Gained IPv6LL Jul 2 08:07:23.689230 ignition[1392]: INFO : PUT result: OK Jul 2 08:07:23.693322 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping Jul 2 08:07:23.695777 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 08:07:23.695777 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 08:07:23.726488 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 08:07:23.729429 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 08:07:23.729429 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 08:07:23.727975 unknown[1392]: wrote ssh authorized keys file for user: core Jul 2 08:07:23.741193 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 08:07:23.745549 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 08:07:23.797455 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 08:07:23.879882 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 08:07:23.879882 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 08:07:23.888591 ignition[1392]: INFO 
: files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 2 08:07:24.364918 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 08:07:24.511834 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 08:07:24.511834 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 08:07:24.524945 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 08:07:24.524945 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:07:24.524945 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:07:24.524945 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:07:24.524945 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:07:24.524945 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:07:24.524945 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:07:24.524945 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:07:24.524945 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:07:24.524945 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 2 08:07:24.524945 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 2 08:07:24.524945 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 2 08:07:24.524945 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Jul 2 08:07:24.841595 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 2 08:07:25.180049 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 2 08:07:25.180049 ignition[1392]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 2 08:07:25.191536 ignition[1392]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:07:25.199657 ignition[1392]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:07:25.199657 
ignition[1392]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 2 08:07:25.199657 ignition[1392]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 2 08:07:25.199657 ignition[1392]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 08:07:25.199657 ignition[1392]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:07:25.199657 ignition[1392]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:07:25.199657 ignition[1392]: INFO : files: files passed Jul 2 08:07:25.199657 ignition[1392]: INFO : Ignition finished successfully Jul 2 08:07:25.222580 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 08:07:25.243895 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 08:07:25.251704 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 08:07:25.260619 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 08:07:25.260854 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 08:07:25.293355 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:07:25.293355 initrd-setup-root-after-ignition[1422]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:07:25.302378 initrd-setup-root-after-ignition[1426]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:07:25.308509 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 08:07:25.315294 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 08:07:25.326745 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 08:07:25.391196 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 08:07:25.394147 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 08:07:25.398376 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 08:07:25.409649 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 08:07:25.414496 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 08:07:25.426820 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 08:07:25.462564 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 08:07:25.473901 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 08:07:25.500225 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:07:25.503638 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:07:25.512836 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 08:07:25.514859 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 08:07:25.515181 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 08:07:25.523345 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 08:07:25.526431 systemd[1]: Stopped target basic.target - Basic System. 
Jul 2 08:07:25.532784 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 08:07:25.535565 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 08:07:25.542805 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 08:07:25.546004 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 08:07:25.552431 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 08:07:25.555454 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 08:07:25.560302 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 08:07:25.566779 systemd[1]: Stopped target swap.target - Swaps. Jul 2 08:07:25.568861 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 08:07:25.569489 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 08:07:25.577489 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:07:25.580258 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:07:25.587648 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 08:07:25.589695 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:07:25.594823 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 08:07:25.595223 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 08:07:25.601245 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 08:07:25.601949 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 08:07:25.610544 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 08:07:25.610931 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 08:07:25.622912 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 08:07:25.625276 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 08:07:25.642378 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:07:25.656951 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 08:07:25.659286 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 08:07:25.659615 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:07:25.668918 ignition[1446]: INFO : Ignition 2.18.0 Jul 2 08:07:25.668918 ignition[1446]: INFO : Stage: umount Jul 2 08:07:25.672612 ignition[1446]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:07:25.672612 ignition[1446]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:07:25.672612 ignition[1446]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:07:25.678083 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 08:07:25.678422 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 08:07:25.688588 ignition[1446]: INFO : PUT result: OK Jul 2 08:07:25.694005 ignition[1446]: INFO : umount: umount passed Jul 2 08:07:25.694005 ignition[1446]: INFO : Ignition finished successfully Jul 2 08:07:25.703441 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 08:07:25.706163 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 08:07:25.717123 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jul 2 08:07:25.723567 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 08:07:25.732298 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 08:07:25.732446 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 08:07:25.738182 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 08:07:25.738726 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 08:07:25.754172 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 08:07:25.754256 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 08:07:25.763337 systemd[1]: Stopped target network.target - Network. Jul 2 08:07:25.765542 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 08:07:25.777908 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 08:07:25.780249 systemd[1]: Stopped target paths.target - Path Units. Jul 2 08:07:25.781939 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 08:07:25.794153 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:07:25.796957 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 08:07:25.799032 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 08:07:25.801214 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 08:07:25.801291 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 08:07:25.803540 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 08:07:25.803610 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 08:07:25.805901 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 08:07:25.805998 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 08:07:25.808265 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 08:07:25.808340 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 08:07:25.810950 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 08:07:25.813100 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 08:07:25.817575 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 08:07:25.818600 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 08:07:25.818782 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 08:07:25.855440 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 08:07:25.855625 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 08:07:25.860172 systemd-networkd[1197]: eth0: DHCPv6 lease lost Jul 2 08:07:25.866302 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 08:07:25.866580 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 08:07:25.870308 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 08:07:25.870525 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 08:07:25.886665 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 08:07:25.886923 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:07:25.897615 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 08:07:25.899491 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 08:07:25.899620 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 2 08:07:25.902846 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:07:25.902952 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:07:25.916111 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 08:07:25.916218 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 08:07:25.921324 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 08:07:25.921410 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:07:25.943010 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:07:25.965172 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 08:07:25.965388 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 08:07:25.978722 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 08:07:25.979217 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:07:25.987665 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 08:07:25.987754 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 08:07:25.990215 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 08:07:25.990280 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:07:25.992632 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 08:07:25.992714 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 08:07:25.995616 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 08:07:25.995697 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 08:07:26.014616 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:07:26.014703 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:07:26.036811 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 08:07:26.039533 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 08:07:26.039646 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:07:26.042665 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:07:26.042751 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:07:26.070299 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 08:07:26.071358 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 08:07:26.074599 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 08:07:26.096930 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 08:07:26.114328 systemd[1]: Switching root. Jul 2 08:07:26.155051 systemd-journald[251]: Journal stopped Jul 2 08:07:29.044089 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Jul 2 08:07:29.047554 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 08:07:29.047616 kernel: SELinux: policy capability open_perms=1 Jul 2 08:07:29.047648 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 08:07:29.047688 kernel: SELinux: policy capability always_check_network=0 Jul 2 08:07:29.047719 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 08:07:29.047755 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 08:07:29.047785 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 08:07:29.047816 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 08:07:29.047846 kernel: audit: type=1403 audit(1719907647.304:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 08:07:29.047887 systemd[1]: Successfully loaded SELinux policy in 64.340ms. Jul 2 08:07:29.047936 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.662ms. Jul 2 08:07:29.047972 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 08:07:29.048003 systemd[1]: Detected virtualization amazon. Jul 2 08:07:29.048032 systemd[1]: Detected architecture arm64. Jul 2 08:07:29.048066 systemd[1]: Detected first boot. Jul 2 08:07:29.048098 systemd[1]: Initializing machine ID from VM UUID. Jul 2 08:07:29.048129 zram_generator::config[1489]: No configuration found. Jul 2 08:07:29.048161 systemd[1]: Populated /etc with preset unit settings. Jul 2 08:07:29.048193 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 08:07:29.048226 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 08:07:29.048259 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 08:07:29.048320 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 08:07:29.048361 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 08:07:29.048397 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 08:07:29.048426 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 08:07:29.048457 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 08:07:29.048539 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 08:07:29.048574 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 08:07:29.048605 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 08:07:29.048634 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:07:29.048670 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:07:29.048702 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 08:07:29.048732 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 08:07:29.048765 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 08:07:29.048798 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jul 2 08:07:29.048827 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 2 08:07:29.048856 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:07:29.048888 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 08:07:29.048919 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 08:07:29.048956 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 08:07:29.048988 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 08:07:29.049019 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:07:29.049051 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:07:29.049085 systemd[1]: Reached target slices.target - Slice Units. Jul 2 08:07:29.049116 systemd[1]: Reached target swap.target - Swaps. Jul 2 08:07:29.049145 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 08:07:29.049177 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 08:07:29.049212 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:07:29.049241 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 08:07:29.049271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:07:29.049302 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 08:07:29.049331 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 08:07:29.049363 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 08:07:29.049392 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 08:07:29.049421 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 08:07:29.049453 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 08:07:29.052570 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 08:07:29.052619 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 08:07:29.052655 systemd[1]: Reached target machines.target - Containers. Jul 2 08:07:29.052687 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 08:07:29.052718 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:07:29.052748 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 08:07:29.052777 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 08:07:29.052807 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:07:29.052843 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 08:07:29.052876 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:07:29.052907 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 08:07:29.052937 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:07:29.052967 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jul 2 08:07:29.052999 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 08:07:29.053030 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 08:07:29.053061 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 08:07:29.053092 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 08:07:29.053126 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 08:07:29.053155 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 08:07:29.053187 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 08:07:29.053219 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 08:07:29.053250 kernel: loop: module loaded Jul 2 08:07:29.053281 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 08:07:29.053309 kernel: fuse: init (API version 7.39) Jul 2 08:07:29.053337 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 08:07:29.053366 systemd[1]: Stopped verity-setup.service. Jul 2 08:07:29.053408 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 08:07:29.053440 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 08:07:29.053487 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 08:07:29.053522 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 08:07:29.053553 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 08:07:29.053582 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 08:07:29.053623 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:07:29.053660 kernel: ACPI: bus type drm_connector registered Jul 2 08:07:29.053691 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 08:07:29.053723 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 08:07:29.053755 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:07:29.053784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:07:29.053861 systemd-journald[1566]: Collecting audit messages is disabled. Jul 2 08:07:29.053919 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:07:29.053955 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 08:07:29.054007 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:07:29.054040 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:07:29.054068 systemd-journald[1566]: Journal started Jul 2 08:07:29.054115 systemd-journald[1566]: Runtime Journal (/run/log/journal/ec2465b28de5cc6923a28acffde86e85) is 8.0M, max 75.3M, 67.3M free. Jul 2 08:07:28.436449 systemd[1]: Queued start job for default target multi-user.target. Jul 2 08:07:28.507736 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 2 08:07:28.508606 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 08:07:29.064654 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 08:07:29.070669 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 08:07:29.070993 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 08:07:29.076043 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 2 08:07:29.076350 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:07:29.081325 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 08:07:29.086676 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 08:07:29.092607 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 08:07:29.121312 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 08:07:29.134829 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 08:07:29.154853 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 08:07:29.161397 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 08:07:29.161487 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 08:07:29.167445 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 08:07:29.183801 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 08:07:29.193539 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 08:07:29.199558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:07:29.208824 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 08:07:29.216733 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 08:07:29.221337 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:07:29.230326 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 08:07:29.234840 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 08:07:29.238048 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:07:29.246785 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 08:07:29.256268 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 08:07:29.261821 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 08:07:29.272232 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 08:07:29.277733 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 08:07:29.283726 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:07:29.290314 systemd-journald[1566]: Time spent on flushing to /var/log/journal/ec2465b28de5cc6923a28acffde86e85 is 97.979ms for 910 entries. Jul 2 08:07:29.290314 systemd-journald[1566]: System Journal (/var/log/journal/ec2465b28de5cc6923a28acffde86e85) is 8.0M, max 195.6M, 187.6M free. Jul 2 08:07:29.441809 systemd-journald[1566]: Received client request to flush runtime journal. Jul 2 08:07:29.441908 kernel: loop0: detected capacity change from 0 to 51896 Jul 2 08:07:29.441963 kernel: block loop0: the capability attribute has been deprecated. Jul 2 08:07:29.300073 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jul 2 08:07:29.308934 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 08:07:29.327709 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 08:07:29.333070 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 08:07:29.350868 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 08:07:29.383573 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:07:29.414684 udevadm[1623]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 08:07:29.449064 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 08:07:29.472904 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 08:07:29.479507 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 08:07:29.482604 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 08:07:29.492913 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 08:07:29.510821 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 08:07:29.531504 kernel: loop1: detected capacity change from 0 to 59672 Jul 2 08:07:29.585562 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. Jul 2 08:07:29.585595 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. Jul 2 08:07:29.595254 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:07:29.641288 kernel: loop2: detected capacity change from 0 to 193208 Jul 2 08:07:29.681511 kernel: loop3: detected capacity change from 0 to 113672 Jul 2 08:07:29.793523 kernel: loop4: detected capacity change from 0 to 51896 Jul 2 08:07:29.812529 kernel: loop5: detected capacity change from 0 to 59672 Jul 2 08:07:29.823509 kernel: loop6: detected capacity change from 0 to 193208 Jul 2 08:07:29.857044 kernel: loop7: detected capacity change from 0 to 113672 Jul 2 08:07:29.870788 (sd-merge)[1643]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 2 08:07:29.871775 (sd-merge)[1643]: Merged extensions into '/usr'. Jul 2 08:07:29.879103 systemd[1]: Reloading requested from client PID 1616 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 08:07:29.879127 systemd[1]: Reloading... Jul 2 08:07:30.010560 zram_generator::config[1664]: No configuration found. Jul 2 08:07:30.323175 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:07:30.438810 systemd[1]: Reloading finished in 558 ms. Jul 2 08:07:30.481576 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 08:07:30.517894 systemd[1]: Starting ensure-sysext.service... Jul 2 08:07:30.533953 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 08:07:30.547659 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 08:07:30.564062 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:07:30.569340 systemd[1]: Reloading requested from client PID 1718 ('systemctl') (unit ensure-sysext.service)... 
Jul 2 08:07:30.569373 systemd[1]: Reloading... Jul 2 08:07:30.622237 systemd-tmpfiles[1719]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 08:07:30.624704 systemd-tmpfiles[1719]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 08:07:30.634444 systemd-tmpfiles[1719]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 08:07:30.637520 systemd-tmpfiles[1719]: ACLs are not supported, ignoring. Jul 2 08:07:30.637846 systemd-tmpfiles[1719]: ACLs are not supported, ignoring. Jul 2 08:07:30.649043 systemd-tmpfiles[1719]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 08:07:30.650142 systemd-tmpfiles[1719]: Skipping /boot Jul 2 08:07:30.682083 systemd-tmpfiles[1719]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 08:07:30.682112 systemd-tmpfiles[1719]: Skipping /boot Jul 2 08:07:30.708426 systemd-udevd[1721]: Using default interface naming scheme 'v255'. Jul 2 08:07:30.788872 zram_generator::config[1746]: No configuration found. Jul 2 08:07:30.842289 ldconfig[1611]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 08:07:30.953542 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1801) Jul 2 08:07:31.046437 (udev-worker)[1791]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:07:31.130782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:07:31.213536 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1802) Jul 2 08:07:31.316800 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 08:07:31.317997 systemd[1]: Reloading finished in 745 ms. Jul 2 08:07:31.353613 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:07:31.359864 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 08:07:31.383416 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:07:31.491270 systemd[1]: Finished ensure-sysext.service. Jul 2 08:07:31.510299 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 08:07:31.514058 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 08:07:31.525820 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 08:07:31.540744 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 08:07:31.543874 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:07:31.549815 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 08:07:31.558859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:07:31.565714 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 08:07:31.572774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:07:31.580784 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 2 08:07:31.584866 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:07:31.588153 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 08:07:31.597722 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 08:07:31.610815 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 08:07:31.626803 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 08:07:31.629618 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 08:07:31.637939 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 08:07:31.645012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:07:31.693513 lvm[1918]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:07:31.717649 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 08:07:31.720301 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:07:31.720629 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 08:07:31.721848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:07:31.722127 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:07:31.737744 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:07:31.754900 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 08:07:31.758969 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:07:31.760033 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:07:31.760407 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:07:31.765716 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:07:31.766146 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:07:31.771633 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 08:07:31.786491 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 08:07:31.806619 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 08:07:31.823572 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 08:07:31.824290 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:07:31.838700 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 08:07:31.848666 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 08:07:31.858271 lvm[1954]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:07:31.864832 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 08:07:31.879219 augenrules[1958]: No rules Jul 2 08:07:31.880786 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jul 2 08:07:31.918625 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 08:07:31.924502 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 08:07:31.950295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:07:31.955530 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 08:07:32.052302 systemd-networkd[1929]: lo: Link UP Jul 2 08:07:32.052858 systemd-networkd[1929]: lo: Gained carrier Jul 2 08:07:32.055828 systemd-networkd[1929]: Enumeration completed Jul 2 08:07:32.056173 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 08:07:32.057209 systemd-networkd[1929]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:07:32.057874 systemd-networkd[1929]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:07:32.063935 systemd-networkd[1929]: eth0: Link UP Jul 2 08:07:32.064391 systemd-networkd[1929]: eth0: Gained carrier Jul 2 08:07:32.064565 systemd-networkd[1929]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:07:32.072175 systemd-resolved[1930]: Positive Trust Anchors: Jul 2 08:07:32.072216 systemd-resolved[1930]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:07:32.072277 systemd-resolved[1930]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 08:07:32.072897 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 08:07:32.082592 systemd-networkd[1929]: eth0: DHCPv4 address 172.31.20.19/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 08:07:32.086785 systemd-resolved[1930]: Defaulting to hostname 'linux'. Jul 2 08:07:32.090226 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 08:07:32.092888 systemd[1]: Reached target network.target - Network. Jul 2 08:07:32.094885 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:07:32.097386 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 08:07:32.099992 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 08:07:32.102805 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 08:07:32.105877 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 08:07:32.108584 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 08:07:32.111437 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 08:07:32.114348 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 08:07:32.114402 systemd[1]: Reached target paths.target - Path Units. 
Jul 2 08:07:32.116360 systemd[1]: Reached target timers.target - Timer Units. Jul 2 08:07:32.119357 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 08:07:32.124661 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 08:07:32.137352 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 08:07:32.141029 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 08:07:32.143829 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 08:07:32.146037 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:07:32.148638 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:07:32.148690 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:07:32.151086 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 08:07:32.156819 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 08:07:32.172596 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 08:07:32.176949 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 08:07:32.183836 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 08:07:32.186443 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 08:07:32.196857 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 08:07:32.207834 systemd[1]: Started ntpd.service - Network Time Service. Jul 2 08:07:32.212918 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 08:07:32.220726 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 2 08:07:32.229838 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 08:07:32.240311 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 08:07:32.256516 jq[1982]: false Jul 2 08:07:32.268286 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 08:07:32.271838 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 08:07:32.273684 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 08:07:32.297511 extend-filesystems[1983]: Found loop4 Jul 2 08:07:32.297511 extend-filesystems[1983]: Found loop5 Jul 2 08:07:32.297511 extend-filesystems[1983]: Found loop6 Jul 2 08:07:32.277853 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 2 08:07:32.342692 extend-filesystems[1983]: Found loop7 Jul 2 08:07:32.342692 extend-filesystems[1983]: Found nvme0n1 Jul 2 08:07:32.342692 extend-filesystems[1983]: Found nvme0n1p1 Jul 2 08:07:32.342692 extend-filesystems[1983]: Found nvme0n1p2 Jul 2 08:07:32.342692 extend-filesystems[1983]: Found nvme0n1p3 Jul 2 08:07:32.342692 extend-filesystems[1983]: Found usr Jul 2 08:07:32.342692 extend-filesystems[1983]: Found nvme0n1p4 Jul 2 08:07:32.342692 extend-filesystems[1983]: Found nvme0n1p6 Jul 2 08:07:32.342692 extend-filesystems[1983]: Found nvme0n1p7 Jul 2 08:07:32.342692 extend-filesystems[1983]: Found nvme0n1p9 Jul 2 08:07:32.342692 extend-filesystems[1983]: Checking size of /dev/nvme0n1p9 Jul 2 08:07:32.294657 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 08:07:32.362338 dbus-daemon[1981]: [system] SELinux support is enabled Jul 2 08:07:32.304298 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 08:07:32.305821 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 08:07:32.384622 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 08:07:32.411587 jq[1997]: true Jul 2 08:07:32.405692 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 08:07:32.412889 dbus-daemon[1981]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1929 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 08:07:32.405782 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 08:07:32.411691 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 08:07:32.411735 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 08:07:32.436615 extend-filesystems[1983]: Resized partition /dev/nvme0n1p9 Jul 2 08:07:32.455636 extend-filesystems[2018]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 08:07:32.460667 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 08:07:32.467856 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 08:07:32.468255 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 08:07:32.473208 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 08:07:32.476695 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 08:07:32.484547 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 2 08:07:32.500314 (ntainerd)[2020]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 08:07:32.508378 update_engine[1995]: I0702 08:07:32.507746 1995 main.cc:92] Flatcar Update Engine starting Jul 2 08:07:32.511645 update_engine[1995]: I0702 08:07:32.511584 1995 update_check_scheduler.cc:74] Next update check in 6m26s Jul 2 08:07:32.516809 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 2 08:07:32.521022 systemd[1]: Started update-engine.service - Update Engine. 
Jul 2 08:07:32.530251 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 08:07:32.556549 tar[2000]: linux-arm64/helm Jul 2 08:07:32.570276 jq[2015]: true Jul 2 08:07:32.616517 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 2 08:07:32.658383 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 2 08:07:32.664330 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 08:07:32.664330 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 08:07:32.664274 ntpd[1985]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 08:07:32.669861 extend-filesystems[2018]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 2 08:07:32.669861 extend-filesystems[2018]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 08:07:32.669861 extend-filesystems[2018]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: ---------------------------------------------------- Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: corporation. Support and training for ntp-4 are Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: available at https://www.nwtime.org/support Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: ---------------------------------------------------- Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: proto: precision = 0.108 usec (-23) Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: basedate set to 2024-06-19 Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: gps base set to 2024-06-23 (week 2320) Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: Listen normally on 3 eth0 172.31.20.19:123 Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: Listen normally on 4 lo [::1]:123 Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: bind(21) AF_INET6 fe80::465:caff:fea9:1bbd%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: unable to create socket on eth0 (5) for fe80::465:caff:fea9:1bbd%2#123 Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: failed to init interface for address fe80::465:caff:fea9:1bbd%2 Jul 2 08:07:32.699547 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: Listening on routing socket on fd #21 for interface updates Jul 2 08:07:32.664319 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 08:07:32.700379 extend-filesystems[1983]: Resized filesystem in /dev/nvme0n1p9 Jul 2 08:07:32.670823 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 08:07:32.664339 ntpd[1985]: ---------------------------------------------------- Jul 2 08:07:32.671216 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 08:07:32.664360 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, Jul 2 08:07:32.664379 ntpd[1985]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jul 2 08:07:32.664397 ntpd[1985]: corporation. Support and training for ntp-4 are Jul 2 08:07:32.664415 ntpd[1985]: available at https://www.nwtime.org/support Jul 2 08:07:32.664433 ntpd[1985]: ---------------------------------------------------- Jul 2 08:07:32.674397 ntpd[1985]: proto: precision = 0.108 usec (-23) Jul 2 08:07:32.678103 ntpd[1985]: basedate set to 2024-06-19 Jul 2 08:07:32.678141 ntpd[1985]: gps base set to 2024-06-23 (week 2320) Jul 2 08:07:32.689751 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 08:07:32.689835 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 08:07:32.690112 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 08:07:32.690175 ntpd[1985]: Listen normally on 3 eth0 172.31.20.19:123 Jul 2 08:07:32.690245 ntpd[1985]: Listen normally on 4 lo [::1]:123 Jul 2 08:07:32.690328 ntpd[1985]: bind(21) AF_INET6 fe80::465:caff:fea9:1bbd%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:07:32.690365 ntpd[1985]: unable to create socket on eth0 (5) for fe80::465:caff:fea9:1bbd%2#123 Jul 2 08:07:32.690392 ntpd[1985]: failed to init interface for address fe80::465:caff:fea9:1bbd%2 Jul 2 08:07:32.690442 ntpd[1985]: Listening on routing socket on fd #21 for interface updates Jul 2 08:07:32.726192 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:07:32.726641 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:07:32.726641 ntpd[1985]: 2 Jul 08:07:32 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:07:32.726254 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:07:32.751985 systemd-logind[1991]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 08:07:32.752547 systemd-logind[1991]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 2 08:07:32.755913 systemd-logind[1991]: New seat seat0. Jul 2 08:07:32.759523 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 2 08:07:32.830838 coreos-metadata[1980]: Jul 02 08:07:32.830 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 08:07:32.836367 coreos-metadata[1980]: Jul 02 08:07:32.835 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 2 08:07:32.836726 coreos-metadata[1980]: Jul 02 08:07:32.836 INFO Fetch successful Jul 2 08:07:32.836795 coreos-metadata[1980]: Jul 02 08:07:32.836 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 2 08:07:32.845531 coreos-metadata[1980]: Jul 02 08:07:32.839 INFO Fetch successful Jul 2 08:07:32.845531 coreos-metadata[1980]: Jul 02 08:07:32.839 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 2 08:07:32.845531 coreos-metadata[1980]: Jul 02 08:07:32.844 INFO Fetch successful Jul 2 08:07:32.845531 coreos-metadata[1980]: Jul 02 08:07:32.844 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 2 08:07:32.845531 coreos-metadata[1980]: Jul 02 08:07:32.845 INFO Fetch successful Jul 2 08:07:32.845947 coreos-metadata[1980]: Jul 02 08:07:32.845 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 2 08:07:32.850963 coreos-metadata[1980]: Jul 02 08:07:32.849 INFO Fetch failed with 404: resource not found Jul 2 08:07:32.850963 coreos-metadata[1980]: Jul 02 08:07:32.849 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 2 08:07:32.851405 coreos-metadata[1980]: Jul 02 08:07:32.851 INFO Fetch successful Jul 2 08:07:32.852209 bash[2058]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:07:32.853396 coreos-metadata[1980]: Jul 02 08:07:32.853 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 2 08:07:32.856500 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 08:07:32.862225 coreos-metadata[1980]: Jul 02 08:07:32.862 INFO Fetch successful Jul 2 08:07:32.871935 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1782) Jul 2 08:07:32.872025 coreos-metadata[1980]: Jul 02 08:07:32.867 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 2 08:07:32.872025 coreos-metadata[1980]: Jul 02 08:07:32.871 INFO Fetch successful Jul 2 08:07:32.872025 coreos-metadata[1980]: Jul 02 08:07:32.871 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 2 08:07:32.879595 coreos-metadata[1980]: Jul 02 08:07:32.873 INFO Fetch successful Jul 2 08:07:32.879595 coreos-metadata[1980]: Jul 02 08:07:32.873 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 2 08:07:32.878038 systemd[1]: Starting sshkeys.service... Jul 2 08:07:32.885529 coreos-metadata[1980]: Jul 02 08:07:32.880 INFO Fetch successful Jul 2 08:07:32.919297 locksmithd[2028]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 08:07:32.975033 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 2 08:07:32.980503 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 08:07:33.004784 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 2 08:07:33.063954 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jul 2 08:07:33.083966 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 08:07:33.137027 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 08:07:33.137297 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 2 08:07:33.148282 dbus-daemon[1981]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2026 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 08:07:33.160228 systemd[1]: Starting polkit.service - Authorization Manager... Jul 2 08:07:33.184918 polkitd[2146]: Started polkitd version 121 Jul 2 08:07:33.206580 polkitd[2146]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 08:07:33.206698 polkitd[2146]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 08:07:33.236262 polkitd[2146]: Finished loading, compiling and executing 2 rules Jul 2 08:07:33.249401 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 08:07:33.251183 systemd[1]: Started polkit.service - Authorization Manager. Jul 2 08:07:33.252531 polkitd[2146]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 08:07:33.280342 systemd-resolved[1930]: System hostname changed to 'ip-172-31-20-19'. Jul 2 08:07:33.280990 systemd-hostnamed[2026]: Hostname set to (transient) Jul 2 08:07:33.284570 sshd_keygen[2027]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 08:07:33.396512 coreos-metadata[2115]: Jul 02 08:07:33.395 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 08:07:33.402516 coreos-metadata[2115]: Jul 02 08:07:33.401 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 2 08:07:33.404296 coreos-metadata[2115]: Jul 02 08:07:33.404 INFO Fetch successful Jul 2 08:07:33.404408 coreos-metadata[2115]: Jul 02 08:07:33.404 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 08:07:33.408603 coreos-metadata[2115]: Jul 02 08:07:33.406 INFO Fetch successful Jul 2 08:07:33.414939 unknown[2115]: wrote ssh authorized keys file for user: core Jul 2 08:07:33.469111 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 08:07:33.487479 update-ssh-keys[2181]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:07:33.489798 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 2 08:07:33.507605 systemd[1]: Finished sshkeys.service. Jul 2 08:07:33.528966 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 08:07:33.537029 systemd[1]: Started sshd@0-172.31.20.19:22-139.178.89.65:33934.service - OpenSSH per-connection server daemon (139.178.89.65:33934). Jul 2 08:07:33.569726 containerd[2020]: time="2024-07-02T08:07:33.568742063Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 08:07:33.583853 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 08:07:33.587129 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 08:07:33.601131 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 08:07:33.652583 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jul 2 08:07:33.669773 ntpd[1985]: bind(24) AF_INET6 fe80::465:caff:fea9:1bbd%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:07:33.669844 ntpd[1985]: unable to create socket on eth0 (6) for fe80::465:caff:fea9:1bbd%2#123 Jul 2 08:07:33.674867 ntpd[1985]: 2 Jul 08:07:33 ntpd[1985]: bind(24) AF_INET6 fe80::465:caff:fea9:1bbd%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:07:33.674867 ntpd[1985]: 2 Jul 08:07:33 ntpd[1985]: unable to create socket on eth0 (6) for fe80::465:caff:fea9:1bbd%2#123 Jul 2 08:07:33.674867 ntpd[1985]: 2 Jul 08:07:33 ntpd[1985]: failed to init interface for address fe80::465:caff:fea9:1bbd%2 Jul 2 08:07:33.669873 ntpd[1985]: failed to init interface for address fe80::465:caff:fea9:1bbd%2 Jul 2 08:07:33.675042 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 08:07:33.689275 containerd[2020]: time="2024-07-02T08:07:33.683131596Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 08:07:33.689275 containerd[2020]: time="2024-07-02T08:07:33.683200800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:07:33.692111 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 08:07:33.696929 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 08:07:33.703317 containerd[2020]: time="2024-07-02T08:07:33.696378720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:07:33.703317 containerd[2020]: time="2024-07-02T08:07:33.697707012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:07:33.703317 containerd[2020]: time="2024-07-02T08:07:33.698118468Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:07:33.703317 containerd[2020]: time="2024-07-02T08:07:33.698157684Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 08:07:33.703317 containerd[2020]: time="2024-07-02T08:07:33.698376792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 08:07:33.703317 containerd[2020]: time="2024-07-02T08:07:33.698534724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:07:33.703317 containerd[2020]: time="2024-07-02T08:07:33.698564220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 08:07:33.703317 containerd[2020]: time="2024-07-02T08:07:33.698717280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:07:33.703317 containerd[2020]: time="2024-07-02T08:07:33.699114264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jul 2 08:07:33.703317 containerd[2020]: time="2024-07-02T08:07:33.699149988Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 08:07:33.703317 containerd[2020]: time="2024-07-02T08:07:33.699176664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:07:33.704361 containerd[2020]: time="2024-07-02T08:07:33.699394656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:07:33.704361 containerd[2020]: time="2024-07-02T08:07:33.699425868Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 08:07:33.704361 containerd[2020]: time="2024-07-02T08:07:33.699617208Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 08:07:33.704361 containerd[2020]: time="2024-07-02T08:07:33.699643284Z" level=info msg="metadata content store policy set" policy=shared Jul 2 08:07:33.721428 containerd[2020]: time="2024-07-02T08:07:33.721316184Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 08:07:33.722710 containerd[2020]: time="2024-07-02T08:07:33.721666476Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 08:07:33.722710 containerd[2020]: time="2024-07-02T08:07:33.721708128Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 08:07:33.722710 containerd[2020]: time="2024-07-02T08:07:33.721789332Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 08:07:33.722710 containerd[2020]: time="2024-07-02T08:07:33.721831236Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 08:07:33.722710 containerd[2020]: time="2024-07-02T08:07:33.721857912Z" level=info msg="NRI interface is disabled by configuration." Jul 2 08:07:33.722710 containerd[2020]: time="2024-07-02T08:07:33.721887516Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 08:07:33.722710 containerd[2020]: time="2024-07-02T08:07:33.722147748Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 08:07:33.722710 containerd[2020]: time="2024-07-02T08:07:33.722196600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 08:07:33.722710 containerd[2020]: time="2024-07-02T08:07:33.722228040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 08:07:33.722710 containerd[2020]: time="2024-07-02T08:07:33.722259828Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 08:07:33.722710 containerd[2020]: time="2024-07-02T08:07:33.722295168Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 08:07:33.722710 containerd[2020]: time="2024-07-02T08:07:33.722332068Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jul 2 08:07:33.722710 containerd[2020]: time="2024-07-02T08:07:33.722363136Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 08:07:33.722710 containerd[2020]: time="2024-07-02T08:07:33.722392932Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 08:07:33.723920 containerd[2020]: time="2024-07-02T08:07:33.722425632Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 08:07:33.723920 containerd[2020]: time="2024-07-02T08:07:33.722455692Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 08:07:33.723920 containerd[2020]: time="2024-07-02T08:07:33.722526948Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 08:07:33.723920 containerd[2020]: time="2024-07-02T08:07:33.722558052Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 08:07:33.723920 containerd[2020]: time="2024-07-02T08:07:33.722744160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 08:07:33.723920 containerd[2020]: time="2024-07-02T08:07:33.723131484Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 08:07:33.723920 containerd[2020]: time="2024-07-02T08:07:33.723178512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.723920 containerd[2020]: time="2024-07-02T08:07:33.723209736Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 08:07:33.723920 containerd[2020]: time="2024-07-02T08:07:33.723254796Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 08:07:33.723920 containerd[2020]: time="2024-07-02T08:07:33.723352572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.723920 containerd[2020]: time="2024-07-02T08:07:33.723385068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.723920 containerd[2020]: time="2024-07-02T08:07:33.723416568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.723920 containerd[2020]: time="2024-07-02T08:07:33.723444048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.723920 containerd[2020]: time="2024-07-02T08:07:33.723855768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.725739 containerd[2020]: time="2024-07-02T08:07:33.723997188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.725739 containerd[2020]: time="2024-07-02T08:07:33.724045668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.725739 containerd[2020]: time="2024-07-02T08:07:33.724076640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jul 2 08:07:33.725739 containerd[2020]: time="2024-07-02T08:07:33.724111236Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 08:07:33.725739 containerd[2020]: time="2024-07-02T08:07:33.724402896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.725739 containerd[2020]: time="2024-07-02T08:07:33.724439028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.725739 containerd[2020]: time="2024-07-02T08:07:33.724489512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.725739 containerd[2020]: time="2024-07-02T08:07:33.724525152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.725739 containerd[2020]: time="2024-07-02T08:07:33.724555224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.725739 containerd[2020]: time="2024-07-02T08:07:33.724587120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.725739 containerd[2020]: time="2024-07-02T08:07:33.724617408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.725739 containerd[2020]: time="2024-07-02T08:07:33.724644132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 08:07:33.727838 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.725100936Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.725207364Z" level=info msg="Connect containerd service" Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.725402472Z" level=info msg="using legacy CRI server" Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.725422992Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.725617584Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.726792552Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.726886032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.726926136Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.726951960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.726981468Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.727049412Z" level=info msg="Start subscribing containerd event" Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.727148148Z" level=info msg="Start recovering state" Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.727272456Z" level=info msg="Start event monitor" Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.727296732Z" level=info msg="Start snapshots syncer" Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.727318896Z" level=info msg="Start cni network conf syncer for default" Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.727338408Z" level=info msg="Start streaming server" Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.727518960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.727607748Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 08:07:33.728779 containerd[2020]: time="2024-07-02T08:07:33.727711788Z" level=info msg="containerd successfully booted in 0.161579s" Jul 2 08:07:33.802605 sshd[2191]: Accepted publickey for core from 139.178.89.65 port 33934 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:33.805412 sshd[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:33.824145 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 08:07:33.837666 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 08:07:33.849181 systemd-logind[1991]: New session 1 of user core. Jul 2 08:07:33.880661 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 08:07:33.894838 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 08:07:33.912854 (systemd)[2206]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:33.984715 systemd-networkd[1929]: eth0: Gained IPv6LL Jul 2 08:07:34.001567 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 08:07:34.006634 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 08:07:34.021275 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 2 08:07:34.036757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:07:34.044956 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 08:07:34.177259 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 08:07:34.218519 amazon-ssm-agent[2213]: Initializing new seelog logger Jul 2 08:07:34.218976 amazon-ssm-agent[2213]: New Seelog Logger Creation Complete Jul 2 08:07:34.218976 amazon-ssm-agent[2213]: 2024/07/02 08:07:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:07:34.218976 amazon-ssm-agent[2213]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:07:34.220557 amazon-ssm-agent[2213]: 2024/07/02 08:07:34 processing appconfig overrides Jul 2 08:07:34.220255 systemd[2206]: Queued start job for default target default.target. Jul 2 08:07:34.227866 amazon-ssm-agent[2213]: 2024/07/02 08:07:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:07:34.227866 amazon-ssm-agent[2213]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:07:34.227866 amazon-ssm-agent[2213]: 2024/07/02 08:07:34 processing appconfig overrides Jul 2 08:07:34.227866 amazon-ssm-agent[2213]: 2024/07/02 08:07:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:07:34.227866 amazon-ssm-agent[2213]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:07:34.227866 amazon-ssm-agent[2213]: 2024/07/02 08:07:34 processing appconfig overrides Jul 2 08:07:34.229847 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO Proxy environment variables: Jul 2 08:07:34.229875 systemd[2206]: Created slice app.slice - User Application Slice. Jul 2 08:07:34.229931 systemd[2206]: Reached target paths.target - Paths. Jul 2 08:07:34.229985 systemd[2206]: Reached target timers.target - Timers. Jul 2 08:07:34.233846 systemd[2206]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 08:07:34.238900 amazon-ssm-agent[2213]: 2024/07/02 08:07:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 2 08:07:34.238900 amazon-ssm-agent[2213]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:07:34.238900 amazon-ssm-agent[2213]: 2024/07/02 08:07:34 processing appconfig overrides Jul 2 08:07:34.273755 systemd[2206]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 08:07:34.273861 systemd[2206]: Reached target sockets.target - Sockets. Jul 2 08:07:34.273894 systemd[2206]: Reached target basic.target - Basic System. Jul 2 08:07:34.274005 systemd[2206]: Reached target default.target - Main User Target. Jul 2 08:07:34.274069 systemd[2206]: Startup finished in 345ms. Jul 2 08:07:34.274085 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 08:07:34.289717 tar[2000]: linux-arm64/LICENSE Jul 2 08:07:34.289717 tar[2000]: linux-arm64/README.md Jul 2 08:07:34.292796 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 08:07:34.327677 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO https_proxy: Jul 2 08:07:34.332075 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 08:07:34.429615 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO http_proxy: Jul 2 08:07:34.475024 systemd[1]: Started sshd@1-172.31.20.19:22-139.178.89.65:33940.service - OpenSSH per-connection server daemon (139.178.89.65:33940). Jul 2 08:07:34.527842 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO no_proxy: Jul 2 08:07:34.626518 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO Checking if agent identity type OnPrem can be assumed Jul 2 08:07:34.707142 sshd[2237]: Accepted publickey for core from 139.178.89.65 port 33940 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:34.710065 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:34.725359 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO Checking if agent identity type EC2 can be assumed Jul 2 08:07:34.724856 systemd-logind[1991]: New session 2 of user core. Jul 2 08:07:34.729368 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 08:07:34.824638 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO Agent will take identity from EC2 Jul 2 08:07:34.873699 sshd[2237]: pam_unix(sshd:session): session closed for user core Jul 2 08:07:34.880282 systemd-logind[1991]: Session 2 logged out. Waiting for processes to exit. Jul 2 08:07:34.881149 systemd[1]: sshd@1-172.31.20.19:22-139.178.89.65:33940.service: Deactivated successfully. Jul 2 08:07:34.886446 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 08:07:34.912778 systemd-logind[1991]: Removed session 2. Jul 2 08:07:34.919988 systemd[1]: Started sshd@2-172.31.20.19:22-139.178.89.65:33950.service - OpenSSH per-connection server daemon (139.178.89.65:33950). Jul 2 08:07:34.923573 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:07:35.022853 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:07:35.121505 sshd[2245]: Accepted publickey for core from 139.178.89.65 port 33950 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:35.124604 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:07:35.128135 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:35.147071 systemd-logind[1991]: New session 3 of user core. Jul 2 08:07:35.154754 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 2 08:07:35.223811 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 2 08:07:35.303098 sshd[2245]: pam_unix(sshd:session): session closed for user core Jul 2 08:07:35.314336 systemd[1]: sshd@2-172.31.20.19:22-139.178.89.65:33950.service: Deactivated successfully. Jul 2 08:07:35.321274 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 08:07:35.324722 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 2 08:07:35.326891 systemd-logind[1991]: Session 3 logged out. Waiting for processes to exit. Jul 2 08:07:35.331364 systemd-logind[1991]: Removed session 3. Jul 2 08:07:35.410807 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO [amazon-ssm-agent] Starting Core Agent Jul 2 08:07:35.410807 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 2 08:07:35.410807 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO [Registrar] Starting registrar module Jul 2 08:07:35.410807 amazon-ssm-agent[2213]: 2024-07-02 08:07:34 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 2 08:07:35.410807 amazon-ssm-agent[2213]: 2024-07-02 08:07:35 INFO [EC2Identity] EC2 registration was successful. Jul 2 08:07:35.410807 amazon-ssm-agent[2213]: 2024-07-02 08:07:35 INFO [CredentialRefresher] credentialRefresher has started Jul 2 08:07:35.410807 amazon-ssm-agent[2213]: 2024-07-02 08:07:35 INFO [CredentialRefresher] Starting credentials refresher loop Jul 2 08:07:35.410807 amazon-ssm-agent[2213]: 2024-07-02 08:07:35 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 2 08:07:35.411783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:07:35.415752 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 08:07:35.418784 systemd[1]: Startup finished in 1.139s (kernel) + 9.494s (initrd) + 8.176s (userspace) = 18.810s. Jul 2 08:07:35.425120 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:07:35.426132 amazon-ssm-agent[2213]: 2024-07-02 08:07:35 INFO [CredentialRefresher] Next credential rotation will be in 30.358323942833334 minutes Jul 2 08:07:36.177247 kubelet[2257]: E0702 08:07:36.177112 2257 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:07:36.182040 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:07:36.182395 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:07:36.182934 systemd[1]: kubelet.service: Consumed 1.279s CPU time. 
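Editorial note: the kubelet failure above (and its repeats later in the log, with the restart counter climbing) is a plain missing-file error: kubelet.service starts before anything has written /var/lib/kubelet/config.yaml, exits with status 1, and systemd keeps scheduling restarts until something (typically kubeadm init or join) writes that file. A tiny illustrative sketch of the same failure mode follows; kubelet itself is Go, and only the path comes from the log message.

    import sys
    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")   # path named in the log error

    def load_config(path: Path) -> str:
        # Mirrors the failing step: reading the config file before it exists
        return path.read_text()

    if __name__ == "__main__":
        try:
            print(load_config(CONFIG)[:200])
        except FileNotFoundError as err:
            # systemd records this as "Main process exited, ... status=1/FAILURE"
            print(f"command failed: failed to load kubelet config file: {err}",
                  file=sys.stderr)
            sys.exit(1)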
Jul 2 08:07:36.436725 amazon-ssm-agent[2213]: 2024-07-02 08:07:36 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 2 08:07:36.537501 amazon-ssm-agent[2213]: 2024-07-02 08:07:36 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2270) started Jul 2 08:07:36.638552 amazon-ssm-agent[2213]: 2024-07-02 08:07:36 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 2 08:07:36.669742 ntpd[1985]: Listen normally on 7 eth0 [fe80::465:caff:fea9:1bbd%2]:123 Jul 2 08:07:36.670217 ntpd[1985]: 2 Jul 08:07:36 ntpd[1985]: Listen normally on 7 eth0 [fe80::465:caff:fea9:1bbd%2]:123 Jul 2 08:07:39.396378 systemd-resolved[1930]: Clock change detected. Flushing caches. Jul 2 08:07:45.072481 systemd[1]: Started sshd@3-172.31.20.19:22-139.178.89.65:34194.service - OpenSSH per-connection server daemon (139.178.89.65:34194). Jul 2 08:07:45.245918 sshd[2281]: Accepted publickey for core from 139.178.89.65 port 34194 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:45.248442 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:45.255815 systemd-logind[1991]: New session 4 of user core. Jul 2 08:07:45.265118 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 08:07:45.392575 sshd[2281]: pam_unix(sshd:session): session closed for user core Jul 2 08:07:45.399600 systemd[1]: sshd@3-172.31.20.19:22-139.178.89.65:34194.service: Deactivated successfully. Jul 2 08:07:45.403463 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 08:07:45.405869 systemd-logind[1991]: Session 4 logged out. Waiting for processes to exit. Jul 2 08:07:45.407836 systemd-logind[1991]: Removed session 4. Jul 2 08:07:45.424441 systemd[1]: Started sshd@4-172.31.20.19:22-139.178.89.65:34196.service - OpenSSH per-connection server daemon (139.178.89.65:34196). Jul 2 08:07:45.599522 sshd[2288]: Accepted publickey for core from 139.178.89.65 port 34196 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:45.602072 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:45.610225 systemd-logind[1991]: New session 5 of user core. Jul 2 08:07:45.620162 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 08:07:45.735790 sshd[2288]: pam_unix(sshd:session): session closed for user core Jul 2 08:07:45.743699 systemd[1]: sshd@4-172.31.20.19:22-139.178.89.65:34196.service: Deactivated successfully. Jul 2 08:07:45.748256 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 08:07:45.750150 systemd-logind[1991]: Session 5 logged out. Waiting for processes to exit. Jul 2 08:07:45.752109 systemd-logind[1991]: Removed session 5. Jul 2 08:07:45.782383 systemd[1]: Started sshd@5-172.31.20.19:22-139.178.89.65:34202.service - OpenSSH per-connection server daemon (139.178.89.65:34202). Jul 2 08:07:45.918530 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 08:07:45.928303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:07:45.951663 sshd[2295]: Accepted publickey for core from 139.178.89.65 port 34202 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:45.955194 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:45.965489 systemd-logind[1991]: New session 6 of user core. 
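Editorial note: the earlier ntpd bind failures (bind(21) and bind(24) on fe80::465:caff:fea9:1bbd%2#123 with "Cannot assign requested address") resolve just above: eth0 gained its IPv6 link-local address at 08:07:33, and ntpd, which was listening on the routing socket for interface updates, then opened listen socket 7 on that address at 08:07:36. The sketch below is illustrative only (Python rather than ntpd's C) of why the earlier attempts fail: binding to a link-local address needs the interface scope, and it returns "Cannot assign requested address" until the address is actually usable on that interface. The %eth0 scope is an assumption based on the interface named in the log.

    import socket

    LINK_LOCAL = "fe80::465:caff:fea9:1bbd%eth0"   # address from the log; eth0 scope assumed

    def try_bind(addr: str, port: int = 0) -> None:
        # Port 0 instead of 123 so the sketch does not need root privileges
        sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        try:
            sock.bind((addr, port))
            print("bound:", sock.getsockname())
        except OSError as err:
            # Before the address is assigned (or while duplicate address
            # detection is still running) this fails with
            # "Cannot assign requested address", as in the ntpd messages
            print("bind failed:", err)
        finally:
            sock.close()

    if __name__ == "__main__":
        try_bind(LINK_LOCAL)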
Jul 2 08:07:45.973242 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 08:07:46.105260 sshd[2295]: pam_unix(sshd:session): session closed for user core Jul 2 08:07:46.110857 systemd[1]: sshd@5-172.31.20.19:22-139.178.89.65:34202.service: Deactivated successfully. Jul 2 08:07:46.114741 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 08:07:46.116348 systemd-logind[1991]: Session 6 logged out. Waiting for processes to exit. Jul 2 08:07:46.119792 systemd-logind[1991]: Removed session 6. Jul 2 08:07:46.148195 systemd[1]: Started sshd@6-172.31.20.19:22-139.178.89.65:34210.service - OpenSSH per-connection server daemon (139.178.89.65:34210). Jul 2 08:07:46.343434 sshd[2305]: Accepted publickey for core from 139.178.89.65 port 34210 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:46.348029 sshd[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:46.361040 systemd-logind[1991]: New session 7 of user core. Jul 2 08:07:46.364812 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 08:07:46.392241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:07:46.406534 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:07:46.506456 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 08:07:46.507119 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:07:46.525037 sudo[2319]: pam_unix(sudo:session): session closed for user root Jul 2 08:07:46.537161 kubelet[2313]: E0702 08:07:46.537060 2313 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:07:46.545776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:07:46.546182 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:07:46.549393 sshd[2305]: pam_unix(sshd:session): session closed for user core Jul 2 08:07:46.556435 systemd[1]: sshd@6-172.31.20.19:22-139.178.89.65:34210.service: Deactivated successfully. Jul 2 08:07:46.562112 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:07:46.566639 systemd-logind[1991]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:07:46.568786 systemd-logind[1991]: Removed session 7. Jul 2 08:07:46.598448 systemd[1]: Started sshd@7-172.31.20.19:22-139.178.89.65:34220.service - OpenSSH per-connection server daemon (139.178.89.65:34220). Jul 2 08:07:46.772037 sshd[2327]: Accepted publickey for core from 139.178.89.65 port 34220 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:46.774807 sshd[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:46.782209 systemd-logind[1991]: New session 8 of user core. Jul 2 08:07:46.791254 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 2 08:07:46.901561 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 08:07:46.903739 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:07:46.911979 sudo[2331]: pam_unix(sudo:session): session closed for user root Jul 2 08:07:46.922954 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 08:07:46.923542 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:07:46.954475 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 08:07:46.958434 auditctl[2334]: No rules Jul 2 08:07:46.959230 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 08:07:46.959742 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 08:07:46.965175 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 08:07:47.032773 augenrules[2352]: No rules Jul 2 08:07:47.035125 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 08:07:47.039299 sudo[2330]: pam_unix(sudo:session): session closed for user root Jul 2 08:07:47.063719 sshd[2327]: pam_unix(sshd:session): session closed for user core Jul 2 08:07:47.072550 systemd[1]: sshd@7-172.31.20.19:22-139.178.89.65:34220.service: Deactivated successfully. Jul 2 08:07:47.077258 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 08:07:47.079043 systemd-logind[1991]: Session 8 logged out. Waiting for processes to exit. Jul 2 08:07:47.080963 systemd-logind[1991]: Removed session 8. Jul 2 08:07:47.105422 systemd[1]: Started sshd@8-172.31.20.19:22-139.178.89.65:34234.service - OpenSSH per-connection server daemon (139.178.89.65:34234). Jul 2 08:07:47.290619 sshd[2360]: Accepted publickey for core from 139.178.89.65 port 34234 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:47.293379 sshd[2360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:47.301325 systemd-logind[1991]: New session 9 of user core. Jul 2 08:07:47.311186 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 08:07:47.417509 sudo[2363]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 08:07:47.418089 sudo[2363]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:07:47.586408 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 08:07:47.603448 (dockerd)[2372]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 08:07:47.990332 dockerd[2372]: time="2024-07-02T08:07:47.990247898Z" level=info msg="Starting up" Jul 2 08:07:48.025284 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1569797769-merged.mount: Deactivated successfully. Jul 2 08:07:48.068469 dockerd[2372]: time="2024-07-02T08:07:48.068410307Z" level=info msg="Loading containers: start." Jul 2 08:07:48.240187 kernel: Initializing XFRM netlink socket Jul 2 08:07:48.275230 (udev-worker)[2385]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:07:48.374374 systemd-networkd[1929]: docker0: Link UP Jul 2 08:07:48.394672 dockerd[2372]: time="2024-07-02T08:07:48.394586352Z" level=info msg="Loading containers: done." 
Jul 2 08:07:48.488945 dockerd[2372]: time="2024-07-02T08:07:48.488767057Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 08:07:48.489173 dockerd[2372]: time="2024-07-02T08:07:48.489138217Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 08:07:48.489414 dockerd[2372]: time="2024-07-02T08:07:48.489361021Z" level=info msg="Daemon has completed initialization" Jul 2 08:07:48.545447 dockerd[2372]: time="2024-07-02T08:07:48.542383393Z" level=info msg="API listen on /run/docker.sock" Jul 2 08:07:48.544344 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 08:07:49.020351 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2438086738-merged.mount: Deactivated successfully. Jul 2 08:07:49.618695 containerd[2020]: time="2024-07-02T08:07:49.618614486Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 08:07:50.295675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1920207895.mount: Deactivated successfully. Jul 2 08:07:51.970331 containerd[2020]: time="2024-07-02T08:07:51.970235742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:51.972783 containerd[2020]: time="2024-07-02T08:07:51.972713970Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671538" Jul 2 08:07:51.975034 containerd[2020]: time="2024-07-02T08:07:51.974926326Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:51.982618 containerd[2020]: time="2024-07-02T08:07:51.982534674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:51.985157 containerd[2020]: time="2024-07-02T08:07:51.984809106Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 2.366126976s" Jul 2 08:07:51.985157 containerd[2020]: time="2024-07-02T08:07:51.984906162Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jul 2 08:07:52.029154 containerd[2020]: time="2024-07-02T08:07:52.029101034Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 08:07:53.794545 containerd[2020]: time="2024-07-02T08:07:53.793987543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:53.796376 containerd[2020]: time="2024-07-02T08:07:53.796243051Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893118" Jul 2 08:07:53.797409 containerd[2020]: time="2024-07-02T08:07:53.797333563Z" level=info msg="ImageCreate 
event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:53.804298 containerd[2020]: time="2024-07-02T08:07:53.804168739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:53.812717 containerd[2020]: time="2024-07-02T08:07:53.811852147Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 1.782684273s" Jul 2 08:07:53.812717 containerd[2020]: time="2024-07-02T08:07:53.811977235Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jul 2 08:07:53.856955 containerd[2020]: time="2024-07-02T08:07:53.856844371Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 08:07:55.019870 containerd[2020]: time="2024-07-02T08:07:55.019761845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:55.022453 containerd[2020]: time="2024-07-02T08:07:55.022268057Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358438" Jul 2 08:07:55.023328 containerd[2020]: time="2024-07-02T08:07:55.023189009Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:55.031132 containerd[2020]: time="2024-07-02T08:07:55.031047665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:55.033061 containerd[2020]: time="2024-07-02T08:07:55.032779649Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 1.175813778s" Jul 2 08:07:55.033061 containerd[2020]: time="2024-07-02T08:07:55.032870273Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jul 2 08:07:55.078165 containerd[2020]: time="2024-07-02T08:07:55.077971061Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 08:07:56.433946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2630208290.mount: Deactivated successfully. Jul 2 08:07:56.626690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 08:07:56.639364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 2 08:07:57.340274 containerd[2020]: time="2024-07-02T08:07:57.339442377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:57.350201 containerd[2020]: time="2024-07-02T08:07:57.350142033Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772461" Jul 2 08:07:57.363777 containerd[2020]: time="2024-07-02T08:07:57.361104093Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:57.388848 containerd[2020]: time="2024-07-02T08:07:57.388388997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:57.397644 containerd[2020]: time="2024-07-02T08:07:57.397551453Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 2.31950064s" Jul 2 08:07:57.397644 containerd[2020]: time="2024-07-02T08:07:57.397640973Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jul 2 08:07:57.491367 containerd[2020]: time="2024-07-02T08:07:57.491310525Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 08:07:57.518216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:07:57.532590 (kubelet)[2604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:07:57.654086 kubelet[2604]: E0702 08:07:57.653978 2604 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:07:57.658781 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:07:57.659149 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:07:58.037366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount343698312.mount: Deactivated successfully. 
Jul 2 08:07:58.044904 containerd[2020]: time="2024-07-02T08:07:58.044819960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:58.046533 containerd[2020]: time="2024-07-02T08:07:58.046465448Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jul 2 08:07:58.048201 containerd[2020]: time="2024-07-02T08:07:58.048092768Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:58.054793 containerd[2020]: time="2024-07-02T08:07:58.054671948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:58.057413 containerd[2020]: time="2024-07-02T08:07:58.057237176Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 565.498359ms" Jul 2 08:07:58.057413 containerd[2020]: time="2024-07-02T08:07:58.057299144Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 08:07:58.104842 containerd[2020]: time="2024-07-02T08:07:58.104779304Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 08:07:58.687946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4049351600.mount: Deactivated successfully. 
Jul 2 08:08:01.011312 containerd[2020]: time="2024-07-02T08:08:01.011248883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:01.014294 containerd[2020]: time="2024-07-02T08:08:01.014216279Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jul 2 08:08:01.015271 containerd[2020]: time="2024-07-02T08:08:01.015185051Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:01.022982 containerd[2020]: time="2024-07-02T08:08:01.022926875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:01.026306 containerd[2020]: time="2024-07-02T08:08:01.025706147Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.920863459s" Jul 2 08:08:01.026306 containerd[2020]: time="2024-07-02T08:08:01.025792151Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jul 2 08:08:01.067222 containerd[2020]: time="2024-07-02T08:08:01.067173719Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 08:08:01.620423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4050305817.mount: Deactivated successfully. 
Jul 2 08:08:02.362916 containerd[2020]: time="2024-07-02T08:08:02.361309142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:02.377172 containerd[2020]: time="2024-07-02T08:08:02.377091266Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462" Jul 2 08:08:02.396725 containerd[2020]: time="2024-07-02T08:08:02.394727522Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:02.421478 containerd[2020]: time="2024-07-02T08:08:02.421414046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:02.423475 containerd[2020]: time="2024-07-02T08:08:02.423399494Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.355983855s" Jul 2 08:08:02.423700 containerd[2020]: time="2024-07-02T08:08:02.423662174Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jul 2 08:08:03.020080 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 08:08:07.878080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 08:08:07.887388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:08:08.343310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:08:08.353511 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:08:08.444815 kubelet[2751]: E0702 08:08:08.444717 2751 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:08:08.450348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:08:08.450716 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:08:09.642038 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:08:09.652461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:08:09.700485 systemd[1]: Reloading requested from client PID 2765 ('systemctl') (unit session-9.scope)... Jul 2 08:08:09.700963 systemd[1]: Reloading... Jul 2 08:08:09.937943 zram_generator::config[2806]: No configuration found. Jul 2 08:08:10.228247 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:08:10.404736 systemd[1]: Reloading finished in 702 ms. 
Jul 2 08:08:10.487153 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 08:08:10.487329 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 08:08:10.487859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:08:10.497206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:08:10.956135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:08:10.959302 (kubelet)[2865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 08:08:11.054627 kubelet[2865]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:08:11.055389 kubelet[2865]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:08:11.055389 kubelet[2865]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:08:11.059340 kubelet[2865]: I0702 08:08:11.059267 2865 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:08:12.208078 kubelet[2865]: I0702 08:08:12.208023 2865 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 08:08:12.208078 kubelet[2865]: I0702 08:08:12.208068 2865 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:08:12.208732 kubelet[2865]: I0702 08:08:12.208423 2865 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 08:08:12.247155 kubelet[2865]: I0702 08:08:12.246859 2865 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:08:12.249232 kubelet[2865]: E0702 08:08:12.249160 2865 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:12.261551 kubelet[2865]: W0702 08:08:12.261495 2865 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 08:08:12.262888 kubelet[2865]: I0702 08:08:12.262826 2865 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 08:08:12.263773 kubelet[2865]: I0702 08:08:12.263713 2865 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:08:12.264419 kubelet[2865]: I0702 08:08:12.264299 2865 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:08:12.264683 kubelet[2865]: I0702 08:08:12.264455 2865 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:08:12.264683 kubelet[2865]: I0702 08:08:12.264514 2865 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:08:12.264899 kubelet[2865]: I0702 08:08:12.264824 2865 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:08:12.268169 kubelet[2865]: I0702 08:08:12.268106 2865 kubelet.go:393] "Attempting to sync node with API server" Jul 2 08:08:12.268169 kubelet[2865]: I0702 08:08:12.268164 2865 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:08:12.268381 kubelet[2865]: I0702 08:08:12.268254 2865 kubelet.go:309] "Adding apiserver pod source" Jul 2 08:08:12.268381 kubelet[2865]: I0702 08:08:12.268281 2865 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:08:12.271088 kubelet[2865]: W0702 08:08:12.270726 2865 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-19&limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:12.271088 kubelet[2865]: E0702 08:08:12.270810 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-19&limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:12.271960 kubelet[2865]: W0702 08:08:12.271163 2865 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.20.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: 
connection refused Jul 2 08:08:12.271960 kubelet[2865]: E0702 08:08:12.271227 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:12.271960 kubelet[2865]: I0702 08:08:12.271394 2865 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 08:08:12.278230 kubelet[2865]: W0702 08:08:12.277217 2865 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 08:08:12.279678 kubelet[2865]: I0702 08:08:12.279357 2865 server.go:1232] "Started kubelet" Jul 2 08:08:12.280128 kubelet[2865]: I0702 08:08:12.280052 2865 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:08:12.281525 kubelet[2865]: I0702 08:08:12.281460 2865 server.go:462] "Adding debug handlers to kubelet server" Jul 2 08:08:12.284940 kubelet[2865]: I0702 08:08:12.284706 2865 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 08:08:12.286264 kubelet[2865]: I0702 08:08:12.285433 2865 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:08:12.286264 kubelet[2865]: I0702 08:08:12.285450 2865 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:08:12.287283 kubelet[2865]: E0702 08:08:12.286814 2865 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-20-19.17de56ef04c54517", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-20-19", UID:"ip-172-31-20-19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-20-19"}, FirstTimestamp:time.Date(2024, time.July, 2, 8, 8, 12, 279317783, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 8, 8, 12, 279317783, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-20-19"}': 'Post "https://172.31.20.19:6443/api/v1/namespaces/default/events": dial tcp 172.31.20.19:6443: connect: connection refused'(may retry after sleeping) Jul 2 08:08:12.287283 kubelet[2865]: E0702 08:08:12.287184 2865 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 08:08:12.287283 kubelet[2865]: E0702 08:08:12.287223 2865 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:08:12.296041 kubelet[2865]: I0702 08:08:12.296004 2865 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:08:12.297094 kubelet[2865]: I0702 08:08:12.296369 2865 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 08:08:12.297094 kubelet[2865]: I0702 08:08:12.296844 2865 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 08:08:12.298160 kubelet[2865]: E0702 08:08:12.297816 2865 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-19?timeout=10s\": dial tcp 172.31.20.19:6443: connect: connection refused" interval="200ms" Jul 2 08:08:12.299859 kubelet[2865]: W0702 08:08:12.299780 2865 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:12.300208 kubelet[2865]: E0702 08:08:12.300170 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:12.329977 kubelet[2865]: I0702 08:08:12.329757 2865 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:08:12.332396 kubelet[2865]: I0702 08:08:12.332361 2865 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 08:08:12.332550 kubelet[2865]: I0702 08:08:12.332531 2865 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:08:12.332670 kubelet[2865]: I0702 08:08:12.332649 2865 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 08:08:12.332938 kubelet[2865]: E0702 08:08:12.332857 2865 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:08:12.356828 kubelet[2865]: W0702 08:08:12.356759 2865 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:12.356828 kubelet[2865]: E0702 08:08:12.356833 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:12.377570 kubelet[2865]: I0702 08:08:12.377518 2865 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:08:12.377570 kubelet[2865]: I0702 08:08:12.377559 2865 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:08:12.377786 kubelet[2865]: I0702 08:08:12.377593 2865 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:08:12.380663 kubelet[2865]: I0702 08:08:12.380611 2865 policy_none.go:49] "None policy: Start" Jul 2 08:08:12.381809 kubelet[2865]: I0702 08:08:12.381772 2865 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 08:08:12.381809 kubelet[2865]: I0702 08:08:12.381824 2865 
state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:08:12.391109 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 08:08:12.403033 kubelet[2865]: I0702 08:08:12.402941 2865 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-20-19" Jul 2 08:08:12.403493 kubelet[2865]: E0702 08:08:12.403444 2865 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.20.19:6443/api/v1/nodes\": dial tcp 172.31.20.19:6443: connect: connection refused" node="ip-172-31-20-19" Jul 2 08:08:12.410029 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 08:08:12.417215 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 2 08:08:12.428318 kubelet[2865]: I0702 08:08:12.427653 2865 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:08:12.428318 kubelet[2865]: I0702 08:08:12.428073 2865 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:08:12.429313 kubelet[2865]: E0702 08:08:12.429265 2865 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-19\" not found" Jul 2 08:08:12.433300 kubelet[2865]: I0702 08:08:12.433263 2865 topology_manager.go:215] "Topology Admit Handler" podUID="ef643070e3c6b0a01f0a13537f0ceae2" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-19" Jul 2 08:08:12.436859 kubelet[2865]: I0702 08:08:12.436786 2865 topology_manager.go:215] "Topology Admit Handler" podUID="43f81513cc73c0e9e543e5b2b16146f7" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-19" Jul 2 08:08:12.439769 kubelet[2865]: I0702 08:08:12.439511 2865 topology_manager.go:215] "Topology Admit Handler" podUID="4e01883ba7c4a22495d9e8d6053a513c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-19" Jul 2 08:08:12.453826 systemd[1]: Created slice kubepods-burstable-podef643070e3c6b0a01f0a13537f0ceae2.slice - libcontainer container kubepods-burstable-podef643070e3c6b0a01f0a13537f0ceae2.slice. Jul 2 08:08:12.474373 systemd[1]: Created slice kubepods-burstable-pod4e01883ba7c4a22495d9e8d6053a513c.slice - libcontainer container kubepods-burstable-pod4e01883ba7c4a22495d9e8d6053a513c.slice. Jul 2 08:08:12.486348 systemd[1]: Created slice kubepods-burstable-pod43f81513cc73c0e9e543e5b2b16146f7.slice - libcontainer container kubepods-burstable-pod43f81513cc73c0e9e543e5b2b16146f7.slice. 
Jul 2 08:08:12.498739 kubelet[2865]: E0702 08:08:12.498677 2865 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-19?timeout=10s\": dial tcp 172.31.20.19:6443: connect: connection refused" interval="400ms" Jul 2 08:08:12.501607 kubelet[2865]: I0702 08:08:12.500993 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e01883ba7c4a22495d9e8d6053a513c-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"4e01883ba7c4a22495d9e8d6053a513c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19" Jul 2 08:08:12.501607 kubelet[2865]: I0702 08:08:12.501049 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e01883ba7c4a22495d9e8d6053a513c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"4e01883ba7c4a22495d9e8d6053a513c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19" Jul 2 08:08:12.501607 kubelet[2865]: I0702 08:08:12.501100 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43f81513cc73c0e9e543e5b2b16146f7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-19\" (UID: \"43f81513cc73c0e9e543e5b2b16146f7\") " pod="kube-system/kube-apiserver-ip-172-31-20-19" Jul 2 08:08:12.501607 kubelet[2865]: I0702 08:08:12.501144 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43f81513cc73c0e9e543e5b2b16146f7-ca-certs\") pod \"kube-apiserver-ip-172-31-20-19\" (UID: \"43f81513cc73c0e9e543e5b2b16146f7\") " pod="kube-system/kube-apiserver-ip-172-31-20-19" Jul 2 08:08:12.501607 kubelet[2865]: I0702 08:08:12.501185 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43f81513cc73c0e9e543e5b2b16146f7-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-19\" (UID: \"43f81513cc73c0e9e543e5b2b16146f7\") " pod="kube-system/kube-apiserver-ip-172-31-20-19" Jul 2 08:08:12.501919 kubelet[2865]: I0702 08:08:12.501232 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4e01883ba7c4a22495d9e8d6053a513c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"4e01883ba7c4a22495d9e8d6053a513c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19" Jul 2 08:08:12.501919 kubelet[2865]: I0702 08:08:12.501273 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e01883ba7c4a22495d9e8d6053a513c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"4e01883ba7c4a22495d9e8d6053a513c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19" Jul 2 08:08:12.501919 kubelet[2865]: I0702 08:08:12.501334 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e01883ba7c4a22495d9e8d6053a513c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: 
\"4e01883ba7c4a22495d9e8d6053a513c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19" Jul 2 08:08:12.501919 kubelet[2865]: I0702 08:08:12.501380 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef643070e3c6b0a01f0a13537f0ceae2-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-19\" (UID: \"ef643070e3c6b0a01f0a13537f0ceae2\") " pod="kube-system/kube-scheduler-ip-172-31-20-19" Jul 2 08:08:12.605625 kubelet[2865]: I0702 08:08:12.605582 2865 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-20-19" Jul 2 08:08:12.606217 kubelet[2865]: E0702 08:08:12.606187 2865 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.20.19:6443/api/v1/nodes\": dial tcp 172.31.20.19:6443: connect: connection refused" node="ip-172-31-20-19" Jul 2 08:08:12.769178 containerd[2020]: time="2024-07-02T08:08:12.768964441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-19,Uid:ef643070e3c6b0a01f0a13537f0ceae2,Namespace:kube-system,Attempt:0,}" Jul 2 08:08:12.787152 containerd[2020]: time="2024-07-02T08:08:12.786940081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-19,Uid:4e01883ba7c4a22495d9e8d6053a513c,Namespace:kube-system,Attempt:0,}" Jul 2 08:08:12.792332 containerd[2020]: time="2024-07-02T08:08:12.792145165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-19,Uid:43f81513cc73c0e9e543e5b2b16146f7,Namespace:kube-system,Attempt:0,}" Jul 2 08:08:12.899927 kubelet[2865]: E0702 08:08:12.899818 2865 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-19?timeout=10s\": dial tcp 172.31.20.19:6443: connect: connection refused" interval="800ms" Jul 2 08:08:13.008827 kubelet[2865]: I0702 08:08:13.008781 2865 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-20-19" Jul 2 08:08:13.009331 kubelet[2865]: E0702 08:08:13.009295 2865 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.20.19:6443/api/v1/nodes\": dial tcp 172.31.20.19:6443: connect: connection refused" node="ip-172-31-20-19" Jul 2 08:08:13.464362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472277003.mount: Deactivated successfully. 
Jul 2 08:08:13.473481 containerd[2020]: time="2024-07-02T08:08:13.473389597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:08:13.475207 containerd[2020]: time="2024-07-02T08:08:13.475134829Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:08:13.477065 containerd[2020]: time="2024-07-02T08:08:13.476998525Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:08:13.478117 containerd[2020]: time="2024-07-02T08:08:13.478052005Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 2 08:08:13.480672 containerd[2020]: time="2024-07-02T08:08:13.480169933Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:08:13.482590 containerd[2020]: time="2024-07-02T08:08:13.482393689Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:08:13.482722 containerd[2020]: time="2024-07-02T08:08:13.482637337Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:08:13.489823 containerd[2020]: time="2024-07-02T08:08:13.489638125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:08:13.492786 containerd[2020]: time="2024-07-02T08:08:13.492497005Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 700.200064ms" Jul 2 08:08:13.496220 containerd[2020]: time="2024-07-02T08:08:13.496135801Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 708.917968ms" Jul 2 08:08:13.513738 containerd[2020]: time="2024-07-02T08:08:13.513668605Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 744.561244ms" Jul 2 08:08:13.589752 kubelet[2865]: W0702 08:08:13.589469 2865 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:13.589752 
kubelet[2865]: E0702 08:08:13.589538 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:13.700688 kubelet[2865]: E0702 08:08:13.700628 2865 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-19?timeout=10s\": dial tcp 172.31.20.19:6443: connect: connection refused" interval="1.6s" Jul 2 08:08:13.722400 containerd[2020]: time="2024-07-02T08:08:13.720624362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:08:13.722400 containerd[2020]: time="2024-07-02T08:08:13.720741854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:13.722400 containerd[2020]: time="2024-07-02T08:08:13.720784022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:08:13.722400 containerd[2020]: time="2024-07-02T08:08:13.720818690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:13.723238 containerd[2020]: time="2024-07-02T08:08:13.723090542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:08:13.725463 containerd[2020]: time="2024-07-02T08:08:13.725275034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:08:13.725744 containerd[2020]: time="2024-07-02T08:08:13.725584754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:13.727478 containerd[2020]: time="2024-07-02T08:08:13.727055246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:13.727478 containerd[2020]: time="2024-07-02T08:08:13.727143158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:08:13.727478 containerd[2020]: time="2024-07-02T08:08:13.727169822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:13.731194 containerd[2020]: time="2024-07-02T08:08:13.730831334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:08:13.731194 containerd[2020]: time="2024-07-02T08:08:13.730925642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:13.780557 systemd[1]: Started cri-containerd-b2b7a69d96ffefa644720c11bd66f3f71b5b0e3fb82305bbcdfec4fc6388082e.scope - libcontainer container b2b7a69d96ffefa644720c11bd66f3f71b5b0e3fb82305bbcdfec4fc6388082e. 
Jul 2 08:08:13.798611 systemd[1]: Started cri-containerd-d326a1eaebb03f0e1811996a75921dff4c24ae663fa2879a0146ad5b8d5603ec.scope - libcontainer container d326a1eaebb03f0e1811996a75921dff4c24ae663fa2879a0146ad5b8d5603ec. Jul 2 08:08:13.809596 systemd[1]: Started cri-containerd-ef0d0d689cc6e421f5d4ed11253f8fea6d59afdd6d93d4ed08ab02e5a31cf759.scope - libcontainer container ef0d0d689cc6e421f5d4ed11253f8fea6d59afdd6d93d4ed08ab02e5a31cf759. Jul 2 08:08:13.814748 kubelet[2865]: I0702 08:08:13.814601 2865 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-20-19" Jul 2 08:08:13.816702 kubelet[2865]: E0702 08:08:13.816408 2865 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.20.19:6443/api/v1/nodes\": dial tcp 172.31.20.19:6443: connect: connection refused" node="ip-172-31-20-19" Jul 2 08:08:13.823923 kubelet[2865]: W0702 08:08:13.823708 2865 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.20.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:13.823923 kubelet[2865]: E0702 08:08:13.823802 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:13.846724 kubelet[2865]: W0702 08:08:13.846599 2865 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:13.846851 kubelet[2865]: E0702 08:08:13.846734 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:13.857369 kubelet[2865]: W0702 08:08:13.857281 2865 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-19&limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:13.857369 kubelet[2865]: E0702 08:08:13.857375 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-19&limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:13.924659 containerd[2020]: time="2024-07-02T08:08:13.924501423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-19,Uid:ef643070e3c6b0a01f0a13537f0ceae2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2b7a69d96ffefa644720c11bd66f3f71b5b0e3fb82305bbcdfec4fc6388082e\"" Jul 2 08:08:13.935325 containerd[2020]: time="2024-07-02T08:08:13.934452639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-19,Uid:43f81513cc73c0e9e543e5b2b16146f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d326a1eaebb03f0e1811996a75921dff4c24ae663fa2879a0146ad5b8d5603ec\"" Jul 2 08:08:13.936033 containerd[2020]: time="2024-07-02T08:08:13.935688423Z" 
level=info msg="CreateContainer within sandbox \"b2b7a69d96ffefa644720c11bd66f3f71b5b0e3fb82305bbcdfec4fc6388082e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 08:08:13.950385 containerd[2020]: time="2024-07-02T08:08:13.950160303Z" level=info msg="CreateContainer within sandbox \"d326a1eaebb03f0e1811996a75921dff4c24ae663fa2879a0146ad5b8d5603ec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 08:08:13.961089 containerd[2020]: time="2024-07-02T08:08:13.961023327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-19,Uid:4e01883ba7c4a22495d9e8d6053a513c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef0d0d689cc6e421f5d4ed11253f8fea6d59afdd6d93d4ed08ab02e5a31cf759\"" Jul 2 08:08:13.967192 containerd[2020]: time="2024-07-02T08:08:13.967132383Z" level=info msg="CreateContainer within sandbox \"ef0d0d689cc6e421f5d4ed11253f8fea6d59afdd6d93d4ed08ab02e5a31cf759\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 08:08:13.996285 containerd[2020]: time="2024-07-02T08:08:13.995904855Z" level=info msg="CreateContainer within sandbox \"d326a1eaebb03f0e1811996a75921dff4c24ae663fa2879a0146ad5b8d5603ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0ed1875d06ae7261b314c3882352d5608dcaff10705d418257d5881f73f7462a\"" Jul 2 08:08:13.998551 containerd[2020]: time="2024-07-02T08:08:13.998224227Z" level=info msg="StartContainer for \"0ed1875d06ae7261b314c3882352d5608dcaff10705d418257d5881f73f7462a\"" Jul 2 08:08:13.999536 containerd[2020]: time="2024-07-02T08:08:13.999484083Z" level=info msg="CreateContainer within sandbox \"b2b7a69d96ffefa644720c11bd66f3f71b5b0e3fb82305bbcdfec4fc6388082e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f5e344c4aa86d66dbbd6209825936287f4103f927905bbc4a615398a4b36d84b\"" Jul 2 08:08:14.001896 containerd[2020]: time="2024-07-02T08:08:14.001829051Z" level=info msg="StartContainer for \"f5e344c4aa86d66dbbd6209825936287f4103f927905bbc4a615398a4b36d84b\"" Jul 2 08:08:14.006443 containerd[2020]: time="2024-07-02T08:08:14.006244031Z" level=info msg="CreateContainer within sandbox \"ef0d0d689cc6e421f5d4ed11253f8fea6d59afdd6d93d4ed08ab02e5a31cf759\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1d682c3f775b4d1b7e362340b0c915e8f3aa3d07b0a4495acc6371aae64c7828\"" Jul 2 08:08:14.007941 containerd[2020]: time="2024-07-02T08:08:14.007001171Z" level=info msg="StartContainer for \"1d682c3f775b4d1b7e362340b0c915e8f3aa3d07b0a4495acc6371aae64c7828\"" Jul 2 08:08:14.068315 systemd[1]: Started cri-containerd-f5e344c4aa86d66dbbd6209825936287f4103f927905bbc4a615398a4b36d84b.scope - libcontainer container f5e344c4aa86d66dbbd6209825936287f4103f927905bbc4a615398a4b36d84b. Jul 2 08:08:14.091210 systemd[1]: Started cri-containerd-0ed1875d06ae7261b314c3882352d5608dcaff10705d418257d5881f73f7462a.scope - libcontainer container 0ed1875d06ae7261b314c3882352d5608dcaff10705d418257d5881f73f7462a. Jul 2 08:08:14.122065 systemd[1]: Started cri-containerd-1d682c3f775b4d1b7e362340b0c915e8f3aa3d07b0a4495acc6371aae64c7828.scope - libcontainer container 1d682c3f775b4d1b7e362340b0c915e8f3aa3d07b0a4495acc6371aae64c7828. 
Jul 2 08:08:14.190419 containerd[2020]: time="2024-07-02T08:08:14.190344264Z" level=info msg="StartContainer for \"f5e344c4aa86d66dbbd6209825936287f4103f927905bbc4a615398a4b36d84b\" returns successfully" Jul 2 08:08:14.247291 containerd[2020]: time="2024-07-02T08:08:14.246494293Z" level=info msg="StartContainer for \"0ed1875d06ae7261b314c3882352d5608dcaff10705d418257d5881f73f7462a\" returns successfully" Jul 2 08:08:14.253944 containerd[2020]: time="2024-07-02T08:08:14.253842145Z" level=info msg="StartContainer for \"1d682c3f775b4d1b7e362340b0c915e8f3aa3d07b0a4495acc6371aae64c7828\" returns successfully" Jul 2 08:08:14.340926 kubelet[2865]: E0702 08:08:14.340317 2865 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.19:6443: connect: connection refused Jul 2 08:08:15.419389 kubelet[2865]: I0702 08:08:15.419320 2865 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-20-19" Jul 2 08:08:17.338945 update_engine[1995]: I0702 08:08:17.337926 1995 update_attempter.cc:509] Updating boot flags... Jul 2 08:08:17.489937 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3153) Jul 2 08:08:18.273670 kubelet[2865]: I0702 08:08:18.273613 2865 apiserver.go:52] "Watching apiserver" Jul 2 08:08:18.315269 kubelet[2865]: E0702 08:08:18.315212 2865 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-19\" not found" node="ip-172-31-20-19" Jul 2 08:08:18.350009 kubelet[2865]: I0702 08:08:18.349956 2865 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-20-19" Jul 2 08:08:18.398216 kubelet[2865]: I0702 08:08:18.398168 2865 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:08:18.419729 kubelet[2865]: E0702 08:08:18.419593 2865 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-20-19.17de56ef04c54517", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-20-19", UID:"ip-172-31-20-19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-20-19"}, FirstTimestamp:time.Date(2024, time.July, 2, 8, 8, 12, 279317783, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 8, 8, 12, 279317783, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-20-19"}': 'namespaces "default" not found' (will not retry!) 
Jul 2 08:08:18.491847 kubelet[2865]: E0702 08:08:18.491690 2865 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-20-19.17de56ef053da40f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-20-19", UID:"ip-172-31-20-19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-20-19"}, FirstTimestamp:time.Date(2024, time.July, 2, 8, 8, 12, 287206415, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 8, 8, 12, 287206415, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-20-19"}': 'namespaces "default" not found' (will not retry!) Jul 2 08:08:21.116316 systemd[1]: Reloading requested from client PID 3237 ('systemctl') (unit session-9.scope)... Jul 2 08:08:21.116352 systemd[1]: Reloading... Jul 2 08:08:21.287952 zram_generator::config[3276]: No configuration found. Jul 2 08:08:21.591758 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:08:21.815622 systemd[1]: Reloading finished in 698 ms. Jul 2 08:08:21.909974 kubelet[2865]: I0702 08:08:21.909856 2865 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:08:21.910359 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:08:21.926853 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:08:21.927462 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:08:21.927569 systemd[1]: kubelet.service: Consumed 2.101s CPU time, 114.1M memory peak, 0B memory swap peak. Jul 2 08:08:21.937638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:08:22.394338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:08:22.403537 (kubelet)[3335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 08:08:22.543973 kubelet[3335]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:08:22.543973 kubelet[3335]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:08:22.543973 kubelet[3335]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:08:22.543973 kubelet[3335]: I0702 08:08:22.543980 3335 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:08:22.552994 kubelet[3335]: I0702 08:08:22.552859 3335 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 08:08:22.552994 kubelet[3335]: I0702 08:08:22.552989 3335 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:08:22.553410 kubelet[3335]: I0702 08:08:22.553342 3335 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 08:08:22.556629 kubelet[3335]: I0702 08:08:22.556559 3335 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 08:08:22.559402 kubelet[3335]: I0702 08:08:22.559037 3335 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:08:22.562356 sudo[3347]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 08:08:22.564116 sudo[3347]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 08:08:22.575120 kubelet[3335]: W0702 08:08:22.574777 3335 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 08:08:22.577952 kubelet[3335]: I0702 08:08:22.577364 3335 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 08:08:22.578612 kubelet[3335]: I0702 08:08:22.578558 3335 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:08:22.579303 kubelet[3335]: I0702 08:08:22.578863 3335 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:08:22.579303 kubelet[3335]: I0702 08:08:22.578969 3335 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:08:22.579303 kubelet[3335]: I0702 08:08:22.578992 3335 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:08:22.579303 kubelet[3335]: I0702 08:08:22.579063 
3335 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:08:22.579303 kubelet[3335]: I0702 08:08:22.579262 3335 kubelet.go:393] "Attempting to sync node with API server" Jul 2 08:08:22.584037 kubelet[3335]: I0702 08:08:22.580538 3335 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:08:22.584037 kubelet[3335]: I0702 08:08:22.580640 3335 kubelet.go:309] "Adding apiserver pod source" Jul 2 08:08:22.584037 kubelet[3335]: I0702 08:08:22.582947 3335 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:08:22.584935 kubelet[3335]: I0702 08:08:22.584528 3335 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 08:08:22.585559 kubelet[3335]: I0702 08:08:22.585488 3335 server.go:1232] "Started kubelet" Jul 2 08:08:22.593543 kubelet[3335]: I0702 08:08:22.593235 3335 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:08:22.599817 kubelet[3335]: E0702 08:08:22.599759 3335 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 08:08:22.599817 kubelet[3335]: E0702 08:08:22.599824 3335 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:08:22.607823 kubelet[3335]: I0702 08:08:22.607771 3335 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:08:22.608688 kubelet[3335]: I0702 08:08:22.608626 3335 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 08:08:22.611176 kubelet[3335]: I0702 08:08:22.608995 3335 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 08:08:22.611176 kubelet[3335]: I0702 08:08:22.610617 3335 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:08:22.613479 kubelet[3335]: I0702 08:08:22.613412 3335 server.go:462] "Adding debug handlers to kubelet server" Jul 2 08:08:22.615900 kubelet[3335]: I0702 08:08:22.615214 3335 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 08:08:22.615900 kubelet[3335]: I0702 08:08:22.615542 3335 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:08:22.692701 kubelet[3335]: I0702 08:08:22.689027 3335 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:08:22.693018 kubelet[3335]: I0702 08:08:22.692965 3335 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 08:08:22.693018 kubelet[3335]: I0702 08:08:22.693017 3335 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:08:22.693158 kubelet[3335]: I0702 08:08:22.693051 3335 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 08:08:22.693158 kubelet[3335]: E0702 08:08:22.693136 3335 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:08:22.739824 kubelet[3335]: I0702 08:08:22.739393 3335 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-20-19" Jul 2 08:08:22.792691 kubelet[3335]: I0702 08:08:22.791807 3335 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-20-19" Jul 2 08:08:22.792691 kubelet[3335]: I0702 08:08:22.792553 3335 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-20-19" Jul 2 08:08:22.797239 kubelet[3335]: E0702 08:08:22.797159 3335 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 08:08:22.924353 kubelet[3335]: I0702 08:08:22.924286 3335 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:08:22.924353 kubelet[3335]: I0702 08:08:22.924335 3335 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:08:22.924557 kubelet[3335]: I0702 08:08:22.924371 3335 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:08:22.925457 kubelet[3335]: I0702 08:08:22.924724 3335 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 08:08:22.925457 kubelet[3335]: I0702 08:08:22.924792 3335 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 08:08:22.925457 kubelet[3335]: I0702 08:08:22.924812 3335 policy_none.go:49] "None policy: Start" Jul 2 08:08:22.927834 kubelet[3335]: I0702 08:08:22.927287 3335 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 08:08:22.927834 kubelet[3335]: I0702 08:08:22.927350 3335 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:08:22.927834 kubelet[3335]: I0702 08:08:22.927668 3335 state_mem.go:75] "Updated machine memory state" Jul 2 08:08:22.946111 kubelet[3335]: I0702 08:08:22.942048 3335 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:08:22.946111 kubelet[3335]: I0702 08:08:22.944540 3335 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:08:22.997928 kubelet[3335]: I0702 08:08:22.997867 3335 topology_manager.go:215] "Topology Admit Handler" podUID="43f81513cc73c0e9e543e5b2b16146f7" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-19" Jul 2 08:08:23.000111 kubelet[3335]: I0702 08:08:22.998154 3335 topology_manager.go:215] "Topology Admit Handler" podUID="4e01883ba7c4a22495d9e8d6053a513c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-19" Jul 2 08:08:23.000111 kubelet[3335]: I0702 08:08:22.998249 3335 topology_manager.go:215] "Topology Admit Handler" podUID="ef643070e3c6b0a01f0a13537f0ceae2" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-19" Jul 2 08:08:23.018813 kubelet[3335]: I0702 08:08:23.018749 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef643070e3c6b0a01f0a13537f0ceae2-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-19\" (UID: \"ef643070e3c6b0a01f0a13537f0ceae2\") " 
pod="kube-system/kube-scheduler-ip-172-31-20-19" Jul 2 08:08:23.018971 kubelet[3335]: I0702 08:08:23.018845 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43f81513cc73c0e9e543e5b2b16146f7-ca-certs\") pod \"kube-apiserver-ip-172-31-20-19\" (UID: \"43f81513cc73c0e9e543e5b2b16146f7\") " pod="kube-system/kube-apiserver-ip-172-31-20-19" Jul 2 08:08:23.018971 kubelet[3335]: I0702 08:08:23.018929 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43f81513cc73c0e9e543e5b2b16146f7-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-19\" (UID: \"43f81513cc73c0e9e543e5b2b16146f7\") " pod="kube-system/kube-apiserver-ip-172-31-20-19" Jul 2 08:08:23.019128 kubelet[3335]: I0702 08:08:23.018989 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43f81513cc73c0e9e543e5b2b16146f7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-19\" (UID: \"43f81513cc73c0e9e543e5b2b16146f7\") " pod="kube-system/kube-apiserver-ip-172-31-20-19" Jul 2 08:08:23.019128 kubelet[3335]: I0702 08:08:23.019055 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e01883ba7c4a22495d9e8d6053a513c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"4e01883ba7c4a22495d9e8d6053a513c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19" Jul 2 08:08:23.019128 kubelet[3335]: I0702 08:08:23.019108 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e01883ba7c4a22495d9e8d6053a513c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"4e01883ba7c4a22495d9e8d6053a513c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19" Jul 2 08:08:23.019293 kubelet[3335]: I0702 08:08:23.019152 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e01883ba7c4a22495d9e8d6053a513c-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"4e01883ba7c4a22495d9e8d6053a513c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19" Jul 2 08:08:23.019293 kubelet[3335]: I0702 08:08:23.019201 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4e01883ba7c4a22495d9e8d6053a513c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"4e01883ba7c4a22495d9e8d6053a513c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19" Jul 2 08:08:23.019293 kubelet[3335]: I0702 08:08:23.019260 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e01883ba7c4a22495d9e8d6053a513c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"4e01883ba7c4a22495d9e8d6053a513c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19" Jul 2 08:08:23.541372 sudo[3347]: pam_unix(sudo:session): session closed for user root Jul 2 08:08:23.585938 kubelet[3335]: I0702 08:08:23.584445 3335 apiserver.go:52] "Watching apiserver" Jul 2 
08:08:23.609249 kubelet[3335]: I0702 08:08:23.609120 3335 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:08:23.840016 kubelet[3335]: E0702 08:08:23.838294 3335 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-19\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-19" Jul 2 08:08:23.881203 kubelet[3335]: I0702 08:08:23.880722 3335 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-19" podStartSLOduration=0.88059396 podCreationTimestamp="2024-07-02 08:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:08:23.880088016 +0000 UTC m=+1.461766280" watchObservedRunningTime="2024-07-02 08:08:23.88059396 +0000 UTC m=+1.462272224" Jul 2 08:08:23.882077 kubelet[3335]: I0702 08:08:23.881440 3335 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-19" podStartSLOduration=0.881355372 podCreationTimestamp="2024-07-02 08:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:08:23.865444548 +0000 UTC m=+1.447122836" watchObservedRunningTime="2024-07-02 08:08:23.881355372 +0000 UTC m=+1.463033636" Jul 2 08:08:23.896425 kubelet[3335]: I0702 08:08:23.896363 3335 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-19" podStartSLOduration=0.896281489 podCreationTimestamp="2024-07-02 08:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:08:23.895743169 +0000 UTC m=+1.477421445" watchObservedRunningTime="2024-07-02 08:08:23.896281489 +0000 UTC m=+1.477959765" Jul 2 08:08:25.462379 sudo[2363]: pam_unix(sudo:session): session closed for user root Jul 2 08:08:25.487678 sshd[2360]: pam_unix(sshd:session): session closed for user core Jul 2 08:08:25.494508 systemd[1]: sshd@8-172.31.20.19:22-139.178.89.65:34234.service: Deactivated successfully. Jul 2 08:08:25.499369 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 08:08:25.500092 systemd[1]: session-9.scope: Consumed 10.367s CPU time, 134.2M memory peak, 0B memory swap peak. Jul 2 08:08:25.502142 systemd-logind[1991]: Session 9 logged out. Waiting for processes to exit. Jul 2 08:08:25.504386 systemd-logind[1991]: Removed session 9. Jul 2 08:08:34.304375 kubelet[3335]: I0702 08:08:34.304316 3335 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 08:08:34.305266 containerd[2020]: time="2024-07-02T08:08:34.305209988Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 08:08:34.306187 kubelet[3335]: I0702 08:08:34.306139 3335 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 08:08:35.178903 kubelet[3335]: I0702 08:08:35.178816 3335 topology_manager.go:215] "Topology Admit Handler" podUID="af632c04-aa15-463b-9e30-8eeb76585a78" podNamespace="kube-system" podName="kube-proxy-mbc2l" Jul 2 08:08:35.193404 kubelet[3335]: W0702 08:08:35.193327 3335 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-20-19" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-19' and this object Jul 2 08:08:35.193404 kubelet[3335]: E0702 08:08:35.193397 3335 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-20-19" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-19' and this object Jul 2 08:08:35.200860 systemd[1]: Created slice kubepods-besteffort-podaf632c04_aa15_463b_9e30_8eeb76585a78.slice - libcontainer container kubepods-besteffort-podaf632c04_aa15_463b_9e30_8eeb76585a78.slice. Jul 2 08:08:35.235411 kubelet[3335]: I0702 08:08:35.235333 3335 topology_manager.go:215] "Topology Admit Handler" podUID="45e13d6e-49bc-45b6-aab7-c45f816454fc" podNamespace="kube-system" podName="cilium-fs4qg" Jul 2 08:08:35.257801 systemd[1]: Created slice kubepods-burstable-pod45e13d6e_49bc_45b6_aab7_c45f816454fc.slice - libcontainer container kubepods-burstable-pod45e13d6e_49bc_45b6_aab7_c45f816454fc.slice. Jul 2 08:08:35.301486 kubelet[3335]: I0702 08:08:35.301417 3335 topology_manager.go:215] "Topology Admit Handler" podUID="946dadc0-ac26-4ef2-99af-e44d18ed7686" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-7xn9g" Jul 2 08:08:35.302725 kubelet[3335]: I0702 08:08:35.302664 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-cilium-run\") pod \"cilium-fs4qg\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " pod="kube-system/cilium-fs4qg" Jul 2 08:08:35.302933 kubelet[3335]: I0702 08:08:35.302755 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-bpf-maps\") pod \"cilium-fs4qg\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " pod="kube-system/cilium-fs4qg" Jul 2 08:08:35.302933 kubelet[3335]: I0702 08:08:35.302807 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-cni-path\") pod \"cilium-fs4qg\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " pod="kube-system/cilium-fs4qg" Jul 2 08:08:35.302933 kubelet[3335]: I0702 08:08:35.302860 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45e13d6e-49bc-45b6-aab7-c45f816454fc-clustermesh-secrets\") pod \"cilium-fs4qg\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " pod="kube-system/cilium-fs4qg" Jul 2 08:08:35.303138 kubelet[3335]: I0702 08:08:35.302940 3335 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af632c04-aa15-463b-9e30-8eeb76585a78-kube-proxy\") pod \"kube-proxy-mbc2l\" (UID: \"af632c04-aa15-463b-9e30-8eeb76585a78\") " pod="kube-system/kube-proxy-mbc2l" Jul 2 08:08:35.303138 kubelet[3335]: I0702 08:08:35.302989 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-host-proc-sys-net\") pod \"cilium-fs4qg\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " pod="kube-system/cilium-fs4qg" Jul 2 08:08:35.303138 kubelet[3335]: I0702 08:08:35.303039 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c8rn\" (UniqueName: \"kubernetes.io/projected/45e13d6e-49bc-45b6-aab7-c45f816454fc-kube-api-access-5c8rn\") pod \"cilium-fs4qg\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " pod="kube-system/cilium-fs4qg" Jul 2 08:08:35.303138 kubelet[3335]: I0702 08:08:35.303087 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-xtables-lock\") pod \"cilium-fs4qg\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " pod="kube-system/cilium-fs4qg" Jul 2 08:08:35.303138 kubelet[3335]: I0702 08:08:35.303136 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt5hw\" (UniqueName: \"kubernetes.io/projected/af632c04-aa15-463b-9e30-8eeb76585a78-kube-api-access-nt5hw\") pod \"kube-proxy-mbc2l\" (UID: \"af632c04-aa15-463b-9e30-8eeb76585a78\") " pod="kube-system/kube-proxy-mbc2l" Jul 2 08:08:35.303412 kubelet[3335]: I0702 08:08:35.303182 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45e13d6e-49bc-45b6-aab7-c45f816454fc-cilium-config-path\") pod \"cilium-fs4qg\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " pod="kube-system/cilium-fs4qg" Jul 2 08:08:35.303412 kubelet[3335]: I0702 08:08:35.303229 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-host-proc-sys-kernel\") pod \"cilium-fs4qg\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " pod="kube-system/cilium-fs4qg" Jul 2 08:08:35.303412 kubelet[3335]: I0702 08:08:35.303272 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-lib-modules\") pod \"cilium-fs4qg\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " pod="kube-system/cilium-fs4qg" Jul 2 08:08:35.303412 kubelet[3335]: I0702 08:08:35.303322 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-hostproc\") pod \"cilium-fs4qg\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " pod="kube-system/cilium-fs4qg" Jul 2 08:08:35.303412 kubelet[3335]: I0702 08:08:35.303366 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-cilium-cgroup\") pod \"cilium-fs4qg\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " pod="kube-system/cilium-fs4qg" Jul 2 08:08:35.303675 kubelet[3335]: I0702 08:08:35.303414 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-etc-cni-netd\") pod \"cilium-fs4qg\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " pod="kube-system/cilium-fs4qg" Jul 2 08:08:35.303675 kubelet[3335]: I0702 08:08:35.303463 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af632c04-aa15-463b-9e30-8eeb76585a78-lib-modules\") pod \"kube-proxy-mbc2l\" (UID: \"af632c04-aa15-463b-9e30-8eeb76585a78\") " pod="kube-system/kube-proxy-mbc2l" Jul 2 08:08:35.303675 kubelet[3335]: I0702 08:08:35.303511 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45e13d6e-49bc-45b6-aab7-c45f816454fc-hubble-tls\") pod \"cilium-fs4qg\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " pod="kube-system/cilium-fs4qg" Jul 2 08:08:35.303675 kubelet[3335]: I0702 08:08:35.303563 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af632c04-aa15-463b-9e30-8eeb76585a78-xtables-lock\") pod \"kube-proxy-mbc2l\" (UID: \"af632c04-aa15-463b-9e30-8eeb76585a78\") " pod="kube-system/kube-proxy-mbc2l" Jul 2 08:08:35.325947 systemd[1]: Created slice kubepods-besteffort-pod946dadc0_ac26_4ef2_99af_e44d18ed7686.slice - libcontainer container kubepods-besteffort-pod946dadc0_ac26_4ef2_99af_e44d18ed7686.slice. Jul 2 08:08:35.404273 kubelet[3335]: I0702 08:08:35.404132 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm5jx\" (UniqueName: \"kubernetes.io/projected/946dadc0-ac26-4ef2-99af-e44d18ed7686-kube-api-access-nm5jx\") pod \"cilium-operator-6bc8ccdb58-7xn9g\" (UID: \"946dadc0-ac26-4ef2-99af-e44d18ed7686\") " pod="kube-system/cilium-operator-6bc8ccdb58-7xn9g" Jul 2 08:08:35.411941 kubelet[3335]: I0702 08:08:35.406746 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/946dadc0-ac26-4ef2-99af-e44d18ed7686-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-7xn9g\" (UID: \"946dadc0-ac26-4ef2-99af-e44d18ed7686\") " pod="kube-system/cilium-operator-6bc8ccdb58-7xn9g" Jul 2 08:08:35.570435 containerd[2020]: time="2024-07-02T08:08:35.570211943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fs4qg,Uid:45e13d6e-49bc-45b6-aab7-c45f816454fc,Namespace:kube-system,Attempt:0,}" Jul 2 08:08:35.623177 containerd[2020]: time="2024-07-02T08:08:35.623004047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:08:35.623491 containerd[2020]: time="2024-07-02T08:08:35.623133071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:35.623491 containerd[2020]: time="2024-07-02T08:08:35.623180399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:08:35.623491 containerd[2020]: time="2024-07-02T08:08:35.623239739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:35.637182 containerd[2020]: time="2024-07-02T08:08:35.637106711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-7xn9g,Uid:946dadc0-ac26-4ef2-99af-e44d18ed7686,Namespace:kube-system,Attempt:0,}" Jul 2 08:08:35.665235 systemd[1]: Started cri-containerd-f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f.scope - libcontainer container f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f. Jul 2 08:08:35.704517 containerd[2020]: time="2024-07-02T08:08:35.703712387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:08:35.704517 containerd[2020]: time="2024-07-02T08:08:35.704023175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:35.704517 containerd[2020]: time="2024-07-02T08:08:35.704082515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:08:35.704517 containerd[2020]: time="2024-07-02T08:08:35.704140055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:35.745157 containerd[2020]: time="2024-07-02T08:08:35.745061831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fs4qg,Uid:45e13d6e-49bc-45b6-aab7-c45f816454fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\"" Jul 2 08:08:35.751043 containerd[2020]: time="2024-07-02T08:08:35.750502355Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 08:08:35.762315 systemd[1]: Started cri-containerd-5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957.scope - libcontainer container 5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957. Jul 2 08:08:35.842403 containerd[2020]: time="2024-07-02T08:08:35.842198208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-7xn9g,Uid:946dadc0-ac26-4ef2-99af-e44d18ed7686,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957\"" Jul 2 08:08:36.113912 containerd[2020]: time="2024-07-02T08:08:36.113016273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mbc2l,Uid:af632c04-aa15-463b-9e30-8eeb76585a78,Namespace:kube-system,Attempt:0,}" Jul 2 08:08:36.153999 containerd[2020]: time="2024-07-02T08:08:36.153673785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:08:36.154914 containerd[2020]: time="2024-07-02T08:08:36.154693725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:36.155233 containerd[2020]: time="2024-07-02T08:08:36.154782393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:08:36.155233 containerd[2020]: time="2024-07-02T08:08:36.154866693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:36.196571 systemd[1]: Started cri-containerd-f79f40a18c791385f51a02d2c2127cbbe0c0383846cab73de8185bf70e8bb308.scope - libcontainer container f79f40a18c791385f51a02d2c2127cbbe0c0383846cab73de8185bf70e8bb308. Jul 2 08:08:36.245382 containerd[2020]: time="2024-07-02T08:08:36.245319118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mbc2l,Uid:af632c04-aa15-463b-9e30-8eeb76585a78,Namespace:kube-system,Attempt:0,} returns sandbox id \"f79f40a18c791385f51a02d2c2127cbbe0c0383846cab73de8185bf70e8bb308\"" Jul 2 08:08:36.253056 containerd[2020]: time="2024-07-02T08:08:36.252961954Z" level=info msg="CreateContainer within sandbox \"f79f40a18c791385f51a02d2c2127cbbe0c0383846cab73de8185bf70e8bb308\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 08:08:36.278018 containerd[2020]: time="2024-07-02T08:08:36.277917274Z" level=info msg="CreateContainer within sandbox \"f79f40a18c791385f51a02d2c2127cbbe0c0383846cab73de8185bf70e8bb308\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07320d177c29c290cf4d03f112c1b6712d8371a896efbfc3004161b984add0ab\"" Jul 2 08:08:36.278973 containerd[2020]: time="2024-07-02T08:08:36.278865886Z" level=info msg="StartContainer for \"07320d177c29c290cf4d03f112c1b6712d8371a896efbfc3004161b984add0ab\"" Jul 2 08:08:36.333187 systemd[1]: Started cri-containerd-07320d177c29c290cf4d03f112c1b6712d8371a896efbfc3004161b984add0ab.scope - libcontainer container 07320d177c29c290cf4d03f112c1b6712d8371a896efbfc3004161b984add0ab. Jul 2 08:08:36.416156 containerd[2020]: time="2024-07-02T08:08:36.416045123Z" level=info msg="StartContainer for \"07320d177c29c290cf4d03f112c1b6712d8371a896efbfc3004161b984add0ab\" returns successfully" Jul 2 08:08:41.398855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2322938810.mount: Deactivated successfully. 
Jul 2 08:08:44.173944 containerd[2020]: time="2024-07-02T08:08:44.173629097Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:44.175223 containerd[2020]: time="2024-07-02T08:08:44.175168361Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651546" Jul 2 08:08:44.177932 containerd[2020]: time="2024-07-02T08:08:44.177842705Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:44.180675 containerd[2020]: time="2024-07-02T08:08:44.180547901Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.429966934s" Jul 2 08:08:44.180675 containerd[2020]: time="2024-07-02T08:08:44.180614225Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 2 08:08:44.183013 containerd[2020]: time="2024-07-02T08:08:44.182509733Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 08:08:44.187900 containerd[2020]: time="2024-07-02T08:08:44.187426565Z" level=info msg="CreateContainer within sandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:08:44.218449 containerd[2020]: time="2024-07-02T08:08:44.218224805Z" level=info msg="CreateContainer within sandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac\"" Jul 2 08:08:44.221231 containerd[2020]: time="2024-07-02T08:08:44.219716141Z" level=info msg="StartContainer for \"b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac\"" Jul 2 08:08:44.293488 systemd[1]: Started cri-containerd-b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac.scope - libcontainer container b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac. Jul 2 08:08:44.372388 containerd[2020]: time="2024-07-02T08:08:44.371644638Z" level=info msg="StartContainer for \"b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac\" returns successfully" Jul 2 08:08:44.399199 systemd[1]: cri-containerd-b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac.scope: Deactivated successfully. 
Jul 2 08:08:44.939513 kubelet[3335]: I0702 08:08:44.937941 3335 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mbc2l" podStartSLOduration=9.937859433 podCreationTimestamp="2024-07-02 08:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:08:36.891496273 +0000 UTC m=+14.473174537" watchObservedRunningTime="2024-07-02 08:08:44.937859433 +0000 UTC m=+22.519537685" Jul 2 08:08:45.207540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac-rootfs.mount: Deactivated successfully. Jul 2 08:08:45.776111 containerd[2020]: time="2024-07-02T08:08:45.776012829Z" level=info msg="shim disconnected" id=b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac namespace=k8s.io Jul 2 08:08:45.776111 containerd[2020]: time="2024-07-02T08:08:45.776104569Z" level=warning msg="cleaning up after shim disconnected" id=b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac namespace=k8s.io Jul 2 08:08:45.777125 containerd[2020]: time="2024-07-02T08:08:45.776127033Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:08:45.919942 containerd[2020]: time="2024-07-02T08:08:45.919800922Z" level=info msg="CreateContainer within sandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:08:45.950648 containerd[2020]: time="2024-07-02T08:08:45.950561950Z" level=info msg="CreateContainer within sandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9\"" Jul 2 08:08:45.953105 containerd[2020]: time="2024-07-02T08:08:45.952940134Z" level=info msg="StartContainer for \"c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9\"" Jul 2 08:08:46.025345 systemd[1]: Started cri-containerd-c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9.scope - libcontainer container c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9. Jul 2 08:08:46.093469 containerd[2020]: time="2024-07-02T08:08:46.090593335Z" level=info msg="StartContainer for \"c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9\" returns successfully" Jul 2 08:08:46.120037 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:08:46.120599 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:08:46.120728 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:08:46.129554 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:08:46.133943 systemd[1]: cri-containerd-c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9.scope: Deactivated successfully. Jul 2 08:08:46.180412 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:08:46.211249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount344457524.mount: Deactivated successfully. Jul 2 08:08:46.220722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9-rootfs.mount: Deactivated successfully. 
Jul 2 08:08:46.257341 containerd[2020]: time="2024-07-02T08:08:46.257228996Z" level=info msg="shim disconnected" id=c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9 namespace=k8s.io Jul 2 08:08:46.257623 containerd[2020]: time="2024-07-02T08:08:46.257360852Z" level=warning msg="cleaning up after shim disconnected" id=c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9 namespace=k8s.io Jul 2 08:08:46.257623 containerd[2020]: time="2024-07-02T08:08:46.257384372Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:08:46.879895 containerd[2020]: time="2024-07-02T08:08:46.879810575Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:46.881573 containerd[2020]: time="2024-07-02T08:08:46.881490479Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138278" Jul 2 08:08:46.883179 containerd[2020]: time="2024-07-02T08:08:46.883104155Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:46.887256 containerd[2020]: time="2024-07-02T08:08:46.886445699Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.703867806s" Jul 2 08:08:46.887256 containerd[2020]: time="2024-07-02T08:08:46.886518959Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 2 08:08:46.891717 containerd[2020]: time="2024-07-02T08:08:46.891639947Z" level=info msg="CreateContainer within sandbox \"5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 08:08:46.924190 containerd[2020]: time="2024-07-02T08:08:46.924117683Z" level=info msg="CreateContainer within sandbox \"5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6\"" Jul 2 08:08:46.925733 containerd[2020]: time="2024-07-02T08:08:46.925627715Z" level=info msg="StartContainer for \"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6\"" Jul 2 08:08:46.936813 containerd[2020]: time="2024-07-02T08:08:46.936702767Z" level=info msg="CreateContainer within sandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:08:47.003410 systemd[1]: Started cri-containerd-46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6.scope - libcontainer container 46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6. 
Jul 2 08:08:47.007717 containerd[2020]: time="2024-07-02T08:08:47.007435723Z" level=info msg="CreateContainer within sandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c\"" Jul 2 08:08:47.010687 containerd[2020]: time="2024-07-02T08:08:47.009472975Z" level=info msg="StartContainer for \"d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c\"" Jul 2 08:08:47.076672 systemd[1]: Started cri-containerd-d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c.scope - libcontainer container d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c. Jul 2 08:08:47.087515 containerd[2020]: time="2024-07-02T08:08:47.087341696Z" level=info msg="StartContainer for \"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6\" returns successfully" Jul 2 08:08:47.142106 containerd[2020]: time="2024-07-02T08:08:47.141840032Z" level=info msg="StartContainer for \"d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c\" returns successfully" Jul 2 08:08:47.149756 systemd[1]: cri-containerd-d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c.scope: Deactivated successfully. Jul 2 08:08:47.437718 containerd[2020]: time="2024-07-02T08:08:47.437119929Z" level=info msg="shim disconnected" id=d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c namespace=k8s.io Jul 2 08:08:47.437718 containerd[2020]: time="2024-07-02T08:08:47.437200965Z" level=warning msg="cleaning up after shim disconnected" id=d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c namespace=k8s.io Jul 2 08:08:47.437718 containerd[2020]: time="2024-07-02T08:08:47.437247825Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:08:47.943679 containerd[2020]: time="2024-07-02T08:08:47.943389972Z" level=info msg="CreateContainer within sandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:08:47.988287 containerd[2020]: time="2024-07-02T08:08:47.987985020Z" level=info msg="CreateContainer within sandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d\"" Jul 2 08:08:47.988287 containerd[2020]: time="2024-07-02T08:08:47.988948824Z" level=info msg="StartContainer for \"4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d\"" Jul 2 08:08:48.115224 systemd[1]: Started cri-containerd-4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d.scope - libcontainer container 4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d. Jul 2 08:08:48.235393 containerd[2020]: time="2024-07-02T08:08:48.233572125Z" level=info msg="StartContainer for \"4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d\" returns successfully" Jul 2 08:08:48.233764 systemd[1]: cri-containerd-4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d.scope: Deactivated successfully. Jul 2 08:08:48.296030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d-rootfs.mount: Deactivated successfully. 
Jul 2 08:08:48.311316 containerd[2020]: time="2024-07-02T08:08:48.310759426Z" level=info msg="shim disconnected" id=4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d namespace=k8s.io Jul 2 08:08:48.312273 containerd[2020]: time="2024-07-02T08:08:48.311941726Z" level=warning msg="cleaning up after shim disconnected" id=4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d namespace=k8s.io Jul 2 08:08:48.312273 containerd[2020]: time="2024-07-02T08:08:48.312002170Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:08:48.315957 kubelet[3335]: I0702 08:08:48.315913 3335 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-7xn9g" podStartSLOduration=2.274726067 podCreationTimestamp="2024-07-02 08:08:35 +0000 UTC" firstStartedPulling="2024-07-02 08:08:35.845728236 +0000 UTC m=+13.427406476" lastFinishedPulling="2024-07-02 08:08:46.886815899 +0000 UTC m=+24.468494151" observedRunningTime="2024-07-02 08:08:48.160782441 +0000 UTC m=+25.742460789" watchObservedRunningTime="2024-07-02 08:08:48.315813742 +0000 UTC m=+25.897492006" Jul 2 08:08:48.956337 containerd[2020]: time="2024-07-02T08:08:48.956077621Z" level=info msg="CreateContainer within sandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:08:49.003469 containerd[2020]: time="2024-07-02T08:08:49.002532513Z" level=info msg="CreateContainer within sandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303\"" Jul 2 08:08:49.004454 containerd[2020]: time="2024-07-02T08:08:49.004376721Z" level=info msg="StartContainer for \"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303\"" Jul 2 08:08:49.095216 systemd[1]: Started cri-containerd-ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303.scope - libcontainer container ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303. Jul 2 08:08:49.151955 containerd[2020]: time="2024-07-02T08:08:49.151831618Z" level=info msg="StartContainer for \"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303\" returns successfully" Jul 2 08:08:49.374016 kubelet[3335]: I0702 08:08:49.373738 3335 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 08:08:49.468134 kubelet[3335]: I0702 08:08:49.466504 3335 topology_manager.go:215] "Topology Admit Handler" podUID="0a5f95bf-0581-4009-be86-0f6d377b6506" podNamespace="kube-system" podName="coredns-5dd5756b68-f57bl" Jul 2 08:08:49.480523 kubelet[3335]: I0702 08:08:49.480455 3335 topology_manager.go:215] "Topology Admit Handler" podUID="498928c2-622f-4968-ba46-8ccf94f4a2bf" podNamespace="kube-system" podName="coredns-5dd5756b68-4w4v8" Jul 2 08:08:49.489045 systemd[1]: Created slice kubepods-burstable-pod0a5f95bf_0581_4009_be86_0f6d377b6506.slice - libcontainer container kubepods-burstable-pod0a5f95bf_0581_4009_be86_0f6d377b6506.slice. Jul 2 08:08:49.509823 systemd[1]: Created slice kubepods-burstable-pod498928c2_622f_4968_ba46_8ccf94f4a2bf.slice - libcontainer container kubepods-burstable-pod498928c2_622f_4968_ba46_8ccf94f4a2bf.slice. 
Jul 2 08:08:49.529427 kubelet[3335]: I0702 08:08:49.529333 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxf7k\" (UniqueName: \"kubernetes.io/projected/0a5f95bf-0581-4009-be86-0f6d377b6506-kube-api-access-pxf7k\") pod \"coredns-5dd5756b68-f57bl\" (UID: \"0a5f95bf-0581-4009-be86-0f6d377b6506\") " pod="kube-system/coredns-5dd5756b68-f57bl" Jul 2 08:08:49.529427 kubelet[3335]: I0702 08:08:49.529436 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/498928c2-622f-4968-ba46-8ccf94f4a2bf-config-volume\") pod \"coredns-5dd5756b68-4w4v8\" (UID: \"498928c2-622f-4968-ba46-8ccf94f4a2bf\") " pod="kube-system/coredns-5dd5756b68-4w4v8" Jul 2 08:08:49.529781 kubelet[3335]: I0702 08:08:49.529494 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjmsx\" (UniqueName: \"kubernetes.io/projected/498928c2-622f-4968-ba46-8ccf94f4a2bf-kube-api-access-bjmsx\") pod \"coredns-5dd5756b68-4w4v8\" (UID: \"498928c2-622f-4968-ba46-8ccf94f4a2bf\") " pod="kube-system/coredns-5dd5756b68-4w4v8" Jul 2 08:08:49.529781 kubelet[3335]: I0702 08:08:49.529551 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a5f95bf-0581-4009-be86-0f6d377b6506-config-volume\") pod \"coredns-5dd5756b68-f57bl\" (UID: \"0a5f95bf-0581-4009-be86-0f6d377b6506\") " pod="kube-system/coredns-5dd5756b68-f57bl" Jul 2 08:08:49.799266 containerd[2020]: time="2024-07-02T08:08:49.799188337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-f57bl,Uid:0a5f95bf-0581-4009-be86-0f6d377b6506,Namespace:kube-system,Attempt:0,}" Jul 2 08:08:49.823254 containerd[2020]: time="2024-07-02T08:08:49.821522593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4w4v8,Uid:498928c2-622f-4968-ba46-8ccf94f4a2bf,Namespace:kube-system,Attempt:0,}" Jul 2 08:08:52.265806 systemd-networkd[1929]: cilium_host: Link UP Jul 2 08:08:52.266191 systemd-networkd[1929]: cilium_net: Link UP Jul 2 08:08:52.267097 (udev-worker)[4124]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:08:52.267098 (udev-worker)[4126]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:08:52.271437 systemd-networkd[1929]: cilium_net: Gained carrier Jul 2 08:08:52.273362 systemd-networkd[1929]: cilium_host: Gained carrier Jul 2 08:08:52.307587 systemd-networkd[1929]: cilium_host: Gained IPv6LL Jul 2 08:08:52.441376 (udev-worker)[4172]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:08:52.452153 systemd-networkd[1929]: cilium_vxlan: Link UP Jul 2 08:08:52.452173 systemd-networkd[1929]: cilium_vxlan: Gained carrier Jul 2 08:08:52.559192 systemd-networkd[1929]: cilium_net: Gained IPv6LL Jul 2 08:08:52.951357 kernel: NET: Registered PF_ALG protocol family Jul 2 08:08:53.967689 systemd-networkd[1929]: cilium_vxlan: Gained IPv6LL Jul 2 08:08:54.342381 (udev-worker)[4171]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 08:08:54.357409 systemd-networkd[1929]: lxc_health: Link UP Jul 2 08:08:54.358480 systemd-networkd[1929]: lxc_health: Gained carrier Jul 2 08:08:54.890522 kernel: eth0: renamed from tmp3dd82 Jul 2 08:08:54.894751 systemd-networkd[1929]: lxca65db6747fc4: Link UP Jul 2 08:08:54.905300 systemd-networkd[1929]: lxca65db6747fc4: Gained carrier Jul 2 08:08:54.944150 systemd-networkd[1929]: lxc16f2b46583cf: Link UP Jul 2 08:08:54.952992 (udev-worker)[4506]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:08:54.956243 kernel: eth0: renamed from tmpb94af Jul 2 08:08:54.964010 systemd-networkd[1929]: lxc16f2b46583cf: Gained carrier Jul 2 08:08:55.616569 kubelet[3335]: I0702 08:08:55.616275 3335 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fs4qg" podStartSLOduration=12.183330332 podCreationTimestamp="2024-07-02 08:08:35 +0000 UTC" firstStartedPulling="2024-07-02 08:08:35.748778807 +0000 UTC m=+13.330457047" lastFinishedPulling="2024-07-02 08:08:44.181668929 +0000 UTC m=+21.763347241" observedRunningTime="2024-07-02 08:08:49.999705614 +0000 UTC m=+27.581383902" watchObservedRunningTime="2024-07-02 08:08:55.616220526 +0000 UTC m=+33.197898802" Jul 2 08:08:56.143169 systemd-networkd[1929]: lxca65db6747fc4: Gained IPv6LL Jul 2 08:08:56.335622 systemd-networkd[1929]: lxc_health: Gained IPv6LL Jul 2 08:08:56.911361 systemd-networkd[1929]: lxc16f2b46583cf: Gained IPv6LL Jul 2 08:08:59.342698 systemd[1]: Started sshd@9-172.31.20.19:22-139.178.89.65:58410.service - OpenSSH per-connection server daemon (139.178.89.65:58410). Jul 2 08:08:59.396490 ntpd[1985]: Listen normally on 8 cilium_host 192.168.0.142:123 Jul 2 08:08:59.398591 ntpd[1985]: 2 Jul 08:08:59 ntpd[1985]: Listen normally on 8 cilium_host 192.168.0.142:123 Jul 2 08:08:59.398591 ntpd[1985]: 2 Jul 08:08:59 ntpd[1985]: Listen normally on 9 cilium_net [fe80::f8c0:feff:fede:a114%4]:123 Jul 2 08:08:59.398591 ntpd[1985]: 2 Jul 08:08:59 ntpd[1985]: Listen normally on 10 cilium_host [fe80::94c5:39ff:feb3:7697%5]:123 Jul 2 08:08:59.398591 ntpd[1985]: 2 Jul 08:08:59 ntpd[1985]: Listen normally on 11 cilium_vxlan [fe80::2c0b:e1ff:fe16:328e%6]:123 Jul 2 08:08:59.398591 ntpd[1985]: 2 Jul 08:08:59 ntpd[1985]: Listen normally on 12 lxc_health [fe80::2093:d1ff:fed7:2bfe%8]:123 Jul 2 08:08:59.398591 ntpd[1985]: 2 Jul 08:08:59 ntpd[1985]: Listen normally on 13 lxca65db6747fc4 [fe80::3079:faff:fe5c:62ca%10]:123 Jul 2 08:08:59.398591 ntpd[1985]: 2 Jul 08:08:59 ntpd[1985]: Listen normally on 14 lxc16f2b46583cf [fe80::a844:77ff:fe4f:fab0%12]:123 Jul 2 08:08:59.398081 ntpd[1985]: Listen normally on 9 cilium_net [fe80::f8c0:feff:fede:a114%4]:123 Jul 2 08:08:59.398166 ntpd[1985]: Listen normally on 10 cilium_host [fe80::94c5:39ff:feb3:7697%5]:123 Jul 2 08:08:59.398234 ntpd[1985]: Listen normally on 11 cilium_vxlan [fe80::2c0b:e1ff:fe16:328e%6]:123 Jul 2 08:08:59.398304 ntpd[1985]: Listen normally on 12 lxc_health [fe80::2093:d1ff:fed7:2bfe%8]:123 Jul 2 08:08:59.398372 ntpd[1985]: Listen normally on 13 lxca65db6747fc4 [fe80::3079:faff:fe5c:62ca%10]:123 Jul 2 08:08:59.398446 ntpd[1985]: Listen normally on 14 lxc16f2b46583cf [fe80::a844:77ff:fe4f:fab0%12]:123 Jul 2 08:08:59.540221 sshd[4527]: Accepted publickey for core from 139.178.89.65 port 58410 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:08:59.543241 sshd[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:08:59.559219 systemd-logind[1991]: New session 10 of user core. 
Jul 2 08:08:59.566218 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 08:08:59.875728 sshd[4527]: pam_unix(sshd:session): session closed for user core Jul 2 08:08:59.881661 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 08:08:59.886754 systemd[1]: sshd@9-172.31.20.19:22-139.178.89.65:58410.service: Deactivated successfully. Jul 2 08:08:59.894997 systemd-logind[1991]: Session 10 logged out. Waiting for processes to exit. Jul 2 08:08:59.900536 systemd-logind[1991]: Removed session 10. Jul 2 08:09:03.434513 containerd[2020]: time="2024-07-02T08:09:03.434350285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:09:03.435403 containerd[2020]: time="2024-07-02T08:09:03.434457937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:09:03.438965 containerd[2020]: time="2024-07-02T08:09:03.436147141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:09:03.438965 containerd[2020]: time="2024-07-02T08:09:03.438518713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:09:03.514027 systemd[1]: Started cri-containerd-b94af8c5cb36b0419a1a33710cde2b769f9cdeff2ff11fe04b1e4b3319563c67.scope - libcontainer container b94af8c5cb36b0419a1a33710cde2b769f9cdeff2ff11fe04b1e4b3319563c67. Jul 2 08:09:03.524317 containerd[2020]: time="2024-07-02T08:09:03.523523677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:09:03.524317 containerd[2020]: time="2024-07-02T08:09:03.523639789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:09:03.524317 containerd[2020]: time="2024-07-02T08:09:03.523717165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:09:03.524317 containerd[2020]: time="2024-07-02T08:09:03.523766773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:09:03.598242 systemd[1]: Started cri-containerd-3dd82c81d59f5d2ced08750cffea2bd5d472d6eec42ef488bd832e59b82936d8.scope - libcontainer container 3dd82c81d59f5d2ced08750cffea2bd5d472d6eec42ef488bd832e59b82936d8. 
Jul 2 08:09:03.685417 containerd[2020]: time="2024-07-02T08:09:03.684630638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4w4v8,Uid:498928c2-622f-4968-ba46-8ccf94f4a2bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"b94af8c5cb36b0419a1a33710cde2b769f9cdeff2ff11fe04b1e4b3319563c67\"" Jul 2 08:09:03.695915 containerd[2020]: time="2024-07-02T08:09:03.695824154Z" level=info msg="CreateContainer within sandbox \"b94af8c5cb36b0419a1a33710cde2b769f9cdeff2ff11fe04b1e4b3319563c67\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:09:03.725678 containerd[2020]: time="2024-07-02T08:09:03.725435402Z" level=info msg="CreateContainer within sandbox \"b94af8c5cb36b0419a1a33710cde2b769f9cdeff2ff11fe04b1e4b3319563c67\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ec7de0d5e28891573f811b2c03a127a3972cbf0581989eed91d16c8cfcb170f\"" Jul 2 08:09:03.729571 containerd[2020]: time="2024-07-02T08:09:03.727289870Z" level=info msg="StartContainer for \"8ec7de0d5e28891573f811b2c03a127a3972cbf0581989eed91d16c8cfcb170f\"" Jul 2 08:09:03.784547 containerd[2020]: time="2024-07-02T08:09:03.784419207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-f57bl,Uid:0a5f95bf-0581-4009-be86-0f6d377b6506,Namespace:kube-system,Attempt:0,} returns sandbox id \"3dd82c81d59f5d2ced08750cffea2bd5d472d6eec42ef488bd832e59b82936d8\"" Jul 2 08:09:03.792958 containerd[2020]: time="2024-07-02T08:09:03.792857823Z" level=info msg="CreateContainer within sandbox \"3dd82c81d59f5d2ced08750cffea2bd5d472d6eec42ef488bd832e59b82936d8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:09:03.826194 systemd[1]: Started cri-containerd-8ec7de0d5e28891573f811b2c03a127a3972cbf0581989eed91d16c8cfcb170f.scope - libcontainer container 8ec7de0d5e28891573f811b2c03a127a3972cbf0581989eed91d16c8cfcb170f. Jul 2 08:09:03.836857 containerd[2020]: time="2024-07-02T08:09:03.836792679Z" level=info msg="CreateContainer within sandbox \"3dd82c81d59f5d2ced08750cffea2bd5d472d6eec42ef488bd832e59b82936d8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae29daf896fd01b7dbca869047dfbbcaa0e4fb46c4956aceae6e446c3cf74bf2\"" Jul 2 08:09:03.840518 containerd[2020]: time="2024-07-02T08:09:03.838927467Z" level=info msg="StartContainer for \"ae29daf896fd01b7dbca869047dfbbcaa0e4fb46c4956aceae6e446c3cf74bf2\"" Jul 2 08:09:03.920480 containerd[2020]: time="2024-07-02T08:09:03.920208711Z" level=info msg="StartContainer for \"8ec7de0d5e28891573f811b2c03a127a3972cbf0581989eed91d16c8cfcb170f\" returns successfully" Jul 2 08:09:03.951022 systemd[1]: Started cri-containerd-ae29daf896fd01b7dbca869047dfbbcaa0e4fb46c4956aceae6e446c3cf74bf2.scope - libcontainer container ae29daf896fd01b7dbca869047dfbbcaa0e4fb46c4956aceae6e446c3cf74bf2. Jul 2 08:09:04.047522 containerd[2020]: time="2024-07-02T08:09:04.047365524Z" level=info msg="StartContainer for \"ae29daf896fd01b7dbca869047dfbbcaa0e4fb46c4956aceae6e446c3cf74bf2\" returns successfully" Jul 2 08:09:04.913476 systemd[1]: Started sshd@10-172.31.20.19:22-139.178.89.65:58412.service - OpenSSH per-connection server daemon (139.178.89.65:58412). 
Jul 2 08:09:05.066276 kubelet[3335]: I0702 08:09:05.065329 3335 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-f57bl" podStartSLOduration=30.064921345 podCreationTimestamp="2024-07-02 08:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:09:05.060492253 +0000 UTC m=+42.642170625" watchObservedRunningTime="2024-07-02 08:09:05.064921345 +0000 UTC m=+42.646599609" Jul 2 08:09:05.068171 kubelet[3335]: I0702 08:09:05.066964 3335 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-4w4v8" podStartSLOduration=30.066809401 podCreationTimestamp="2024-07-02 08:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:09:04.069280644 +0000 UTC m=+41.650958920" watchObservedRunningTime="2024-07-02 08:09:05.066809401 +0000 UTC m=+42.648487677" Jul 2 08:09:05.095752 sshd[4710]: Accepted publickey for core from 139.178.89.65 port 58412 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:05.102834 sshd[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:05.120410 systemd-logind[1991]: New session 11 of user core. Jul 2 08:09:05.126640 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 08:09:05.366858 sshd[4710]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:05.373657 systemd[1]: sshd@10-172.31.20.19:22-139.178.89.65:58412.service: Deactivated successfully. Jul 2 08:09:05.379978 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 08:09:05.381495 systemd-logind[1991]: Session 11 logged out. Waiting for processes to exit. Jul 2 08:09:05.384366 systemd-logind[1991]: Removed session 11. Jul 2 08:09:10.406428 systemd[1]: Started sshd@11-172.31.20.19:22-139.178.89.65:51934.service - OpenSSH per-connection server daemon (139.178.89.65:51934). Jul 2 08:09:10.592163 sshd[4736]: Accepted publickey for core from 139.178.89.65 port 51934 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:10.594798 sshd[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:10.603024 systemd-logind[1991]: New session 12 of user core. Jul 2 08:09:10.611194 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 08:09:10.854601 sshd[4736]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:10.861855 systemd[1]: sshd@11-172.31.20.19:22-139.178.89.65:51934.service: Deactivated successfully. Jul 2 08:09:10.866002 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 08:09:10.867406 systemd-logind[1991]: Session 12 logged out. Waiting for processes to exit. Jul 2 08:09:10.869497 systemd-logind[1991]: Removed session 12. Jul 2 08:09:15.895480 systemd[1]: Started sshd@12-172.31.20.19:22-139.178.89.65:51944.service - OpenSSH per-connection server daemon (139.178.89.65:51944). Jul 2 08:09:16.074672 sshd[4752]: Accepted publickey for core from 139.178.89.65 port 51944 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:16.080136 sshd[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:16.094404 systemd-logind[1991]: New session 13 of user core. Jul 2 08:09:16.105204 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 2 08:09:16.348213 sshd[4752]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:16.357191 systemd[1]: sshd@12-172.31.20.19:22-139.178.89.65:51944.service: Deactivated successfully. Jul 2 08:09:16.361722 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 08:09:16.363037 systemd-logind[1991]: Session 13 logged out. Waiting for processes to exit. Jul 2 08:09:16.364860 systemd-logind[1991]: Removed session 13. Jul 2 08:09:16.385418 systemd[1]: Started sshd@13-172.31.20.19:22-139.178.89.65:51950.service - OpenSSH per-connection server daemon (139.178.89.65:51950). Jul 2 08:09:16.562605 sshd[4766]: Accepted publickey for core from 139.178.89.65 port 51950 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:16.565358 sshd[4766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:16.573972 systemd-logind[1991]: New session 14 of user core. Jul 2 08:09:16.580162 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 08:09:18.138591 sshd[4766]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:18.149760 systemd[1]: sshd@13-172.31.20.19:22-139.178.89.65:51950.service: Deactivated successfully. Jul 2 08:09:18.150273 systemd-logind[1991]: Session 14 logged out. Waiting for processes to exit. Jul 2 08:09:18.159405 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 08:09:18.177913 systemd-logind[1991]: Removed session 14. Jul 2 08:09:18.186412 systemd[1]: Started sshd@14-172.31.20.19:22-139.178.89.65:39256.service - OpenSSH per-connection server daemon (139.178.89.65:39256). Jul 2 08:09:18.373716 sshd[4777]: Accepted publickey for core from 139.178.89.65 port 39256 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:18.376458 sshd[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:18.384444 systemd-logind[1991]: New session 15 of user core. Jul 2 08:09:18.392175 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 08:09:18.649550 sshd[4777]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:18.657791 systemd-logind[1991]: Session 15 logged out. Waiting for processes to exit. Jul 2 08:09:18.659231 systemd[1]: sshd@14-172.31.20.19:22-139.178.89.65:39256.service: Deactivated successfully. Jul 2 08:09:18.663511 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 08:09:18.667321 systemd-logind[1991]: Removed session 15. Jul 2 08:09:23.695402 systemd[1]: Started sshd@15-172.31.20.19:22-139.178.89.65:39268.service - OpenSSH per-connection server daemon (139.178.89.65:39268). Jul 2 08:09:23.869711 sshd[4792]: Accepted publickey for core from 139.178.89.65 port 39268 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:23.872341 sshd[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:23.879774 systemd-logind[1991]: New session 16 of user core. Jul 2 08:09:23.885146 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 08:09:24.121784 sshd[4792]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:24.126848 systemd[1]: sshd@15-172.31.20.19:22-139.178.89.65:39268.service: Deactivated successfully. Jul 2 08:09:24.132062 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 08:09:24.135481 systemd-logind[1991]: Session 16 logged out. Waiting for processes to exit. Jul 2 08:09:24.137815 systemd-logind[1991]: Removed session 16. 
Jul 2 08:09:29.165413 systemd[1]: Started sshd@16-172.31.20.19:22-139.178.89.65:34374.service - OpenSSH per-connection server daemon (139.178.89.65:34374). Jul 2 08:09:29.340939 sshd[4806]: Accepted publickey for core from 139.178.89.65 port 34374 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:29.343582 sshd[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:29.352629 systemd-logind[1991]: New session 17 of user core. Jul 2 08:09:29.361248 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 08:09:29.599792 sshd[4806]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:29.605728 systemd[1]: sshd@16-172.31.20.19:22-139.178.89.65:34374.service: Deactivated successfully. Jul 2 08:09:29.609035 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 08:09:29.612780 systemd-logind[1991]: Session 17 logged out. Waiting for processes to exit. Jul 2 08:09:29.615249 systemd-logind[1991]: Removed session 17. Jul 2 08:09:34.644221 systemd[1]: Started sshd@17-172.31.20.19:22-139.178.89.65:34376.service - OpenSSH per-connection server daemon (139.178.89.65:34376). Jul 2 08:09:34.829035 sshd[4819]: Accepted publickey for core from 139.178.89.65 port 34376 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:34.831716 sshd[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:34.841444 systemd-logind[1991]: New session 18 of user core. Jul 2 08:09:34.847182 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 08:09:35.085125 sshd[4819]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:35.091939 systemd[1]: sshd@17-172.31.20.19:22-139.178.89.65:34376.service: Deactivated successfully. Jul 2 08:09:35.096816 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 08:09:35.099681 systemd-logind[1991]: Session 18 logged out. Waiting for processes to exit. Jul 2 08:09:35.102429 systemd-logind[1991]: Removed session 18. Jul 2 08:09:40.123445 systemd[1]: Started sshd@18-172.31.20.19:22-139.178.89.65:33966.service - OpenSSH per-connection server daemon (139.178.89.65:33966). Jul 2 08:09:40.296536 sshd[4836]: Accepted publickey for core from 139.178.89.65 port 33966 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:40.299055 sshd[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:40.308245 systemd-logind[1991]: New session 19 of user core. Jul 2 08:09:40.315336 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 08:09:40.554607 sshd[4836]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:40.561296 systemd[1]: sshd@18-172.31.20.19:22-139.178.89.65:33966.service: Deactivated successfully. Jul 2 08:09:40.565991 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 08:09:40.567560 systemd-logind[1991]: Session 19 logged out. Waiting for processes to exit. Jul 2 08:09:40.569980 systemd-logind[1991]: Removed session 19. Jul 2 08:09:40.600382 systemd[1]: Started sshd@19-172.31.20.19:22-139.178.89.65:33978.service - OpenSSH per-connection server daemon (139.178.89.65:33978). Jul 2 08:09:40.769246 sshd[4849]: Accepted publickey for core from 139.178.89.65 port 33978 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:40.771743 sshd[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:40.780618 systemd-logind[1991]: New session 20 of user core. 
Jul 2 08:09:40.788189 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 08:09:41.081198 sshd[4849]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:41.089547 systemd[1]: sshd@19-172.31.20.19:22-139.178.89.65:33978.service: Deactivated successfully. Jul 2 08:09:41.094120 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 08:09:41.096102 systemd-logind[1991]: Session 20 logged out. Waiting for processes to exit. Jul 2 08:09:41.098949 systemd-logind[1991]: Removed session 20. Jul 2 08:09:41.123426 systemd[1]: Started sshd@20-172.31.20.19:22-139.178.89.65:33994.service - OpenSSH per-connection server daemon (139.178.89.65:33994). Jul 2 08:09:41.296520 sshd[4860]: Accepted publickey for core from 139.178.89.65 port 33994 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:41.299116 sshd[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:41.306730 systemd-logind[1991]: New session 21 of user core. Jul 2 08:09:41.316201 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 08:09:42.574182 sshd[4860]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:42.583431 systemd[1]: sshd@20-172.31.20.19:22-139.178.89.65:33994.service: Deactivated successfully. Jul 2 08:09:42.592796 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 08:09:42.605975 systemd-logind[1991]: Session 21 logged out. Waiting for processes to exit. Jul 2 08:09:42.631077 systemd[1]: Started sshd@21-172.31.20.19:22-139.178.89.65:33998.service - OpenSSH per-connection server daemon (139.178.89.65:33998). Jul 2 08:09:42.633225 systemd-logind[1991]: Removed session 21. Jul 2 08:09:42.806466 sshd[4878]: Accepted publickey for core from 139.178.89.65 port 33998 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:42.809208 sshd[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:42.818721 systemd-logind[1991]: New session 22 of user core. Jul 2 08:09:42.825230 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 08:09:43.437185 sshd[4878]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:43.445471 systemd[1]: sshd@21-172.31.20.19:22-139.178.89.65:33998.service: Deactivated successfully. Jul 2 08:09:43.451498 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 08:09:43.453602 systemd-logind[1991]: Session 22 logged out. Waiting for processes to exit. Jul 2 08:09:43.458205 systemd-logind[1991]: Removed session 22. Jul 2 08:09:43.478470 systemd[1]: Started sshd@22-172.31.20.19:22-139.178.89.65:34014.service - OpenSSH per-connection server daemon (139.178.89.65:34014). Jul 2 08:09:43.662024 sshd[4889]: Accepted publickey for core from 139.178.89.65 port 34014 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:43.664628 sshd[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:43.673215 systemd-logind[1991]: New session 23 of user core. Jul 2 08:09:43.684434 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 08:09:43.919016 sshd[4889]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:43.925740 systemd[1]: sshd@22-172.31.20.19:22-139.178.89.65:34014.service: Deactivated successfully. Jul 2 08:09:43.930160 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 08:09:43.932536 systemd-logind[1991]: Session 23 logged out. Waiting for processes to exit. 
Jul 2 08:09:43.935072 systemd-logind[1991]: Removed session 23. Jul 2 08:09:48.962437 systemd[1]: Started sshd@23-172.31.20.19:22-139.178.89.65:41180.service - OpenSSH per-connection server daemon (139.178.89.65:41180). Jul 2 08:09:49.135365 sshd[4902]: Accepted publickey for core from 139.178.89.65 port 41180 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:49.138133 sshd[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:49.147310 systemd-logind[1991]: New session 24 of user core. Jul 2 08:09:49.155196 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 08:09:49.395602 sshd[4902]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:49.401182 systemd[1]: sshd@23-172.31.20.19:22-139.178.89.65:41180.service: Deactivated successfully. Jul 2 08:09:49.406149 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 08:09:49.410161 systemd-logind[1991]: Session 24 logged out. Waiting for processes to exit. Jul 2 08:09:49.412523 systemd-logind[1991]: Removed session 24. Jul 2 08:09:54.435435 systemd[1]: Started sshd@24-172.31.20.19:22-139.178.89.65:41188.service - OpenSSH per-connection server daemon (139.178.89.65:41188). Jul 2 08:09:54.607069 sshd[4918]: Accepted publickey for core from 139.178.89.65 port 41188 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:54.609722 sshd[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:54.619419 systemd-logind[1991]: New session 25 of user core. Jul 2 08:09:54.628233 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 08:09:54.870259 sshd[4918]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:54.875900 systemd[1]: sshd@24-172.31.20.19:22-139.178.89.65:41188.service: Deactivated successfully. Jul 2 08:09:54.879989 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 08:09:54.883499 systemd-logind[1991]: Session 25 logged out. Waiting for processes to exit. Jul 2 08:09:54.886176 systemd-logind[1991]: Removed session 25. Jul 2 08:09:59.920393 systemd[1]: Started sshd@25-172.31.20.19:22-139.178.89.65:52016.service - OpenSSH per-connection server daemon (139.178.89.65:52016). Jul 2 08:10:00.104220 sshd[4931]: Accepted publickey for core from 139.178.89.65 port 52016 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:10:00.106823 sshd[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:10:00.115781 systemd-logind[1991]: New session 26 of user core. Jul 2 08:10:00.128330 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 08:10:00.372720 sshd[4931]: pam_unix(sshd:session): session closed for user core Jul 2 08:10:00.377571 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 08:10:00.381166 systemd[1]: sshd@25-172.31.20.19:22-139.178.89.65:52016.service: Deactivated successfully. Jul 2 08:10:00.386212 systemd-logind[1991]: Session 26 logged out. Waiting for processes to exit. Jul 2 08:10:00.388652 systemd-logind[1991]: Removed session 26. Jul 2 08:10:05.412441 systemd[1]: Started sshd@26-172.31.20.19:22-139.178.89.65:52024.service - OpenSSH per-connection server daemon (139.178.89.65:52024). 
Jul 2 08:10:05.590837 sshd[4944]: Accepted publickey for core from 139.178.89.65 port 52024 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:10:05.593506 sshd[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:10:05.600951 systemd-logind[1991]: New session 27 of user core. Jul 2 08:10:05.612246 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 2 08:10:05.845335 sshd[4944]: pam_unix(sshd:session): session closed for user core Jul 2 08:10:05.852329 systemd[1]: sshd@26-172.31.20.19:22-139.178.89.65:52024.service: Deactivated successfully. Jul 2 08:10:05.857473 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 08:10:05.859073 systemd-logind[1991]: Session 27 logged out. Waiting for processes to exit. Jul 2 08:10:05.861001 systemd-logind[1991]: Removed session 27. Jul 2 08:10:05.887446 systemd[1]: Started sshd@27-172.31.20.19:22-139.178.89.65:52038.service - OpenSSH per-connection server daemon (139.178.89.65:52038). Jul 2 08:10:06.067279 sshd[4957]: Accepted publickey for core from 139.178.89.65 port 52038 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:10:06.069908 sshd[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:10:06.078270 systemd-logind[1991]: New session 28 of user core. Jul 2 08:10:06.086173 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 2 08:10:09.409371 systemd[1]: run-containerd-runc-k8s.io-ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303-runc.RiRHIE.mount: Deactivated successfully. Jul 2 08:10:09.422418 containerd[2020]: time="2024-07-02T08:10:09.422290241Z" level=info msg="StopContainer for \"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6\" with timeout 30 (s)" Jul 2 08:10:09.429549 containerd[2020]: time="2024-07-02T08:10:09.429056777Z" level=info msg="Stop container \"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6\" with signal terminated" Jul 2 08:10:09.435701 containerd[2020]: time="2024-07-02T08:10:09.435513485Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:10:09.450406 containerd[2020]: time="2024-07-02T08:10:09.450257465Z" level=info msg="StopContainer for \"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303\" with timeout 2 (s)" Jul 2 08:10:09.452181 containerd[2020]: time="2024-07-02T08:10:09.452104613Z" level=info msg="Stop container \"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303\" with signal terminated" Jul 2 08:10:09.456451 systemd[1]: cri-containerd-46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6.scope: Deactivated successfully. Jul 2 08:10:09.479496 systemd-networkd[1929]: lxc_health: Link DOWN Jul 2 08:10:09.479515 systemd-networkd[1929]: lxc_health: Lost carrier Jul 2 08:10:09.516731 systemd[1]: cri-containerd-ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303.scope: Deactivated successfully. Jul 2 08:10:09.517302 systemd[1]: cri-containerd-ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303.scope: Consumed 14.456s CPU time. Jul 2 08:10:09.525023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6-rootfs.mount: Deactivated successfully. 
Jul 2 08:10:09.548240 containerd[2020]: time="2024-07-02T08:10:09.548112905Z" level=info msg="shim disconnected" id=46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6 namespace=k8s.io Jul 2 08:10:09.548849 containerd[2020]: time="2024-07-02T08:10:09.548603081Z" level=warning msg="cleaning up after shim disconnected" id=46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6 namespace=k8s.io Jul 2 08:10:09.548849 containerd[2020]: time="2024-07-02T08:10:09.548644241Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:10:09.575305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303-rootfs.mount: Deactivated successfully. Jul 2 08:10:09.591850 containerd[2020]: time="2024-07-02T08:10:09.591771522Z" level=info msg="StopContainer for \"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6\" returns successfully" Jul 2 08:10:09.593331 containerd[2020]: time="2024-07-02T08:10:09.592914378Z" level=info msg="shim disconnected" id=ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303 namespace=k8s.io Jul 2 08:10:09.593331 containerd[2020]: time="2024-07-02T08:10:09.593056782Z" level=warning msg="cleaning up after shim disconnected" id=ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303 namespace=k8s.io Jul 2 08:10:09.593331 containerd[2020]: time="2024-07-02T08:10:09.593107998Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:10:09.593331 containerd[2020]: time="2024-07-02T08:10:09.593230242Z" level=info msg="StopPodSandbox for \"5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957\"" Jul 2 08:10:09.593331 containerd[2020]: time="2024-07-02T08:10:09.593284266Z" level=info msg="Container to stop \"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:10:09.599480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957-shm.mount: Deactivated successfully. Jul 2 08:10:09.610951 systemd[1]: cri-containerd-5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957.scope: Deactivated successfully. 
Jul 2 08:10:09.639780 containerd[2020]: time="2024-07-02T08:10:09.639503286Z" level=info msg="StopContainer for \"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303\" returns successfully" Jul 2 08:10:09.641353 containerd[2020]: time="2024-07-02T08:10:09.641215038Z" level=info msg="StopPodSandbox for \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\"" Jul 2 08:10:09.641771 containerd[2020]: time="2024-07-02T08:10:09.641316606Z" level=info msg="Container to stop \"d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:10:09.641771 containerd[2020]: time="2024-07-02T08:10:09.641548158Z" level=info msg="Container to stop \"b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:10:09.641771 containerd[2020]: time="2024-07-02T08:10:09.641576634Z" level=info msg="Container to stop \"c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:10:09.641771 containerd[2020]: time="2024-07-02T08:10:09.641600334Z" level=info msg="Container to stop \"4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:10:09.641771 containerd[2020]: time="2024-07-02T08:10:09.641623950Z" level=info msg="Container to stop \"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:10:09.656216 systemd[1]: cri-containerd-f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f.scope: Deactivated successfully. 
Jul 2 08:10:09.685037 containerd[2020]: time="2024-07-02T08:10:09.683486778Z" level=info msg="shim disconnected" id=5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957 namespace=k8s.io Jul 2 08:10:09.685037 containerd[2020]: time="2024-07-02T08:10:09.683586666Z" level=warning msg="cleaning up after shim disconnected" id=5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957 namespace=k8s.io Jul 2 08:10:09.685037 containerd[2020]: time="2024-07-02T08:10:09.683615478Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:10:09.708987 containerd[2020]: time="2024-07-02T08:10:09.708871662Z" level=info msg="shim disconnected" id=f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f namespace=k8s.io Jul 2 08:10:09.708987 containerd[2020]: time="2024-07-02T08:10:09.708982554Z" level=warning msg="cleaning up after shim disconnected" id=f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f namespace=k8s.io Jul 2 08:10:09.708987 containerd[2020]: time="2024-07-02T08:10:09.709006170Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:10:09.723564 containerd[2020]: time="2024-07-02T08:10:09.723095718Z" level=info msg="TearDown network for sandbox \"5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957\" successfully" Jul 2 08:10:09.723564 containerd[2020]: time="2024-07-02T08:10:09.723154434Z" level=info msg="StopPodSandbox for \"5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957\" returns successfully" Jul 2 08:10:09.750872 containerd[2020]: time="2024-07-02T08:10:09.750784602Z" level=info msg="TearDown network for sandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" successfully" Jul 2 08:10:09.750872 containerd[2020]: time="2024-07-02T08:10:09.750840630Z" level=info msg="StopPodSandbox for \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" returns successfully" Jul 2 08:10:09.832931 kubelet[3335]: I0702 08:10:09.830315 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-hostproc\") pod \"45e13d6e-49bc-45b6-aab7-c45f816454fc\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " Jul 2 08:10:09.832931 kubelet[3335]: I0702 08:10:09.830396 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-host-proc-sys-net\") pod \"45e13d6e-49bc-45b6-aab7-c45f816454fc\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " Jul 2 08:10:09.832931 kubelet[3335]: I0702 08:10:09.830438 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-lib-modules\") pod \"45e13d6e-49bc-45b6-aab7-c45f816454fc\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " Jul 2 08:10:09.832931 kubelet[3335]: I0702 08:10:09.830440 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-hostproc" (OuterVolumeSpecName: "hostproc") pod "45e13d6e-49bc-45b6-aab7-c45f816454fc" (UID: "45e13d6e-49bc-45b6-aab7-c45f816454fc"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:10:09.832931 kubelet[3335]: I0702 08:10:09.830490 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/946dadc0-ac26-4ef2-99af-e44d18ed7686-cilium-config-path\") pod \"946dadc0-ac26-4ef2-99af-e44d18ed7686\" (UID: \"946dadc0-ac26-4ef2-99af-e44d18ed7686\") " Jul 2 08:10:09.833747 kubelet[3335]: I0702 08:10:09.830506 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "45e13d6e-49bc-45b6-aab7-c45f816454fc" (UID: "45e13d6e-49bc-45b6-aab7-c45f816454fc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:10:09.833747 kubelet[3335]: I0702 08:10:09.830539 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45e13d6e-49bc-45b6-aab7-c45f816454fc-clustermesh-secrets\") pod \"45e13d6e-49bc-45b6-aab7-c45f816454fc\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " Jul 2 08:10:09.833747 kubelet[3335]: I0702 08:10:09.830549 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "45e13d6e-49bc-45b6-aab7-c45f816454fc" (UID: "45e13d6e-49bc-45b6-aab7-c45f816454fc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:10:09.833747 kubelet[3335]: I0702 08:10:09.830585 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c8rn\" (UniqueName: \"kubernetes.io/projected/45e13d6e-49bc-45b6-aab7-c45f816454fc-kube-api-access-5c8rn\") pod \"45e13d6e-49bc-45b6-aab7-c45f816454fc\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " Jul 2 08:10:09.833747 kubelet[3335]: I0702 08:10:09.830626 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-cilium-cgroup\") pod \"45e13d6e-49bc-45b6-aab7-c45f816454fc\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " Jul 2 08:10:09.835090 kubelet[3335]: I0702 08:10:09.830670 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm5jx\" (UniqueName: \"kubernetes.io/projected/946dadc0-ac26-4ef2-99af-e44d18ed7686-kube-api-access-nm5jx\") pod \"946dadc0-ac26-4ef2-99af-e44d18ed7686\" (UID: \"946dadc0-ac26-4ef2-99af-e44d18ed7686\") " Jul 2 08:10:09.835090 kubelet[3335]: I0702 08:10:09.830714 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45e13d6e-49bc-45b6-aab7-c45f816454fc-cilium-config-path\") pod \"45e13d6e-49bc-45b6-aab7-c45f816454fc\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " Jul 2 08:10:09.835090 kubelet[3335]: I0702 08:10:09.830755 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45e13d6e-49bc-45b6-aab7-c45f816454fc-hubble-tls\") pod \"45e13d6e-49bc-45b6-aab7-c45f816454fc\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " Jul 2 08:10:09.835090 kubelet[3335]: I0702 08:10:09.830794 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started 
for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-bpf-maps\") pod \"45e13d6e-49bc-45b6-aab7-c45f816454fc\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " Jul 2 08:10:09.835090 kubelet[3335]: I0702 08:10:09.830830 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-cni-path\") pod \"45e13d6e-49bc-45b6-aab7-c45f816454fc\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " Jul 2 08:10:09.835090 kubelet[3335]: I0702 08:10:09.830868 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-cilium-run\") pod \"45e13d6e-49bc-45b6-aab7-c45f816454fc\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " Jul 2 08:10:09.835417 kubelet[3335]: I0702 08:10:09.833990 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-xtables-lock\") pod \"45e13d6e-49bc-45b6-aab7-c45f816454fc\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " Jul 2 08:10:09.835417 kubelet[3335]: I0702 08:10:09.834053 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-host-proc-sys-kernel\") pod \"45e13d6e-49bc-45b6-aab7-c45f816454fc\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " Jul 2 08:10:09.835417 kubelet[3335]: I0702 08:10:09.834092 3335 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-etc-cni-netd\") pod \"45e13d6e-49bc-45b6-aab7-c45f816454fc\" (UID: \"45e13d6e-49bc-45b6-aab7-c45f816454fc\") " Jul 2 08:10:09.835417 kubelet[3335]: I0702 08:10:09.834162 3335 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-hostproc\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.835417 kubelet[3335]: I0702 08:10:09.834191 3335 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-host-proc-sys-net\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.835417 kubelet[3335]: I0702 08:10:09.834215 3335 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-lib-modules\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.835717 kubelet[3335]: I0702 08:10:09.834267 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "45e13d6e-49bc-45b6-aab7-c45f816454fc" (UID: "45e13d6e-49bc-45b6-aab7-c45f816454fc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:10:09.839177 kubelet[3335]: I0702 08:10:09.838678 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "45e13d6e-49bc-45b6-aab7-c45f816454fc" (UID: "45e13d6e-49bc-45b6-aab7-c45f816454fc"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:10:09.839177 kubelet[3335]: I0702 08:10:09.838780 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "45e13d6e-49bc-45b6-aab7-c45f816454fc" (UID: "45e13d6e-49bc-45b6-aab7-c45f816454fc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:10:09.839177 kubelet[3335]: I0702 08:10:09.838844 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-cni-path" (OuterVolumeSpecName: "cni-path") pod "45e13d6e-49bc-45b6-aab7-c45f816454fc" (UID: "45e13d6e-49bc-45b6-aab7-c45f816454fc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:10:09.839177 kubelet[3335]: I0702 08:10:09.838905 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "45e13d6e-49bc-45b6-aab7-c45f816454fc" (UID: "45e13d6e-49bc-45b6-aab7-c45f816454fc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:10:09.839177 kubelet[3335]: I0702 08:10:09.838949 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "45e13d6e-49bc-45b6-aab7-c45f816454fc" (UID: "45e13d6e-49bc-45b6-aab7-c45f816454fc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:10:09.839542 kubelet[3335]: I0702 08:10:09.838991 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "45e13d6e-49bc-45b6-aab7-c45f816454fc" (UID: "45e13d6e-49bc-45b6-aab7-c45f816454fc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:10:09.859036 kubelet[3335]: I0702 08:10:09.858285 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e13d6e-49bc-45b6-aab7-c45f816454fc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "45e13d6e-49bc-45b6-aab7-c45f816454fc" (UID: "45e13d6e-49bc-45b6-aab7-c45f816454fc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:10:09.861451 kubelet[3335]: I0702 08:10:09.861392 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/946dadc0-ac26-4ef2-99af-e44d18ed7686-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "946dadc0-ac26-4ef2-99af-e44d18ed7686" (UID: "946dadc0-ac26-4ef2-99af-e44d18ed7686"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:10:09.862062 kubelet[3335]: I0702 08:10:09.861911 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e13d6e-49bc-45b6-aab7-c45f816454fc-kube-api-access-5c8rn" (OuterVolumeSpecName: "kube-api-access-5c8rn") pod "45e13d6e-49bc-45b6-aab7-c45f816454fc" (UID: "45e13d6e-49bc-45b6-aab7-c45f816454fc"). InnerVolumeSpecName "kube-api-access-5c8rn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:10:09.867075 kubelet[3335]: I0702 08:10:09.866171 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e13d6e-49bc-45b6-aab7-c45f816454fc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "45e13d6e-49bc-45b6-aab7-c45f816454fc" (UID: "45e13d6e-49bc-45b6-aab7-c45f816454fc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:10:09.867075 kubelet[3335]: I0702 08:10:09.866320 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946dadc0-ac26-4ef2-99af-e44d18ed7686-kube-api-access-nm5jx" (OuterVolumeSpecName: "kube-api-access-nm5jx") pod "946dadc0-ac26-4ef2-99af-e44d18ed7686" (UID: "946dadc0-ac26-4ef2-99af-e44d18ed7686"). InnerVolumeSpecName "kube-api-access-nm5jx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:10:09.868007 kubelet[3335]: I0702 08:10:09.867958 3335 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e13d6e-49bc-45b6-aab7-c45f816454fc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "45e13d6e-49bc-45b6-aab7-c45f816454fc" (UID: "45e13d6e-49bc-45b6-aab7-c45f816454fc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:10:09.934972 kubelet[3335]: I0702 08:10:09.934931 3335 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45e13d6e-49bc-45b6-aab7-c45f816454fc-clustermesh-secrets\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.935271 kubelet[3335]: I0702 08:10:09.935182 3335 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5c8rn\" (UniqueName: \"kubernetes.io/projected/45e13d6e-49bc-45b6-aab7-c45f816454fc-kube-api-access-5c8rn\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.935394 kubelet[3335]: I0702 08:10:09.935374 3335 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/946dadc0-ac26-4ef2-99af-e44d18ed7686-cilium-config-path\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.935735 kubelet[3335]: I0702 08:10:09.935493 3335 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-cilium-cgroup\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.936439 kubelet[3335]: I0702 08:10:09.936361 3335 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nm5jx\" (UniqueName: \"kubernetes.io/projected/946dadc0-ac26-4ef2-99af-e44d18ed7686-kube-api-access-nm5jx\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.936678 kubelet[3335]: I0702 08:10:09.936658 3335 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45e13d6e-49bc-45b6-aab7-c45f816454fc-cilium-config-path\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.936919 kubelet[3335]: I0702 08:10:09.936899 3335 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45e13d6e-49bc-45b6-aab7-c45f816454fc-hubble-tls\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.937251 kubelet[3335]: I0702 08:10:09.937100 3335 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-bpf-maps\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.937251 kubelet[3335]: I0702 08:10:09.937134 3335 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-cni-path\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.937251 kubelet[3335]: I0702 08:10:09.937159 3335 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-etc-cni-netd\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.937251 kubelet[3335]: I0702 08:10:09.937183 3335 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-cilium-run\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.937615 kubelet[3335]: I0702 08:10:09.937555 3335 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-xtables-lock\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:09.937615 kubelet[3335]: I0702 08:10:09.937588 3335 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45e13d6e-49bc-45b6-aab7-c45f816454fc-host-proc-sys-kernel\") on node \"ip-172-31-20-19\" DevicePath \"\"" Jul 2 08:10:10.208480 kubelet[3335]: I0702 08:10:10.208237 3335 scope.go:117] "RemoveContainer" containerID="ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303" Jul 2 08:10:10.215912 containerd[2020]: time="2024-07-02T08:10:10.214372409Z" level=info msg="RemoveContainer for \"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303\"" Jul 2 08:10:10.226903 containerd[2020]: time="2024-07-02T08:10:10.226808165Z" level=info msg="RemoveContainer for \"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303\" returns successfully" Jul 2 08:10:10.229333 kubelet[3335]: I0702 08:10:10.229278 3335 scope.go:117] "RemoveContainer" containerID="4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d" Jul 2 08:10:10.234720 systemd[1]: Removed slice kubepods-besteffort-pod946dadc0_ac26_4ef2_99af_e44d18ed7686.slice - libcontainer container kubepods-besteffort-pod946dadc0_ac26_4ef2_99af_e44d18ed7686.slice. Jul 2 08:10:10.236220 containerd[2020]: time="2024-07-02T08:10:10.235954109Z" level=info msg="RemoveContainer for \"4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d\"" Jul 2 08:10:10.241817 systemd[1]: Removed slice kubepods-burstable-pod45e13d6e_49bc_45b6_aab7_c45f816454fc.slice - libcontainer container kubepods-burstable-pod45e13d6e_49bc_45b6_aab7_c45f816454fc.slice. Jul 2 08:10:10.243029 systemd[1]: kubepods-burstable-pod45e13d6e_49bc_45b6_aab7_c45f816454fc.slice: Consumed 14.615s CPU time. 
Jul 2 08:10:10.245527 containerd[2020]: time="2024-07-02T08:10:10.245330309Z" level=info msg="RemoveContainer for \"4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d\" returns successfully" Jul 2 08:10:10.247909 kubelet[3335]: I0702 08:10:10.245799 3335 scope.go:117] "RemoveContainer" containerID="d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c" Jul 2 08:10:10.251360 containerd[2020]: time="2024-07-02T08:10:10.251297753Z" level=info msg="RemoveContainer for \"d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c\"" Jul 2 08:10:10.261168 containerd[2020]: time="2024-07-02T08:10:10.261001253Z" level=info msg="RemoveContainer for \"d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c\" returns successfully" Jul 2 08:10:10.262207 kubelet[3335]: I0702 08:10:10.262154 3335 scope.go:117] "RemoveContainer" containerID="c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9" Jul 2 08:10:10.265272 containerd[2020]: time="2024-07-02T08:10:10.265177073Z" level=info msg="RemoveContainer for \"c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9\"" Jul 2 08:10:10.271764 containerd[2020]: time="2024-07-02T08:10:10.271597793Z" level=info msg="RemoveContainer for \"c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9\" returns successfully" Jul 2 08:10:10.272315 kubelet[3335]: I0702 08:10:10.272279 3335 scope.go:117] "RemoveContainer" containerID="b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac" Jul 2 08:10:10.277511 containerd[2020]: time="2024-07-02T08:10:10.277364561Z" level=info msg="RemoveContainer for \"b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac\"" Jul 2 08:10:10.284768 containerd[2020]: time="2024-07-02T08:10:10.284578841Z" level=info msg="RemoveContainer for \"b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac\" returns successfully" Jul 2 08:10:10.285409 kubelet[3335]: I0702 08:10:10.285290 3335 scope.go:117] "RemoveContainer" containerID="ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303" Jul 2 08:10:10.286781 containerd[2020]: time="2024-07-02T08:10:10.286195337Z" level=error msg="ContainerStatus for \"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303\": not found" Jul 2 08:10:10.287398 kubelet[3335]: E0702 08:10:10.287334 3335 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303\": not found" containerID="ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303" Jul 2 08:10:10.287511 kubelet[3335]: I0702 08:10:10.287492 3335 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303"} err="failed to get container status \"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff092e58ed6e83274af3b403a050238014dd8e3c914ca7ffa4c2f189bfd52303\": not found" Jul 2 08:10:10.287576 kubelet[3335]: I0702 08:10:10.287524 3335 scope.go:117] "RemoveContainer" containerID="4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d" Jul 2 08:10:10.287972 containerd[2020]: 
time="2024-07-02T08:10:10.287858705Z" level=error msg="ContainerStatus for \"4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d\": not found" Jul 2 08:10:10.288541 kubelet[3335]: E0702 08:10:10.288317 3335 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d\": not found" containerID="4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d" Jul 2 08:10:10.289025 kubelet[3335]: I0702 08:10:10.288530 3335 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d"} err="failed to get container status \"4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d\": rpc error: code = NotFound desc = an error occurred when try to find container \"4154da43d1f840153ab2c53e90b95b1985cd00c1448898d1f0dd85635b12c88d\": not found" Jul 2 08:10:10.289025 kubelet[3335]: I0702 08:10:10.288590 3335 scope.go:117] "RemoveContainer" containerID="d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c" Jul 2 08:10:10.289608 containerd[2020]: time="2024-07-02T08:10:10.289505129Z" level=error msg="ContainerStatus for \"d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c\": not found" Jul 2 08:10:10.289926 kubelet[3335]: E0702 08:10:10.289873 3335 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c\": not found" containerID="d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c" Jul 2 08:10:10.290067 kubelet[3335]: I0702 08:10:10.289955 3335 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c"} err="failed to get container status \"d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"d54f6bf830f5adfc843befd8167dfa0801df575101f40101105c367d3f228f4c\": not found" Jul 2 08:10:10.290067 kubelet[3335]: I0702 08:10:10.289980 3335 scope.go:117] "RemoveContainer" containerID="c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9" Jul 2 08:10:10.290375 containerd[2020]: time="2024-07-02T08:10:10.290316509Z" level=error msg="ContainerStatus for \"c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9\": not found" Jul 2 08:10:10.290719 kubelet[3335]: E0702 08:10:10.290689 3335 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9\": not found" containerID="c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9" Jul 2 08:10:10.290815 kubelet[3335]: 
I0702 08:10:10.290742 3335 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9"} err="failed to get container status \"c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"c04d2ae03e3e1a2a61c76a7f63ef2e5ccfcf1af06c7c2dfc01842afdb29991a9\": not found" Jul 2 08:10:10.290815 kubelet[3335]: I0702 08:10:10.290765 3335 scope.go:117] "RemoveContainer" containerID="b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac" Jul 2 08:10:10.291146 containerd[2020]: time="2024-07-02T08:10:10.291058265Z" level=error msg="ContainerStatus for \"b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac\": not found" Jul 2 08:10:10.291361 kubelet[3335]: E0702 08:10:10.291303 3335 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac\": not found" containerID="b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac" Jul 2 08:10:10.291461 kubelet[3335]: I0702 08:10:10.291362 3335 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac"} err="failed to get container status \"b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"b97bd4330d47cf4a3d96624284078002c88df4b2740789c84ad2cf611b7818ac\": not found" Jul 2 08:10:10.291461 kubelet[3335]: I0702 08:10:10.291384 3335 scope.go:117] "RemoveContainer" containerID="46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6" Jul 2 08:10:10.293840 containerd[2020]: time="2024-07-02T08:10:10.293416289Z" level=info msg="RemoveContainer for \"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6\"" Jul 2 08:10:10.298283 containerd[2020]: time="2024-07-02T08:10:10.298173533Z" level=info msg="RemoveContainer for \"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6\" returns successfully" Jul 2 08:10:10.298715 kubelet[3335]: I0702 08:10:10.298482 3335 scope.go:117] "RemoveContainer" containerID="46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6" Jul 2 08:10:10.299338 containerd[2020]: time="2024-07-02T08:10:10.299031929Z" level=error msg="ContainerStatus for \"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6\": not found" Jul 2 08:10:10.299427 kubelet[3335]: E0702 08:10:10.299266 3335 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6\": not found" containerID="46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6" Jul 2 08:10:10.299427 kubelet[3335]: I0702 08:10:10.299315 3335 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6"} err="failed to get container status \"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"46cc9709d3cc66461d6c4ee8b10037269c748ab500e473ce22ec24f7a12bf6d6\": not found" Jul 2 08:10:10.389634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957-rootfs.mount: Deactivated successfully. Jul 2 08:10:10.389805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f-rootfs.mount: Deactivated successfully. Jul 2 08:10:10.389962 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f-shm.mount: Deactivated successfully. Jul 2 08:10:10.390133 systemd[1]: var-lib-kubelet-pods-946dadc0\x2dac26\x2d4ef2\x2d99af\x2de44d18ed7686-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnm5jx.mount: Deactivated successfully. Jul 2 08:10:10.390270 systemd[1]: var-lib-kubelet-pods-45e13d6e\x2d49bc\x2d45b6\x2daab7\x2dc45f816454fc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5c8rn.mount: Deactivated successfully. Jul 2 08:10:10.390403 systemd[1]: var-lib-kubelet-pods-45e13d6e\x2d49bc\x2d45b6\x2daab7\x2dc45f816454fc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 08:10:10.390544 systemd[1]: var-lib-kubelet-pods-45e13d6e\x2d49bc\x2d45b6\x2daab7\x2dc45f816454fc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 08:10:10.699350 kubelet[3335]: I0702 08:10:10.698799 3335 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="45e13d6e-49bc-45b6-aab7-c45f816454fc" path="/var/lib/kubelet/pods/45e13d6e-49bc-45b6-aab7-c45f816454fc/volumes" Jul 2 08:10:10.700426 kubelet[3335]: I0702 08:10:10.700376 3335 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="946dadc0-ac26-4ef2-99af-e44d18ed7686" path="/var/lib/kubelet/pods/946dadc0-ac26-4ef2-99af-e44d18ed7686/volumes" Jul 2 08:10:11.313068 sshd[4957]: pam_unix(sshd:session): session closed for user core Jul 2 08:10:11.320272 systemd[1]: sshd@27-172.31.20.19:22-139.178.89.65:52038.service: Deactivated successfully. Jul 2 08:10:11.324537 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 08:10:11.324967 systemd[1]: session-28.scope: Consumed 2.546s CPU time. Jul 2 08:10:11.327001 systemd-logind[1991]: Session 28 logged out. Waiting for processes to exit. Jul 2 08:10:11.329251 systemd-logind[1991]: Removed session 28. Jul 2 08:10:11.354754 systemd[1]: Started sshd@28-172.31.20.19:22-139.178.89.65:47308.service - OpenSSH per-connection server daemon (139.178.89.65:47308). Jul 2 08:10:11.537847 sshd[5121]: Accepted publickey for core from 139.178.89.65 port 47308 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:10:11.540515 sshd[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:10:11.548836 systemd-logind[1991]: New session 29 of user core. Jul 2 08:10:11.558149 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jul 2 08:10:12.396231 ntpd[1985]: Deleting interface #12 lxc_health, fe80::2093:d1ff:fed7:2bfe%8#123, interface stats: received=0, sent=0, dropped=0, active_time=73 secs Jul 2 08:10:12.396794 ntpd[1985]: 2 Jul 08:10:12 ntpd[1985]: Deleting interface #12 lxc_health, fe80::2093:d1ff:fed7:2bfe%8#123, interface stats: received=0, sent=0, dropped=0, active_time=73 secs Jul 2 08:10:12.982359 kubelet[3335]: E0702 08:10:12.981364 3335 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 08:10:13.310229 sshd[5121]: pam_unix(sshd:session): session closed for user core Jul 2 08:10:13.321463 systemd[1]: sshd@28-172.31.20.19:22-139.178.89.65:47308.service: Deactivated successfully. Jul 2 08:10:13.330230 systemd[1]: session-29.scope: Deactivated successfully. Jul 2 08:10:13.331521 systemd[1]: session-29.scope: Consumed 1.580s CPU time. Jul 2 08:10:13.335089 systemd-logind[1991]: Session 29 logged out. Waiting for processes to exit. Jul 2 08:10:13.366528 systemd[1]: Started sshd@29-172.31.20.19:22-139.178.89.65:47314.service - OpenSSH per-connection server daemon (139.178.89.65:47314). Jul 2 08:10:13.370547 systemd-logind[1991]: Removed session 29. Jul 2 08:10:13.424268 kubelet[3335]: I0702 08:10:13.421970 3335 topology_manager.go:215] "Topology Admit Handler" podUID="4b066486-d17f-4270-a662-2e04b47d27da" podNamespace="kube-system" podName="cilium-mf99q" Jul 2 08:10:13.424268 kubelet[3335]: E0702 08:10:13.422083 3335 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45e13d6e-49bc-45b6-aab7-c45f816454fc" containerName="clean-cilium-state" Jul 2 08:10:13.424268 kubelet[3335]: E0702 08:10:13.422104 3335 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45e13d6e-49bc-45b6-aab7-c45f816454fc" containerName="cilium-agent" Jul 2 08:10:13.424268 kubelet[3335]: E0702 08:10:13.422136 3335 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45e13d6e-49bc-45b6-aab7-c45f816454fc" containerName="mount-cgroup" Jul 2 08:10:13.424268 kubelet[3335]: E0702 08:10:13.422156 3335 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45e13d6e-49bc-45b6-aab7-c45f816454fc" containerName="apply-sysctl-overwrites" Jul 2 08:10:13.424268 kubelet[3335]: E0702 08:10:13.422174 3335 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="946dadc0-ac26-4ef2-99af-e44d18ed7686" containerName="cilium-operator" Jul 2 08:10:13.424268 kubelet[3335]: E0702 08:10:13.422193 3335 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45e13d6e-49bc-45b6-aab7-c45f816454fc" containerName="mount-bpf-fs" Jul 2 08:10:13.424268 kubelet[3335]: I0702 08:10:13.422236 3335 memory_manager.go:346] "RemoveStaleState removing state" podUID="946dadc0-ac26-4ef2-99af-e44d18ed7686" containerName="cilium-operator" Jul 2 08:10:13.424268 kubelet[3335]: I0702 08:10:13.422255 3335 memory_manager.go:346] "RemoveStaleState removing state" podUID="45e13d6e-49bc-45b6-aab7-c45f816454fc" containerName="cilium-agent" Jul 2 08:10:13.448808 systemd[1]: Created slice kubepods-burstable-pod4b066486_d17f_4270_a662_2e04b47d27da.slice - libcontainer container kubepods-burstable-pod4b066486_d17f_4270_a662_2e04b47d27da.slice. 
Jul 2 08:10:13.561644 kubelet[3335]: I0702 08:10:13.559743 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b066486-d17f-4270-a662-2e04b47d27da-cilium-run\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.561644 kubelet[3335]: I0702 08:10:13.559926 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b066486-d17f-4270-a662-2e04b47d27da-host-proc-sys-kernel\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.561644 kubelet[3335]: I0702 08:10:13.559978 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b066486-d17f-4270-a662-2e04b47d27da-etc-cni-netd\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.561644 kubelet[3335]: I0702 08:10:13.560023 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b066486-d17f-4270-a662-2e04b47d27da-lib-modules\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.561644 kubelet[3335]: I0702 08:10:13.560084 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92djk\" (UniqueName: \"kubernetes.io/projected/4b066486-d17f-4270-a662-2e04b47d27da-kube-api-access-92djk\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.561644 kubelet[3335]: I0702 08:10:13.560134 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b066486-d17f-4270-a662-2e04b47d27da-hostproc\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.562077 kubelet[3335]: I0702 08:10:13.560177 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4b066486-d17f-4270-a662-2e04b47d27da-cilium-ipsec-secrets\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.562077 kubelet[3335]: I0702 08:10:13.560220 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b066486-d17f-4270-a662-2e04b47d27da-hubble-tls\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.562077 kubelet[3335]: I0702 08:10:13.560268 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b066486-d17f-4270-a662-2e04b47d27da-cilium-cgroup\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.562077 kubelet[3335]: I0702 08:10:13.560308 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/4b066486-d17f-4270-a662-2e04b47d27da-cni-path\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.562077 kubelet[3335]: I0702 08:10:13.560350 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b066486-d17f-4270-a662-2e04b47d27da-xtables-lock\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.562077 kubelet[3335]: I0702 08:10:13.560391 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b066486-d17f-4270-a662-2e04b47d27da-cilium-config-path\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.562473 kubelet[3335]: I0702 08:10:13.560437 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b066486-d17f-4270-a662-2e04b47d27da-host-proc-sys-net\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.562473 kubelet[3335]: I0702 08:10:13.560478 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b066486-d17f-4270-a662-2e04b47d27da-bpf-maps\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.562473 kubelet[3335]: I0702 08:10:13.560523 3335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b066486-d17f-4270-a662-2e04b47d27da-clustermesh-secrets\") pod \"cilium-mf99q\" (UID: \"4b066486-d17f-4270-a662-2e04b47d27da\") " pod="kube-system/cilium-mf99q" Jul 2 08:10:13.579422 sshd[5132]: Accepted publickey for core from 139.178.89.65 port 47314 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:10:13.582103 sshd[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:10:13.590447 systemd-logind[1991]: New session 30 of user core. Jul 2 08:10:13.599215 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 2 08:10:13.742053 sshd[5132]: pam_unix(sshd:session): session closed for user core Jul 2 08:10:13.748725 systemd[1]: sshd@29-172.31.20.19:22-139.178.89.65:47314.service: Deactivated successfully. Jul 2 08:10:13.753428 systemd[1]: session-30.scope: Deactivated successfully. Jul 2 08:10:13.755978 systemd-logind[1991]: Session 30 logged out. Waiting for processes to exit. Jul 2 08:10:13.758377 systemd-logind[1991]: Removed session 30. Jul 2 08:10:13.764811 containerd[2020]: time="2024-07-02T08:10:13.764738314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mf99q,Uid:4b066486-d17f-4270-a662-2e04b47d27da,Namespace:kube-system,Attempt:0,}" Jul 2 08:10:13.793449 systemd[1]: Started sshd@30-172.31.20.19:22-139.178.89.65:47320.service - OpenSSH per-connection server daemon (139.178.89.65:47320). Jul 2 08:10:13.820076 containerd[2020]: time="2024-07-02T08:10:13.819129755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:10:13.820076 containerd[2020]: time="2024-07-02T08:10:13.819247931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:10:13.820076 containerd[2020]: time="2024-07-02T08:10:13.819292919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:10:13.820076 containerd[2020]: time="2024-07-02T08:10:13.819327515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:10:13.854212 systemd[1]: Started cri-containerd-09ee755ff7fa8a7695aeb8f1398c40e351f9f31a33f894e4fb4a580eee1d48f5.scope - libcontainer container 09ee755ff7fa8a7695aeb8f1398c40e351f9f31a33f894e4fb4a580eee1d48f5. Jul 2 08:10:13.897605 containerd[2020]: time="2024-07-02T08:10:13.897224219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mf99q,Uid:4b066486-d17f-4270-a662-2e04b47d27da,Namespace:kube-system,Attempt:0,} returns sandbox id \"09ee755ff7fa8a7695aeb8f1398c40e351f9f31a33f894e4fb4a580eee1d48f5\"" Jul 2 08:10:13.902759 containerd[2020]: time="2024-07-02T08:10:13.902684723Z" level=info msg="CreateContainer within sandbox \"09ee755ff7fa8a7695aeb8f1398c40e351f9f31a33f894e4fb4a580eee1d48f5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:10:13.927552 containerd[2020]: time="2024-07-02T08:10:13.927406283Z" level=info msg="CreateContainer within sandbox \"09ee755ff7fa8a7695aeb8f1398c40e351f9f31a33f894e4fb4a580eee1d48f5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5cd07735f5b972b24e2b4ed6afaa37e3203cd458cccfd76267b878d214ce3a72\"" Jul 2 08:10:13.929982 containerd[2020]: time="2024-07-02T08:10:13.928403795Z" level=info msg="StartContainer for \"5cd07735f5b972b24e2b4ed6afaa37e3203cd458cccfd76267b878d214ce3a72\"" Jul 2 08:10:13.985186 systemd[1]: Started cri-containerd-5cd07735f5b972b24e2b4ed6afaa37e3203cd458cccfd76267b878d214ce3a72.scope - libcontainer container 5cd07735f5b972b24e2b4ed6afaa37e3203cd458cccfd76267b878d214ce3a72. Jul 2 08:10:13.988047 sshd[5144]: Accepted publickey for core from 139.178.89.65 port 47320 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:10:13.994057 sshd[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:10:14.006867 systemd-logind[1991]: New session 31 of user core. Jul 2 08:10:14.012179 systemd[1]: Started session-31.scope - Session 31 of User core. Jul 2 08:10:14.046811 containerd[2020]: time="2024-07-02T08:10:14.046733324Z" level=info msg="StartContainer for \"5cd07735f5b972b24e2b4ed6afaa37e3203cd458cccfd76267b878d214ce3a72\" returns successfully" Jul 2 08:10:14.063248 systemd[1]: cri-containerd-5cd07735f5b972b24e2b4ed6afaa37e3203cd458cccfd76267b878d214ce3a72.scope: Deactivated successfully. 
Jul 2 08:10:14.133320 containerd[2020]: time="2024-07-02T08:10:14.132971300Z" level=info msg="shim disconnected" id=5cd07735f5b972b24e2b4ed6afaa37e3203cd458cccfd76267b878d214ce3a72 namespace=k8s.io Jul 2 08:10:14.133320 containerd[2020]: time="2024-07-02T08:10:14.133245524Z" level=warning msg="cleaning up after shim disconnected" id=5cd07735f5b972b24e2b4ed6afaa37e3203cd458cccfd76267b878d214ce3a72 namespace=k8s.io Jul 2 08:10:14.133320 containerd[2020]: time="2024-07-02T08:10:14.133271756Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:10:14.241366 containerd[2020]: time="2024-07-02T08:10:14.240308205Z" level=info msg="CreateContainer within sandbox \"09ee755ff7fa8a7695aeb8f1398c40e351f9f31a33f894e4fb4a580eee1d48f5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:10:14.268162 containerd[2020]: time="2024-07-02T08:10:14.268079805Z" level=info msg="CreateContainer within sandbox \"09ee755ff7fa8a7695aeb8f1398c40e351f9f31a33f894e4fb4a580eee1d48f5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9cff44544212fb27bc038db31da1a8fde17e8d20eb43f270a804a529b2b40adf\"" Jul 2 08:10:14.269934 containerd[2020]: time="2024-07-02T08:10:14.269236125Z" level=info msg="StartContainer for \"9cff44544212fb27bc038db31da1a8fde17e8d20eb43f270a804a529b2b40adf\"" Jul 2 08:10:14.327238 systemd[1]: Started cri-containerd-9cff44544212fb27bc038db31da1a8fde17e8d20eb43f270a804a529b2b40adf.scope - libcontainer container 9cff44544212fb27bc038db31da1a8fde17e8d20eb43f270a804a529b2b40adf. Jul 2 08:10:14.381415 containerd[2020]: time="2024-07-02T08:10:14.381344553Z" level=info msg="StartContainer for \"9cff44544212fb27bc038db31da1a8fde17e8d20eb43f270a804a529b2b40adf\" returns successfully" Jul 2 08:10:14.395692 systemd[1]: cri-containerd-9cff44544212fb27bc038db31da1a8fde17e8d20eb43f270a804a529b2b40adf.scope: Deactivated successfully. Jul 2 08:10:14.442366 containerd[2020]: time="2024-07-02T08:10:14.442220506Z" level=info msg="shim disconnected" id=9cff44544212fb27bc038db31da1a8fde17e8d20eb43f270a804a529b2b40adf namespace=k8s.io Jul 2 08:10:14.442366 containerd[2020]: time="2024-07-02T08:10:14.442337578Z" level=warning msg="cleaning up after shim disconnected" id=9cff44544212fb27bc038db31da1a8fde17e8d20eb43f270a804a529b2b40adf namespace=k8s.io Jul 2 08:10:14.442366 containerd[2020]: time="2024-07-02T08:10:14.442361578Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:10:15.242575 containerd[2020]: time="2024-07-02T08:10:15.242507242Z" level=info msg="CreateContainer within sandbox \"09ee755ff7fa8a7695aeb8f1398c40e351f9f31a33f894e4fb4a580eee1d48f5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:10:15.272171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146427382.mount: Deactivated successfully. 
Jul 2 08:10:15.274642 containerd[2020]: time="2024-07-02T08:10:15.274044634Z" level=info msg="CreateContainer within sandbox \"09ee755ff7fa8a7695aeb8f1398c40e351f9f31a33f894e4fb4a580eee1d48f5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"397153f95728f518eb6387bfff9bacd85183795c600a4dade9fb461f24a2d669\"" Jul 2 08:10:15.275964 containerd[2020]: time="2024-07-02T08:10:15.275603830Z" level=info msg="StartContainer for \"397153f95728f518eb6387bfff9bacd85183795c600a4dade9fb461f24a2d669\"" Jul 2 08:10:15.337189 systemd[1]: Started cri-containerd-397153f95728f518eb6387bfff9bacd85183795c600a4dade9fb461f24a2d669.scope - libcontainer container 397153f95728f518eb6387bfff9bacd85183795c600a4dade9fb461f24a2d669. Jul 2 08:10:15.364332 kubelet[3335]: I0702 08:10:15.362992 3335 setters.go:552] "Node became not ready" node="ip-172-31-20-19" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T08:10:15Z","lastTransitionTime":"2024-07-02T08:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 08:10:15.413579 containerd[2020]: time="2024-07-02T08:10:15.413469238Z" level=info msg="StartContainer for \"397153f95728f518eb6387bfff9bacd85183795c600a4dade9fb461f24a2d669\" returns successfully" Jul 2 08:10:15.422037 systemd[1]: cri-containerd-397153f95728f518eb6387bfff9bacd85183795c600a4dade9fb461f24a2d669.scope: Deactivated successfully. Jul 2 08:10:15.472230 containerd[2020]: time="2024-07-02T08:10:15.472076063Z" level=info msg="shim disconnected" id=397153f95728f518eb6387bfff9bacd85183795c600a4dade9fb461f24a2d669 namespace=k8s.io Jul 2 08:10:15.472495 containerd[2020]: time="2024-07-02T08:10:15.472217579Z" level=warning msg="cleaning up after shim disconnected" id=397153f95728f518eb6387bfff9bacd85183795c600a4dade9fb461f24a2d669 namespace=k8s.io Jul 2 08:10:15.472495 containerd[2020]: time="2024-07-02T08:10:15.472263407Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:10:15.670965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-397153f95728f518eb6387bfff9bacd85183795c600a4dade9fb461f24a2d669-rootfs.mount: Deactivated successfully. Jul 2 08:10:16.257916 containerd[2020]: time="2024-07-02T08:10:16.257220767Z" level=info msg="CreateContainer within sandbox \"09ee755ff7fa8a7695aeb8f1398c40e351f9f31a33f894e4fb4a580eee1d48f5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:10:16.295710 containerd[2020]: time="2024-07-02T08:10:16.295610279Z" level=info msg="CreateContainer within sandbox \"09ee755ff7fa8a7695aeb8f1398c40e351f9f31a33f894e4fb4a580eee1d48f5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8b0c4dc9bd4434fffda263c31818b4f14ec7d1a213ecac3c8166d051b70c2ce6\"" Jul 2 08:10:16.296977 containerd[2020]: time="2024-07-02T08:10:16.296757755Z" level=info msg="StartContainer for \"8b0c4dc9bd4434fffda263c31818b4f14ec7d1a213ecac3c8166d051b70c2ce6\"" Jul 2 08:10:16.367250 systemd[1]: Started cri-containerd-8b0c4dc9bd4434fffda263c31818b4f14ec7d1a213ecac3c8166d051b70c2ce6.scope - libcontainer container 8b0c4dc9bd4434fffda263c31818b4f14ec7d1a213ecac3c8166d051b70c2ce6. Jul 2 08:10:16.459548 systemd[1]: cri-containerd-8b0c4dc9bd4434fffda263c31818b4f14ec7d1a213ecac3c8166d051b70c2ce6.scope: Deactivated successfully. 
Jul 2 08:10:16.462503 containerd[2020]: time="2024-07-02T08:10:16.462347976Z" level=info msg="StartContainer for \"8b0c4dc9bd4434fffda263c31818b4f14ec7d1a213ecac3c8166d051b70c2ce6\" returns successfully" Jul 2 08:10:16.511373 containerd[2020]: time="2024-07-02T08:10:16.510965904Z" level=info msg="shim disconnected" id=8b0c4dc9bd4434fffda263c31818b4f14ec7d1a213ecac3c8166d051b70c2ce6 namespace=k8s.io Jul 2 08:10:16.511373 containerd[2020]: time="2024-07-02T08:10:16.511042740Z" level=warning msg="cleaning up after shim disconnected" id=8b0c4dc9bd4434fffda263c31818b4f14ec7d1a213ecac3c8166d051b70c2ce6 namespace=k8s.io Jul 2 08:10:16.511373 containerd[2020]: time="2024-07-02T08:10:16.511066068Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:10:16.671003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b0c4dc9bd4434fffda263c31818b4f14ec7d1a213ecac3c8166d051b70c2ce6-rootfs.mount: Deactivated successfully. Jul 2 08:10:17.257152 containerd[2020]: time="2024-07-02T08:10:17.256800120Z" level=info msg="CreateContainer within sandbox \"09ee755ff7fa8a7695aeb8f1398c40e351f9f31a33f894e4fb4a580eee1d48f5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:10:17.293678 containerd[2020]: time="2024-07-02T08:10:17.293617812Z" level=info msg="CreateContainer within sandbox \"09ee755ff7fa8a7695aeb8f1398c40e351f9f31a33f894e4fb4a580eee1d48f5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1a5c06743577436a59c79563696c72853453d88871b64b7d9e7bf0f8a8515021\"" Jul 2 08:10:17.296231 containerd[2020]: time="2024-07-02T08:10:17.295163700Z" level=info msg="StartContainer for \"1a5c06743577436a59c79563696c72853453d88871b64b7d9e7bf0f8a8515021\"" Jul 2 08:10:17.350547 systemd[1]: Started cri-containerd-1a5c06743577436a59c79563696c72853453d88871b64b7d9e7bf0f8a8515021.scope - libcontainer container 1a5c06743577436a59c79563696c72853453d88871b64b7d9e7bf0f8a8515021. Jul 2 08:10:17.410942 containerd[2020]: time="2024-07-02T08:10:17.409170456Z" level=info msg="StartContainer for \"1a5c06743577436a59c79563696c72853453d88871b64b7d9e7bf0f8a8515021\" returns successfully" Jul 2 08:10:18.181935 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 2 08:10:18.290272 kubelet[3335]: I0702 08:10:18.290215 3335 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mf99q" podStartSLOduration=5.290129917 podCreationTimestamp="2024-07-02 08:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:10:18.289476049 +0000 UTC m=+115.871154325" watchObservedRunningTime="2024-07-02 08:10:18.290129917 +0000 UTC m=+115.871808181" Jul 2 08:10:22.334032 (udev-worker)[5986]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:10:22.337259 (udev-worker)[5987]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 08:10:22.373862 systemd-networkd[1929]: lxc_health: Link UP Jul 2 08:10:22.381852 systemd-networkd[1929]: lxc_health: Gained carrier Jul 2 08:10:22.629396 containerd[2020]: time="2024-07-02T08:10:22.629330274Z" level=info msg="StopPodSandbox for \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\"" Jul 2 08:10:22.630025 containerd[2020]: time="2024-07-02T08:10:22.629478834Z" level=info msg="TearDown network for sandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" successfully" Jul 2 08:10:22.630025 containerd[2020]: time="2024-07-02T08:10:22.629543658Z" level=info msg="StopPodSandbox for \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" returns successfully" Jul 2 08:10:22.631961 containerd[2020]: time="2024-07-02T08:10:22.631501350Z" level=info msg="RemovePodSandbox for \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\"" Jul 2 08:10:22.631961 containerd[2020]: time="2024-07-02T08:10:22.631555914Z" level=info msg="Forcibly stopping sandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\"" Jul 2 08:10:22.631961 containerd[2020]: time="2024-07-02T08:10:22.631698882Z" level=info msg="TearDown network for sandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" successfully" Jul 2 08:10:22.642104 containerd[2020]: time="2024-07-02T08:10:22.640699698Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 08:10:22.642104 containerd[2020]: time="2024-07-02T08:10:22.640983114Z" level=info msg="RemovePodSandbox \"f0bc6df210b8b0838af4ecc53bf345e32c75db8229e189adb5d830358daa0c3f\" returns successfully" Jul 2 08:10:22.642600 containerd[2020]: time="2024-07-02T08:10:22.642467850Z" level=info msg="StopPodSandbox for \"5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957\"" Jul 2 08:10:22.642753 containerd[2020]: time="2024-07-02T08:10:22.642670542Z" level=info msg="TearDown network for sandbox \"5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957\" successfully" Jul 2 08:10:22.642824 containerd[2020]: time="2024-07-02T08:10:22.642756666Z" level=info msg="StopPodSandbox for \"5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957\" returns successfully" Jul 2 08:10:22.646913 containerd[2020]: time="2024-07-02T08:10:22.644219442Z" level=info msg="RemovePodSandbox for \"5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957\"" Jul 2 08:10:22.646913 containerd[2020]: time="2024-07-02T08:10:22.644278098Z" level=info msg="Forcibly stopping sandbox \"5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957\"" Jul 2 08:10:22.646913 containerd[2020]: time="2024-07-02T08:10:22.644419950Z" level=info msg="TearDown network for sandbox \"5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957\" successfully" Jul 2 08:10:22.651801 containerd[2020]: time="2024-07-02T08:10:22.651721962Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 08:10:22.652111 containerd[2020]: time="2024-07-02T08:10:22.652055154Z" level=info msg="RemovePodSandbox \"5f177066535bb8de78f37ba1c627ac94253240f1760a9b5dec9801ff5806a957\" returns successfully" Jul 2 08:10:24.335275 systemd-networkd[1929]: lxc_health: Gained IPv6LL Jul 2 08:10:26.396342 ntpd[1985]: Listen normally on 15 lxc_health [fe80::2897:d0ff:fe86:1245%14]:123 Jul 2 08:10:26.396918 ntpd[1985]: 2 Jul 08:10:26 ntpd[1985]: Listen normally on 15 lxc_health [fe80::2897:d0ff:fe86:1245%14]:123 Jul 2 08:10:30.168067 sshd[5144]: pam_unix(sshd:session): session closed for user core Jul 2 08:10:30.176032 systemd[1]: sshd@30-172.31.20.19:22-139.178.89.65:47320.service: Deactivated successfully. Jul 2 08:10:30.183296 systemd[1]: session-31.scope: Deactivated successfully. Jul 2 08:10:30.188589 systemd-logind[1991]: Session 31 logged out. Waiting for processes to exit. Jul 2 08:10:30.190761 systemd-logind[1991]: Removed session 31. Jul 2 08:10:44.083238 systemd[1]: cri-containerd-1d682c3f775b4d1b7e362340b0c915e8f3aa3d07b0a4495acc6371aae64c7828.scope: Deactivated successfully. Jul 2 08:10:44.083904 systemd[1]: cri-containerd-1d682c3f775b4d1b7e362340b0c915e8f3aa3d07b0a4495acc6371aae64c7828.scope: Consumed 5.517s CPU time, 22.0M memory peak, 0B memory swap peak. Jul 2 08:10:44.129005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d682c3f775b4d1b7e362340b0c915e8f3aa3d07b0a4495acc6371aae64c7828-rootfs.mount: Deactivated successfully. Jul 2 08:10:44.139378 containerd[2020]: time="2024-07-02T08:10:44.139215205Z" level=info msg="shim disconnected" id=1d682c3f775b4d1b7e362340b0c915e8f3aa3d07b0a4495acc6371aae64c7828 namespace=k8s.io Jul 2 08:10:44.139378 containerd[2020]: time="2024-07-02T08:10:44.139324189Z" level=warning msg="cleaning up after shim disconnected" id=1d682c3f775b4d1b7e362340b0c915e8f3aa3d07b0a4495acc6371aae64c7828 namespace=k8s.io Jul 2 08:10:44.140131 containerd[2020]: time="2024-07-02T08:10:44.139345765Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:10:44.340207 kubelet[3335]: I0702 08:10:44.338753 3335 scope.go:117] "RemoveContainer" containerID="1d682c3f775b4d1b7e362340b0c915e8f3aa3d07b0a4495acc6371aae64c7828" Jul 2 08:10:44.345479 containerd[2020]: time="2024-07-02T08:10:44.345196250Z" level=info msg="CreateContainer within sandbox \"ef0d0d689cc6e421f5d4ed11253f8fea6d59afdd6d93d4ed08ab02e5a31cf759\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 2 08:10:44.366680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1772921471.mount: Deactivated successfully. Jul 2 08:10:44.373710 containerd[2020]: time="2024-07-02T08:10:44.373644218Z" level=info msg="CreateContainer within sandbox \"ef0d0d689cc6e421f5d4ed11253f8fea6d59afdd6d93d4ed08ab02e5a31cf759\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f1412c9247e00aaf4080df1248698601935bb3e4302c0b201813f7a26a2642ce\"" Jul 2 08:10:44.375100 containerd[2020]: time="2024-07-02T08:10:44.374948570Z" level=info msg="StartContainer for \"f1412c9247e00aaf4080df1248698601935bb3e4302c0b201813f7a26a2642ce\"" Jul 2 08:10:44.430202 systemd[1]: Started cri-containerd-f1412c9247e00aaf4080df1248698601935bb3e4302c0b201813f7a26a2642ce.scope - libcontainer container f1412c9247e00aaf4080df1248698601935bb3e4302c0b201813f7a26a2642ce. 
Jul 2 08:10:44.504392 containerd[2020]: time="2024-07-02T08:10:44.503706423Z" level=info msg="StartContainer for \"f1412c9247e00aaf4080df1248698601935bb3e4302c0b201813f7a26a2642ce\" returns successfully" Jul 2 08:10:45.489872 kubelet[3335]: E0702 08:10:45.487160 3335 controller.go:193] "Failed to update lease" err="Put \"https://172.31.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-19?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 08:10:50.015304 systemd[1]: cri-containerd-f5e344c4aa86d66dbbd6209825936287f4103f927905bbc4a615398a4b36d84b.scope: Deactivated successfully. Jul 2 08:10:50.017079 systemd[1]: cri-containerd-f5e344c4aa86d66dbbd6209825936287f4103f927905bbc4a615398a4b36d84b.scope: Consumed 2.369s CPU time, 14.2M memory peak, 0B memory swap peak. Jul 2 08:10:50.062395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5e344c4aa86d66dbbd6209825936287f4103f927905bbc4a615398a4b36d84b-rootfs.mount: Deactivated successfully. Jul 2 08:10:50.077467 containerd[2020]: time="2024-07-02T08:10:50.077387611Z" level=info msg="shim disconnected" id=f5e344c4aa86d66dbbd6209825936287f4103f927905bbc4a615398a4b36d84b namespace=k8s.io Jul 2 08:10:50.079049 containerd[2020]: time="2024-07-02T08:10:50.077629603Z" level=warning msg="cleaning up after shim disconnected" id=f5e344c4aa86d66dbbd6209825936287f4103f927905bbc4a615398a4b36d84b namespace=k8s.io Jul 2 08:10:50.079049 containerd[2020]: time="2024-07-02T08:10:50.077657791Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:10:50.359637 kubelet[3335]: I0702 08:10:50.359515 3335 scope.go:117] "RemoveContainer" containerID="f5e344c4aa86d66dbbd6209825936287f4103f927905bbc4a615398a4b36d84b" Jul 2 08:10:50.364065 containerd[2020]: time="2024-07-02T08:10:50.363998492Z" level=info msg="CreateContainer within sandbox \"b2b7a69d96ffefa644720c11bd66f3f71b5b0e3fb82305bbcdfec4fc6388082e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 2 08:10:50.389594 containerd[2020]: time="2024-07-02T08:10:50.389510792Z" level=info msg="CreateContainer within sandbox \"b2b7a69d96ffefa644720c11bd66f3f71b5b0e3fb82305bbcdfec4fc6388082e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"96bdea43d18da8160fbffadbf96ce719263f60532b0455e1b5d1ff5c01d09c67\"" Jul 2 08:10:50.390321 containerd[2020]: time="2024-07-02T08:10:50.390222008Z" level=info msg="StartContainer for \"96bdea43d18da8160fbffadbf96ce719263f60532b0455e1b5d1ff5c01d09c67\"" Jul 2 08:10:50.445187 systemd[1]: Started cri-containerd-96bdea43d18da8160fbffadbf96ce719263f60532b0455e1b5d1ff5c01d09c67.scope - libcontainer container 96bdea43d18da8160fbffadbf96ce719263f60532b0455e1b5d1ff5c01d09c67. Jul 2 08:10:50.512963 containerd[2020]: time="2024-07-02T08:10:50.512769825Z" level=info msg="StartContainer for \"96bdea43d18da8160fbffadbf96ce719263f60532b0455e1b5d1ff5c01d09c67\" returns successfully" Jul 2 08:10:51.061790 systemd[1]: run-containerd-runc-k8s.io-96bdea43d18da8160fbffadbf96ce719263f60532b0455e1b5d1ff5c01d09c67-runc.r5ONjf.mount: Deactivated successfully. Jul 2 08:10:55.488142 kubelet[3335]: E0702 08:10:55.487635 3335 controller.go:193] "Failed to update lease" err="Put \"https://172.31.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-19?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"