Jul 2 08:57:45.200108 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 2 08:57:45.200153 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024 Jul 2 08:57:45.200179 kernel: KASLR disabled due to lack of seed Jul 2 08:57:45.200195 kernel: efi: EFI v2.7 by EDK II Jul 2 08:57:45.200211 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7852ee18 Jul 2 08:57:45.200226 kernel: ACPI: Early table checksum verification disabled Jul 2 08:57:45.200244 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 2 08:57:45.200260 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 2 08:57:45.200276 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 2 08:57:45.200292 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 2 08:57:45.200313 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 2 08:57:45.200329 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 2 08:57:45.200344 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 2 08:57:45.200360 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 2 08:57:45.200378 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 2 08:57:45.200399 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 2 08:57:45.200417 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 2 08:57:45.200433 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 2 08:57:45.200505 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 2 08:57:45.200529 kernel: printk: bootconsole [uart0] enabled Jul 2 08:57:45.200546 kernel: NUMA: Failed to initialise from firmware Jul 2 08:57:45.200563 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 2 08:57:45.200580 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jul 2 08:57:45.200596 kernel: Zone ranges: Jul 2 08:57:45.200613 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 2 08:57:45.200629 kernel: DMA32 empty Jul 2 08:57:45.200652 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 2 08:57:45.200669 kernel: Movable zone start for each node Jul 2 08:57:45.200685 kernel: Early memory node ranges Jul 2 08:57:45.200701 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 2 08:57:45.200717 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 2 08:57:45.200733 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 2 08:57:45.200749 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 2 08:57:45.200765 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 2 08:57:45.200782 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 2 08:57:45.200798 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 2 08:57:45.200814 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jul 2 08:57:45.200831 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jul 2 08:57:45.200852 kernel: On node 0, zone Normal: 8192 pages in 
unavailable ranges Jul 2 08:57:45.200869 kernel: psci: probing for conduit method from ACPI. Jul 2 08:57:45.200893 kernel: psci: PSCIv1.0 detected in firmware. Jul 2 08:57:45.200910 kernel: psci: Using standard PSCI v0.2 function IDs Jul 2 08:57:45.200928 kernel: psci: Trusted OS migration not required Jul 2 08:57:45.200950 kernel: psci: SMC Calling Convention v1.1 Jul 2 08:57:45.200968 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jul 2 08:57:45.200985 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jul 2 08:57:45.201002 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 2 08:57:45.201019 kernel: Detected PIPT I-cache on CPU0 Jul 2 08:57:45.201036 kernel: CPU features: detected: GIC system register CPU interface Jul 2 08:57:45.201053 kernel: CPU features: detected: Spectre-v2 Jul 2 08:57:45.201070 kernel: CPU features: detected: Spectre-v3a Jul 2 08:57:45.201088 kernel: CPU features: detected: Spectre-BHB Jul 2 08:57:45.201105 kernel: CPU features: detected: ARM erratum 1742098 Jul 2 08:57:45.201122 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 2 08:57:45.201144 kernel: alternatives: applying boot alternatives Jul 2 08:57:45.201164 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930 Jul 2 08:57:45.201182 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 08:57:45.201200 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 08:57:45.201217 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 08:57:45.201235 kernel: Fallback order for Node 0: 0 Jul 2 08:57:45.201252 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jul 2 08:57:45.201269 kernel: Policy zone: Normal Jul 2 08:57:45.201286 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 08:57:45.201303 kernel: software IO TLB: area num 2. Jul 2 08:57:45.201320 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jul 2 08:57:45.201343 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved) Jul 2 08:57:45.201361 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 08:57:45.201378 kernel: trace event string verifier disabled Jul 2 08:57:45.201395 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 08:57:45.201413 kernel: rcu: RCU event tracing is enabled. Jul 2 08:57:45.201431 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 08:57:45.201448 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 08:57:45.203570 kernel: Tracing variant of Tasks RCU enabled. Jul 2 08:57:45.203590 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 2 08:57:45.203608 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 08:57:45.203626 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 2 08:57:45.203653 kernel: GICv3: 96 SPIs implemented Jul 2 08:57:45.203671 kernel: GICv3: 0 Extended SPIs implemented Jul 2 08:57:45.203688 kernel: Root IRQ handler: gic_handle_irq Jul 2 08:57:45.203705 kernel: GICv3: GICv3 features: 16 PPIs Jul 2 08:57:45.203723 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 2 08:57:45.203759 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 2 08:57:45.203779 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1) Jul 2 08:57:45.203797 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1) Jul 2 08:57:45.203814 kernel: GICv3: using LPI property table @0x00000004000e0000 Jul 2 08:57:45.203832 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 2 08:57:45.203850 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000 Jul 2 08:57:45.203867 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 08:57:45.203890 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 2 08:57:45.203908 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 2 08:57:45.203925 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 2 08:57:45.203943 kernel: Console: colour dummy device 80x25 Jul 2 08:57:45.203960 kernel: printk: console [tty1] enabled Jul 2 08:57:45.203978 kernel: ACPI: Core revision 20230628 Jul 2 08:57:45.203996 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jul 2 08:57:45.204014 kernel: pid_max: default: 32768 minimum: 301 Jul 2 08:57:45.204032 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 08:57:45.204049 kernel: SELinux: Initializing. Jul 2 08:57:45.204072 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 08:57:45.204090 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 08:57:45.204108 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 08:57:45.204125 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 08:57:45.204143 kernel: rcu: Hierarchical SRCU implementation. Jul 2 08:57:45.204161 kernel: rcu: Max phase no-delay instances is 400. Jul 2 08:57:45.204179 kernel: Platform MSI: ITS@0x10080000 domain created Jul 2 08:57:45.204197 kernel: PCI/MSI: ITS@0x10080000 domain created Jul 2 08:57:45.204214 kernel: Remapping and enabling EFI services. Jul 2 08:57:45.204237 kernel: smp: Bringing up secondary CPUs ... Jul 2 08:57:45.204255 kernel: Detected PIPT I-cache on CPU1 Jul 2 08:57:45.204273 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 2 08:57:45.204291 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000 Jul 2 08:57:45.204310 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 2 08:57:45.204328 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 08:57:45.204345 kernel: SMP: Total of 2 processors activated. 
Jul 2 08:57:45.204363 kernel: CPU features: detected: 32-bit EL0 Support Jul 2 08:57:45.204380 kernel: CPU features: detected: 32-bit EL1 Support Jul 2 08:57:45.204405 kernel: CPU features: detected: CRC32 instructions Jul 2 08:57:45.204423 kernel: CPU: All CPU(s) started at EL1 Jul 2 08:57:45.206345 kernel: alternatives: applying system-wide alternatives Jul 2 08:57:45.206393 kernel: devtmpfs: initialized Jul 2 08:57:45.206412 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 08:57:45.206432 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 08:57:45.206469 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 08:57:45.206519 kernel: SMBIOS 3.0.0 present. Jul 2 08:57:45.206539 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 2 08:57:45.206565 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 08:57:45.206584 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 2 08:57:45.206602 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 2 08:57:45.206621 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 2 08:57:45.206640 kernel: audit: initializing netlink subsys (disabled) Jul 2 08:57:45.206658 kernel: audit: type=2000 audit(0.293:1): state=initialized audit_enabled=0 res=1 Jul 2 08:57:45.206676 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 08:57:45.206700 kernel: cpuidle: using governor menu Jul 2 08:57:45.206719 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 2 08:57:45.206737 kernel: ASID allocator initialised with 65536 entries Jul 2 08:57:45.206755 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 08:57:45.206774 kernel: Serial: AMBA PL011 UART driver Jul 2 08:57:45.206792 kernel: Modules: 17600 pages in range for non-PLT usage Jul 2 08:57:45.206811 kernel: Modules: 509120 pages in range for PLT usage Jul 2 08:57:45.206829 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 08:57:45.206848 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 08:57:45.206871 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 2 08:57:45.206890 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 2 08:57:45.206908 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 08:57:45.206926 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 08:57:45.206946 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 2 08:57:45.206965 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 2 08:57:45.206984 kernel: ACPI: Added _OSI(Module Device) Jul 2 08:57:45.207003 kernel: ACPI: Added _OSI(Processor Device) Jul 2 08:57:45.207022 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 08:57:45.207045 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 08:57:45.207064 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 08:57:45.207082 kernel: ACPI: Interpreter enabled Jul 2 08:57:45.207100 kernel: ACPI: Using GIC for interrupt routing Jul 2 08:57:45.208263 kernel: ACPI: MCFG table detected, 1 entries Jul 2 08:57:45.208321 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 2 08:57:45.208752 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 08:57:45.208976 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] 
Jul 2 08:57:45.209187 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 2 08:57:45.209388 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 2 08:57:45.209619 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 2 08:57:45.209646 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 2 08:57:45.209665 kernel: acpiphp: Slot [1] registered Jul 2 08:57:45.209683 kernel: acpiphp: Slot [2] registered Jul 2 08:57:45.209702 kernel: acpiphp: Slot [3] registered Jul 2 08:57:45.209720 kernel: acpiphp: Slot [4] registered Jul 2 08:57:45.209738 kernel: acpiphp: Slot [5] registered Jul 2 08:57:45.209763 kernel: acpiphp: Slot [6] registered Jul 2 08:57:45.209781 kernel: acpiphp: Slot [7] registered Jul 2 08:57:45.209799 kernel: acpiphp: Slot [8] registered Jul 2 08:57:45.209817 kernel: acpiphp: Slot [9] registered Jul 2 08:57:45.209835 kernel: acpiphp: Slot [10] registered Jul 2 08:57:45.209854 kernel: acpiphp: Slot [11] registered Jul 2 08:57:45.209872 kernel: acpiphp: Slot [12] registered Jul 2 08:57:45.209890 kernel: acpiphp: Slot [13] registered Jul 2 08:57:45.209909 kernel: acpiphp: Slot [14] registered Jul 2 08:57:45.209932 kernel: acpiphp: Slot [15] registered Jul 2 08:57:45.209950 kernel: acpiphp: Slot [16] registered Jul 2 08:57:45.209968 kernel: acpiphp: Slot [17] registered Jul 2 08:57:45.209986 kernel: acpiphp: Slot [18] registered Jul 2 08:57:45.210004 kernel: acpiphp: Slot [19] registered Jul 2 08:57:45.210022 kernel: acpiphp: Slot [20] registered Jul 2 08:57:45.210040 kernel: acpiphp: Slot [21] registered Jul 2 08:57:45.210058 kernel: acpiphp: Slot [22] registered Jul 2 08:57:45.210076 kernel: acpiphp: Slot [23] registered Jul 2 08:57:45.210095 kernel: acpiphp: Slot [24] registered Jul 2 08:57:45.210118 kernel: acpiphp: Slot [25] registered Jul 2 08:57:45.210136 kernel: acpiphp: Slot [26] registered Jul 2 08:57:45.210154 kernel: acpiphp: Slot [27] registered Jul 2 08:57:45.210173 kernel: acpiphp: Slot [28] registered Jul 2 08:57:45.210191 kernel: acpiphp: Slot [29] registered Jul 2 08:57:45.210209 kernel: acpiphp: Slot [30] registered Jul 2 08:57:45.210227 kernel: acpiphp: Slot [31] registered Jul 2 08:57:45.210244 kernel: PCI host bridge to bus 0000:00 Jul 2 08:57:45.210448 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 2 08:57:45.210817 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 2 08:57:45.211884 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 2 08:57:45.212102 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 2 08:57:45.212343 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jul 2 08:57:45.212600 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jul 2 08:57:45.212815 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jul 2 08:57:45.213045 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 2 08:57:45.214206 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jul 2 08:57:45.214432 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 08:57:45.214702 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 2 08:57:45.214910 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jul 2 08:57:45.215112 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jul 2 08:57:45.215319 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] 
Jul 2 08:57:45.216853 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 08:57:45.217074 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jul 2 08:57:45.217277 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jul 2 08:57:45.217565 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jul 2 08:57:45.217773 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jul 2 08:57:45.217979 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jul 2 08:57:45.218165 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 2 08:57:45.218359 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 2 08:57:45.219554 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 2 08:57:45.219596 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 2 08:57:45.219616 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 2 08:57:45.219635 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 2 08:57:45.219653 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 2 08:57:45.219672 kernel: iommu: Default domain type: Translated Jul 2 08:57:45.219691 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 2 08:57:45.219720 kernel: efivars: Registered efivars operations Jul 2 08:57:45.219756 kernel: vgaarb: loaded Jul 2 08:57:45.219777 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 2 08:57:45.219795 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 08:57:45.219814 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 08:57:45.219832 kernel: pnp: PnP ACPI init Jul 2 08:57:45.220074 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 2 08:57:45.220105 kernel: pnp: PnP ACPI: found 1 devices Jul 2 08:57:45.220130 kernel: NET: Registered PF_INET protocol family Jul 2 08:57:45.220149 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 08:57:45.220168 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 08:57:45.220187 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 08:57:45.220206 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 08:57:45.220225 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 08:57:45.220243 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 08:57:45.220261 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 08:57:45.220280 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 08:57:45.220303 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 08:57:45.220322 kernel: PCI: CLS 0 bytes, default 64 Jul 2 08:57:45.220340 kernel: kvm [1]: HYP mode not available Jul 2 08:57:45.220359 kernel: Initialise system trusted keyrings Jul 2 08:57:45.220378 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 08:57:45.220396 kernel: Key type asymmetric registered Jul 2 08:57:45.220414 kernel: Asymmetric key parser 'x509' registered Jul 2 08:57:45.220432 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 2 08:57:45.220491 kernel: io scheduler mq-deadline registered Jul 2 08:57:45.220523 kernel: io scheduler kyber registered Jul 2 08:57:45.220542 kernel: io scheduler bfq registered Jul 2 08:57:45.220787 kernel: pl061_gpio 
ARMH0061:00: PL061 GPIO chip registered Jul 2 08:57:45.220817 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 2 08:57:45.220836 kernel: ACPI: button: Power Button [PWRB] Jul 2 08:57:45.220855 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 2 08:57:45.220874 kernel: ACPI: button: Sleep Button [SLPB] Jul 2 08:57:45.220892 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 08:57:45.220917 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 2 08:57:45.221135 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 2 08:57:45.221162 kernel: printk: console [ttyS0] disabled Jul 2 08:57:45.221181 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 2 08:57:45.221200 kernel: printk: console [ttyS0] enabled Jul 2 08:57:45.221218 kernel: printk: bootconsole [uart0] disabled Jul 2 08:57:45.221236 kernel: thunder_xcv, ver 1.0 Jul 2 08:57:45.221254 kernel: thunder_bgx, ver 1.0 Jul 2 08:57:45.221272 kernel: nicpf, ver 1.0 Jul 2 08:57:45.221290 kernel: nicvf, ver 1.0 Jul 2 08:57:45.221582 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 2 08:57:45.222751 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T08:57:44 UTC (1719910664) Jul 2 08:57:45.222797 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 08:57:45.222817 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jul 2 08:57:45.222836 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 2 08:57:45.222855 kernel: watchdog: Hard watchdog permanently disabled Jul 2 08:57:45.222874 kernel: NET: Registered PF_INET6 protocol family Jul 2 08:57:45.222892 kernel: Segment Routing with IPv6 Jul 2 08:57:45.222920 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 08:57:45.222939 kernel: NET: Registered PF_PACKET protocol family Jul 2 08:57:45.222957 kernel: Key type dns_resolver registered Jul 2 08:57:45.222976 kernel: registered taskstats version 1 Jul 2 08:57:45.222994 kernel: Loading compiled-in X.509 certificates Jul 2 08:57:45.223013 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf' Jul 2 08:57:45.223031 kernel: Key type .fscrypt registered Jul 2 08:57:45.223050 kernel: Key type fscrypt-provisioning registered Jul 2 08:57:45.223068 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 08:57:45.223091 kernel: ima: Allocated hash algorithm: sha1 Jul 2 08:57:45.223110 kernel: ima: No architecture policies found Jul 2 08:57:45.223128 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 2 08:57:45.223146 kernel: clk: Disabling unused clocks Jul 2 08:57:45.223164 kernel: Freeing unused kernel memory: 39040K Jul 2 08:57:45.223182 kernel: Run /init as init process Jul 2 08:57:45.223200 kernel: with arguments: Jul 2 08:57:45.223218 kernel: /init Jul 2 08:57:45.223236 kernel: with environment: Jul 2 08:57:45.223258 kernel: HOME=/ Jul 2 08:57:45.223277 kernel: TERM=linux Jul 2 08:57:45.223295 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 08:57:45.223317 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 08:57:45.223341 systemd[1]: Detected virtualization amazon. 
Jul 2 08:57:45.223361 systemd[1]: Detected architecture arm64. Jul 2 08:57:45.223380 systemd[1]: Running in initrd. Jul 2 08:57:45.223399 systemd[1]: No hostname configured, using default hostname. Jul 2 08:57:45.223424 systemd[1]: Hostname set to . Jul 2 08:57:45.223444 systemd[1]: Initializing machine ID from VM UUID. Jul 2 08:57:45.223513 systemd[1]: Queued start job for default target initrd.target. Jul 2 08:57:45.223540 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:57:45.223561 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:57:45.223582 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 2 08:57:45.223603 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 08:57:45.223630 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 08:57:45.223652 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 2 08:57:45.223675 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 08:57:45.223696 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 08:57:45.223716 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:57:45.223753 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:57:45.223778 systemd[1]: Reached target paths.target - Path Units. Jul 2 08:57:45.223805 systemd[1]: Reached target slices.target - Slice Units. Jul 2 08:57:45.223825 systemd[1]: Reached target swap.target - Swaps. Jul 2 08:57:45.223845 systemd[1]: Reached target timers.target - Timer Units. Jul 2 08:57:45.223865 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 08:57:45.223885 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 08:57:45.223905 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 08:57:45.223925 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 08:57:45.223945 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:57:45.223966 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 08:57:45.223991 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:57:45.224011 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 08:57:45.224031 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 2 08:57:45.224051 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 08:57:45.224072 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 08:57:45.224092 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 08:57:45.224112 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 08:57:45.224132 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 08:57:45.224157 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:57:45.224177 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 08:57:45.224197 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 2 08:57:45.224217 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 08:57:45.224239 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 08:57:45.224265 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:57:45.224328 systemd-journald[251]: Collecting audit messages is disabled. Jul 2 08:57:45.224373 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:57:45.224394 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 08:57:45.224420 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 08:57:45.224439 kernel: Bridge firewalling registered Jul 2 08:57:45.224515 systemd-journald[251]: Journal started Jul 2 08:57:45.224634 systemd-journald[251]: Runtime Journal (/run/log/journal/ec235d0ee9fff899f4ea77c5fe7dbf15) is 8.0M, max 75.3M, 67.3M free. Jul 2 08:57:45.226645 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 08:57:45.179545 systemd-modules-load[252]: Inserted module 'overlay' Jul 2 08:57:45.220509 systemd-modules-load[252]: Inserted module 'br_netfilter' Jul 2 08:57:45.232479 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 08:57:45.233208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 08:57:45.247067 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:57:45.256744 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 08:57:45.282374 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:57:45.298241 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:57:45.303130 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:57:45.306849 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:57:45.320117 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 08:57:45.330765 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 08:57:45.347976 dracut-cmdline[288]: dracut-dracut-053 Jul 2 08:57:45.354296 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930 Jul 2 08:57:45.420823 systemd-resolved[290]: Positive Trust Anchors: Jul 2 08:57:45.420858 systemd-resolved[290]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:57:45.420920 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 08:57:45.505493 kernel: SCSI subsystem initialized Jul 2 08:57:45.514488 kernel: Loading iSCSI transport class v2.0-870. Jul 2 08:57:45.526488 kernel: iscsi: registered transport (tcp) Jul 2 08:57:45.549494 kernel: iscsi: registered transport (qla4xxx) Jul 2 08:57:45.549565 kernel: QLogic iSCSI HBA Driver Jul 2 08:57:45.642483 kernel: random: crng init done Jul 2 08:57:45.642737 systemd-resolved[290]: Defaulting to hostname 'linux'. Jul 2 08:57:45.646125 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 08:57:45.648680 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:57:45.670309 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 08:57:45.680738 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 08:57:45.726739 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 08:57:45.726845 kernel: device-mapper: uevent: version 1.0.3 Jul 2 08:57:45.728352 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 08:57:45.793496 kernel: raid6: neonx8 gen() 6747 MB/s Jul 2 08:57:45.810486 kernel: raid6: neonx4 gen() 6561 MB/s Jul 2 08:57:45.827485 kernel: raid6: neonx2 gen() 5449 MB/s Jul 2 08:57:45.844484 kernel: raid6: neonx1 gen() 3960 MB/s Jul 2 08:57:45.861483 kernel: raid6: int64x8 gen() 3833 MB/s Jul 2 08:57:45.878483 kernel: raid6: int64x4 gen() 3726 MB/s Jul 2 08:57:45.895483 kernel: raid6: int64x2 gen() 3621 MB/s Jul 2 08:57:45.913129 kernel: raid6: int64x1 gen() 2775 MB/s Jul 2 08:57:45.913163 kernel: raid6: using algorithm neonx8 gen() 6747 MB/s Jul 2 08:57:45.931107 kernel: raid6: .... xor() 4888 MB/s, rmw enabled Jul 2 08:57:45.931149 kernel: raid6: using neon recovery algorithm Jul 2 08:57:45.938489 kernel: xor: measuring software checksum speed Jul 2 08:57:45.940488 kernel: 8regs : 11029 MB/sec Jul 2 08:57:45.942482 kernel: 32regs : 11921 MB/sec Jul 2 08:57:45.944496 kernel: arm64_neon : 9601 MB/sec Jul 2 08:57:45.944531 kernel: xor: using function: 32regs (11921 MB/sec) Jul 2 08:57:46.029514 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 08:57:46.048315 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 08:57:46.058796 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:57:46.100310 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jul 2 08:57:46.109056 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:57:46.122311 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 08:57:46.157134 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation Jul 2 08:57:46.212441 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 2 08:57:46.224786 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 08:57:46.352137 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:57:46.362769 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 08:57:46.419994 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 08:57:46.435159 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 08:57:46.437303 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:57:46.439386 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:57:46.462784 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 08:57:46.509955 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 08:57:46.558418 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 2 08:57:46.558536 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 2 08:57:46.587372 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 2 08:57:46.587666 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 2 08:57:46.587942 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:87:7c:cd:dc:e1 Jul 2 08:57:46.563857 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:57:46.563973 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:57:46.567071 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:57:46.569108 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:57:46.569213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:57:46.571339 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:57:46.579779 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:57:46.610604 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 2 08:57:46.610645 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 2 08:57:46.599066 (udev-worker)[529]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:57:46.619023 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 2 08:57:46.625301 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 08:57:46.625375 kernel: GPT:9289727 != 16777215 Jul 2 08:57:46.625401 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 08:57:46.629420 kernel: GPT:9289727 != 16777215 Jul 2 08:57:46.629515 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 08:57:46.630820 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:57:46.632681 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:57:46.644841 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:57:46.690904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:57:46.752516 kernel: BTRFS: device fsid ad4b0605-c88d-4cc1-aa96-32e9393058b1 devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (522) Jul 2 08:57:46.761277 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
Jul 2 08:57:46.776198 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (523) Jul 2 08:57:46.861337 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 2 08:57:46.887946 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 2 08:57:46.892689 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 2 08:57:46.910249 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 08:57:46.920743 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 08:57:46.935046 disk-uuid[663]: Primary Header is updated. Jul 2 08:57:46.935046 disk-uuid[663]: Secondary Entries is updated. Jul 2 08:57:46.935046 disk-uuid[663]: Secondary Header is updated. Jul 2 08:57:46.943537 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:57:46.949969 kernel: GPT:disk_guids don't match. Jul 2 08:57:46.950032 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 08:57:46.950780 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:57:46.959477 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:57:47.960548 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:57:47.961904 disk-uuid[664]: The operation has completed successfully. Jul 2 08:57:48.124988 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 08:57:48.127548 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 08:57:48.185767 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 08:57:48.206223 sh[1007]: Success Jul 2 08:57:48.235569 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 08:57:48.334501 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 08:57:48.344652 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 08:57:48.348390 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 08:57:48.381656 kernel: BTRFS info (device dm-0): first mount of filesystem ad4b0605-c88d-4cc1-aa96-32e9393058b1 Jul 2 08:57:48.381718 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:57:48.381745 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 08:57:48.384333 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 08:57:48.384380 kernel: BTRFS info (device dm-0): using free space tree Jul 2 08:57:48.509489 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 2 08:57:48.531490 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 08:57:48.532586 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 08:57:48.545851 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 08:57:48.550760 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 2 08:57:48.576584 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:57:48.576666 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:57:48.577952 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:57:48.585107 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:57:48.603187 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 08:57:48.605574 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:57:48.622521 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 08:57:48.634850 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 08:57:48.732034 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 08:57:48.742794 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 08:57:48.801293 systemd-networkd[1199]: lo: Link UP Jul 2 08:57:48.801316 systemd-networkd[1199]: lo: Gained carrier Jul 2 08:57:48.804897 systemd-networkd[1199]: Enumeration completed Jul 2 08:57:48.805678 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:57:48.805685 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:57:48.807110 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 08:57:48.817924 systemd[1]: Reached target network.target - Network. Jul 2 08:57:48.819617 systemd-networkd[1199]: eth0: Link UP Jul 2 08:57:48.819625 systemd-networkd[1199]: eth0: Gained carrier Jul 2 08:57:48.819643 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:57:48.850911 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.30.172/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 08:57:48.953883 ignition[1119]: Ignition 2.18.0 Jul 2 08:57:48.954400 ignition[1119]: Stage: fetch-offline Jul 2 08:57:48.954967 ignition[1119]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:57:48.954991 ignition[1119]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:57:48.956141 ignition[1119]: Ignition finished successfully Jul 2 08:57:48.964606 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 08:57:48.973768 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 2 08:57:49.005949 ignition[1209]: Ignition 2.18.0 Jul 2 08:57:49.006433 ignition[1209]: Stage: fetch Jul 2 08:57:49.007061 ignition[1209]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:57:49.007086 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:57:49.007239 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:57:49.018172 ignition[1209]: PUT result: OK Jul 2 08:57:49.021026 ignition[1209]: parsed url from cmdline: "" Jul 2 08:57:49.021046 ignition[1209]: no config URL provided Jul 2 08:57:49.021063 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:57:49.021089 ignition[1209]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:57:49.021120 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:57:49.023974 ignition[1209]: PUT result: OK Jul 2 08:57:49.024798 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 2 08:57:49.031143 ignition[1209]: GET result: OK Jul 2 08:57:49.032019 ignition[1209]: parsing config with SHA512: 257ca9fbc2970e2c04f7682583ff2c582a86ad65869997a326494377dc79e00a11016f1d7c1556a85d0db5a04057cba876692a002449957dd5e7905dc860cb6d Jul 2 08:57:49.039931 unknown[1209]: fetched base config from "system" Jul 2 08:57:49.039959 unknown[1209]: fetched base config from "system" Jul 2 08:57:49.042440 ignition[1209]: fetch: fetch complete Jul 2 08:57:49.039973 unknown[1209]: fetched user config from "aws" Jul 2 08:57:49.042490 ignition[1209]: fetch: fetch passed Jul 2 08:57:49.047898 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 08:57:49.042597 ignition[1209]: Ignition finished successfully Jul 2 08:57:49.068862 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 08:57:49.090162 ignition[1216]: Ignition 2.18.0 Jul 2 08:57:49.090684 ignition[1216]: Stage: kargs Jul 2 08:57:49.091287 ignition[1216]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:57:49.091311 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:57:49.091494 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:57:49.095468 ignition[1216]: PUT result: OK Jul 2 08:57:49.103009 ignition[1216]: kargs: kargs passed Jul 2 08:57:49.103283 ignition[1216]: Ignition finished successfully Jul 2 08:57:49.107421 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 08:57:49.125902 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 08:57:49.149271 ignition[1223]: Ignition 2.18.0 Jul 2 08:57:49.149293 ignition[1223]: Stage: disks Jul 2 08:57:49.150028 ignition[1223]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:57:49.150237 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:57:49.150375 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:57:49.153903 ignition[1223]: PUT result: OK Jul 2 08:57:49.161168 ignition[1223]: disks: disks passed Jul 2 08:57:49.162611 ignition[1223]: Ignition finished successfully Jul 2 08:57:49.166311 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 08:57:49.167117 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 08:57:49.167368 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 08:57:49.167939 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 08:57:49.168222 systemd[1]: Reached target sysinit.target - System Initialization. 
Jul 2 08:57:49.168780 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:57:49.183776 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 08:57:49.234670 systemd-fsck[1232]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 08:57:49.242363 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 08:57:49.253620 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 08:57:49.348491 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c1692a6b-74d8-4bda-be0c-9d706985f1ed r/w with ordered data mode. Quota mode: none. Jul 2 08:57:49.350345 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 08:57:49.353720 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 08:57:49.377626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 08:57:49.382626 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 08:57:49.386234 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 08:57:49.386491 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 08:57:49.386544 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 08:57:49.403977 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 08:57:49.415837 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 08:57:49.423492 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1251) Jul 2 08:57:49.428492 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:57:49.428544 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:57:49.428571 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:57:49.433489 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:57:49.436134 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 08:57:49.713030 initrd-setup-root[1276]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 08:57:49.720855 initrd-setup-root[1283]: cut: /sysroot/etc/group: No such file or directory Jul 2 08:57:49.729520 initrd-setup-root[1290]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 08:57:49.737740 initrd-setup-root[1297]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 08:57:49.972629 systemd-networkd[1199]: eth0: Gained IPv6LL Jul 2 08:57:50.004029 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 08:57:50.013678 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 08:57:50.026023 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 08:57:50.047377 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 08:57:50.049348 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:57:50.074734 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 2 08:57:50.090593 ignition[1366]: INFO : Ignition 2.18.0 Jul 2 08:57:50.090593 ignition[1366]: INFO : Stage: mount Jul 2 08:57:50.093925 ignition[1366]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:57:50.093925 ignition[1366]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:57:50.097850 ignition[1366]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:57:50.100875 ignition[1366]: INFO : PUT result: OK Jul 2 08:57:50.104815 ignition[1366]: INFO : mount: mount passed Jul 2 08:57:50.107875 ignition[1366]: INFO : Ignition finished successfully Jul 2 08:57:50.110573 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 08:57:50.125113 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 08:57:50.144814 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 08:57:50.174494 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1377) Jul 2 08:57:50.178083 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:57:50.178128 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:57:50.178154 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:57:50.183488 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:57:50.187027 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 08:57:50.225599 ignition[1394]: INFO : Ignition 2.18.0 Jul 2 08:57:50.225599 ignition[1394]: INFO : Stage: files Jul 2 08:57:50.228695 ignition[1394]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:57:50.228695 ignition[1394]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:57:50.228695 ignition[1394]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:57:50.235435 ignition[1394]: INFO : PUT result: OK Jul 2 08:57:50.239739 ignition[1394]: DEBUG : files: compiled without relabeling support, skipping Jul 2 08:57:50.242859 ignition[1394]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 08:57:50.242859 ignition[1394]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 08:57:50.260910 ignition[1394]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 08:57:50.263665 ignition[1394]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 08:57:50.266372 unknown[1394]: wrote ssh authorized keys file for user: core Jul 2 08:57:50.270674 ignition[1394]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 08:57:50.274470 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 08:57:50.274470 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 08:57:50.328583 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 08:57:50.430346 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 08:57:50.430346 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 08:57:50.430346 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 2 08:57:50.881727 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 08:57:51.036360 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 08:57:51.039644 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 08:57:51.039644 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 08:57:51.039644 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:57:51.039644 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:57:51.039644 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:57:51.039644 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:57:51.039644 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:57:51.039644 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:57:51.063510 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:57:51.063510 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:57:51.063510 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 08:57:51.063510 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 08:57:51.063510 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 08:57:51.063510 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jul 2 08:57:51.427564 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 2 08:57:51.733875 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 08:57:51.733875 ignition[1394]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 2 08:57:51.739796 ignition[1394]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:57:51.739796 ignition[1394]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:57:51.739796 ignition[1394]: INFO : files: op(c): [finished] processing unit 
"prepare-helm.service" Jul 2 08:57:51.739796 ignition[1394]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 2 08:57:51.739796 ignition[1394]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 08:57:51.739796 ignition[1394]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:57:51.739796 ignition[1394]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:57:51.739796 ignition[1394]: INFO : files: files passed Jul 2 08:57:51.739796 ignition[1394]: INFO : Ignition finished successfully Jul 2 08:57:51.764542 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 08:57:51.774826 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 08:57:51.787843 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 08:57:51.792790 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 08:57:51.793950 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 08:57:51.825891 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:57:51.825891 initrd-setup-root-after-ignition[1423]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:57:51.831663 initrd-setup-root-after-ignition[1427]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:57:51.836536 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 08:57:51.839680 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 08:57:51.851755 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 08:57:51.901276 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 08:57:51.902306 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 08:57:51.909514 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 08:57:51.911572 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 08:57:51.917071 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 08:57:51.924740 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 08:57:51.955955 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 08:57:51.974828 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 08:57:51.999257 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:57:52.001958 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:57:52.008309 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 08:57:52.010210 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 08:57:52.010517 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 08:57:52.018240 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 08:57:52.020991 systemd[1]: Stopped target basic.target - Basic System. Jul 2 08:57:52.023984 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jul 2 08:57:52.026958 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 08:57:52.029465 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 08:57:52.036890 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 08:57:52.038878 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 08:57:52.041569 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 08:57:52.048561 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 08:57:52.050918 systemd[1]: Stopped target swap.target - Swaps. Jul 2 08:57:52.055400 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 08:57:52.056190 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 08:57:52.061639 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:57:52.063923 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:57:52.069703 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 08:57:52.075818 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:57:52.080398 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 08:57:52.080648 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 08:57:52.082896 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 08:57:52.083114 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 08:57:52.085600 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 08:57:52.085792 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 08:57:52.102968 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 08:57:52.127315 ignition[1447]: INFO : Ignition 2.18.0 Jul 2 08:57:52.127315 ignition[1447]: INFO : Stage: umount Jul 2 08:57:52.127315 ignition[1447]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:57:52.127315 ignition[1447]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:57:52.127315 ignition[1447]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:57:52.136344 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 08:57:52.139647 ignition[1447]: INFO : PUT result: OK Jul 2 08:57:52.145004 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 08:57:52.145301 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:57:52.152647 ignition[1447]: INFO : umount: umount passed Jul 2 08:57:52.152647 ignition[1447]: INFO : Ignition finished successfully Jul 2 08:57:52.150273 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 08:57:52.151353 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 08:57:52.171310 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 08:57:52.171525 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 08:57:52.180531 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 08:57:52.182552 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 08:57:52.187433 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 08:57:52.188042 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jul 2 08:57:52.200663 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 08:57:52.200787 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 08:57:52.202652 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 08:57:52.202745 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 08:57:52.204556 systemd[1]: Stopped target network.target - Network. Jul 2 08:57:52.206065 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 08:57:52.206150 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 08:57:52.208751 systemd[1]: Stopped target paths.target - Path Units. Jul 2 08:57:52.217910 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 08:57:52.224427 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:57:52.233305 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 08:57:52.234903 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 08:57:52.236624 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 08:57:52.236710 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 08:57:52.238483 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 08:57:52.238553 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 08:57:52.240828 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 08:57:52.241068 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 08:57:52.244193 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 08:57:52.244282 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 08:57:52.247043 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 08:57:52.255518 systemd-networkd[1199]: eth0: DHCPv6 lease lost Jul 2 08:57:52.272437 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 08:57:52.276039 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 08:57:52.277212 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 08:57:52.278795 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 08:57:52.285605 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 08:57:52.287491 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 08:57:52.292643 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 08:57:52.294161 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:57:52.299135 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 08:57:52.299234 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 08:57:52.313705 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 08:57:52.316179 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 08:57:52.316294 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 08:57:52.321868 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:57:52.335359 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 08:57:52.335600 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 08:57:52.348031 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 2 08:57:52.348994 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:57:52.371291 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 08:57:52.371397 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 08:57:52.376953 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 08:57:52.377028 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:57:52.379036 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 08:57:52.379123 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 08:57:52.390393 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 08:57:52.390662 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 08:57:52.396295 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:57:52.396388 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:57:52.410835 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 08:57:52.413607 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:57:52.413713 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:57:52.415904 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 08:57:52.415987 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 08:57:52.418170 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 08:57:52.418246 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:57:52.420940 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 2 08:57:52.421016 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 08:57:52.423503 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 08:57:52.423579 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:57:52.426175 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 08:57:52.426249 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:57:52.428657 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:57:52.428734 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:57:52.431601 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 08:57:52.431941 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 08:57:52.481603 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 08:57:52.482006 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 08:57:52.488960 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 08:57:52.501789 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 08:57:52.520822 systemd[1]: Switching root. Jul 2 08:57:52.575635 systemd-journald[251]: Journal stopped Jul 2 08:57:55.604443 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Jul 2 08:57:55.607072 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 08:57:55.607125 kernel: SELinux: policy capability open_perms=1 Jul 2 08:57:55.607157 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 08:57:55.607187 kernel: SELinux: policy capability always_check_network=0 Jul 2 08:57:55.607219 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 08:57:55.607256 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 08:57:55.607285 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 08:57:55.607316 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 08:57:55.607346 kernel: audit: type=1403 audit(1719910673.826:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 08:57:55.607385 systemd[1]: Successfully loaded SELinux policy in 63.436ms. Jul 2 08:57:55.607438 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.764ms. Jul 2 08:57:55.607540 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 08:57:55.607576 systemd[1]: Detected virtualization amazon. Jul 2 08:57:55.607608 systemd[1]: Detected architecture arm64. Jul 2 08:57:55.607644 systemd[1]: Detected first boot. Jul 2 08:57:55.607676 systemd[1]: Initializing machine ID from VM UUID. Jul 2 08:57:55.607727 zram_generator::config[1490]: No configuration found. Jul 2 08:57:55.607767 systemd[1]: Populated /etc with preset unit settings. Jul 2 08:57:55.607797 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 08:57:55.607827 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 08:57:55.607860 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 08:57:55.607893 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 08:57:55.607928 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 08:57:55.607960 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 08:57:55.607992 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 08:57:55.608023 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 08:57:55.608053 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 08:57:55.608084 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 08:57:55.608114 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 08:57:55.608145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:57:55.608177 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:57:55.608213 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 08:57:55.608242 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 08:57:55.608272 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 08:57:55.608306 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jul 2 08:57:55.608335 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 2 08:57:55.608364 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:57:55.608396 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 08:57:55.608428 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 08:57:55.609625 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 08:57:55.609665 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 08:57:55.609700 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:57:55.609732 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:57:55.609762 systemd[1]: Reached target slices.target - Slice Units. Jul 2 08:57:55.609794 systemd[1]: Reached target swap.target - Swaps. Jul 2 08:57:55.609823 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 08:57:55.609853 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 08:57:55.609890 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:57:55.609920 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 08:57:55.609949 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:57:55.609981 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 08:57:55.610011 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 08:57:55.610041 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 08:57:55.610071 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 08:57:55.610103 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 08:57:55.610132 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 08:57:55.610167 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 08:57:55.610200 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 08:57:55.610230 systemd[1]: Reached target machines.target - Containers. Jul 2 08:57:55.610264 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 08:57:55.610293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:57:55.610327 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 08:57:55.610356 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 08:57:55.610385 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:57:55.610415 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 08:57:55.610463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:57:55.610512 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 08:57:55.610545 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:57:55.610575 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jul 2 08:57:55.610606 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 08:57:55.610636 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 08:57:55.610665 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 08:57:55.610694 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 08:57:55.610731 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 08:57:55.610763 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 08:57:55.610795 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 08:57:55.610826 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 08:57:55.610856 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 08:57:55.610887 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 08:57:55.610916 systemd[1]: Stopped verity-setup.service. Jul 2 08:57:55.610944 kernel: fuse: init (API version 7.39) Jul 2 08:57:55.610974 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 08:57:55.611008 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 08:57:55.611038 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 08:57:55.611067 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 08:57:55.611096 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 08:57:55.611124 kernel: loop: module loaded Jul 2 08:57:55.611156 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 08:57:55.611186 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:57:55.611215 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 08:57:55.611244 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 08:57:55.611272 kernel: ACPI: bus type drm_connector registered Jul 2 08:57:55.611300 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 08:57:55.611330 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:57:55.611359 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:57:55.611388 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:57:55.611422 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 08:57:55.613509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:57:55.613570 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:57:55.613601 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 08:57:55.613640 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 08:57:55.613677 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:57:55.613707 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:57:55.613791 systemd-journald[1574]: Collecting audit messages is disabled. Jul 2 08:57:55.613852 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 08:57:55.613883 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 08:57:55.613913 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 08:57:55.613943 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jul 2 08:57:55.613971 systemd-journald[1574]: Journal started Jul 2 08:57:55.614025 systemd-journald[1574]: Runtime Journal (/run/log/journal/ec235d0ee9fff899f4ea77c5fe7dbf15) is 8.0M, max 75.3M, 67.3M free. Jul 2 08:57:54.948810 systemd[1]: Queued start job for default target multi-user.target. Jul 2 08:57:54.991167 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 2 08:57:54.991984 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 08:57:55.629488 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 08:57:55.642495 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 08:57:55.650505 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 08:57:55.650597 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 08:57:55.657281 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 08:57:55.677870 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 08:57:55.685843 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 08:57:55.689548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:57:55.701242 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 08:57:55.701330 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:57:55.714364 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 08:57:55.718509 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 08:57:55.737691 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:57:55.752076 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 08:57:55.761951 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 08:57:55.766799 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 08:57:55.771159 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:57:55.773654 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 08:57:55.776012 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 08:57:55.778882 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 08:57:55.798830 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 08:57:55.863288 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 08:57:55.870706 kernel: loop0: detected capacity change from 0 to 194512 Jul 2 08:57:55.870801 kernel: block loop0: the capability attribute has been deprecated. Jul 2 08:57:55.872051 systemd-tmpfiles[1601]: ACLs are not supported, ignoring. Jul 2 08:57:55.872577 systemd-tmpfiles[1601]: ACLs are not supported, ignoring. Jul 2 08:57:55.883945 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 08:57:55.889307 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jul 2 08:57:55.916490 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 08:57:55.913725 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 08:57:55.917180 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:57:55.920286 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 08:57:55.940815 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 08:57:55.960163 systemd-journald[1574]: Time spent on flushing to /var/log/journal/ec235d0ee9fff899f4ea77c5fe7dbf15 is 43.752ms for 927 entries. Jul 2 08:57:55.960163 systemd-journald[1574]: System Journal (/var/log/journal/ec235d0ee9fff899f4ea77c5fe7dbf15) is 8.0M, max 195.6M, 187.6M free. Jul 2 08:57:56.016352 systemd-journald[1574]: Received client request to flush runtime journal. Jul 2 08:57:56.016424 kernel: loop1: detected capacity change from 0 to 59672 Jul 2 08:57:56.023330 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 08:57:56.030745 udevadm[1631]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 08:57:56.060005 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 08:57:56.064336 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 08:57:56.077215 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 08:57:56.092104 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 08:57:56.099495 kernel: loop2: detected capacity change from 0 to 113672 Jul 2 08:57:56.148126 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Jul 2 08:57:56.148167 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Jul 2 08:57:56.166501 kernel: loop3: detected capacity change from 0 to 51896 Jul 2 08:57:56.168153 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:57:56.271482 kernel: loop4: detected capacity change from 0 to 194512 Jul 2 08:57:56.301489 kernel: loop5: detected capacity change from 0 to 59672 Jul 2 08:57:56.319126 kernel: loop6: detected capacity change from 0 to 113672 Jul 2 08:57:56.331511 kernel: loop7: detected capacity change from 0 to 51896 Jul 2 08:57:56.342691 (sd-merge)[1646]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 2 08:57:56.345535 (sd-merge)[1646]: Merged extensions into '/usr'. Jul 2 08:57:56.352843 systemd[1]: Reloading requested from client PID 1600 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 08:57:56.353051 systemd[1]: Reloading... Jul 2 08:57:56.550674 zram_generator::config[1673]: No configuration found. Jul 2 08:57:56.883047 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:57:57.001565 systemd[1]: Reloading finished in 647 ms. Jul 2 08:57:57.040631 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 08:57:57.043615 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 08:57:57.056800 systemd[1]: Starting ensure-sysext.service... 
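The (sd-merge) entries above show systemd-sysext overlaying the extension images 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' onto /usr; the kubernetes image is the one Ignition symlinked at /etc/extensions/kubernetes.raw earlier in the log. A small Python sketch that lists such images and resolves their symlink targets (illustrative only, not systemd's own discovery logic):

from pathlib import Path

def list_sysext_images(root: str = "/etc/extensions") -> None:
    # Print each *.raw extension image and the file it ultimately points at.
    for entry in sorted(Path(root).glob("*.raw")):
        print(f"{entry.name} -> {entry.resolve()}")

list_sysext_images()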
Jul 2 08:57:57.060836 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 08:57:57.074316 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:57:57.103779 systemd[1]: Reloading requested from client PID 1722 ('systemctl') (unit ensure-sysext.service)... Jul 2 08:57:57.103808 systemd[1]: Reloading... Jul 2 08:57:57.116830 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 08:57:57.117577 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 08:57:57.124611 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 08:57:57.125165 systemd-tmpfiles[1723]: ACLs are not supported, ignoring. Jul 2 08:57:57.125439 systemd-tmpfiles[1723]: ACLs are not supported, ignoring. Jul 2 08:57:57.138866 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 08:57:57.138893 systemd-tmpfiles[1723]: Skipping /boot Jul 2 08:57:57.173928 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 08:57:57.173959 systemd-tmpfiles[1723]: Skipping /boot Jul 2 08:57:57.220387 systemd-udevd[1724]: Using default interface naming scheme 'v255'. Jul 2 08:57:57.312504 ldconfig[1596]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 08:57:57.337566 zram_generator::config[1757]: No configuration found. Jul 2 08:57:57.434485 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1764) Jul 2 08:57:57.507261 (udev-worker)[1768]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:57:57.680923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:57:57.783497 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1760) Jul 2 08:57:57.842374 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 08:57:57.845353 systemd[1]: Reloading finished in 740 ms. Jul 2 08:57:57.888392 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:57:57.892515 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 08:57:57.895588 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:57:57.968538 systemd[1]: Finished ensure-sysext.service. Jul 2 08:57:58.009153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 08:57:58.012107 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 08:57:58.022806 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 08:57:58.036858 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 08:57:58.039245 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:57:58.043832 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Jul 2 08:57:58.052032 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:57:58.068829 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 08:57:58.073684 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:57:58.078302 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:57:58.081694 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:57:58.092082 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 08:57:58.098350 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 08:57:58.113121 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 08:57:58.122791 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 08:57:58.124838 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 08:57:58.130514 lvm[1922]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:57:58.150781 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 08:57:58.159797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:57:58.163383 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:57:58.165543 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:57:58.173230 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:57:58.175568 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 08:57:58.188829 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 08:57:58.237800 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:57:58.239546 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:57:58.242406 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 08:57:58.246899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:57:58.248798 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:57:58.252894 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:57:58.265033 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 08:57:58.289366 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 08:57:58.290154 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:57:58.297762 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 08:57:58.306533 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 08:57:58.315737 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 08:57:58.318304 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 08:57:58.324733 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jul 2 08:57:58.337802 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 08:57:58.346663 augenrules[1961]: No rules Jul 2 08:57:58.349371 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 08:57:58.355521 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 08:57:58.385816 lvm[1960]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:57:58.405821 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 08:57:58.440392 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 08:57:58.516777 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:57:58.544076 systemd-resolved[1936]: Positive Trust Anchors: Jul 2 08:57:58.544107 systemd-resolved[1936]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:57:58.544167 systemd-resolved[1936]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 08:57:58.549152 systemd-networkd[1935]: lo: Link UP Jul 2 08:57:58.549179 systemd-networkd[1935]: lo: Gained carrier Jul 2 08:57:58.551753 systemd-networkd[1935]: Enumeration completed Jul 2 08:57:58.552034 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 08:57:58.554543 systemd-networkd[1935]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:57:58.554564 systemd-networkd[1935]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:57:58.556685 systemd-networkd[1935]: eth0: Link UP Jul 2 08:57:58.557055 systemd-networkd[1935]: eth0: Gained carrier Jul 2 08:57:58.557103 systemd-networkd[1935]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:57:58.566923 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 08:57:58.571501 systemd-resolved[1936]: Defaulting to hostname 'linux'. Jul 2 08:57:58.571572 systemd-networkd[1935]: eth0: DHCPv4 address 172.31.30.172/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 08:57:58.578165 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 08:57:58.581002 systemd[1]: Reached target network.target - Network. Jul 2 08:57:58.583363 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:57:58.585571 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 08:57:58.587669 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 08:57:58.589905 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 08:57:58.592422 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 08:57:58.594616 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
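For the DHCPv4 lease logged above (172.31.30.172/20 with gateway 172.31.16.1, acquired from 172.31.16.1), the enclosing subnet can be checked with Python's ipaddress module; this is just a worked confirmation of the addressing, not anything systemd-networkd runs:

import ipaddress

iface = ipaddress.ip_interface("172.31.30.172/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)             # 172.31.16.0/20
print(gateway in iface.network)  # True: the gateway sits inside the lease's /20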
Jul 2 08:57:58.597023 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 08:57:58.599204 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 08:57:58.599255 systemd[1]: Reached target paths.target - Path Units. Jul 2 08:57:58.600892 systemd[1]: Reached target timers.target - Timer Units. Jul 2 08:57:58.603598 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 08:57:58.609478 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 08:57:58.617740 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 08:57:58.620966 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 08:57:58.623780 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 08:57:58.625615 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:57:58.627380 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:57:58.627431 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:57:58.638816 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 08:57:58.644327 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 08:57:58.648842 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 08:57:58.661657 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 08:57:58.668062 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 08:57:58.669971 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 08:57:58.672780 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 08:57:58.681997 systemd[1]: Started ntpd.service - Network Time Service. Jul 2 08:57:58.698966 jq[1986]: false Jul 2 08:57:58.702992 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 08:57:58.710701 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 2 08:57:58.715829 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 08:57:58.721821 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 08:57:58.735795 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 08:57:58.738427 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 08:57:58.739293 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 08:57:58.744777 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 08:57:58.776252 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 08:57:58.809208 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 08:57:58.809640 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jul 2 08:57:58.842545 jq[1998]: true Jul 2 08:57:58.856539 extend-filesystems[1987]: Found loop4 Jul 2 08:57:58.859822 ntpd[1989]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: ---------------------------------------------------- Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: ntp-4 is maintained by Network Time Foundation, Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: corporation. Support and training for ntp-4 are Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: available at https://www.nwtime.org/support Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: ---------------------------------------------------- Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: proto: precision = 0.108 usec (-23) Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: basedate set to 2024-06-19 Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: gps base set to 2024-06-23 (week 2320) Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: Listen normally on 3 eth0 172.31.30.172:123 Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: Listen normally on 4 lo [::1]:123 Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: bind(21) AF_INET6 fe80::487:7cff:fecd:dce1%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: unable to create socket on eth0 (5) for fe80::487:7cff:fecd:dce1%2#123 Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: failed to init interface for address fe80::487:7cff:fecd:dce1%2 Jul 2 08:57:58.871161 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: Listening on routing socket on fd #21 for interface updates Jul 2 08:57:58.887786 extend-filesystems[1987]: Found loop5 Jul 2 08:57:58.887786 extend-filesystems[1987]: Found loop6 Jul 2 08:57:58.887786 extend-filesystems[1987]: Found loop7 Jul 2 08:57:58.887786 extend-filesystems[1987]: Found nvme0n1 Jul 2 08:57:58.887786 extend-filesystems[1987]: Found nvme0n1p1 Jul 2 08:57:58.887786 extend-filesystems[1987]: Found nvme0n1p2 Jul 2 08:57:58.887786 extend-filesystems[1987]: Found nvme0n1p3 Jul 2 08:57:58.887786 extend-filesystems[1987]: Found usr Jul 2 08:57:58.887786 extend-filesystems[1987]: Found nvme0n1p4 Jul 2 08:57:58.887786 extend-filesystems[1987]: Found nvme0n1p6 Jul 2 08:57:58.887786 extend-filesystems[1987]: Found nvme0n1p7 Jul 2 08:57:58.887786 extend-filesystems[1987]: Found nvme0n1p9 Jul 2 08:57:58.887786 extend-filesystems[1987]: Checking size of /dev/nvme0n1p9 Jul 2 08:57:58.859870 ntpd[1989]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 08:57:58.994915 update_engine[1997]: I0702 08:57:58.944326 1997 main.cc:92] Flatcar Update Engine starting Jul 2 08:57:58.917134 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jul 2 08:57:59.003419 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:57:59.003419 ntpd[1989]: 2 Jul 08:57:58 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.905 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.917 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.919 INFO Fetch successful Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.919 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.942 INFO Fetch successful Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.942 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.945 INFO Fetch successful Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.945 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.950 INFO Fetch successful Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.950 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.952 INFO Fetch failed with 404: resource not found Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.952 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.954 INFO Fetch successful Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.954 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.958 INFO Fetch successful Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.958 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.958 INFO Fetch successful Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.958 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.960 INFO Fetch successful Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.960 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 2 08:57:59.003623 coreos-metadata[1984]: Jul 02 08:57:58.963 INFO Fetch successful Jul 2 08:57:58.859890 ntpd[1989]: ---------------------------------------------------- Jul 2 08:57:58.917600 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 08:57:59.005405 extend-filesystems[1987]: Resized partition /dev/nvme0n1p9 Jul 2 08:57:58.859909 ntpd[1989]: ntp-4 is maintained by Network Time Foundation, Jul 2 08:57:58.940073 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 08:57:58.859928 ntpd[1989]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 08:57:58.949839 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
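The coreos-metadata entries above trace the IMDSv2 exchange: a PUT to /latest/api/token for a session token, then GETs against the 2021-01-03 metadata tree with that token attached. A minimal Python sketch of the same exchange (the token TTL below is an assumption; the paths follow the log):

import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 21600) -> str:
    # PUT /latest/api/token with the TTL header returns a session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # Subsequent reads pass the token in the X-aws-ec2-metadata-token header.
    req = urllib.request.Request(
        f"{IMDS}/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

token = imds_token()
print(imds_get("2021-01-03/meta-data/instance-id", token))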
Jul 2 08:57:58.859946 ntpd[1989]: corporation. Support and training for ntp-4 are Jul 2 08:57:58.949885 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 08:57:58.859964 ntpd[1989]: available at https://www.nwtime.org/support Jul 2 08:57:58.964675 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 08:57:58.859984 ntpd[1989]: ---------------------------------------------------- Jul 2 08:57:58.964716 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 08:57:58.863108 ntpd[1989]: proto: precision = 0.108 usec (-23) Jul 2 08:57:58.972899 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 08:57:58.863852 ntpd[1989]: basedate set to 2024-06-19 Jul 2 08:57:58.973297 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 08:57:58.863883 ntpd[1989]: gps base set to 2024-06-23 (week 2320) Jul 2 08:57:58.868487 ntpd[1989]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 08:57:58.868570 ntpd[1989]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 08:57:58.870699 ntpd[1989]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 08:57:58.870773 ntpd[1989]: Listen normally on 3 eth0 172.31.30.172:123 Jul 2 08:57:58.870852 ntpd[1989]: Listen normally on 4 lo [::1]:123 Jul 2 08:57:58.870929 ntpd[1989]: bind(21) AF_INET6 fe80::487:7cff:fecd:dce1%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:57:58.870968 ntpd[1989]: unable to create socket on eth0 (5) for fe80::487:7cff:fecd:dce1%2#123 Jul 2 08:57:58.870995 ntpd[1989]: failed to init interface for address fe80::487:7cff:fecd:dce1%2 Jul 2 08:57:58.871048 ntpd[1989]: Listening on routing socket on fd #21 for interface updates Jul 2 08:57:58.889658 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:57:58.889707 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:57:58.939483 dbus-daemon[1985]: [system] SELinux support is enabled Jul 2 08:57:59.031287 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 2 08:57:59.031343 extend-filesystems[2033]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 08:57:59.024618 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 2 08:57:58.982264 dbus-daemon[1985]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1935 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 08:57:59.039395 update_engine[1997]: I0702 08:57:59.019011 1997 update_check_scheduler.cc:74] Next update check in 4m51s Jul 2 08:57:59.039467 tar[2003]: linux-arm64/helm Jul 2 08:57:59.029267 systemd[1]: Started update-engine.service - Update Engine. Jul 2 08:57:58.995997 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 08:57:59.051109 (ntainerd)[2019]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 08:57:59.059024 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 08:57:59.076152 jq[2012]: true Jul 2 08:57:59.129474 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 2 08:57:59.152125 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jul 2 08:57:59.155936 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 08:57:59.174145 extend-filesystems[2033]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 2 08:57:59.174145 extend-filesystems[2033]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 08:57:59.174145 extend-filesystems[2033]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 2 08:57:59.166829 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 2 08:57:59.188606 extend-filesystems[1987]: Resized filesystem in /dev/nvme0n1p9 Jul 2 08:57:59.180421 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 08:57:59.180928 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 08:57:59.322401 bash[2070]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:57:59.325514 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1768) Jul 2 08:57:59.336616 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 08:57:59.353341 systemd[1]: Starting sshkeys.service... Jul 2 08:57:59.375890 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 2 08:57:59.382892 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 2 08:57:59.419034 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 08:57:59.428585 systemd-logind[1994]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 08:57:59.428636 systemd-logind[1994]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 2 08:57:59.430815 systemd-logind[1994]: New seat seat0. Jul 2 08:57:59.432613 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 08:57:59.562276 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 08:57:59.562757 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 2 08:57:59.566630 dbus-daemon[1985]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2032 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 08:57:59.580235 systemd[1]: Starting polkit.service - Authorization Manager... Jul 2 08:57:59.652278 polkitd[2127]: Started polkitd version 121 Jul 2 08:57:59.672261 polkitd[2127]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 08:57:59.672392 polkitd[2127]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 08:57:59.675422 polkitd[2127]: Finished loading, compiling and executing 2 rules Jul 2 08:57:59.677448 systemd[1]: Started polkit.service - Authorization Manager. Jul 2 08:57:59.677156 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 08:57:59.681351 polkitd[2127]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 08:57:59.690226 locksmithd[2037]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 08:57:59.715742 systemd-hostnamed[2032]: Hostname set to (transient) Jul 2 08:57:59.715915 systemd-resolved[1936]: System hostname changed to 'ip-172-31-30-172'. 
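The extend-filesystems entries above report the root filesystem on /dev/nvme0n1p9 growing from 553472 to 1489915 blocks of 4 KiB, i.e. roughly 2.1 GiB before and 5.7 GiB after the on-line resize, as this small calculation shows:

BLOCK = 4096                       # 4 KiB ext4 block size from the log
before, after = 553_472, 1_489_915
print(before * BLOCK / 2**30)      # ≈ 2.11 GiB before the resize
print(after * BLOCK / 2**30)       # ≈ 5.68 GiB after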
Jul 2 08:57:59.807654 coreos-metadata[2086]: Jul 02 08:57:59.806 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 08:57:59.810580 coreos-metadata[2086]: Jul 02 08:57:59.810 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 2 08:57:59.816542 coreos-metadata[2086]: Jul 02 08:57:59.816 INFO Fetch successful Jul 2 08:57:59.816542 coreos-metadata[2086]: Jul 02 08:57:59.816 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 08:57:59.820515 coreos-metadata[2086]: Jul 02 08:57:59.820 INFO Fetch successful Jul 2 08:57:59.830129 unknown[2086]: wrote ssh authorized keys file for user: core Jul 2 08:57:59.861334 ntpd[1989]: bind(24) AF_INET6 fe80::487:7cff:fecd:dce1%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:57:59.862174 ntpd[1989]: 2 Jul 08:57:59 ntpd[1989]: bind(24) AF_INET6 fe80::487:7cff:fecd:dce1%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:57:59.862174 ntpd[1989]: 2 Jul 08:57:59 ntpd[1989]: unable to create socket on eth0 (6) for fe80::487:7cff:fecd:dce1%2#123 Jul 2 08:57:59.862174 ntpd[1989]: 2 Jul 08:57:59 ntpd[1989]: failed to init interface for address fe80::487:7cff:fecd:dce1%2 Jul 2 08:57:59.861943 ntpd[1989]: unable to create socket on eth0 (6) for fe80::487:7cff:fecd:dce1%2#123 Jul 2 08:57:59.861972 ntpd[1989]: failed to init interface for address fe80::487:7cff:fecd:dce1%2 Jul 2 08:57:59.891491 update-ssh-keys[2174]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:57:59.895155 containerd[2019]: time="2024-07-02T08:57:59.895016040Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 08:57:59.906132 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 2 08:57:59.919055 systemd[1]: Finished sshkeys.service. Jul 2 08:58:00.018941 containerd[2019]: time="2024-07-02T08:58:00.018805197Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 08:58:00.019166 containerd[2019]: time="2024-07-02T08:58:00.019130949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:00.022850 containerd[2019]: time="2024-07-02T08:58:00.022782717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:58:00.023026 containerd[2019]: time="2024-07-02T08:58:00.022995585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:00.023538 containerd[2019]: time="2024-07-02T08:58:00.023435313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:58:00.023681 containerd[2019]: time="2024-07-02T08:58:00.023651217Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 08:58:00.023961 containerd[2019]: time="2024-07-02T08:58:00.023931609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jul 2 08:58:00.024178 containerd[2019]: time="2024-07-02T08:58:00.024138669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:58:00.024279 containerd[2019]: time="2024-07-02T08:58:00.024251121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:00.024594 containerd[2019]: time="2024-07-02T08:58:00.024556533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:00.025087 containerd[2019]: time="2024-07-02T08:58:00.025053921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:00.025234 containerd[2019]: time="2024-07-02T08:58:00.025202853Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 08:58:00.025344 containerd[2019]: time="2024-07-02T08:58:00.025308501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:00.025702 containerd[2019]: time="2024-07-02T08:58:00.025666401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:58:00.025833 containerd[2019]: time="2024-07-02T08:58:00.025803573Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 08:58:00.026030 containerd[2019]: time="2024-07-02T08:58:00.026000673Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 08:58:00.026140 containerd[2019]: time="2024-07-02T08:58:00.026105949Z" level=info msg="metadata content store policy set" policy=shared Jul 2 08:58:00.032659 containerd[2019]: time="2024-07-02T08:58:00.032602941Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 08:58:00.033962 containerd[2019]: time="2024-07-02T08:58:00.033933333Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 08:58:00.034092 containerd[2019]: time="2024-07-02T08:58:00.034065297Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 08:58:00.034290 containerd[2019]: time="2024-07-02T08:58:00.034232997Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 08:58:00.034495 containerd[2019]: time="2024-07-02T08:58:00.034439097Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 08:58:00.034662 containerd[2019]: time="2024-07-02T08:58:00.034577109Z" level=info msg="NRI interface is disabled by configuration." Jul 2 08:58:00.034662 containerd[2019]: time="2024-07-02T08:58:00.034612785Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 08:58:00.035933 containerd[2019]: time="2024-07-02T08:58:00.035341053Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jul 2 08:58:00.035933 containerd[2019]: time="2024-07-02T08:58:00.035398893Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 08:58:00.035933 containerd[2019]: time="2024-07-02T08:58:00.035430213Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 08:58:00.036207 containerd[2019]: time="2024-07-02T08:58:00.036139461Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 08:58:00.036383 containerd[2019]: time="2024-07-02T08:58:00.036186105Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 08:58:00.037191 containerd[2019]: time="2024-07-02T08:58:00.036960813Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 08:58:00.037191 containerd[2019]: time="2024-07-02T08:58:00.037024857Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 08:58:00.037191 containerd[2019]: time="2024-07-02T08:58:00.037059009Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 08:58:00.037191 containerd[2019]: time="2024-07-02T08:58:00.037115181Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 08:58:00.037191 containerd[2019]: time="2024-07-02T08:58:00.037150641Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 08:58:00.037782 containerd[2019]: time="2024-07-02T08:58:00.037611501Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 08:58:00.037782 containerd[2019]: time="2024-07-02T08:58:00.037652337Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 08:58:00.038230 containerd[2019]: time="2024-07-02T08:58:00.038199441Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 08:58:00.040495 containerd[2019]: time="2024-07-02T08:58:00.038889645Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 08:58:00.040495 containerd[2019]: time="2024-07-02T08:58:00.038945853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.040495 containerd[2019]: time="2024-07-02T08:58:00.038978829Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 08:58:00.040495 containerd[2019]: time="2024-07-02T08:58:00.039027393Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 08:58:00.040495 containerd[2019]: time="2024-07-02T08:58:00.039150825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.040495 containerd[2019]: time="2024-07-02T08:58:00.039185097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.040495 containerd[2019]: time="2024-07-02T08:58:00.039214773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jul 2 08:58:00.040495 containerd[2019]: time="2024-07-02T08:58:00.039243837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.040495 containerd[2019]: time="2024-07-02T08:58:00.039276801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.040495 containerd[2019]: time="2024-07-02T08:58:00.039305889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.040495 containerd[2019]: time="2024-07-02T08:58:00.039341721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.040495 containerd[2019]: time="2024-07-02T08:58:00.039370293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.040495 containerd[2019]: time="2024-07-02T08:58:00.039410805Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 08:58:00.040495 containerd[2019]: time="2024-07-02T08:58:00.039723069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.041120 containerd[2019]: time="2024-07-02T08:58:00.039757821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.041120 containerd[2019]: time="2024-07-02T08:58:00.039785985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.041120 containerd[2019]: time="2024-07-02T08:58:00.039814413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.041120 containerd[2019]: time="2024-07-02T08:58:00.039844521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.041120 containerd[2019]: time="2024-07-02T08:58:00.039880317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.041120 containerd[2019]: time="2024-07-02T08:58:00.039911385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 08:58:00.041120 containerd[2019]: time="2024-07-02T08:58:00.039939813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 08:58:00.041402 containerd[2019]: time="2024-07-02T08:58:00.040350705Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 08:58:00.041862 containerd[2019]: time="2024-07-02T08:58:00.041828541Z" level=info msg="Connect containerd service" Jul 2 08:58:00.042090 containerd[2019]: time="2024-07-02T08:58:00.042060597Z" level=info msg="using legacy CRI server" Jul 2 08:58:00.042313 containerd[2019]: time="2024-07-02T08:58:00.042284793Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 08:58:00.042703 containerd[2019]: time="2024-07-02T08:58:00.042666357Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 08:58:00.044715 containerd[2019]: time="2024-07-02T08:58:00.044609037Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:58:00.044919 
containerd[2019]: time="2024-07-02T08:58:00.044890557Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 08:58:00.045065 containerd[2019]: time="2024-07-02T08:58:00.045033309Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 08:58:00.045165 containerd[2019]: time="2024-07-02T08:58:00.045139353Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 08:58:00.045275 containerd[2019]: time="2024-07-02T08:58:00.045246585Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 08:58:00.046530 containerd[2019]: time="2024-07-02T08:58:00.045774921Z" level=info msg="Start subscribing containerd event" Jul 2 08:58:00.046530 containerd[2019]: time="2024-07-02T08:58:00.045838365Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 08:58:00.046530 containerd[2019]: time="2024-07-02T08:58:00.045891501Z" level=info msg="Start recovering state" Jul 2 08:58:00.046530 containerd[2019]: time="2024-07-02T08:58:00.045950565Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 08:58:00.046530 containerd[2019]: time="2024-07-02T08:58:00.046008213Z" level=info msg="Start event monitor" Jul 2 08:58:00.046530 containerd[2019]: time="2024-07-02T08:58:00.046032921Z" level=info msg="Start snapshots syncer" Jul 2 08:58:00.046530 containerd[2019]: time="2024-07-02T08:58:00.046053729Z" level=info msg="Start cni network conf syncer for default" Jul 2 08:58:00.046530 containerd[2019]: time="2024-07-02T08:58:00.046073505Z" level=info msg="Start streaming server" Jul 2 08:58:00.046530 containerd[2019]: time="2024-07-02T08:58:00.046199061Z" level=info msg="containerd successfully booted in 0.171423s" Jul 2 08:58:00.046318 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 08:58:00.148649 systemd-networkd[1935]: eth0: Gained IPv6LL Jul 2 08:58:00.158515 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 08:58:00.162633 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 08:58:00.171991 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 2 08:58:00.181871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:00.194149 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 08:58:00.315433 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 08:58:00.342079 amazon-ssm-agent[2192]: Initializing new seelog logger Jul 2 08:58:00.342556 amazon-ssm-agent[2192]: New Seelog Logger Creation Complete Jul 2 08:58:00.342949 amazon-ssm-agent[2192]: 2024/07/02 08:58:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:00.342949 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:00.343707 amazon-ssm-agent[2192]: 2024/07/02 08:58:00 processing appconfig overrides Jul 2 08:58:00.348889 amazon-ssm-agent[2192]: 2024/07/02 08:58:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:00.348889 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
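For context on the coreos-metadata sshkeys entries logged at 08:57:59 above: the agent first PUTs http://169.254.169.254/latest/api/token and then fetches the instance's public keys from the 2021-01-03 metadata tree before writing /home/core/.ssh/authorized_keys. The sketch below is illustrative only and is not the agent's code; the URLs are copied from the log, while the IMDSv2 header names are the standard AWS ones and are an assumption of this sketch rather than log content.

    # Illustrative sketch: roughly the IMDSv2 exchange the log records,
    # requesting a session token and then the instance's OpenSSH public key.
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds: int = 21600) -> str:
        # PUT .../latest/api/token, as in "Putting http://169.254.169.254/latest/api/token"
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    def fetch_openssh_key(token: str, index: int = 0) -> str:
        # GET .../2021-01-03/meta-data/public-keys/<index>/openssh-key, as logged
        url = f"{IMDS}/2021-01-03/meta-data/public-keys/{index}/openssh-key"
        req = urllib.request.Request(url, headers={"X-aws-ec2-metadata-token": token})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        tok = imds_token()
        print(fetch_openssh_key(tok))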
Jul 2 08:58:00.348889 amazon-ssm-agent[2192]: 2024/07/02 08:58:00 processing appconfig overrides Jul 2 08:58:00.349193 amazon-ssm-agent[2192]: 2024/07/02 08:58:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:00.349193 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:00.349290 amazon-ssm-agent[2192]: 2024/07/02 08:58:00 processing appconfig overrides Jul 2 08:58:00.351372 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO Proxy environment variables: Jul 2 08:58:00.358937 amazon-ssm-agent[2192]: 2024/07/02 08:58:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:00.358937 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:00.361295 amazon-ssm-agent[2192]: 2024/07/02 08:58:00 processing appconfig overrides Jul 2 08:58:00.462627 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO https_proxy: Jul 2 08:58:00.563486 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO http_proxy: Jul 2 08:58:00.660823 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO no_proxy: Jul 2 08:58:00.761960 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO Checking if agent identity type OnPrem can be assumed Jul 2 08:58:00.813511 tar[2003]: linux-arm64/LICENSE Jul 2 08:58:00.813511 tar[2003]: linux-arm64/README.md Jul 2 08:58:00.853064 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 08:58:00.860005 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO Checking if agent identity type EC2 can be assumed Jul 2 08:58:00.958015 sshd_keygen[2036]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 08:58:00.959051 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO Agent will take identity from EC2 Jul 2 08:58:01.049014 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 08:58:01.058623 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:58:01.059981 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 08:58:01.071031 systemd[1]: Started sshd@0-172.31.30.172:22-147.75.109.163:37916.service - OpenSSH per-connection server daemon (147.75.109.163:37916). Jul 2 08:58:01.112988 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 08:58:01.113329 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 08:58:01.131387 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 08:58:01.157580 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:58:01.180442 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 08:58:01.195398 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 08:58:01.210087 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 08:58:01.212548 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 08:58:01.258481 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:58:01.330018 sshd[2222]: Accepted publickey for core from 147.75.109.163 port 37916 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:01.334831 sshd[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:01.361933 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 2 08:58:01.361426 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jul 2 08:58:01.372891 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 08:58:01.382891 systemd-logind[1994]: New session 1 of user core. Jul 2 08:58:01.421013 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 08:58:01.437045 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 08:58:01.456658 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 2 08:58:01.456760 (systemd)[2233]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:01.559540 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO [amazon-ssm-agent] Starting Core Agent Jul 2 08:58:01.659480 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 2 08:58:01.710783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:01.722262 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 08:58:01.742155 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:58:01.748132 systemd[2233]: Queued start job for default target default.target. Jul 2 08:58:01.755220 systemd[2233]: Created slice app.slice - User Application Slice. Jul 2 08:58:01.755519 systemd[2233]: Reached target paths.target - Paths. Jul 2 08:58:01.755562 systemd[2233]: Reached target timers.target - Timers. Jul 2 08:58:01.760505 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO [Registrar] Starting registrar module Jul 2 08:58:01.763717 systemd[2233]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 08:58:01.789416 systemd[2233]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 08:58:01.790753 systemd[2233]: Reached target sockets.target - Sockets. Jul 2 08:58:01.790920 systemd[2233]: Reached target basic.target - Basic System. Jul 2 08:58:01.791010 systemd[2233]: Reached target default.target - Main User Target. Jul 2 08:58:01.791073 systemd[2233]: Startup finished in 319ms. Jul 2 08:58:01.791201 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 08:58:01.800758 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 08:58:01.802880 systemd[1]: Startup finished in 1.143s (kernel) + 9.016s (initrd) + 8.037s (userspace) = 18.197s. Jul 2 08:58:01.838114 amazon-ssm-agent[2192]: 2024-07-02 08:58:00 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 2 08:58:01.838114 amazon-ssm-agent[2192]: 2024-07-02 08:58:01 INFO [EC2Identity] EC2 registration was successful. Jul 2 08:58:01.838114 amazon-ssm-agent[2192]: 2024-07-02 08:58:01 INFO [CredentialRefresher] credentialRefresher has started Jul 2 08:58:01.838114 amazon-ssm-agent[2192]: 2024-07-02 08:58:01 INFO [CredentialRefresher] Starting credentials refresher loop Jul 2 08:58:01.838114 amazon-ssm-agent[2192]: 2024-07-02 08:58:01 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 2 08:58:01.859479 amazon-ssm-agent[2192]: 2024-07-02 08:58:01 INFO [CredentialRefresher] Next credential rotation will be in 31.891649466866667 minutes Jul 2 08:58:01.975639 systemd[1]: Started sshd@1-172.31.30.172:22-147.75.109.163:37926.service - OpenSSH per-connection server daemon (147.75.109.163:37926). 
Jul 2 08:58:02.158920 sshd[2258]: Accepted publickey for core from 147.75.109.163 port 37926 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:02.162041 sshd[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:02.173559 systemd-logind[1994]: New session 2 of user core. Jul 2 08:58:02.175829 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 08:58:02.309298 sshd[2258]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:02.315384 systemd-logind[1994]: Session 2 logged out. Waiting for processes to exit. Jul 2 08:58:02.315817 systemd[1]: sshd@1-172.31.30.172:22-147.75.109.163:37926.service: Deactivated successfully. Jul 2 08:58:02.319445 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 08:58:02.325939 systemd-logind[1994]: Removed session 2. Jul 2 08:58:02.344024 systemd[1]: Started sshd@2-172.31.30.172:22-147.75.109.163:37930.service - OpenSSH per-connection server daemon (147.75.109.163:37930). Jul 2 08:58:02.518987 sshd[2266]: Accepted publickey for core from 147.75.109.163 port 37930 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:02.522188 sshd[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:02.533999 systemd-logind[1994]: New session 3 of user core. Jul 2 08:58:02.544778 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 08:58:02.579720 kubelet[2243]: E0702 08:58:02.579606 2243 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:58:02.584714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:58:02.585063 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:58:02.585784 systemd[1]: kubelet.service: Consumed 1.301s CPU time. Jul 2 08:58:02.665793 sshd[2266]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:02.670732 systemd-logind[1994]: Session 3 logged out. Waiting for processes to exit. Jul 2 08:58:02.671950 systemd[1]: sshd@2-172.31.30.172:22-147.75.109.163:37930.service: Deactivated successfully. Jul 2 08:58:02.675000 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 08:58:02.678871 systemd-logind[1994]: Removed session 3. Jul 2 08:58:02.705960 systemd[1]: Started sshd@3-172.31.30.172:22-147.75.109.163:56356.service - OpenSSH per-connection server daemon (147.75.109.163:56356). Jul 2 08:58:02.860917 ntpd[1989]: Listen normally on 7 eth0 [fe80::487:7cff:fecd:dce1%2]:123 Jul 2 08:58:02.863227 ntpd[1989]: 2 Jul 08:58:02 ntpd[1989]: Listen normally on 7 eth0 [fe80::487:7cff:fecd:dce1%2]:123 Jul 2 08:58:02.866156 amazon-ssm-agent[2192]: 2024-07-02 08:58:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 2 08:58:02.869229 sshd[2275]: Accepted publickey for core from 147.75.109.163 port 56356 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:02.872599 sshd[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:02.881035 systemd-logind[1994]: New session 4 of user core. Jul 2 08:58:02.894764 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jul 2 08:58:02.967494 amazon-ssm-agent[2192]: 2024-07-02 08:58:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2278) started Jul 2 08:58:03.029822 sshd[2275]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:03.041616 systemd[1]: sshd@3-172.31.30.172:22-147.75.109.163:56356.service: Deactivated successfully. Jul 2 08:58:03.045668 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 08:58:03.049823 systemd-logind[1994]: Session 4 logged out. Waiting for processes to exit. Jul 2 08:58:03.070568 amazon-ssm-agent[2192]: 2024-07-02 08:58:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 2 08:58:03.072914 systemd[1]: Started sshd@4-172.31.30.172:22-147.75.109.163:56360.service - OpenSSH per-connection server daemon (147.75.109.163:56360). Jul 2 08:58:03.075196 systemd-logind[1994]: Removed session 4. Jul 2 08:58:03.246152 sshd[2289]: Accepted publickey for core from 147.75.109.163 port 56360 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:03.249221 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:03.257961 systemd-logind[1994]: New session 5 of user core. Jul 2 08:58:03.267731 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 08:58:03.387917 sudo[2296]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 08:58:03.388488 sudo[2296]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:58:03.402944 sudo[2296]: pam_unix(sudo:session): session closed for user root Jul 2 08:58:03.425875 sshd[2289]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:03.431425 systemd[1]: sshd@4-172.31.30.172:22-147.75.109.163:56360.service: Deactivated successfully. Jul 2 08:58:03.434825 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 08:58:03.438051 systemd-logind[1994]: Session 5 logged out. Waiting for processes to exit. Jul 2 08:58:03.440694 systemd-logind[1994]: Removed session 5. Jul 2 08:58:03.465295 systemd[1]: Started sshd@5-172.31.30.172:22-147.75.109.163:56368.service - OpenSSH per-connection server daemon (147.75.109.163:56368). Jul 2 08:58:03.642652 sshd[2301]: Accepted publickey for core from 147.75.109.163 port 56368 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:03.645535 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:03.653507 systemd-logind[1994]: New session 6 of user core. Jul 2 08:58:03.662718 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 08:58:03.766332 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 08:58:03.766988 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:58:03.773267 sudo[2305]: pam_unix(sudo:session): session closed for user root Jul 2 08:58:03.783024 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 08:58:03.783687 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:58:03.805004 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 08:58:03.823105 auditctl[2308]: No rules Jul 2 08:58:03.823934 systemd[1]: audit-rules.service: Deactivated successfully. 
Jul 2 08:58:03.824369 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 08:58:03.833575 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 08:58:03.884064 augenrules[2326]: No rules Jul 2 08:58:03.887214 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 08:58:03.889989 sudo[2304]: pam_unix(sudo:session): session closed for user root Jul 2 08:58:03.913337 sshd[2301]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:03.920312 systemd-logind[1994]: Session 6 logged out. Waiting for processes to exit. Jul 2 08:58:03.921946 systemd[1]: sshd@5-172.31.30.172:22-147.75.109.163:56368.service: Deactivated successfully. Jul 2 08:58:03.925308 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 08:58:03.927397 systemd-logind[1994]: Removed session 6. Jul 2 08:58:03.954954 systemd[1]: Started sshd@6-172.31.30.172:22-147.75.109.163:56380.service - OpenSSH per-connection server daemon (147.75.109.163:56380). Jul 2 08:58:04.120907 sshd[2334]: Accepted publickey for core from 147.75.109.163 port 56380 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:04.123812 sshd[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:04.132859 systemd-logind[1994]: New session 7 of user core. Jul 2 08:58:04.143272 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 08:58:04.246808 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 08:58:04.247320 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:58:04.429981 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 08:58:04.442963 (dockerd)[2346]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 08:58:04.810632 dockerd[2346]: time="2024-07-02T08:58:04.810547468Z" level=info msg="Starting up" Jul 2 08:58:05.265773 systemd[1]: var-lib-docker-metacopy\x2dcheck1026019440-merged.mount: Deactivated successfully. Jul 2 08:58:05.283107 dockerd[2346]: time="2024-07-02T08:58:05.282254679Z" level=info msg="Loading containers: start." Jul 2 08:58:05.430497 kernel: Initializing XFRM netlink socket Jul 2 08:58:05.491618 (udev-worker)[2361]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:58:05.574076 systemd-networkd[1935]: docker0: Link UP Jul 2 08:58:05.593180 dockerd[2346]: time="2024-07-02T08:58:05.593045824Z" level=info msg="Loading containers: done." Jul 2 08:58:05.696438 dockerd[2346]: time="2024-07-02T08:58:05.696375365Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 08:58:05.696769 dockerd[2346]: time="2024-07-02T08:58:05.696720473Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 08:58:05.696965 dockerd[2346]: time="2024-07-02T08:58:05.696931937Z" level=info msg="Daemon has completed initialization" Jul 2 08:58:05.750732 dockerd[2346]: time="2024-07-02T08:58:05.749516969Z" level=info msg="API listen on /run/docker.sock" Jul 2 08:58:05.751603 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 08:58:05.693324 systemd-resolved[1936]: Clock change detected. Flushing caches. 
Jul 2 08:58:05.701441 systemd-journald[1574]: Time jumped backwards, rotating. Jul 2 08:58:06.578129 containerd[2019]: time="2024-07-02T08:58:06.577665984Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 08:58:07.280135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount73992721.mount: Deactivated successfully. Jul 2 08:58:09.000262 containerd[2019]: time="2024-07-02T08:58:09.000192480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:09.002326 containerd[2019]: time="2024-07-02T08:58:09.002259276Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=32256347" Jul 2 08:58:09.003351 containerd[2019]: time="2024-07-02T08:58:09.003252804Z" level=info msg="ImageCreate event name:\"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:09.009001 containerd[2019]: time="2024-07-02T08:58:09.008898000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:09.011547 containerd[2019]: time="2024-07-02T08:58:09.011313204Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"32253147\" in 2.433583584s" Jul 2 08:58:09.011547 containerd[2019]: time="2024-07-02T08:58:09.011374440Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\"" Jul 2 08:58:09.052724 containerd[2019]: time="2024-07-02T08:58:09.052637556Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 08:58:11.255120 containerd[2019]: time="2024-07-02T08:58:11.254767683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:11.256983 containerd[2019]: time="2024-07-02T08:58:11.256909491Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=29228084" Jul 2 08:58:11.258558 containerd[2019]: time="2024-07-02T08:58:11.258489759Z" level=info msg="ImageCreate event name:\"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:11.264292 containerd[2019]: time="2024-07-02T08:58:11.264184371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:11.266709 containerd[2019]: time="2024-07-02T08:58:11.266519079Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"30685210\" in 2.213655839s" Jul 2 08:58:11.266709 containerd[2019]: time="2024-07-02T08:58:11.266581863Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\"" Jul 2 08:58:11.307513 containerd[2019]: time="2024-07-02T08:58:11.307136835Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 08:58:12.668043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 08:58:12.679479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:12.718415 containerd[2019]: time="2024-07-02T08:58:12.718336806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:12.724218 containerd[2019]: time="2024-07-02T08:58:12.723415866Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=15578348" Jul 2 08:58:12.731586 containerd[2019]: time="2024-07-02T08:58:12.731528502Z" level=info msg="ImageCreate event name:\"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:12.739974 containerd[2019]: time="2024-07-02T08:58:12.739881523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:12.743566 containerd[2019]: time="2024-07-02T08:58:12.743478631Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"17035492\" in 1.436280452s" Jul 2 08:58:12.743566 containerd[2019]: time="2024-07-02T08:58:12.743554099Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\"" Jul 2 08:58:12.786430 containerd[2019]: time="2024-07-02T08:58:12.786376843Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 08:58:13.102360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:13.103267 (kubelet)[2565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:58:13.198112 kubelet[2565]: E0702 08:58:13.198019 2565 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:58:13.206980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:58:13.207396 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:58:14.087058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3954012257.mount: Deactivated successfully. 
Jul 2 08:58:14.562681 containerd[2019]: time="2024-07-02T08:58:14.562292564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:14.564533 containerd[2019]: time="2024-07-02T08:58:14.564459788Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=25052710" Jul 2 08:58:14.566333 containerd[2019]: time="2024-07-02T08:58:14.566262428Z" level=info msg="ImageCreate event name:\"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:14.570616 containerd[2019]: time="2024-07-02T08:58:14.570520940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:14.572406 containerd[2019]: time="2024-07-02T08:58:14.571923332Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"25051729\" in 1.785486201s" Jul 2 08:58:14.572406 containerd[2019]: time="2024-07-02T08:58:14.571980608Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\"" Jul 2 08:58:14.612736 containerd[2019]: time="2024-07-02T08:58:14.612682196Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 08:58:15.179295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1251142875.mount: Deactivated successfully. 
Jul 2 08:58:16.561019 containerd[2019]: time="2024-07-02T08:58:16.560959725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:16.564012 containerd[2019]: time="2024-07-02T08:58:16.563957625Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jul 2 08:58:16.565910 containerd[2019]: time="2024-07-02T08:58:16.565828162Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:16.573843 containerd[2019]: time="2024-07-02T08:58:16.573737302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:16.576337 containerd[2019]: time="2024-07-02T08:58:16.576121018Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.963372918s" Jul 2 08:58:16.576337 containerd[2019]: time="2024-07-02T08:58:16.576187762Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jul 2 08:58:16.618878 containerd[2019]: time="2024-07-02T08:58:16.618776518Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 08:58:17.114190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788464162.mount: Deactivated successfully. 
Jul 2 08:58:17.122610 containerd[2019]: time="2024-07-02T08:58:17.122181368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:17.123953 containerd[2019]: time="2024-07-02T08:58:17.123899864Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jul 2 08:58:17.125374 containerd[2019]: time="2024-07-02T08:58:17.125289104Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:17.131256 containerd[2019]: time="2024-07-02T08:58:17.131175884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:17.133042 containerd[2019]: time="2024-07-02T08:58:17.132854624Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 514.01921ms" Jul 2 08:58:17.133042 containerd[2019]: time="2024-07-02T08:58:17.132909692Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 08:58:17.171002 containerd[2019]: time="2024-07-02T08:58:17.170945973Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 08:58:17.715044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3903960563.mount: Deactivated successfully. Jul 2 08:58:20.686844 containerd[2019]: time="2024-07-02T08:58:20.686774690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:20.689065 containerd[2019]: time="2024-07-02T08:58:20.688994858Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jul 2 08:58:20.690057 containerd[2019]: time="2024-07-02T08:58:20.689971598Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:20.696385 containerd[2019]: time="2024-07-02T08:58:20.696284810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:20.698997 containerd[2019]: time="2024-07-02T08:58:20.698801606Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.527794565s" Jul 2 08:58:20.698997 containerd[2019]: time="2024-07-02T08:58:20.698860646Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jul 2 08:58:23.281478 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
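As a side note on the containerd pull entries above, the byte counts and durations they report imply the effective pull throughput. A small illustrative calculation follows; the numbers are copied verbatim from the log and nothing else is assumed.

    # Illustrative sketch: effective throughput for two pulls recorded above.
    PULLS = {
        # image: (size in bytes from the log, pull duration in seconds)
        "registry.k8s.io/pause:3.9": (268051, 0.51401921),
        "registry.k8s.io/etcd:3.5.10-0": (65198393, 3.527794565),
    }

    for image, (size, seconds) in PULLS.items():
        mib_per_s = size / seconds / 2**20
        print(f"{image}: {size} bytes in {seconds:.3f}s ≈ {mib_per_s:.1f} MiB/s")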
Jul 2 08:58:23.292273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:23.702490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:23.706684 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:58:23.795513 kubelet[2756]: E0702 08:58:23.795439 2756 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:58:23.800357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:58:23.800843 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:58:28.807420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:28.817614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:28.863925 systemd[1]: Reloading requested from client PID 2770 ('systemctl') (unit session-7.scope)... Jul 2 08:58:28.863959 systemd[1]: Reloading... Jul 2 08:58:29.027149 zram_generator::config[2809]: No configuration found. Jul 2 08:58:29.304156 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:58:29.474153 systemd[1]: Reloading finished in 609 ms. Jul 2 08:58:29.564583 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 08:58:29.564777 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 08:58:29.565527 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:29.573763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:29.583731 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 08:58:29.890163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:29.904618 (kubelet)[2875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 08:58:29.992549 kubelet[2875]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:58:29.992549 kubelet[2875]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:58:29.992549 kubelet[2875]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 08:58:29.993100 kubelet[2875]: I0702 08:58:29.992618 2875 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:58:30.875044 kubelet[2875]: I0702 08:58:30.874982 2875 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 08:58:30.875044 kubelet[2875]: I0702 08:58:30.875036 2875 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:58:30.875446 kubelet[2875]: I0702 08:58:30.875405 2875 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 08:58:30.903771 kubelet[2875]: I0702 08:58:30.903546 2875 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:58:30.904066 kubelet[2875]: E0702 08:58:30.904020 2875 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.172:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:30.924816 kubelet[2875]: I0702 08:58:30.924765 2875 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 08:58:30.927790 kubelet[2875]: I0702 08:58:30.926843 2875 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:58:30.927790 kubelet[2875]: I0702 08:58:30.927461 2875 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:58:30.927790 kubelet[2875]: I0702 08:58:30.927514 2875 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:58:30.927790 kubelet[2875]: I0702 08:58:30.927535 2875 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:58:30.927790 kubelet[2875]: I0702 08:58:30.927735 2875 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:58:30.932457 kubelet[2875]: I0702 08:58:30.932417 2875 kubelet.go:396] "Attempting to sync node with API server" Jul 2 08:58:30.932457 kubelet[2875]: I0702 
08:58:30.932467 2875 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:58:30.934130 kubelet[2875]: I0702 08:58:30.932509 2875 kubelet.go:312] "Adding apiserver pod source" Jul 2 08:58:30.934130 kubelet[2875]: I0702 08:58:30.932541 2875 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:58:30.936454 kubelet[2875]: I0702 08:58:30.936398 2875 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 08:58:30.937006 kubelet[2875]: I0702 08:58:30.936964 2875 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 08:58:30.937132 kubelet[2875]: W0702 08:58:30.937101 2875 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 08:58:30.938284 kubelet[2875]: I0702 08:58:30.938237 2875 server.go:1256] "Started kubelet" Jul 2 08:58:30.938504 kubelet[2875]: W0702 08:58:30.938433 2875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.30.172:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:30.938564 kubelet[2875]: E0702 08:58:30.938519 2875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.172:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:30.942900 kubelet[2875]: W0702 08:58:30.942835 2875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.30.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-172&limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:30.943171 kubelet[2875]: E0702 08:58:30.943146 2875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-172&limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:30.947499 kubelet[2875]: I0702 08:58:30.947439 2875 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:58:30.954929 kubelet[2875]: I0702 08:58:30.954859 2875 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:58:30.956438 kubelet[2875]: I0702 08:58:30.956399 2875 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:58:30.957500 kubelet[2875]: I0702 08:58:30.957437 2875 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 08:58:30.959623 kubelet[2875]: I0702 08:58:30.958263 2875 server.go:461] "Adding debug handlers to kubelet server" Jul 2 08:58:30.960179 kubelet[2875]: I0702 08:58:30.960131 2875 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 08:58:30.960612 kubelet[2875]: I0702 08:58:30.960466 2875 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:58:30.963624 kubelet[2875]: E0702 08:58:30.963192 2875 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.172:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.172:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ip-172-31-30-172.17de59addada41fd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-172,UID:ip-172-31-30-172,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-172,},FirstTimestamp:2024-07-02 08:58:30.938059261 +0000 UTC m=+1.026618558,LastTimestamp:2024-07-02 08:58:30.938059261 +0000 UTC m=+1.026618558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-172,}" Jul 2 08:58:30.963624 kubelet[2875]: I0702 08:58:30.963318 2875 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 08:58:30.964003 kubelet[2875]: E0702 08:58:30.963749 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-172?timeout=10s\": dial tcp 172.31.30.172:6443: connect: connection refused" interval="200ms" Jul 2 08:58:30.965510 kubelet[2875]: W0702 08:58:30.965263 2875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.30.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:30.965510 kubelet[2875]: E0702 08:58:30.965387 2875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:30.966565 kubelet[2875]: I0702 08:58:30.966410 2875 factory.go:221] Registration of the systemd container factory successfully Jul 2 08:58:30.967862 kubelet[2875]: I0702 08:58:30.967536 2875 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:58:30.967862 kubelet[2875]: E0702 08:58:30.967690 2875 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:58:30.975757 kubelet[2875]: I0702 08:58:30.975709 2875 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:58:31.001021 kubelet[2875]: I0702 08:58:31.000950 2875 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:58:31.005402 kubelet[2875]: I0702 08:58:31.003730 2875 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 08:58:31.005402 kubelet[2875]: I0702 08:58:31.003777 2875 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:58:31.005402 kubelet[2875]: I0702 08:58:31.003807 2875 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 08:58:31.005402 kubelet[2875]: E0702 08:58:31.003885 2875 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:58:31.008514 kubelet[2875]: W0702 08:58:31.008303 2875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.30.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:31.008514 kubelet[2875]: E0702 08:58:31.008387 2875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:31.027614 kubelet[2875]: I0702 08:58:31.026950 2875 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:58:31.027614 kubelet[2875]: I0702 08:58:31.026990 2875 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:58:31.027614 kubelet[2875]: I0702 08:58:31.027045 2875 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:58:31.033267 kubelet[2875]: I0702 08:58:31.033214 2875 policy_none.go:49] "None policy: Start" Jul 2 08:58:31.034909 kubelet[2875]: I0702 08:58:31.034557 2875 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:58:31.035040 kubelet[2875]: I0702 08:58:31.034973 2875 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:58:31.047459 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 08:58:31.060529 kubelet[2875]: I0702 08:58:31.060448 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-172" Jul 2 08:58:31.061173 kubelet[2875]: E0702 08:58:31.061122 2875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.172:6443/api/v1/nodes\": dial tcp 172.31.30.172:6443: connect: connection refused" node="ip-172-31-30-172" Jul 2 08:58:31.065668 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 2 08:58:31.083865 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 2 08:58:31.087969 kubelet[2875]: I0702 08:58:31.087918 2875 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:58:31.089170 kubelet[2875]: I0702 08:58:31.089020 2875 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:58:31.095730 kubelet[2875]: E0702 08:58:31.095641 2875 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-172\" not found" Jul 2 08:58:31.104170 kubelet[2875]: I0702 08:58:31.104117 2875 topology_manager.go:215] "Topology Admit Handler" podUID="c052580d9dfe9b67c29a1d8df78370d7" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-172" Jul 2 08:58:31.106379 kubelet[2875]: I0702 08:58:31.106223 2875 topology_manager.go:215] "Topology Admit Handler" podUID="07cc41bdf1424574525fb5e143fb4019" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-172" Jul 2 08:58:31.108625 kubelet[2875]: I0702 08:58:31.108160 2875 topology_manager.go:215] "Topology Admit Handler" podUID="2acd8a162b226ea369c2ab9c1a2a4a67" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-172" Jul 2 08:58:31.120026 systemd[1]: Created slice kubepods-burstable-podc052580d9dfe9b67c29a1d8df78370d7.slice - libcontainer container kubepods-burstable-podc052580d9dfe9b67c29a1d8df78370d7.slice. Jul 2 08:58:31.144676 systemd[1]: Created slice kubepods-burstable-pod07cc41bdf1424574525fb5e143fb4019.slice - libcontainer container kubepods-burstable-pod07cc41bdf1424574525fb5e143fb4019.slice. Jul 2 08:58:31.165286 kubelet[2875]: E0702 08:58:31.164307 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-172?timeout=10s\": dial tcp 172.31.30.172:6443: connect: connection refused" interval="400ms" Jul 2 08:58:31.164955 systemd[1]: Created slice kubepods-burstable-pod2acd8a162b226ea369c2ab9c1a2a4a67.slice - libcontainer container kubepods-burstable-pod2acd8a162b226ea369c2ab9c1a2a4a67.slice. 
Jul 2 08:58:31.264118 kubelet[2875]: I0702 08:58:31.264017 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-172" Jul 2 08:58:31.265229 kubelet[2875]: E0702 08:58:31.264555 2875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.172:6443/api/v1/nodes\": dial tcp 172.31.30.172:6443: connect: connection refused" node="ip-172-31-30-172" Jul 2 08:58:31.265229 kubelet[2875]: I0702 08:58:31.264713 2875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/07cc41bdf1424574525fb5e143fb4019-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-172\" (UID: \"07cc41bdf1424574525fb5e143fb4019\") " pod="kube-system/kube-controller-manager-ip-172-31-30-172" Jul 2 08:58:31.265229 kubelet[2875]: I0702 08:58:31.264760 2875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07cc41bdf1424574525fb5e143fb4019-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-172\" (UID: \"07cc41bdf1424574525fb5e143fb4019\") " pod="kube-system/kube-controller-manager-ip-172-31-30-172" Jul 2 08:58:31.265229 kubelet[2875]: I0702 08:58:31.264810 2875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07cc41bdf1424574525fb5e143fb4019-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-172\" (UID: \"07cc41bdf1424574525fb5e143fb4019\") " pod="kube-system/kube-controller-manager-ip-172-31-30-172" Jul 2 08:58:31.265229 kubelet[2875]: I0702 08:58:31.264856 2875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2acd8a162b226ea369c2ab9c1a2a4a67-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-172\" (UID: \"2acd8a162b226ea369c2ab9c1a2a4a67\") " pod="kube-system/kube-scheduler-ip-172-31-30-172" Jul 2 08:58:31.265555 kubelet[2875]: I0702 08:58:31.264903 2875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c052580d9dfe9b67c29a1d8df78370d7-ca-certs\") pod \"kube-apiserver-ip-172-31-30-172\" (UID: \"c052580d9dfe9b67c29a1d8df78370d7\") " pod="kube-system/kube-apiserver-ip-172-31-30-172" Jul 2 08:58:31.265555 kubelet[2875]: I0702 08:58:31.264950 2875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c052580d9dfe9b67c29a1d8df78370d7-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-172\" (UID: \"c052580d9dfe9b67c29a1d8df78370d7\") " pod="kube-system/kube-apiserver-ip-172-31-30-172" Jul 2 08:58:31.265555 kubelet[2875]: I0702 08:58:31.264994 2875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c052580d9dfe9b67c29a1d8df78370d7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-172\" (UID: \"c052580d9dfe9b67c29a1d8df78370d7\") " pod="kube-system/kube-apiserver-ip-172-31-30-172" Jul 2 08:58:31.265555 kubelet[2875]: I0702 08:58:31.265035 2875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07cc41bdf1424574525fb5e143fb4019-ca-certs\") pod 
\"kube-controller-manager-ip-172-31-30-172\" (UID: \"07cc41bdf1424574525fb5e143fb4019\") " pod="kube-system/kube-controller-manager-ip-172-31-30-172" Jul 2 08:58:31.265555 kubelet[2875]: I0702 08:58:31.265115 2875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07cc41bdf1424574525fb5e143fb4019-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-172\" (UID: \"07cc41bdf1424574525fb5e143fb4019\") " pod="kube-system/kube-controller-manager-ip-172-31-30-172" Jul 2 08:58:31.440194 containerd[2019]: time="2024-07-02T08:58:31.440001803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-172,Uid:c052580d9dfe9b67c29a1d8df78370d7,Namespace:kube-system,Attempt:0,}" Jul 2 08:58:31.459004 containerd[2019]: time="2024-07-02T08:58:31.458813783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-172,Uid:07cc41bdf1424574525fb5e143fb4019,Namespace:kube-system,Attempt:0,}" Jul 2 08:58:31.477600 containerd[2019]: time="2024-07-02T08:58:31.477244284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-172,Uid:2acd8a162b226ea369c2ab9c1a2a4a67,Namespace:kube-system,Attempt:0,}" Jul 2 08:58:31.565657 kubelet[2875]: E0702 08:58:31.565620 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-172?timeout=10s\": dial tcp 172.31.30.172:6443: connect: connection refused" interval="800ms" Jul 2 08:58:31.667366 kubelet[2875]: I0702 08:58:31.667297 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-172" Jul 2 08:58:31.667803 kubelet[2875]: E0702 08:58:31.667767 2875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.172:6443/api/v1/nodes\": dial tcp 172.31.30.172:6443: connect: connection refused" node="ip-172-31-30-172" Jul 2 08:58:31.926244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191233791.mount: Deactivated successfully. 
Jul 2 08:58:31.939958 containerd[2019]: time="2024-07-02T08:58:31.939549962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:58:31.941378 containerd[2019]: time="2024-07-02T08:58:31.941303630Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:58:31.942892 containerd[2019]: time="2024-07-02T08:58:31.942827510Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:58:31.944318 containerd[2019]: time="2024-07-02T08:58:31.944237894Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 2 08:58:31.946829 containerd[2019]: time="2024-07-02T08:58:31.946758194Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:58:31.948930 containerd[2019]: time="2024-07-02T08:58:31.948746750Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:58:31.949618 containerd[2019]: time="2024-07-02T08:58:31.949162478Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:58:31.956565 containerd[2019]: time="2024-07-02T08:58:31.956482250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:58:31.958586 containerd[2019]: time="2024-07-02T08:58:31.958519790Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 481.144514ms" Jul 2 08:58:31.965542 containerd[2019]: time="2024-07-02T08:58:31.964604306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 505.650207ms" Jul 2 08:58:31.971032 containerd[2019]: time="2024-07-02T08:58:31.970923818Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 530.062455ms" Jul 2 08:58:31.995761 kubelet[2875]: W0702 08:58:31.995687 2875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.30.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-172&limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 
08:58:31.996005 kubelet[2875]: E0702 08:58:31.995947 2875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-172&limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:32.116407 kubelet[2875]: W0702 08:58:32.116268 2875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.30.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:32.116407 kubelet[2875]: E0702 08:58:32.116367 2875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:32.190912 kubelet[2875]: W0702 08:58:32.190665 2875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.30.172:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:32.190912 kubelet[2875]: E0702 08:58:32.190753 2875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.172:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:32.202174 containerd[2019]: time="2024-07-02T08:58:32.201920279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:32.202174 containerd[2019]: time="2024-07-02T08:58:32.202048523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:32.202513 containerd[2019]: time="2024-07-02T08:58:32.202142387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:32.202513 containerd[2019]: time="2024-07-02T08:58:32.202180379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:32.202819 containerd[2019]: time="2024-07-02T08:58:32.202674851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:32.202997 containerd[2019]: time="2024-07-02T08:58:32.202790495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:32.202997 containerd[2019]: time="2024-07-02T08:58:32.202837139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:32.202997 containerd[2019]: time="2024-07-02T08:58:32.202871171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:32.207278 containerd[2019]: time="2024-07-02T08:58:32.207023531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:32.207278 containerd[2019]: time="2024-07-02T08:58:32.207191699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:32.207278 containerd[2019]: time="2024-07-02T08:58:32.207224195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:32.207851 containerd[2019]: time="2024-07-02T08:58:32.207249287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:32.249955 systemd[1]: Started cri-containerd-26e8d0ba3cf9db19aea29d0cfbf5f3114a8cc2017c9ddc6b41cd1f350dee20b8.scope - libcontainer container 26e8d0ba3cf9db19aea29d0cfbf5f3114a8cc2017c9ddc6b41cd1f350dee20b8. Jul 2 08:58:32.263765 systemd[1]: Started cri-containerd-09fb10830ec8b873a4f5c06581f1bd7bd55bfdb0a1e1b09c751a36c7bfe9f0fe.scope - libcontainer container 09fb10830ec8b873a4f5c06581f1bd7bd55bfdb0a1e1b09c751a36c7bfe9f0fe. Jul 2 08:58:32.281391 systemd[1]: Started cri-containerd-a588caba8fe23281655f32629738d95aa30ebc206d5b9ca08aec69b93bf62333.scope - libcontainer container a588caba8fe23281655f32629738d95aa30ebc206d5b9ca08aec69b93bf62333. Jul 2 08:58:32.327326 kubelet[2875]: W0702 08:58:32.327186 2875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.30.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:32.327326 kubelet[2875]: E0702 08:58:32.327283 2875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.172:6443: connect: connection refused Jul 2 08:58:32.366966 kubelet[2875]: E0702 08:58:32.366320 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-172?timeout=10s\": dial tcp 172.31.30.172:6443: connect: connection refused" interval="1.6s" Jul 2 08:58:32.386833 containerd[2019]: time="2024-07-02T08:58:32.386507256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-172,Uid:07cc41bdf1424574525fb5e143fb4019,Namespace:kube-system,Attempt:0,} returns sandbox id \"09fb10830ec8b873a4f5c06581f1bd7bd55bfdb0a1e1b09c751a36c7bfe9f0fe\"" Jul 2 08:58:32.390094 containerd[2019]: time="2024-07-02T08:58:32.389778420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-172,Uid:c052580d9dfe9b67c29a1d8df78370d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a588caba8fe23281655f32629738d95aa30ebc206d5b9ca08aec69b93bf62333\"" Jul 2 08:58:32.399737 containerd[2019]: time="2024-07-02T08:58:32.399465216Z" level=info msg="CreateContainer within sandbox \"09fb10830ec8b873a4f5c06581f1bd7bd55bfdb0a1e1b09c751a36c7bfe9f0fe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 08:58:32.401937 containerd[2019]: time="2024-07-02T08:58:32.401762424Z" level=info msg="CreateContainer within sandbox \"a588caba8fe23281655f32629738d95aa30ebc206d5b9ca08aec69b93bf62333\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 08:58:32.405271 
containerd[2019]: time="2024-07-02T08:58:32.405196884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-172,Uid:2acd8a162b226ea369c2ab9c1a2a4a67,Namespace:kube-system,Attempt:0,} returns sandbox id \"26e8d0ba3cf9db19aea29d0cfbf5f3114a8cc2017c9ddc6b41cd1f350dee20b8\"" Jul 2 08:58:32.411713 containerd[2019]: time="2024-07-02T08:58:32.411649920Z" level=info msg="CreateContainer within sandbox \"26e8d0ba3cf9db19aea29d0cfbf5f3114a8cc2017c9ddc6b41cd1f350dee20b8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 08:58:32.442493 containerd[2019]: time="2024-07-02T08:58:32.442340676Z" level=info msg="CreateContainer within sandbox \"a588caba8fe23281655f32629738d95aa30ebc206d5b9ca08aec69b93bf62333\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c310fbeea11a9aa5db0c379f505d372f6556d153dc14b315df5eecd9f5154fb5\"" Jul 2 08:58:32.445136 containerd[2019]: time="2024-07-02T08:58:32.444481356Z" level=info msg="StartContainer for \"c310fbeea11a9aa5db0c379f505d372f6556d153dc14b315df5eecd9f5154fb5\"" Jul 2 08:58:32.463385 containerd[2019]: time="2024-07-02T08:58:32.463326168Z" level=info msg="CreateContainer within sandbox \"09fb10830ec8b873a4f5c06581f1bd7bd55bfdb0a1e1b09c751a36c7bfe9f0fe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"03c6a2dc34988def880118e07123bce8c044074755b16fe01bd5ef140ec1bfa3\"" Jul 2 08:58:32.464362 containerd[2019]: time="2024-07-02T08:58:32.464311260Z" level=info msg="StartContainer for \"03c6a2dc34988def880118e07123bce8c044074755b16fe01bd5ef140ec1bfa3\"" Jul 2 08:58:32.471749 kubelet[2875]: I0702 08:58:32.471145 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-172" Jul 2 08:58:32.471749 kubelet[2875]: E0702 08:58:32.471624 2875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.172:6443/api/v1/nodes\": dial tcp 172.31.30.172:6443: connect: connection refused" node="ip-172-31-30-172" Jul 2 08:58:32.471990 containerd[2019]: time="2024-07-02T08:58:32.468659928Z" level=info msg="CreateContainer within sandbox \"26e8d0ba3cf9db19aea29d0cfbf5f3114a8cc2017c9ddc6b41cd1f350dee20b8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7ac10a88f624d10af43254b5ed4f0687e3bd0ab5d2a93351614f3c7123594802\"" Jul 2 08:58:32.472695 containerd[2019]: time="2024-07-02T08:58:32.472634785Z" level=info msg="StartContainer for \"7ac10a88f624d10af43254b5ed4f0687e3bd0ab5d2a93351614f3c7123594802\"" Jul 2 08:58:32.506415 systemd[1]: Started cri-containerd-c310fbeea11a9aa5db0c379f505d372f6556d153dc14b315df5eecd9f5154fb5.scope - libcontainer container c310fbeea11a9aa5db0c379f505d372f6556d153dc14b315df5eecd9f5154fb5. Jul 2 08:58:32.552391 systemd[1]: Started cri-containerd-7ac10a88f624d10af43254b5ed4f0687e3bd0ab5d2a93351614f3c7123594802.scope - libcontainer container 7ac10a88f624d10af43254b5ed4f0687e3bd0ab5d2a93351614f3c7123594802. Jul 2 08:58:32.562380 systemd[1]: Started cri-containerd-03c6a2dc34988def880118e07123bce8c044074755b16fe01bd5ef140ec1bfa3.scope - libcontainer container 03c6a2dc34988def880118e07123bce8c044074755b16fe01bd5ef140ec1bfa3. 
Jul 2 08:58:32.655395 containerd[2019]: time="2024-07-02T08:58:32.655206769Z" level=info msg="StartContainer for \"c310fbeea11a9aa5db0c379f505d372f6556d153dc14b315df5eecd9f5154fb5\" returns successfully" Jul 2 08:58:32.691328 containerd[2019]: time="2024-07-02T08:58:32.691255922Z" level=info msg="StartContainer for \"7ac10a88f624d10af43254b5ed4f0687e3bd0ab5d2a93351614f3c7123594802\" returns successfully" Jul 2 08:58:32.708696 containerd[2019]: time="2024-07-02T08:58:32.708462986Z" level=info msg="StartContainer for \"03c6a2dc34988def880118e07123bce8c044074755b16fe01bd5ef140ec1bfa3\" returns successfully" Jul 2 08:58:34.076234 kubelet[2875]: I0702 08:58:34.074134 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-172" Jul 2 08:58:35.940097 kubelet[2875]: I0702 08:58:35.939745 2875 apiserver.go:52] "Watching apiserver" Jul 2 08:58:35.965769 kubelet[2875]: E0702 08:58:35.965718 2875 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-172\" not found" node="ip-172-31-30-172" Jul 2 08:58:36.009581 kubelet[2875]: I0702 08:58:36.009338 2875 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-172" Jul 2 08:58:36.065701 kubelet[2875]: I0702 08:58:36.065656 2875 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:58:36.295099 kubelet[2875]: E0702 08:58:36.292912 2875 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-30-172.17de59addada41fd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-172,UID:ip-172-31-30-172,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-172,},FirstTimestamp:2024-07-02 08:58:30.938059261 +0000 UTC m=+1.026618558,LastTimestamp:2024-07-02 08:58:30.938059261 +0000 UTC m=+1.026618558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-172,}" Jul 2 08:58:39.094565 systemd[1]: Reloading requested from client PID 3150 ('systemctl') (unit session-7.scope)... Jul 2 08:58:39.094593 systemd[1]: Reloading... Jul 2 08:58:39.274133 zram_generator::config[3192]: No configuration found. Jul 2 08:58:39.508209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:58:39.709166 systemd[1]: Reloading finished in 613 ms. Jul 2 08:58:39.784863 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:39.785365 kubelet[2875]: I0702 08:58:39.785259 2875 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:58:39.794561 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:58:39.795004 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:39.795115 systemd[1]: kubelet.service: Consumed 1.729s CPU time, 112.3M memory peak, 0B memory swap peak. Jul 2 08:58:39.802592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:40.148055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 08:58:40.165679 (kubelet)[3248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 08:58:40.288567 kubelet[3248]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:58:40.288567 kubelet[3248]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:58:40.288567 kubelet[3248]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:58:40.289116 kubelet[3248]: I0702 08:58:40.288671 3248 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:58:40.297447 kubelet[3248]: I0702 08:58:40.297326 3248 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 08:58:40.297447 kubelet[3248]: I0702 08:58:40.297384 3248 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:58:40.300049 kubelet[3248]: I0702 08:58:40.298157 3248 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 08:58:40.300383 sudo[3260]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 08:58:40.301933 kubelet[3248]: I0702 08:58:40.301206 3248 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 08:58:40.301679 sudo[3260]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 08:58:40.307240 kubelet[3248]: I0702 08:58:40.306018 3248 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:58:40.333990 kubelet[3248]: I0702 08:58:40.333895 3248 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 08:58:40.334592 kubelet[3248]: I0702 08:58:40.334325 3248 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:58:40.334683 kubelet[3248]: I0702 08:58:40.334588 3248 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:58:40.334683 kubelet[3248]: I0702 08:58:40.334615 3248 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:58:40.334683 kubelet[3248]: I0702 08:58:40.334635 3248 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:58:40.334901 kubelet[3248]: I0702 08:58:40.334690 3248 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:58:40.334901 kubelet[3248]: I0702 08:58:40.334895 3248 kubelet.go:396] "Attempting to sync node with API server" Jul 2 08:58:40.336818 kubelet[3248]: I0702 08:58:40.335562 3248 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:58:40.336818 kubelet[3248]: I0702 08:58:40.335667 3248 kubelet.go:312] "Adding apiserver pod source" Jul 2 08:58:40.336818 kubelet[3248]: I0702 08:58:40.335712 3248 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:58:40.338474 kubelet[3248]: I0702 08:58:40.338426 3248 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 08:58:40.339131 kubelet[3248]: I0702 08:58:40.338748 3248 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 08:58:40.364350 kubelet[3248]: I0702 08:58:40.363126 3248 server.go:1256] "Started kubelet" Jul 2 08:58:40.377844 kubelet[3248]: I0702 08:58:40.377789 3248 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:58:40.383139 kubelet[3248]: I0702 08:58:40.383060 3248 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:58:40.386401 kubelet[3248]: I0702 08:58:40.386352 3248 server.go:461] "Adding debug handlers to kubelet server" Jul 2 08:58:40.387218 kubelet[3248]: I0702 08:58:40.386920 3248 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Jul 2 08:58:40.415177 kubelet[3248]: I0702 08:58:40.414977 3248 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:58:40.415177 kubelet[3248]: I0702 08:58:40.406092 3248 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:58:40.436848 kubelet[3248]: I0702 08:58:40.406134 3248 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 08:58:40.436848 kubelet[3248]: I0702 08:58:40.436694 3248 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 08:58:40.446749 kubelet[3248]: I0702 08:58:40.446390 3248 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:58:40.483449 kubelet[3248]: I0702 08:58:40.483207 3248 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:58:40.485938 kubelet[3248]: I0702 08:58:40.485896 3248 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:58:40.486129 kubelet[3248]: I0702 08:58:40.486110 3248 factory.go:221] Registration of the systemd container factory successfully Jul 2 08:58:40.503327 kubelet[3248]: I0702 08:58:40.503260 3248 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 08:58:40.503327 kubelet[3248]: I0702 08:58:40.503318 3248 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:58:40.503512 kubelet[3248]: I0702 08:58:40.503348 3248 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 08:58:40.503512 kubelet[3248]: E0702 08:58:40.503452 3248 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:58:40.519183 kubelet[3248]: E0702 08:58:40.518594 3248 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:58:40.522648 kubelet[3248]: E0702 08:58:40.522614 3248 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Jul 2 08:58:40.528717 kubelet[3248]: I0702 08:58:40.528680 3248 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-172" Jul 2 08:58:40.561560 kubelet[3248]: I0702 08:58:40.561524 3248 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-30-172" Jul 2 08:58:40.562011 kubelet[3248]: I0702 08:58:40.561962 3248 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-172" Jul 2 08:58:40.604133 kubelet[3248]: E0702 08:58:40.603722 3248 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 08:58:40.650409 kubelet[3248]: I0702 08:58:40.649903 3248 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:58:40.650409 kubelet[3248]: I0702 08:58:40.649941 3248 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:58:40.650409 kubelet[3248]: I0702 08:58:40.649971 3248 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:58:40.650409 kubelet[3248]: I0702 08:58:40.650246 3248 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 08:58:40.650409 kubelet[3248]: I0702 08:58:40.650285 3248 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 08:58:40.650409 kubelet[3248]: I0702 08:58:40.650302 3248 policy_none.go:49] "None policy: Start" Jul 2 08:58:40.653147 kubelet[3248]: I0702 08:58:40.651968 3248 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:58:40.653147 kubelet[3248]: I0702 08:58:40.652018 3248 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:58:40.653147 kubelet[3248]: I0702 08:58:40.652365 3248 state_mem.go:75] "Updated machine memory state" Jul 2 08:58:40.667242 kubelet[3248]: I0702 08:58:40.667112 3248 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:58:40.672753 kubelet[3248]: I0702 08:58:40.672717 3248 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:58:40.806305 kubelet[3248]: I0702 08:58:40.804441 3248 topology_manager.go:215] "Topology Admit Handler" podUID="2acd8a162b226ea369c2ab9c1a2a4a67" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-172" Jul 2 08:58:40.806305 kubelet[3248]: I0702 08:58:40.804575 3248 topology_manager.go:215] "Topology Admit Handler" podUID="c052580d9dfe9b67c29a1d8df78370d7" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-172" Jul 2 08:58:40.806305 kubelet[3248]: I0702 08:58:40.804677 3248 topology_manager.go:215] "Topology Admit Handler" podUID="07cc41bdf1424574525fb5e143fb4019" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-172" Jul 2 08:58:40.819397 kubelet[3248]: E0702 08:58:40.818576 3248 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-172\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-172" Jul 2 08:58:40.821830 kubelet[3248]: E0702 08:58:40.821789 3248 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-30-172\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-172" Jul 2 08:58:40.840056 kubelet[3248]: I0702 08:58:40.840013 3248 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2acd8a162b226ea369c2ab9c1a2a4a67-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-172\" (UID: \"2acd8a162b226ea369c2ab9c1a2a4a67\") " pod="kube-system/kube-scheduler-ip-172-31-30-172" Jul 2 08:58:40.840837 kubelet[3248]: I0702 08:58:40.840592 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c052580d9dfe9b67c29a1d8df78370d7-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-172\" (UID: \"c052580d9dfe9b67c29a1d8df78370d7\") " pod="kube-system/kube-apiserver-ip-172-31-30-172" Jul 2 08:58:40.840837 kubelet[3248]: I0702 08:58:40.840653 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c052580d9dfe9b67c29a1d8df78370d7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-172\" (UID: \"c052580d9dfe9b67c29a1d8df78370d7\") " pod="kube-system/kube-apiserver-ip-172-31-30-172" Jul 2 08:58:40.840837 kubelet[3248]: I0702 08:58:40.840702 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/07cc41bdf1424574525fb5e143fb4019-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-172\" (UID: \"07cc41bdf1424574525fb5e143fb4019\") " pod="kube-system/kube-controller-manager-ip-172-31-30-172" Jul 2 08:58:40.841641 kubelet[3248]: I0702 08:58:40.841446 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07cc41bdf1424574525fb5e143fb4019-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-172\" (UID: \"07cc41bdf1424574525fb5e143fb4019\") " pod="kube-system/kube-controller-manager-ip-172-31-30-172" Jul 2 08:58:40.841641 kubelet[3248]: I0702 08:58:40.841541 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c052580d9dfe9b67c29a1d8df78370d7-ca-certs\") pod \"kube-apiserver-ip-172-31-30-172\" (UID: \"c052580d9dfe9b67c29a1d8df78370d7\") " pod="kube-system/kube-apiserver-ip-172-31-30-172" Jul 2 08:58:40.842666 kubelet[3248]: I0702 08:58:40.841926 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07cc41bdf1424574525fb5e143fb4019-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-172\" (UID: \"07cc41bdf1424574525fb5e143fb4019\") " pod="kube-system/kube-controller-manager-ip-172-31-30-172" Jul 2 08:58:40.842666 kubelet[3248]: I0702 08:58:40.842019 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07cc41bdf1424574525fb5e143fb4019-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-172\" (UID: \"07cc41bdf1424574525fb5e143fb4019\") " pod="kube-system/kube-controller-manager-ip-172-31-30-172" Jul 2 08:58:40.842666 kubelet[3248]: I0702 08:58:40.842103 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07cc41bdf1424574525fb5e143fb4019-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-172\" (UID: 
\"07cc41bdf1424574525fb5e143fb4019\") " pod="kube-system/kube-controller-manager-ip-172-31-30-172" Jul 2 08:58:41.199280 sudo[3260]: pam_unix(sudo:session): session closed for user root Jul 2 08:58:41.339718 kubelet[3248]: I0702 08:58:41.339370 3248 apiserver.go:52] "Watching apiserver" Jul 2 08:58:41.436694 kubelet[3248]: I0702 08:58:41.436600 3248 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:58:41.645144 kubelet[3248]: I0702 08:58:41.643944 3248 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-172" podStartSLOduration=1.64388239 podStartE2EDuration="1.64388239s" podCreationTimestamp="2024-07-02 08:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:58:41.599189446 +0000 UTC m=+1.421892248" watchObservedRunningTime="2024-07-02 08:58:41.64388239 +0000 UTC m=+1.466585204" Jul 2 08:58:41.688312 kubelet[3248]: I0702 08:58:41.687995 3248 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-172" podStartSLOduration=5.687940834 podStartE2EDuration="5.687940834s" podCreationTimestamp="2024-07-02 08:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:58:41.645728698 +0000 UTC m=+1.468431512" watchObservedRunningTime="2024-07-02 08:58:41.687940834 +0000 UTC m=+1.510643624" Jul 2 08:58:41.751101 kubelet[3248]: I0702 08:58:41.750442 3248 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-172" podStartSLOduration=4.750386243 podStartE2EDuration="4.750386243s" podCreationTimestamp="2024-07-02 08:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:58:41.68966305 +0000 UTC m=+1.512365864" watchObservedRunningTime="2024-07-02 08:58:41.750386243 +0000 UTC m=+1.573089021" Jul 2 08:58:43.476346 sudo[2337]: pam_unix(sudo:session): session closed for user root Jul 2 08:58:43.500420 sshd[2334]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:43.508028 systemd[1]: sshd@6-172.31.30.172:22-147.75.109.163:56380.service: Deactivated successfully. Jul 2 08:58:43.513500 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:58:43.514991 systemd[1]: session-7.scope: Consumed 11.518s CPU time, 134.4M memory peak, 0B memory swap peak. Jul 2 08:58:43.516524 systemd-logind[1994]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:58:43.518901 systemd-logind[1994]: Removed session 7. Jul 2 08:58:44.439951 update_engine[1997]: I0702 08:58:44.439818 1997 update_attempter.cc:509] Updating boot flags... Jul 2 08:58:44.532185 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3333) Jul 2 08:58:51.603334 kubelet[3248]: I0702 08:58:51.603294 3248 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 08:58:51.606625 containerd[2019]: time="2024-07-02T08:58:51.605387396Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 08:58:51.609274 kubelet[3248]: I0702 08:58:51.606299 3248 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 08:58:52.590458 kubelet[3248]: I0702 08:58:52.590396 3248 topology_manager.go:215] "Topology Admit Handler" podUID="c649178f-4a06-45e2-bcda-a59ad48806e7" podNamespace="kube-system" podName="kube-proxy-cd9bx" Jul 2 08:58:52.612366 kubelet[3248]: I0702 08:58:52.612294 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c649178f-4a06-45e2-bcda-a59ad48806e7-kube-proxy\") pod \"kube-proxy-cd9bx\" (UID: \"c649178f-4a06-45e2-bcda-a59ad48806e7\") " pod="kube-system/kube-proxy-cd9bx" Jul 2 08:58:52.612366 kubelet[3248]: I0702 08:58:52.612371 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c649178f-4a06-45e2-bcda-a59ad48806e7-lib-modules\") pod \"kube-proxy-cd9bx\" (UID: \"c649178f-4a06-45e2-bcda-a59ad48806e7\") " pod="kube-system/kube-proxy-cd9bx" Jul 2 08:58:52.612978 kubelet[3248]: I0702 08:58:52.612423 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blj28\" (UniqueName: \"kubernetes.io/projected/c649178f-4a06-45e2-bcda-a59ad48806e7-kube-api-access-blj28\") pod \"kube-proxy-cd9bx\" (UID: \"c649178f-4a06-45e2-bcda-a59ad48806e7\") " pod="kube-system/kube-proxy-cd9bx" Jul 2 08:58:52.612978 kubelet[3248]: I0702 08:58:52.612470 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c649178f-4a06-45e2-bcda-a59ad48806e7-xtables-lock\") pod \"kube-proxy-cd9bx\" (UID: \"c649178f-4a06-45e2-bcda-a59ad48806e7\") " pod="kube-system/kube-proxy-cd9bx" Jul 2 08:58:52.613475 systemd[1]: Created slice kubepods-besteffort-podc649178f_4a06_45e2_bcda_a59ad48806e7.slice - libcontainer container kubepods-besteffort-podc649178f_4a06_45e2_bcda_a59ad48806e7.slice. Jul 2 08:58:52.623058 kubelet[3248]: I0702 08:58:52.622995 3248 topology_manager.go:215] "Topology Admit Handler" podUID="7406341c-c44d-4a35-a784-a85760c61b26" podNamespace="kube-system" podName="cilium-bz8wk" Jul 2 08:58:52.644101 systemd[1]: Created slice kubepods-burstable-pod7406341c_c44d_4a35_a784_a85760c61b26.slice - libcontainer container kubepods-burstable-pod7406341c_c44d_4a35_a784_a85760c61b26.slice. 
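The "Created slice" entries above and earlier show the naming the systemd cgroup driver uses for pod cgroups: the pod UID has its dashes replaced by underscores and is appended to the QoS slice name (kubepods-besteffort-pod…, kubepods-burstable-pod…). A minimal sketch that reproduces the slice names from the log; this is not the kubelet's actual implementation:

    // podslice.go: rebuild the slice names seen in the "Created slice" entries.
    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName builds e.g. "kubepods-besteffort-podc649178f_4a06_....slice".
    func sliceName(qos, podUID string) string {
        escaped := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
    }

    func main() {
        fmt.Println(sliceName("besteffort", "c649178f-4a06-45e2-bcda-a59ad48806e7")) // kube-proxy-cd9bx
        fmt.Println(sliceName("burstable", "7406341c-c44d-4a35-a784-a85760c61b26"))  // cilium-bz8wk
    }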
Jul 2 08:58:52.713299 kubelet[3248]: I0702 08:58:52.713127 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7406341c-c44d-4a35-a784-a85760c61b26-cilium-config-path\") pod \"cilium-bz8wk\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " pod="kube-system/cilium-bz8wk" Jul 2 08:58:52.715110 kubelet[3248]: I0702 08:58:52.713222 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-cilium-run\") pod \"cilium-bz8wk\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " pod="kube-system/cilium-bz8wk" Jul 2 08:58:52.715110 kubelet[3248]: I0702 08:58:52.713584 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-xtables-lock\") pod \"cilium-bz8wk\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " pod="kube-system/cilium-bz8wk" Jul 2 08:58:52.715110 kubelet[3248]: I0702 08:58:52.713680 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7406341c-c44d-4a35-a784-a85760c61b26-clustermesh-secrets\") pod \"cilium-bz8wk\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " pod="kube-system/cilium-bz8wk" Jul 2 08:58:52.715110 kubelet[3248]: I0702 08:58:52.713758 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-host-proc-sys-kernel\") pod \"cilium-bz8wk\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " pod="kube-system/cilium-bz8wk" Jul 2 08:58:52.715110 kubelet[3248]: I0702 08:58:52.713806 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmkrq\" (UniqueName: \"kubernetes.io/projected/7406341c-c44d-4a35-a784-a85760c61b26-kube-api-access-pmkrq\") pod \"cilium-bz8wk\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " pod="kube-system/cilium-bz8wk" Jul 2 08:58:52.715110 kubelet[3248]: I0702 08:58:52.713856 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-cni-path\") pod \"cilium-bz8wk\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " pod="kube-system/cilium-bz8wk" Jul 2 08:58:52.715496 kubelet[3248]: I0702 08:58:52.713901 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-host-proc-sys-net\") pod \"cilium-bz8wk\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " pod="kube-system/cilium-bz8wk" Jul 2 08:58:52.715496 kubelet[3248]: I0702 08:58:52.713943 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7406341c-c44d-4a35-a784-a85760c61b26-hubble-tls\") pod \"cilium-bz8wk\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " pod="kube-system/cilium-bz8wk" Jul 2 08:58:52.715496 kubelet[3248]: I0702 08:58:52.713987 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-cilium-cgroup\") pod \"cilium-bz8wk\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " pod="kube-system/cilium-bz8wk" Jul 2 08:58:52.715496 kubelet[3248]: I0702 08:58:52.714090 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-etc-cni-netd\") pod \"cilium-bz8wk\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " pod="kube-system/cilium-bz8wk" Jul 2 08:58:52.715496 kubelet[3248]: I0702 08:58:52.714139 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-lib-modules\") pod \"cilium-bz8wk\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " pod="kube-system/cilium-bz8wk" Jul 2 08:58:52.715496 kubelet[3248]: I0702 08:58:52.714205 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-bpf-maps\") pod \"cilium-bz8wk\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " pod="kube-system/cilium-bz8wk" Jul 2 08:58:52.715828 kubelet[3248]: I0702 08:58:52.714249 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-hostproc\") pod \"cilium-bz8wk\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " pod="kube-system/cilium-bz8wk" Jul 2 08:58:52.788931 kubelet[3248]: I0702 08:58:52.787900 3248 topology_manager.go:215] "Topology Admit Handler" podUID="bee1e216-d403-44de-8b44-339179cf3083" podNamespace="kube-system" podName="cilium-operator-5cc964979-gk588" Jul 2 08:58:52.805960 systemd[1]: Created slice kubepods-besteffort-podbee1e216_d403_44de_8b44_339179cf3083.slice - libcontainer container kubepods-besteffort-podbee1e216_d403_44de_8b44_339179cf3083.slice. Jul 2 08:58:52.815187 kubelet[3248]: I0702 08:58:52.814821 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjpbm\" (UniqueName: \"kubernetes.io/projected/bee1e216-d403-44de-8b44-339179cf3083-kube-api-access-sjpbm\") pod \"cilium-operator-5cc964979-gk588\" (UID: \"bee1e216-d403-44de-8b44-339179cf3083\") " pod="kube-system/cilium-operator-5cc964979-gk588" Jul 2 08:58:52.815608 kubelet[3248]: I0702 08:58:52.815561 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bee1e216-d403-44de-8b44-339179cf3083-cilium-config-path\") pod \"cilium-operator-5cc964979-gk588\" (UID: \"bee1e216-d403-44de-8b44-339179cf3083\") " pod="kube-system/cilium-operator-5cc964979-gk588" Jul 2 08:58:52.938173 containerd[2019]: time="2024-07-02T08:58:52.937709386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cd9bx,Uid:c649178f-4a06-45e2-bcda-a59ad48806e7,Namespace:kube-system,Attempt:0,}" Jul 2 08:58:53.015499 containerd[2019]: time="2024-07-02T08:58:53.014923015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:53.015957 containerd[2019]: time="2024-07-02T08:58:53.015542119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:53.015957 containerd[2019]: time="2024-07-02T08:58:53.015626743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:53.015957 containerd[2019]: time="2024-07-02T08:58:53.015663715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:53.054382 systemd[1]: Started cri-containerd-e895841667a42eb0afe3f9849a02e63ea1e9a0f3fe20225621e24f00671e1aa5.scope - libcontainer container e895841667a42eb0afe3f9849a02e63ea1e9a0f3fe20225621e24f00671e1aa5. Jul 2 08:58:53.097843 containerd[2019]: time="2024-07-02T08:58:53.097454431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cd9bx,Uid:c649178f-4a06-45e2-bcda-a59ad48806e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e895841667a42eb0afe3f9849a02e63ea1e9a0f3fe20225621e24f00671e1aa5\"" Jul 2 08:58:53.105227 containerd[2019]: time="2024-07-02T08:58:53.104740027Z" level=info msg="CreateContainer within sandbox \"e895841667a42eb0afe3f9849a02e63ea1e9a0f3fe20225621e24f00671e1aa5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 08:58:53.113811 containerd[2019]: time="2024-07-02T08:58:53.113759611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gk588,Uid:bee1e216-d403-44de-8b44-339179cf3083,Namespace:kube-system,Attempt:0,}" Jul 2 08:58:53.141444 containerd[2019]: time="2024-07-02T08:58:53.141375343Z" level=info msg="CreateContainer within sandbox \"e895841667a42eb0afe3f9849a02e63ea1e9a0f3fe20225621e24f00671e1aa5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ed5485ce38a17df159514505c94a394c7c3b5b230b111612edf97a5b11890a46\"" Jul 2 08:58:53.147221 containerd[2019]: time="2024-07-02T08:58:53.144451759Z" level=info msg="StartContainer for \"ed5485ce38a17df159514505c94a394c7c3b5b230b111612edf97a5b11890a46\"" Jul 2 08:58:53.172395 containerd[2019]: time="2024-07-02T08:58:53.172199515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:53.172987 containerd[2019]: time="2024-07-02T08:58:53.172342807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:53.172987 containerd[2019]: time="2024-07-02T08:58:53.172388743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:53.172987 containerd[2019]: time="2024-07-02T08:58:53.172423675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:53.211498 systemd[1]: Started cri-containerd-e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453.scope - libcontainer container e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453. Jul 2 08:58:53.220246 systemd[1]: Started cri-containerd-ed5485ce38a17df159514505c94a394c7c3b5b230b111612edf97a5b11890a46.scope - libcontainer container ed5485ce38a17df159514505c94a394c7c3b5b230b111612edf97a5b11890a46. 
Jul 2 08:58:53.251745 containerd[2019]: time="2024-07-02T08:58:53.251595500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bz8wk,Uid:7406341c-c44d-4a35-a784-a85760c61b26,Namespace:kube-system,Attempt:0,}" Jul 2 08:58:53.332792 containerd[2019]: time="2024-07-02T08:58:53.332731280Z" level=info msg="StartContainer for \"ed5485ce38a17df159514505c94a394c7c3b5b230b111612edf97a5b11890a46\" returns successfully" Jul 2 08:58:53.334672 containerd[2019]: time="2024-07-02T08:58:53.334303976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gk588,Uid:bee1e216-d403-44de-8b44-339179cf3083,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453\"" Jul 2 08:58:53.336349 containerd[2019]: time="2024-07-02T08:58:53.334353536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:53.336349 containerd[2019]: time="2024-07-02T08:58:53.334460912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:53.336349 containerd[2019]: time="2024-07-02T08:58:53.334518488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:53.336349 containerd[2019]: time="2024-07-02T08:58:53.334553432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:53.347144 containerd[2019]: time="2024-07-02T08:58:53.345800792Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 08:58:53.384731 systemd[1]: Started cri-containerd-8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be.scope - libcontainer container 8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be. Jul 2 08:58:53.444515 containerd[2019]: time="2024-07-02T08:58:53.444419301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bz8wk,Uid:7406341c-c44d-4a35-a784-a85760c61b26,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\"" Jul 2 08:58:53.661203 kubelet[3248]: I0702 08:58:53.661140 3248 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cd9bx" podStartSLOduration=1.6610058859999999 podStartE2EDuration="1.661005886s" podCreationTimestamp="2024-07-02 08:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:58:53.657624394 +0000 UTC m=+13.480327208" watchObservedRunningTime="2024-07-02 08:58:53.661005886 +0000 UTC m=+13.483708688" Jul 2 08:58:54.613002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1281821472.mount: Deactivated successfully. 
Jul 2 08:58:55.190149 containerd[2019]: time="2024-07-02T08:58:55.189580965Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:55.191217 containerd[2019]: time="2024-07-02T08:58:55.191152461Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138282" Jul 2 08:58:55.192787 containerd[2019]: time="2024-07-02T08:58:55.192716697Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:55.195964 containerd[2019]: time="2024-07-02T08:58:55.195764481Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.849888605s" Jul 2 08:58:55.195964 containerd[2019]: time="2024-07-02T08:58:55.195826953Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 2 08:58:55.198099 containerd[2019]: time="2024-07-02T08:58:55.197664213Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 08:58:55.200509 containerd[2019]: time="2024-07-02T08:58:55.200437161Z" level=info msg="CreateContainer within sandbox \"e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 08:58:55.232436 containerd[2019]: time="2024-07-02T08:58:55.232358854Z" level=info msg="CreateContainer within sandbox \"e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49\"" Jul 2 08:58:55.235221 containerd[2019]: time="2024-07-02T08:58:55.233130898Z" level=info msg="StartContainer for \"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49\"" Jul 2 08:58:55.292395 systemd[1]: Started cri-containerd-6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49.scope - libcontainer container 6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49. 
Jul 2 08:58:55.344277 containerd[2019]: time="2024-07-02T08:58:55.344212486Z" level=info msg="StartContainer for \"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49\" returns successfully" Jul 2 08:59:00.540385 kubelet[3248]: I0702 08:59:00.540228 3248 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-gk588" podStartSLOduration=6.686230767 podStartE2EDuration="8.540128992s" podCreationTimestamp="2024-07-02 08:58:52 +0000 UTC" firstStartedPulling="2024-07-02 08:58:53.342757736 +0000 UTC m=+13.165460514" lastFinishedPulling="2024-07-02 08:58:55.196655949 +0000 UTC m=+15.019358739" observedRunningTime="2024-07-02 08:58:55.677133036 +0000 UTC m=+15.499835850" watchObservedRunningTime="2024-07-02 08:59:00.540128992 +0000 UTC m=+20.362831878" Jul 2 08:59:00.609889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907436432.mount: Deactivated successfully. Jul 2 08:59:03.281586 containerd[2019]: time="2024-07-02T08:59:03.281512434Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:03.283752 containerd[2019]: time="2024-07-02T08:59:03.283678926Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651546" Jul 2 08:59:03.285157 containerd[2019]: time="2024-07-02T08:59:03.285050718Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:03.290411 containerd[2019]: time="2024-07-02T08:59:03.290347638Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.091929465s" Jul 2 08:59:03.290558 containerd[2019]: time="2024-07-02T08:59:03.290415462Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 2 08:59:03.294559 containerd[2019]: time="2024-07-02T08:59:03.294312894Z" level=info msg="CreateContainer within sandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:59:03.319903 containerd[2019]: time="2024-07-02T08:59:03.319729230Z" level=info msg="CreateContainer within sandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706\"" Jul 2 08:59:03.321210 containerd[2019]: time="2024-07-02T08:59:03.321024942Z" level=info msg="StartContainer for \"4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706\"" Jul 2 08:59:03.383396 systemd[1]: Started cri-containerd-4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706.scope - libcontainer container 4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706. 
Jul 2 08:59:03.439327 containerd[2019]: time="2024-07-02T08:59:03.439242030Z" level=info msg="StartContainer for \"4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706\" returns successfully" Jul 2 08:59:03.455942 systemd[1]: cri-containerd-4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706.scope: Deactivated successfully. Jul 2 08:59:04.288627 containerd[2019]: time="2024-07-02T08:59:04.288502435Z" level=info msg="shim disconnected" id=4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706 namespace=k8s.io Jul 2 08:59:04.288627 containerd[2019]: time="2024-07-02T08:59:04.288606679Z" level=warning msg="cleaning up after shim disconnected" id=4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706 namespace=k8s.io Jul 2 08:59:04.288627 containerd[2019]: time="2024-07-02T08:59:04.288628663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:59:04.312593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706-rootfs.mount: Deactivated successfully. Jul 2 08:59:04.678264 containerd[2019]: time="2024-07-02T08:59:04.677812700Z" level=info msg="CreateContainer within sandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:59:04.703828 containerd[2019]: time="2024-07-02T08:59:04.703717581Z" level=info msg="CreateContainer within sandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036\"" Jul 2 08:59:04.704780 containerd[2019]: time="2024-07-02T08:59:04.704719413Z" level=info msg="StartContainer for \"0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036\"" Jul 2 08:59:04.769387 systemd[1]: Started cri-containerd-0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036.scope - libcontainer container 0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036. Jul 2 08:59:04.814395 containerd[2019]: time="2024-07-02T08:59:04.814274805Z" level=info msg="StartContainer for \"0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036\" returns successfully" Jul 2 08:59:04.836972 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:59:04.837521 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:59:04.837646 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:59:04.849711 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:59:04.850219 systemd[1]: cri-containerd-0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036.scope: Deactivated successfully. Jul 2 08:59:04.894534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036-rootfs.mount: Deactivated successfully. Jul 2 08:59:04.897528 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 2 08:59:04.903924 containerd[2019]: time="2024-07-02T08:59:04.903739306Z" level=info msg="shim disconnected" id=0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036 namespace=k8s.io Jul 2 08:59:04.904264 containerd[2019]: time="2024-07-02T08:59:04.903977986Z" level=warning msg="cleaning up after shim disconnected" id=0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036 namespace=k8s.io Jul 2 08:59:04.904264 containerd[2019]: time="2024-07-02T08:59:04.904001542Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:59:05.683582 containerd[2019]: time="2024-07-02T08:59:05.683459433Z" level=info msg="CreateContainer within sandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:59:05.728189 containerd[2019]: time="2024-07-02T08:59:05.727993114Z" level=info msg="CreateContainer within sandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277\"" Jul 2 08:59:05.728851 containerd[2019]: time="2024-07-02T08:59:05.728666194Z" level=info msg="StartContainer for \"a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277\"" Jul 2 08:59:05.792860 systemd[1]: Started cri-containerd-a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277.scope - libcontainer container a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277. Jul 2 08:59:05.854583 containerd[2019]: time="2024-07-02T08:59:05.854399890Z" level=info msg="StartContainer for \"a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277\" returns successfully" Jul 2 08:59:05.855394 systemd[1]: cri-containerd-a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277.scope: Deactivated successfully. Jul 2 08:59:05.901708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277-rootfs.mount: Deactivated successfully. 
Jul 2 08:59:05.911334 containerd[2019]: time="2024-07-02T08:59:05.911242919Z" level=info msg="shim disconnected" id=a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277 namespace=k8s.io Jul 2 08:59:05.911334 containerd[2019]: time="2024-07-02T08:59:05.911322131Z" level=warning msg="cleaning up after shim disconnected" id=a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277 namespace=k8s.io Jul 2 08:59:05.911832 containerd[2019]: time="2024-07-02T08:59:05.911350571Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:59:06.693888 containerd[2019]: time="2024-07-02T08:59:06.693811498Z" level=info msg="CreateContainer within sandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:59:06.723188 containerd[2019]: time="2024-07-02T08:59:06.722612711Z" level=info msg="CreateContainer within sandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229\"" Jul 2 08:59:06.725669 containerd[2019]: time="2024-07-02T08:59:06.724333907Z" level=info msg="StartContainer for \"7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229\"" Jul 2 08:59:06.786422 systemd[1]: Started cri-containerd-7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229.scope - libcontainer container 7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229. Jul 2 08:59:06.831354 systemd[1]: cri-containerd-7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229.scope: Deactivated successfully. Jul 2 08:59:06.835867 containerd[2019]: time="2024-07-02T08:59:06.835766435Z" level=info msg="StartContainer for \"7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229\" returns successfully" Jul 2 08:59:06.875703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229-rootfs.mount: Deactivated successfully. 
Jul 2 08:59:06.882850 containerd[2019]: time="2024-07-02T08:59:06.882768035Z" level=info msg="shim disconnected" id=7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229 namespace=k8s.io Jul 2 08:59:06.882850 containerd[2019]: time="2024-07-02T08:59:06.882844331Z" level=warning msg="cleaning up after shim disconnected" id=7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229 namespace=k8s.io Jul 2 08:59:06.883350 containerd[2019]: time="2024-07-02T08:59:06.882867023Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:59:06.902728 containerd[2019]: time="2024-07-02T08:59:06.902643696Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:59:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 08:59:07.700402 containerd[2019]: time="2024-07-02T08:59:07.700317263Z" level=info msg="CreateContainer within sandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:59:07.728152 containerd[2019]: time="2024-07-02T08:59:07.727251168Z" level=info msg="CreateContainer within sandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c\"" Jul 2 08:59:07.736340 containerd[2019]: time="2024-07-02T08:59:07.736143384Z" level=info msg="StartContainer for \"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c\"" Jul 2 08:59:07.798374 systemd[1]: Started cri-containerd-e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c.scope - libcontainer container e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c. Jul 2 08:59:07.849567 containerd[2019]: time="2024-07-02T08:59:07.849459420Z" level=info msg="StartContainer for \"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c\" returns successfully" Jul 2 08:59:07.897437 systemd[1]: run-containerd-runc-k8s.io-e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c-runc.ihAFCr.mount: Deactivated successfully. Jul 2 08:59:08.012708 kubelet[3248]: I0702 08:59:08.012555 3248 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 08:59:08.064194 kubelet[3248]: I0702 08:59:08.062845 3248 topology_manager.go:215] "Topology Admit Handler" podUID="d965aea4-42ca-4617-8b82-9b10057305a1" podNamespace="kube-system" podName="coredns-76f75df574-dx9pv" Jul 2 08:59:08.067956 kubelet[3248]: I0702 08:59:08.067903 3248 topology_manager.go:215] "Topology Admit Handler" podUID="e5ae4da3-b878-4153-bac2-6d2c969c5b9a" podNamespace="kube-system" podName="coredns-76f75df574-f2fvq" Jul 2 08:59:08.082789 systemd[1]: Created slice kubepods-burstable-podd965aea4_42ca_4617_8b82_9b10057305a1.slice - libcontainer container kubepods-burstable-podd965aea4_42ca_4617_8b82_9b10057305a1.slice. Jul 2 08:59:08.099502 systemd[1]: Created slice kubepods-burstable-pode5ae4da3_b878_4153_bac2_6d2c969c5b9a.slice - libcontainer container kubepods-burstable-pode5ae4da3_b878_4153_bac2_6d2c969c5b9a.slice. 
Jul 2 08:59:08.130735 kubelet[3248]: I0702 08:59:08.130305 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw6rq\" (UniqueName: \"kubernetes.io/projected/d965aea4-42ca-4617-8b82-9b10057305a1-kube-api-access-tw6rq\") pod \"coredns-76f75df574-dx9pv\" (UID: \"d965aea4-42ca-4617-8b82-9b10057305a1\") " pod="kube-system/coredns-76f75df574-dx9pv" Jul 2 08:59:08.130735 kubelet[3248]: I0702 08:59:08.130419 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5ae4da3-b878-4153-bac2-6d2c969c5b9a-config-volume\") pod \"coredns-76f75df574-f2fvq\" (UID: \"e5ae4da3-b878-4153-bac2-6d2c969c5b9a\") " pod="kube-system/coredns-76f75df574-f2fvq" Jul 2 08:59:08.130735 kubelet[3248]: I0702 08:59:08.130491 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22qcz\" (UniqueName: \"kubernetes.io/projected/e5ae4da3-b878-4153-bac2-6d2c969c5b9a-kube-api-access-22qcz\") pod \"coredns-76f75df574-f2fvq\" (UID: \"e5ae4da3-b878-4153-bac2-6d2c969c5b9a\") " pod="kube-system/coredns-76f75df574-f2fvq" Jul 2 08:59:08.130735 kubelet[3248]: I0702 08:59:08.130543 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d965aea4-42ca-4617-8b82-9b10057305a1-config-volume\") pod \"coredns-76f75df574-dx9pv\" (UID: \"d965aea4-42ca-4617-8b82-9b10057305a1\") " pod="kube-system/coredns-76f75df574-dx9pv" Jul 2 08:59:08.397366 containerd[2019]: time="2024-07-02T08:59:08.396625139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dx9pv,Uid:d965aea4-42ca-4617-8b82-9b10057305a1,Namespace:kube-system,Attempt:0,}" Jul 2 08:59:08.408589 containerd[2019]: time="2024-07-02T08:59:08.408047759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f2fvq,Uid:e5ae4da3-b878-4153-bac2-6d2c969c5b9a,Namespace:kube-system,Attempt:0,}" Jul 2 08:59:10.778842 (udev-worker)[4134]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:59:10.780769 (udev-worker)[4170]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:59:10.789282 systemd-networkd[1935]: cilium_host: Link UP Jul 2 08:59:10.789672 systemd-networkd[1935]: cilium_net: Link UP Jul 2 08:59:10.791355 systemd-networkd[1935]: cilium_net: Gained carrier Jul 2 08:59:10.795292 systemd-networkd[1935]: cilium_host: Gained carrier Jul 2 08:59:10.795630 systemd-networkd[1935]: cilium_net: Gained IPv6LL Jul 2 08:59:10.795942 systemd-networkd[1935]: cilium_host: Gained IPv6LL Jul 2 08:59:11.009909 systemd-networkd[1935]: cilium_vxlan: Link UP Jul 2 08:59:11.009928 systemd-networkd[1935]: cilium_vxlan: Gained carrier Jul 2 08:59:11.503145 kernel: NET: Registered PF_ALG protocol family Jul 2 08:59:12.685700 systemd-networkd[1935]: cilium_vxlan: Gained IPv6LL Jul 2 08:59:12.840982 systemd-networkd[1935]: lxc_health: Link UP Jul 2 08:59:12.847814 (udev-worker)[4176]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 08:59:12.858422 systemd-networkd[1935]: lxc_health: Gained carrier Jul 2 08:59:13.289110 kubelet[3248]: I0702 08:59:13.289035 3248 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bz8wk" podStartSLOduration=11.448634622 podStartE2EDuration="21.288976791s" podCreationTimestamp="2024-07-02 08:58:52 +0000 UTC" firstStartedPulling="2024-07-02 08:58:53.450384345 +0000 UTC m=+13.273087123" lastFinishedPulling="2024-07-02 08:59:03.290726502 +0000 UTC m=+23.113429292" observedRunningTime="2024-07-02 08:59:08.774562273 +0000 UTC m=+28.597265099" watchObservedRunningTime="2024-07-02 08:59:13.288976791 +0000 UTC m=+33.111679605" Jul 2 08:59:13.509218 systemd-networkd[1935]: lxc05613d28c8cd: Link UP Jul 2 08:59:13.519170 kernel: eth0: renamed from tmp9d7e3 Jul 2 08:59:13.524234 systemd-networkd[1935]: lxc05613d28c8cd: Gained carrier Jul 2 08:59:13.545430 systemd-networkd[1935]: lxcb76d8bdcaef4: Link UP Jul 2 08:59:13.562140 kernel: eth0: renamed from tmpbdb7f Jul 2 08:59:13.569712 systemd-networkd[1935]: lxcb76d8bdcaef4: Gained carrier Jul 2 08:59:14.605353 systemd-networkd[1935]: lxc_health: Gained IPv6LL Jul 2 08:59:14.670340 systemd-networkd[1935]: lxc05613d28c8cd: Gained IPv6LL Jul 2 08:59:15.502308 systemd-networkd[1935]: lxcb76d8bdcaef4: Gained IPv6LL Jul 2 08:59:16.917268 systemd[1]: Started sshd@7-172.31.30.172:22-147.75.109.163:39580.service - OpenSSH per-connection server daemon (147.75.109.163:39580). Jul 2 08:59:17.095433 sshd[4540]: Accepted publickey for core from 147.75.109.163 port 39580 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:17.098186 sshd[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:17.111452 systemd-logind[1994]: New session 8 of user core. Jul 2 08:59:17.117698 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 08:59:17.467374 sshd[4540]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:17.475669 systemd-logind[1994]: Session 8 logged out. Waiting for processes to exit. Jul 2 08:59:17.477954 systemd[1]: sshd@7-172.31.30.172:22-147.75.109.163:39580.service: Deactivated successfully. Jul 2 08:59:17.487768 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 08:59:17.495908 systemd-logind[1994]: Removed session 8. 
Jul 2 08:59:17.693432 ntpd[1989]: Listen normally on 8 cilium_host 192.168.0.76:123 Jul 2 08:59:17.693561 ntpd[1989]: Listen normally on 9 cilium_net [fe80::986d:78ff:fe2a:972c%4]:123 Jul 2 08:59:17.693644 ntpd[1989]: Listen normally on 10 cilium_host [fe80::c064:78ff:fe04:43a3%5]:123 Jul 2 08:59:17.693713 ntpd[1989]: Listen normally on 11 cilium_vxlan [fe80::6059:bfff:fe0c:61b0%6]:123 Jul 2 08:59:17.693779 ntpd[1989]: Listen normally on 12 lxc_health [fe80::589c:bcff:fe38:2b06%8]:123 Jul 2 08:59:17.693846 ntpd[1989]: Listen normally on 13 lxc05613d28c8cd [fe80::14c1:88ff:fe78:ba9%10]:123 Jul 2 08:59:17.693913 ntpd[1989]: Listen normally on 14 lxcb76d8bdcaef4 [fe80::b89f:4cff:fecf:d24b%12]:123 Jul 2 08:59:22.127527 containerd[2019]: time="2024-07-02T08:59:22.126823019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:22.127527 containerd[2019]: time="2024-07-02T08:59:22.126932843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:22.127527 containerd[2019]: time="2024-07-02T08:59:22.126978551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:22.127527 containerd[2019]: time="2024-07-02T08:59:22.127013135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:22.189522 systemd[1]: Started cri-containerd-9d7e3d3cee130f7f2dc9394a0880150f9e88cfe5be8a90518b2bac06e8ede2bb.scope - libcontainer container 9d7e3d3cee130f7f2dc9394a0880150f9e88cfe5be8a90518b2bac06e8ede2bb. Jul 2 08:59:22.207873 containerd[2019]: time="2024-07-02T08:59:22.207614268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:22.207873 containerd[2019]: time="2024-07-02T08:59:22.207715572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:22.207873 containerd[2019]: time="2024-07-02T08:59:22.207777096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:22.207873 containerd[2019]: time="2024-07-02T08:59:22.207823824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:22.281404 systemd[1]: Started cri-containerd-bdb7f1a60db1256915bde461dbafb5ca4d171bc21e9901ca51198132e6502281.scope - libcontainer container bdb7f1a60db1256915bde461dbafb5ca4d171bc21e9901ca51198132e6502281. Jul 2 08:59:22.344726 containerd[2019]: time="2024-07-02T08:59:22.344326704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dx9pv,Uid:d965aea4-42ca-4617-8b82-9b10057305a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d7e3d3cee130f7f2dc9394a0880150f9e88cfe5be8a90518b2bac06e8ede2bb\"" Jul 2 08:59:22.355418 containerd[2019]: time="2024-07-02T08:59:22.354886020Z" level=info msg="CreateContainer within sandbox \"9d7e3d3cee130f7f2dc9394a0880150f9e88cfe5be8a90518b2bac06e8ede2bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:59:22.395917 containerd[2019]: time="2024-07-02T08:59:22.395613000Z" level=info msg="CreateContainer within sandbox \"9d7e3d3cee130f7f2dc9394a0880150f9e88cfe5be8a90518b2bac06e8ede2bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e341e647e0e58264693866d0f76abff8d0f86d08c95a256dc0220dc763c3749d\"" Jul 2 08:59:22.399413 containerd[2019]: time="2024-07-02T08:59:22.397760616Z" level=info msg="StartContainer for \"e341e647e0e58264693866d0f76abff8d0f86d08c95a256dc0220dc763c3749d\"" Jul 2 08:59:22.432829 containerd[2019]: time="2024-07-02T08:59:22.432725269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f2fvq,Uid:e5ae4da3-b878-4153-bac2-6d2c969c5b9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdb7f1a60db1256915bde461dbafb5ca4d171bc21e9901ca51198132e6502281\"" Jul 2 08:59:22.454522 containerd[2019]: time="2024-07-02T08:59:22.454456249Z" level=info msg="CreateContainer within sandbox \"bdb7f1a60db1256915bde461dbafb5ca4d171bc21e9901ca51198132e6502281\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:59:22.491755 systemd[1]: Started cri-containerd-e341e647e0e58264693866d0f76abff8d0f86d08c95a256dc0220dc763c3749d.scope - libcontainer container e341e647e0e58264693866d0f76abff8d0f86d08c95a256dc0220dc763c3749d. Jul 2 08:59:22.518355 containerd[2019]: time="2024-07-02T08:59:22.516166777Z" level=info msg="CreateContainer within sandbox \"bdb7f1a60db1256915bde461dbafb5ca4d171bc21e9901ca51198132e6502281\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5bd3a236cad1e1dd190c66d6d38041d34d078e3efdf922fa02b134f091af425b\"" Jul 2 08:59:22.519256 containerd[2019]: time="2024-07-02T08:59:22.518764525Z" level=info msg="StartContainer for \"5bd3a236cad1e1dd190c66d6d38041d34d078e3efdf922fa02b134f091af425b\"" Jul 2 08:59:22.521631 systemd[1]: Started sshd@8-172.31.30.172:22-147.75.109.163:50114.service - OpenSSH per-connection server daemon (147.75.109.163:50114). Jul 2 08:59:22.610584 containerd[2019]: time="2024-07-02T08:59:22.610404458Z" level=info msg="StartContainer for \"e341e647e0e58264693866d0f76abff8d0f86d08c95a256dc0220dc763c3749d\" returns successfully" Jul 2 08:59:22.651394 systemd[1]: Started cri-containerd-5bd3a236cad1e1dd190c66d6d38041d34d078e3efdf922fa02b134f091af425b.scope - libcontainer container 5bd3a236cad1e1dd190c66d6d38041d34d078e3efdf922fa02b134f091af425b. 
Jul 2 08:59:22.757604 containerd[2019]: time="2024-07-02T08:59:22.757255922Z" level=info msg="StartContainer for \"5bd3a236cad1e1dd190c66d6d38041d34d078e3efdf922fa02b134f091af425b\" returns successfully" Jul 2 08:59:22.769866 sshd[4663]: Accepted publickey for core from 147.75.109.163 port 50114 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:22.780633 sshd[4663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:22.797215 systemd-logind[1994]: New session 9 of user core. Jul 2 08:59:22.805851 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 08:59:22.838001 kubelet[3248]: I0702 08:59:22.837905 3248 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-f2fvq" podStartSLOduration=30.837821283 podStartE2EDuration="30.837821283s" podCreationTimestamp="2024-07-02 08:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:59:22.836484507 +0000 UTC m=+42.659187321" watchObservedRunningTime="2024-07-02 08:59:22.837821283 +0000 UTC m=+42.660524085" Jul 2 08:59:22.865702 kubelet[3248]: I0702 08:59:22.865480 3248 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-dx9pv" podStartSLOduration=30.865232067 podStartE2EDuration="30.865232067s" podCreationTimestamp="2024-07-02 08:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:59:22.862530255 +0000 UTC m=+42.685233069" watchObservedRunningTime="2024-07-02 08:59:22.865232067 +0000 UTC m=+42.687934965" Jul 2 08:59:23.078481 sshd[4663]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:23.085022 systemd[1]: sshd@8-172.31.30.172:22-147.75.109.163:50114.service: Deactivated successfully. Jul 2 08:59:23.089975 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 08:59:23.091591 systemd-logind[1994]: Session 9 logged out. Waiting for processes to exit. Jul 2 08:59:23.093724 systemd-logind[1994]: Removed session 9. Jul 2 08:59:28.120648 systemd[1]: Started sshd@9-172.31.30.172:22-147.75.109.163:50124.service - OpenSSH per-connection server daemon (147.75.109.163:50124). Jul 2 08:59:28.297351 sshd[4747]: Accepted publickey for core from 147.75.109.163 port 50124 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:28.299940 sshd[4747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:28.307719 systemd-logind[1994]: New session 10 of user core. Jul 2 08:59:28.319707 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 08:59:28.569570 sshd[4747]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:28.576041 systemd-logind[1994]: Session 10 logged out. Waiting for processes to exit. Jul 2 08:59:28.577620 systemd[1]: sshd@9-172.31.30.172:22-147.75.109.163:50124.service: Deactivated successfully. Jul 2 08:59:28.583532 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 08:59:28.588029 systemd-logind[1994]: Removed session 10. Jul 2 08:59:33.616360 systemd[1]: Started sshd@10-172.31.30.172:22-147.75.109.163:55078.service - OpenSSH per-connection server daemon (147.75.109.163:55078). 
Jul 2 08:59:33.796813 sshd[4763]: Accepted publickey for core from 147.75.109.163 port 55078 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:33.799456 sshd[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:33.808874 systemd-logind[1994]: New session 11 of user core. Jul 2 08:59:33.818347 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 08:59:34.071548 sshd[4763]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:34.078997 systemd[1]: sshd@10-172.31.30.172:22-147.75.109.163:55078.service: Deactivated successfully. Jul 2 08:59:34.084135 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 08:59:34.086030 systemd-logind[1994]: Session 11 logged out. Waiting for processes to exit. Jul 2 08:59:34.087939 systemd-logind[1994]: Removed session 11. Jul 2 08:59:34.108642 systemd[1]: Started sshd@11-172.31.30.172:22-147.75.109.163:55092.service - OpenSSH per-connection server daemon (147.75.109.163:55092). Jul 2 08:59:34.290811 sshd[4777]: Accepted publickey for core from 147.75.109.163 port 55092 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:34.293492 sshd[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:34.302002 systemd-logind[1994]: New session 12 of user core. Jul 2 08:59:34.308359 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 08:59:34.630388 sshd[4777]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:34.639666 systemd[1]: sshd@11-172.31.30.172:22-147.75.109.163:55092.service: Deactivated successfully. Jul 2 08:59:34.645788 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 08:59:34.653056 systemd-logind[1994]: Session 12 logged out. Waiting for processes to exit. Jul 2 08:59:34.684350 systemd[1]: Started sshd@12-172.31.30.172:22-147.75.109.163:55104.service - OpenSSH per-connection server daemon (147.75.109.163:55104). Jul 2 08:59:34.686451 systemd-logind[1994]: Removed session 12. Jul 2 08:59:34.870230 sshd[4788]: Accepted publickey for core from 147.75.109.163 port 55104 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:34.872483 sshd[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:34.881128 systemd-logind[1994]: New session 13 of user core. Jul 2 08:59:34.886407 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 08:59:35.125383 sshd[4788]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:35.131565 systemd[1]: sshd@12-172.31.30.172:22-147.75.109.163:55104.service: Deactivated successfully. Jul 2 08:59:35.135573 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 08:59:35.138669 systemd-logind[1994]: Session 13 logged out. Waiting for processes to exit. Jul 2 08:59:35.140915 systemd-logind[1994]: Removed session 13. Jul 2 08:59:40.163617 systemd[1]: Started sshd@13-172.31.30.172:22-147.75.109.163:55110.service - OpenSSH per-connection server daemon (147.75.109.163:55110). Jul 2 08:59:40.340441 sshd[4801]: Accepted publickey for core from 147.75.109.163 port 55110 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:40.341355 sshd[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:40.350321 systemd-logind[1994]: New session 14 of user core. Jul 2 08:59:40.356374 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 2 08:59:40.607748 sshd[4801]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:40.613765 systemd[1]: sshd@13-172.31.30.172:22-147.75.109.163:55110.service: Deactivated successfully. Jul 2 08:59:40.618327 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 08:59:40.621345 systemd-logind[1994]: Session 14 logged out. Waiting for processes to exit. Jul 2 08:59:40.623523 systemd-logind[1994]: Removed session 14. Jul 2 08:59:45.650621 systemd[1]: Started sshd@14-172.31.30.172:22-147.75.109.163:46466.service - OpenSSH per-connection server daemon (147.75.109.163:46466). Jul 2 08:59:45.826341 sshd[4817]: Accepted publickey for core from 147.75.109.163 port 46466 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:45.828901 sshd[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:45.839064 systemd-logind[1994]: New session 15 of user core. Jul 2 08:59:45.846377 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 08:59:46.100999 sshd[4817]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:46.107198 systemd[1]: sshd@14-172.31.30.172:22-147.75.109.163:46466.service: Deactivated successfully. Jul 2 08:59:46.115347 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 08:59:46.117820 systemd-logind[1994]: Session 15 logged out. Waiting for processes to exit. Jul 2 08:59:46.121028 systemd-logind[1994]: Removed session 15. Jul 2 08:59:51.141703 systemd[1]: Started sshd@15-172.31.30.172:22-147.75.109.163:46482.service - OpenSSH per-connection server daemon (147.75.109.163:46482). Jul 2 08:59:51.318365 sshd[4830]: Accepted publickey for core from 147.75.109.163 port 46482 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:51.320978 sshd[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:51.330045 systemd-logind[1994]: New session 16 of user core. Jul 2 08:59:51.338340 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 08:59:51.587641 sshd[4830]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:51.594145 systemd[1]: sshd@15-172.31.30.172:22-147.75.109.163:46482.service: Deactivated successfully. Jul 2 08:59:51.598642 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 08:59:51.600780 systemd-logind[1994]: Session 16 logged out. Waiting for processes to exit. Jul 2 08:59:51.603648 systemd-logind[1994]: Removed session 16. Jul 2 08:59:56.626650 systemd[1]: Started sshd@16-172.31.30.172:22-147.75.109.163:45100.service - OpenSSH per-connection server daemon (147.75.109.163:45100). Jul 2 08:59:56.812973 sshd[4844]: Accepted publickey for core from 147.75.109.163 port 45100 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:56.815623 sshd[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:56.823880 systemd-logind[1994]: New session 17 of user core. Jul 2 08:59:56.831527 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 08:59:57.074959 sshd[4844]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:57.081160 systemd[1]: sshd@16-172.31.30.172:22-147.75.109.163:45100.service: Deactivated successfully. Jul 2 08:59:57.084830 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 08:59:57.086547 systemd-logind[1994]: Session 17 logged out. Waiting for processes to exit. Jul 2 08:59:57.089457 systemd-logind[1994]: Removed session 17. 
Jul 2 08:59:57.112592 systemd[1]: Started sshd@17-172.31.30.172:22-147.75.109.163:45108.service - OpenSSH per-connection server daemon (147.75.109.163:45108). Jul 2 08:59:57.293900 sshd[4856]: Accepted publickey for core from 147.75.109.163 port 45108 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:57.296561 sshd[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:57.304622 systemd-logind[1994]: New session 18 of user core. Jul 2 08:59:57.311329 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 08:59:57.618414 sshd[4856]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:57.624563 systemd[1]: sshd@17-172.31.30.172:22-147.75.109.163:45108.service: Deactivated successfully. Jul 2 08:59:57.629227 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 08:59:57.630930 systemd-logind[1994]: Session 18 logged out. Waiting for processes to exit. Jul 2 08:59:57.632954 systemd-logind[1994]: Removed session 18. Jul 2 08:59:57.660575 systemd[1]: Started sshd@18-172.31.30.172:22-147.75.109.163:45122.service - OpenSSH per-connection server daemon (147.75.109.163:45122). Jul 2 08:59:57.826250 sshd[4867]: Accepted publickey for core from 147.75.109.163 port 45122 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:57.829358 sshd[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:57.837728 systemd-logind[1994]: New session 19 of user core. Jul 2 08:59:57.840348 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 09:00:00.424908 sshd[4867]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:00.435783 systemd[1]: sshd@18-172.31.30.172:22-147.75.109.163:45122.service: Deactivated successfully. Jul 2 09:00:00.442527 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 09:00:00.445894 systemd-logind[1994]: Session 19 logged out. Waiting for processes to exit. Jul 2 09:00:00.464676 systemd[1]: Started sshd@19-172.31.30.172:22-147.75.109.163:45138.service - OpenSSH per-connection server daemon (147.75.109.163:45138). Jul 2 09:00:00.469964 systemd-logind[1994]: Removed session 19. Jul 2 09:00:00.657581 sshd[4884]: Accepted publickey for core from 147.75.109.163 port 45138 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:00.660240 sshd[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:00.669156 systemd-logind[1994]: New session 20 of user core. Jul 2 09:00:00.675373 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 09:00:01.167513 sshd[4884]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:01.174933 systemd[1]: sshd@19-172.31.30.172:22-147.75.109.163:45138.service: Deactivated successfully. Jul 2 09:00:01.176440 systemd-logind[1994]: Session 20 logged out. Waiting for processes to exit. Jul 2 09:00:01.179864 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 09:00:01.184968 systemd-logind[1994]: Removed session 20. Jul 2 09:00:01.202690 systemd[1]: Started sshd@20-172.31.30.172:22-147.75.109.163:45146.service - OpenSSH per-connection server daemon (147.75.109.163:45146). 
Jul 2 09:00:01.380275 sshd[4896]: Accepted publickey for core from 147.75.109.163 port 45146 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:01.382812 sshd[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:01.390343 systemd-logind[1994]: New session 21 of user core. Jul 2 09:00:01.401350 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 09:00:01.639435 sshd[4896]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:01.646723 systemd[1]: sshd@20-172.31.30.172:22-147.75.109.163:45146.service: Deactivated successfully. Jul 2 09:00:01.651447 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 09:00:01.653263 systemd-logind[1994]: Session 21 logged out. Waiting for processes to exit. Jul 2 09:00:01.655400 systemd-logind[1994]: Removed session 21. Jul 2 09:00:06.680585 systemd[1]: Started sshd@21-172.31.30.172:22-147.75.109.163:41100.service - OpenSSH per-connection server daemon (147.75.109.163:41100). Jul 2 09:00:06.857043 sshd[4910]: Accepted publickey for core from 147.75.109.163 port 41100 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:06.861217 sshd[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:06.870162 systemd-logind[1994]: New session 22 of user core. Jul 2 09:00:06.877371 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 09:00:07.110367 sshd[4910]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:07.117246 systemd[1]: sshd@21-172.31.30.172:22-147.75.109.163:41100.service: Deactivated successfully. Jul 2 09:00:07.120928 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 09:00:07.123304 systemd-logind[1994]: Session 22 logged out. Waiting for processes to exit. Jul 2 09:00:07.125839 systemd-logind[1994]: Removed session 22. Jul 2 09:00:12.149649 systemd[1]: Started sshd@22-172.31.30.172:22-147.75.109.163:41106.service - OpenSSH per-connection server daemon (147.75.109.163:41106). Jul 2 09:00:12.323820 sshd[4926]: Accepted publickey for core from 147.75.109.163 port 41106 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:12.326715 sshd[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:12.341484 systemd-logind[1994]: New session 23 of user core. Jul 2 09:00:12.348425 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 09:00:12.589329 sshd[4926]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:12.596460 systemd[1]: sshd@22-172.31.30.172:22-147.75.109.163:41106.service: Deactivated successfully. Jul 2 09:00:12.602424 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 09:00:12.605399 systemd-logind[1994]: Session 23 logged out. Waiting for processes to exit. Jul 2 09:00:12.607396 systemd-logind[1994]: Removed session 23. Jul 2 09:00:17.635597 systemd[1]: Started sshd@23-172.31.30.172:22-147.75.109.163:37226.service - OpenSSH per-connection server daemon (147.75.109.163:37226). Jul 2 09:00:17.813943 sshd[4938]: Accepted publickey for core from 147.75.109.163 port 37226 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:17.816589 sshd[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:17.825397 systemd-logind[1994]: New session 24 of user core. Jul 2 09:00:17.831349 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 2 09:00:18.070814 sshd[4938]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:18.076876 systemd[1]: sshd@23-172.31.30.172:22-147.75.109.163:37226.service: Deactivated successfully. Jul 2 09:00:18.080951 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 09:00:18.082989 systemd-logind[1994]: Session 24 logged out. Waiting for processes to exit. Jul 2 09:00:18.085547 systemd-logind[1994]: Removed session 24. Jul 2 09:00:23.112638 systemd[1]: Started sshd@24-172.31.30.172:22-147.75.109.163:34648.service - OpenSSH per-connection server daemon (147.75.109.163:34648). Jul 2 09:00:23.293113 sshd[4951]: Accepted publickey for core from 147.75.109.163 port 34648 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:23.295817 sshd[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:23.303890 systemd-logind[1994]: New session 25 of user core. Jul 2 09:00:23.313347 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 09:00:23.551060 sshd[4951]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:23.557782 systemd[1]: sshd@24-172.31.30.172:22-147.75.109.163:34648.service: Deactivated successfully. Jul 2 09:00:23.562873 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 09:00:23.565308 systemd-logind[1994]: Session 25 logged out. Waiting for processes to exit. Jul 2 09:00:23.567425 systemd-logind[1994]: Removed session 25. Jul 2 09:00:23.591782 systemd[1]: Started sshd@25-172.31.30.172:22-147.75.109.163:34654.service - OpenSSH per-connection server daemon (147.75.109.163:34654). Jul 2 09:00:23.773627 sshd[4965]: Accepted publickey for core from 147.75.109.163 port 34654 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:23.776878 sshd[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:23.785389 systemd-logind[1994]: New session 26 of user core. Jul 2 09:00:23.792352 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 09:00:26.172916 containerd[2019]: time="2024-07-02T09:00:26.172644181Z" level=info msg="StopContainer for \"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49\" with timeout 30 (s)" Jul 2 09:00:26.175216 containerd[2019]: time="2024-07-02T09:00:26.174778249Z" level=info msg="Stop container \"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49\" with signal terminated" Jul 2 09:00:26.223744 systemd[1]: cri-containerd-6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49.scope: Deactivated successfully. 
Jul 2 09:00:26.231620 containerd[2019]: time="2024-07-02T09:00:26.230815730Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 09:00:26.260506 containerd[2019]: time="2024-07-02T09:00:26.260430938Z" level=info msg="StopContainer for \"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c\" with timeout 2 (s)" Jul 2 09:00:26.262225 containerd[2019]: time="2024-07-02T09:00:26.262019330Z" level=info msg="Stop container \"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c\" with signal terminated" Jul 2 09:00:26.281062 systemd-networkd[1935]: lxc_health: Link DOWN Jul 2 09:00:26.281832 systemd-networkd[1935]: lxc_health: Lost carrier Jul 2 09:00:26.306697 systemd[1]: cri-containerd-e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c.scope: Deactivated successfully. Jul 2 09:00:26.307227 systemd[1]: cri-containerd-e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c.scope: Consumed 14.386s CPU time. Jul 2 09:00:26.317798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49-rootfs.mount: Deactivated successfully. Jul 2 09:00:26.336793 containerd[2019]: time="2024-07-02T09:00:26.336667850Z" level=info msg="shim disconnected" id=6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49 namespace=k8s.io Jul 2 09:00:26.336793 containerd[2019]: time="2024-07-02T09:00:26.336749486Z" level=warning msg="cleaning up after shim disconnected" id=6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49 namespace=k8s.io Jul 2 09:00:26.336793 containerd[2019]: time="2024-07-02T09:00:26.336770546Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:00:26.364261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c-rootfs.mount: Deactivated successfully. Jul 2 09:00:26.369585 containerd[2019]: time="2024-07-02T09:00:26.369138242Z" level=info msg="shim disconnected" id=e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c namespace=k8s.io Jul 2 09:00:26.369585 containerd[2019]: time="2024-07-02T09:00:26.369214514Z" level=warning msg="cleaning up after shim disconnected" id=e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c namespace=k8s.io Jul 2 09:00:26.369585 containerd[2019]: time="2024-07-02T09:00:26.369234986Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:00:26.380705 containerd[2019]: time="2024-07-02T09:00:26.380643074Z" level=info msg="StopContainer for \"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49\" returns successfully" Jul 2 09:00:26.382127 containerd[2019]: time="2024-07-02T09:00:26.381948434Z" level=info msg="StopPodSandbox for \"e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453\"" Jul 2 09:00:26.382834 containerd[2019]: time="2024-07-02T09:00:26.382034606Z" level=info msg="Container to stop \"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:00:26.388864 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453-shm.mount: Deactivated successfully. 
Jul 2 09:00:26.405299 systemd[1]: cri-containerd-e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453.scope: Deactivated successfully. Jul 2 09:00:26.408336 containerd[2019]: time="2024-07-02T09:00:26.404914058Z" level=info msg="StopContainer for \"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c\" returns successfully" Jul 2 09:00:26.409307 containerd[2019]: time="2024-07-02T09:00:26.409183010Z" level=info msg="StopPodSandbox for \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\"" Jul 2 09:00:26.409804 containerd[2019]: time="2024-07-02T09:00:26.409278398Z" level=info msg="Container to stop \"4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:00:26.409804 containerd[2019]: time="2024-07-02T09:00:26.409475282Z" level=info msg="Container to stop \"0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:00:26.409804 containerd[2019]: time="2024-07-02T09:00:26.409503542Z" level=info msg="Container to stop \"a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:00:26.409804 containerd[2019]: time="2024-07-02T09:00:26.409528958Z" level=info msg="Container to stop \"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:00:26.409804 containerd[2019]: time="2024-07-02T09:00:26.409552430Z" level=info msg="Container to stop \"7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:00:26.424335 systemd[1]: cri-containerd-8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be.scope: Deactivated successfully. 
Jul 2 09:00:26.472025 containerd[2019]: time="2024-07-02T09:00:26.471562635Z" level=info msg="shim disconnected" id=e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453 namespace=k8s.io Jul 2 09:00:26.473712 containerd[2019]: time="2024-07-02T09:00:26.473381703Z" level=warning msg="cleaning up after shim disconnected" id=e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453 namespace=k8s.io Jul 2 09:00:26.473712 containerd[2019]: time="2024-07-02T09:00:26.473474463Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:00:26.483118 containerd[2019]: time="2024-07-02T09:00:26.482851767Z" level=info msg="shim disconnected" id=8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be namespace=k8s.io Jul 2 09:00:26.483118 containerd[2019]: time="2024-07-02T09:00:26.482939643Z" level=warning msg="cleaning up after shim disconnected" id=8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be namespace=k8s.io Jul 2 09:00:26.483118 containerd[2019]: time="2024-07-02T09:00:26.482960319Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:00:26.502148 containerd[2019]: time="2024-07-02T09:00:26.501950415Z" level=info msg="TearDown network for sandbox \"e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453\" successfully" Jul 2 09:00:26.502148 containerd[2019]: time="2024-07-02T09:00:26.502016943Z" level=info msg="StopPodSandbox for \"e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453\" returns successfully" Jul 2 09:00:26.528599 containerd[2019]: time="2024-07-02T09:00:26.528243675Z" level=info msg="TearDown network for sandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" successfully" Jul 2 09:00:26.528599 containerd[2019]: time="2024-07-02T09:00:26.528301755Z" level=info msg="StopPodSandbox for \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" returns successfully" Jul 2 09:00:26.536134 kubelet[3248]: I0702 09:00:26.535648 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjpbm\" (UniqueName: \"kubernetes.io/projected/bee1e216-d403-44de-8b44-339179cf3083-kube-api-access-sjpbm\") pod \"bee1e216-d403-44de-8b44-339179cf3083\" (UID: \"bee1e216-d403-44de-8b44-339179cf3083\") " Jul 2 09:00:26.537667 kubelet[3248]: I0702 09:00:26.536941 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bee1e216-d403-44de-8b44-339179cf3083-cilium-config-path\") pod \"bee1e216-d403-44de-8b44-339179cf3083\" (UID: \"bee1e216-d403-44de-8b44-339179cf3083\") " Jul 2 09:00:26.548649 kubelet[3248]: I0702 09:00:26.548581 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee1e216-d403-44de-8b44-339179cf3083-kube-api-access-sjpbm" (OuterVolumeSpecName: "kube-api-access-sjpbm") pod "bee1e216-d403-44de-8b44-339179cf3083" (UID: "bee1e216-d403-44de-8b44-339179cf3083"). InnerVolumeSpecName "kube-api-access-sjpbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:00:26.553375 kubelet[3248]: I0702 09:00:26.553263 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bee1e216-d403-44de-8b44-339179cf3083-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bee1e216-d403-44de-8b44-339179cf3083" (UID: "bee1e216-d403-44de-8b44-339179cf3083"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 09:00:26.638505 kubelet[3248]: I0702 09:00:26.638455 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7406341c-c44d-4a35-a784-a85760c61b26-cilium-config-path\") pod \"7406341c-c44d-4a35-a784-a85760c61b26\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " Jul 2 09:00:26.638661 kubelet[3248]: I0702 09:00:26.638529 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-host-proc-sys-kernel\") pod \"7406341c-c44d-4a35-a784-a85760c61b26\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " Jul 2 09:00:26.638661 kubelet[3248]: I0702 09:00:26.638579 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmkrq\" (UniqueName: \"kubernetes.io/projected/7406341c-c44d-4a35-a784-a85760c61b26-kube-api-access-pmkrq\") pod \"7406341c-c44d-4a35-a784-a85760c61b26\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " Jul 2 09:00:26.638661 kubelet[3248]: I0702 09:00:26.638621 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-cilium-run\") pod \"7406341c-c44d-4a35-a784-a85760c61b26\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " Jul 2 09:00:26.638661 kubelet[3248]: I0702 09:00:26.638659 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-cni-path\") pod \"7406341c-c44d-4a35-a784-a85760c61b26\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " Jul 2 09:00:26.638891 kubelet[3248]: I0702 09:00:26.638701 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7406341c-c44d-4a35-a784-a85760c61b26-hubble-tls\") pod \"7406341c-c44d-4a35-a784-a85760c61b26\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " Jul 2 09:00:26.638891 kubelet[3248]: I0702 09:00:26.638740 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-lib-modules\") pod \"7406341c-c44d-4a35-a784-a85760c61b26\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " Jul 2 09:00:26.638891 kubelet[3248]: I0702 09:00:26.638782 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-host-proc-sys-net\") pod \"7406341c-c44d-4a35-a784-a85760c61b26\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " Jul 2 09:00:26.638891 kubelet[3248]: I0702 09:00:26.638820 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-cilium-cgroup\") pod \"7406341c-c44d-4a35-a784-a85760c61b26\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " Jul 2 09:00:26.638891 kubelet[3248]: I0702 09:00:26.638857 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-etc-cni-netd\") pod \"7406341c-c44d-4a35-a784-a85760c61b26\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") 
" Jul 2 09:00:26.639186 kubelet[3248]: I0702 09:00:26.638902 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7406341c-c44d-4a35-a784-a85760c61b26-clustermesh-secrets\") pod \"7406341c-c44d-4a35-a784-a85760c61b26\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " Jul 2 09:00:26.639186 kubelet[3248]: I0702 09:00:26.638940 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-hostproc\") pod \"7406341c-c44d-4a35-a784-a85760c61b26\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " Jul 2 09:00:26.639186 kubelet[3248]: I0702 09:00:26.638981 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-xtables-lock\") pod \"7406341c-c44d-4a35-a784-a85760c61b26\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " Jul 2 09:00:26.639186 kubelet[3248]: I0702 09:00:26.639017 3248 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-bpf-maps\") pod \"7406341c-c44d-4a35-a784-a85760c61b26\" (UID: \"7406341c-c44d-4a35-a784-a85760c61b26\") " Jul 2 09:00:26.639186 kubelet[3248]: I0702 09:00:26.639103 3248 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bee1e216-d403-44de-8b44-339179cf3083-cilium-config-path\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.639186 kubelet[3248]: I0702 09:00:26.639136 3248 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sjpbm\" (UniqueName: \"kubernetes.io/projected/bee1e216-d403-44de-8b44-339179cf3083-kube-api-access-sjpbm\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.639513 kubelet[3248]: I0702 09:00:26.639197 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7406341c-c44d-4a35-a784-a85760c61b26" (UID: "7406341c-c44d-4a35-a784-a85760c61b26"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:00:26.640879 kubelet[3248]: I0702 09:00:26.639660 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7406341c-c44d-4a35-a784-a85760c61b26" (UID: "7406341c-c44d-4a35-a784-a85760c61b26"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:00:26.640879 kubelet[3248]: I0702 09:00:26.639737 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7406341c-c44d-4a35-a784-a85760c61b26" (UID: "7406341c-c44d-4a35-a784-a85760c61b26"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:00:26.640879 kubelet[3248]: I0702 09:00:26.640792 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7406341c-c44d-4a35-a784-a85760c61b26" (UID: "7406341c-c44d-4a35-a784-a85760c61b26"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:00:26.641482 kubelet[3248]: I0702 09:00:26.641150 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-cni-path" (OuterVolumeSpecName: "cni-path") pod "7406341c-c44d-4a35-a784-a85760c61b26" (UID: "7406341c-c44d-4a35-a784-a85760c61b26"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:00:26.645385 kubelet[3248]: I0702 09:00:26.645224 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7406341c-c44d-4a35-a784-a85760c61b26" (UID: "7406341c-c44d-4a35-a784-a85760c61b26"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:00:26.645385 kubelet[3248]: I0702 09:00:26.645309 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7406341c-c44d-4a35-a784-a85760c61b26" (UID: "7406341c-c44d-4a35-a784-a85760c61b26"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:00:26.645385 kubelet[3248]: I0702 09:00:26.645377 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7406341c-c44d-4a35-a784-a85760c61b26" (UID: "7406341c-c44d-4a35-a784-a85760c61b26"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:00:26.645649 kubelet[3248]: I0702 09:00:26.645419 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-hostproc" (OuterVolumeSpecName: "hostproc") pod "7406341c-c44d-4a35-a784-a85760c61b26" (UID: "7406341c-c44d-4a35-a784-a85760c61b26"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:00:26.646665 kubelet[3248]: I0702 09:00:26.646414 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7406341c-c44d-4a35-a784-a85760c61b26" (UID: "7406341c-c44d-4a35-a784-a85760c61b26"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:00:26.646800 kubelet[3248]: I0702 09:00:26.646761 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7406341c-c44d-4a35-a784-a85760c61b26-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7406341c-c44d-4a35-a784-a85760c61b26" (UID: "7406341c-c44d-4a35-a784-a85760c61b26"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 09:00:26.648476 kubelet[3248]: I0702 09:00:26.648369 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7406341c-c44d-4a35-a784-a85760c61b26-kube-api-access-pmkrq" (OuterVolumeSpecName: "kube-api-access-pmkrq") pod "7406341c-c44d-4a35-a784-a85760c61b26" (UID: "7406341c-c44d-4a35-a784-a85760c61b26"). InnerVolumeSpecName "kube-api-access-pmkrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:00:26.651860 kubelet[3248]: I0702 09:00:26.651770 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7406341c-c44d-4a35-a784-a85760c61b26-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7406341c-c44d-4a35-a784-a85760c61b26" (UID: "7406341c-c44d-4a35-a784-a85760c61b26"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 09:00:26.652471 kubelet[3248]: I0702 09:00:26.652411 3248 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7406341c-c44d-4a35-a784-a85760c61b26-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7406341c-c44d-4a35-a784-a85760c61b26" (UID: "7406341c-c44d-4a35-a784-a85760c61b26"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:00:26.740526 kubelet[3248]: I0702 09:00:26.739813 3248 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7406341c-c44d-4a35-a784-a85760c61b26-hubble-tls\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.740526 kubelet[3248]: I0702 09:00:26.739871 3248 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-lib-modules\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.740526 kubelet[3248]: I0702 09:00:26.739918 3248 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7406341c-c44d-4a35-a784-a85760c61b26-clustermesh-secrets\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.740526 kubelet[3248]: I0702 09:00:26.739955 3248 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-host-proc-sys-net\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.740526 kubelet[3248]: I0702 09:00:26.739980 3248 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-cilium-cgroup\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.740526 kubelet[3248]: I0702 09:00:26.740004 3248 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-etc-cni-netd\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.740526 kubelet[3248]: I0702 09:00:26.740028 3248 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-xtables-lock\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.740526 kubelet[3248]: I0702 09:00:26.740050 3248 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-hostproc\") on node 
\"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.741010 kubelet[3248]: I0702 09:00:26.740099 3248 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-bpf-maps\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.741010 kubelet[3248]: I0702 09:00:26.740128 3248 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7406341c-c44d-4a35-a784-a85760c61b26-cilium-config-path\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.741010 kubelet[3248]: I0702 09:00:26.740153 3248 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-host-proc-sys-kernel\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.741010 kubelet[3248]: I0702 09:00:26.740177 3248 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pmkrq\" (UniqueName: \"kubernetes.io/projected/7406341c-c44d-4a35-a784-a85760c61b26-kube-api-access-pmkrq\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.741010 kubelet[3248]: I0702 09:00:26.740202 3248 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-cilium-run\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.741010 kubelet[3248]: I0702 09:00:26.740226 3248 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7406341c-c44d-4a35-a784-a85760c61b26-cni-path\") on node \"ip-172-31-30-172\" DevicePath \"\"" Jul 2 09:00:26.944275 kubelet[3248]: I0702 09:00:26.944218 3248 scope.go:117] "RemoveContainer" containerID="e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c" Jul 2 09:00:26.948825 containerd[2019]: time="2024-07-02T09:00:26.948748253Z" level=info msg="RemoveContainer for \"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c\"" Jul 2 09:00:26.963540 systemd[1]: Removed slice kubepods-burstable-pod7406341c_c44d_4a35_a784_a85760c61b26.slice - libcontainer container kubepods-burstable-pod7406341c_c44d_4a35_a784_a85760c61b26.slice. Jul 2 09:00:26.963763 systemd[1]: kubepods-burstable-pod7406341c_c44d_4a35_a784_a85760c61b26.slice: Consumed 14.528s CPU time. Jul 2 09:00:26.973317 containerd[2019]: time="2024-07-02T09:00:26.971797841Z" level=info msg="RemoveContainer for \"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c\" returns successfully" Jul 2 09:00:26.973464 kubelet[3248]: I0702 09:00:26.972352 3248 scope.go:117] "RemoveContainer" containerID="7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229" Jul 2 09:00:26.978382 systemd[1]: Removed slice kubepods-besteffort-podbee1e216_d403_44de_8b44_339179cf3083.slice - libcontainer container kubepods-besteffort-podbee1e216_d403_44de_8b44_339179cf3083.slice. 
Jul 2 09:00:26.984248 containerd[2019]: time="2024-07-02T09:00:26.983313905Z" level=info msg="RemoveContainer for \"7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229\"" Jul 2 09:00:26.989860 containerd[2019]: time="2024-07-02T09:00:26.989467733Z" level=info msg="RemoveContainer for \"7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229\" returns successfully" Jul 2 09:00:26.990033 kubelet[3248]: I0702 09:00:26.989971 3248 scope.go:117] "RemoveContainer" containerID="a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277" Jul 2 09:00:26.992888 containerd[2019]: time="2024-07-02T09:00:26.992578637Z" level=info msg="RemoveContainer for \"a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277\"" Jul 2 09:00:27.000910 containerd[2019]: time="2024-07-02T09:00:27.000732049Z" level=info msg="RemoveContainer for \"a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277\" returns successfully" Jul 2 09:00:27.001520 kubelet[3248]: I0702 09:00:27.001375 3248 scope.go:117] "RemoveContainer" containerID="0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036" Jul 2 09:00:27.007596 containerd[2019]: time="2024-07-02T09:00:27.006956089Z" level=info msg="RemoveContainer for \"0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036\"" Jul 2 09:00:27.015965 containerd[2019]: time="2024-07-02T09:00:27.015882781Z" level=info msg="RemoveContainer for \"0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036\" returns successfully" Jul 2 09:00:27.017826 kubelet[3248]: I0702 09:00:27.016329 3248 scope.go:117] "RemoveContainer" containerID="4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706" Jul 2 09:00:27.022813 containerd[2019]: time="2024-07-02T09:00:27.022753057Z" level=info msg="RemoveContainer for \"4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706\"" Jul 2 09:00:27.027816 containerd[2019]: time="2024-07-02T09:00:27.027755630Z" level=info msg="RemoveContainer for \"4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706\" returns successfully" Jul 2 09:00:27.028182 kubelet[3248]: I0702 09:00:27.028130 3248 scope.go:117] "RemoveContainer" containerID="e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c" Jul 2 09:00:27.028648 containerd[2019]: time="2024-07-02T09:00:27.028475630Z" level=error msg="ContainerStatus for \"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c\": not found" Jul 2 09:00:27.028860 kubelet[3248]: E0702 09:00:27.028824 3248 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c\": not found" containerID="e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c" Jul 2 09:00:27.029004 kubelet[3248]: I0702 09:00:27.028974 3248 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c"} err="failed to get container status \"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1b06c25b54a424b8b3481401403a001b0a1a99918376ba34351527da5be4f6c\": not found" Jul 2 09:00:27.029112 kubelet[3248]: I0702 09:00:27.029013 3248 
scope.go:117] "RemoveContainer" containerID="7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229" Jul 2 09:00:27.029560 containerd[2019]: time="2024-07-02T09:00:27.029509418Z" level=error msg="ContainerStatus for \"7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229\": not found" Jul 2 09:00:27.029942 kubelet[3248]: E0702 09:00:27.029901 3248 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229\": not found" containerID="7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229" Jul 2 09:00:27.030016 kubelet[3248]: I0702 09:00:27.029958 3248 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229"} err="failed to get container status \"7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f18b70b948020a8003cd1d24928200930a5eebb2d55368e5abcdfdf1d941229\": not found" Jul 2 09:00:27.030016 kubelet[3248]: I0702 09:00:27.029983 3248 scope.go:117] "RemoveContainer" containerID="a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277" Jul 2 09:00:27.030482 containerd[2019]: time="2024-07-02T09:00:27.030319406Z" level=error msg="ContainerStatus for \"a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277\": not found" Jul 2 09:00:27.030697 kubelet[3248]: E0702 09:00:27.030654 3248 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277\": not found" containerID="a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277" Jul 2 09:00:27.030791 kubelet[3248]: I0702 09:00:27.030711 3248 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277"} err="failed to get container status \"a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5c0e469096ea74b6ef3f1aaacf62722aa27bc17ab1b24a2f9ddbed449e7e277\": not found" Jul 2 09:00:27.030791 kubelet[3248]: I0702 09:00:27.030734 3248 scope.go:117] "RemoveContainer" containerID="0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036" Jul 2 09:00:27.031310 containerd[2019]: time="2024-07-02T09:00:27.031188494Z" level=error msg="ContainerStatus for \"0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036\": not found" Jul 2 09:00:27.031621 kubelet[3248]: E0702 09:00:27.031578 3248 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036\": not found" containerID="0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036" Jul 2 09:00:27.031727 kubelet[3248]: I0702 09:00:27.031626 3248 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036"} err="failed to get container status \"0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ed0b6ab88797aa64b21dfb7455b3f5949f4be1d924efc3c14a1130ea1b83036\": not found" Jul 2 09:00:27.031727 kubelet[3248]: I0702 09:00:27.031650 3248 scope.go:117] "RemoveContainer" containerID="4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706" Jul 2 09:00:27.032021 containerd[2019]: time="2024-07-02T09:00:27.031968410Z" level=error msg="ContainerStatus for \"4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706\": not found" Jul 2 09:00:27.032378 kubelet[3248]: E0702 09:00:27.032346 3248 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706\": not found" containerID="4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706" Jul 2 09:00:27.032598 kubelet[3248]: I0702 09:00:27.032404 3248 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706"} err="failed to get container status \"4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706\": rpc error: code = NotFound desc = an error occurred when try to find container \"4900e34c95a427da2f1235dddc4d311a4c59acdceaa8de76c69fe75a1825a706\": not found" Jul 2 09:00:27.032598 kubelet[3248]: I0702 09:00:27.032427 3248 scope.go:117] "RemoveContainer" containerID="6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49" Jul 2 09:00:27.034522 containerd[2019]: time="2024-07-02T09:00:27.034465694Z" level=info msg="RemoveContainer for \"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49\"" Jul 2 09:00:27.039098 containerd[2019]: time="2024-07-02T09:00:27.039034562Z" level=info msg="RemoveContainer for \"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49\" returns successfully" Jul 2 09:00:27.039428 kubelet[3248]: I0702 09:00:27.039385 3248 scope.go:117] "RemoveContainer" containerID="6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49" Jul 2 09:00:27.039951 containerd[2019]: time="2024-07-02T09:00:27.039813314Z" level=error msg="ContainerStatus for \"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49\": not found" Jul 2 09:00:27.040352 kubelet[3248]: E0702 09:00:27.040089 3248 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49\": not found" containerID="6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49" Jul 2 
09:00:27.040352 kubelet[3248]: I0702 09:00:27.040149 3248 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49"} err="failed to get container status \"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49\": rpc error: code = NotFound desc = an error occurred when try to find container \"6961901591ef45838afe769bbaeba7cab03b467759431a161e05dd373659cc49\": not found" Jul 2 09:00:27.185448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be-rootfs.mount: Deactivated successfully. Jul 2 09:00:27.185617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be-shm.mount: Deactivated successfully. Jul 2 09:00:27.185758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453-rootfs.mount: Deactivated successfully. Jul 2 09:00:27.185890 systemd[1]: var-lib-kubelet-pods-bee1e216\x2dd403\x2d44de\x2d8b44\x2d339179cf3083-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjpbm.mount: Deactivated successfully. Jul 2 09:00:27.186025 systemd[1]: var-lib-kubelet-pods-7406341c\x2dc44d\x2d4a35\x2da784\x2da85760c61b26-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpmkrq.mount: Deactivated successfully. Jul 2 09:00:27.186197 systemd[1]: var-lib-kubelet-pods-7406341c\x2dc44d\x2d4a35\x2da784\x2da85760c61b26-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 09:00:27.186336 systemd[1]: var-lib-kubelet-pods-7406341c\x2dc44d\x2d4a35\x2da784\x2da85760c61b26-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 09:00:28.098912 sshd[4965]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:28.104605 systemd-logind[1994]: Session 26 logged out. Waiting for processes to exit. Jul 2 09:00:28.105862 systemd[1]: sshd@25-172.31.30.172:22-147.75.109.163:34654.service: Deactivated successfully. Jul 2 09:00:28.110218 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 09:00:28.110525 systemd[1]: session-26.scope: Consumed 1.622s CPU time. Jul 2 09:00:28.115032 systemd-logind[1994]: Removed session 26. Jul 2 09:00:28.135593 systemd[1]: Started sshd@26-172.31.30.172:22-147.75.109.163:34668.service - OpenSSH per-connection server daemon (147.75.109.163:34668). Jul 2 09:00:28.317202 sshd[5128]: Accepted publickey for core from 147.75.109.163 port 34668 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:28.319811 sshd[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:28.328474 systemd-logind[1994]: New session 27 of user core. Jul 2 09:00:28.338373 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 2 09:00:28.519430 kubelet[3248]: I0702 09:00:28.519287 3248 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7406341c-c44d-4a35-a784-a85760c61b26" path="/var/lib/kubelet/pods/7406341c-c44d-4a35-a784-a85760c61b26/volumes" Jul 2 09:00:28.522657 kubelet[3248]: I0702 09:00:28.522608 3248 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bee1e216-d403-44de-8b44-339179cf3083" path="/var/lib/kubelet/pods/bee1e216-d403-44de-8b44-339179cf3083/volumes" Jul 2 09:00:28.693256 ntpd[1989]: Deleting interface #12 lxc_health, fe80::589c:bcff:fe38:2b06%8#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs Jul 2 09:00:28.694186 ntpd[1989]: 2 Jul 09:00:28 ntpd[1989]: Deleting interface #12 lxc_health, fe80::589c:bcff:fe38:2b06%8#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs Jul 2 09:00:29.253413 sshd[5128]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:29.262363 systemd[1]: sshd@26-172.31.30.172:22-147.75.109.163:34668.service: Deactivated successfully. Jul 2 09:00:29.269588 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 09:00:29.276543 systemd-logind[1994]: Session 27 logged out. Waiting for processes to exit. Jul 2 09:00:29.315301 kubelet[3248]: I0702 09:00:29.313369 3248 topology_manager.go:215] "Topology Admit Handler" podUID="05819ed1-1e90-42d6-b449-a32682811dbd" podNamespace="kube-system" podName="cilium-bb4xj" Jul 2 09:00:29.315301 kubelet[3248]: E0702 09:00:29.313818 3248 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bee1e216-d403-44de-8b44-339179cf3083" containerName="cilium-operator" Jul 2 09:00:29.315301 kubelet[3248]: E0702 09:00:29.313873 3248 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7406341c-c44d-4a35-a784-a85760c61b26" containerName="mount-cgroup" Jul 2 09:00:29.315301 kubelet[3248]: E0702 09:00:29.313893 3248 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7406341c-c44d-4a35-a784-a85760c61b26" containerName="mount-bpf-fs" Jul 2 09:00:29.315301 kubelet[3248]: E0702 09:00:29.313911 3248 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7406341c-c44d-4a35-a784-a85760c61b26" containerName="cilium-agent" Jul 2 09:00:29.315301 kubelet[3248]: E0702 09:00:29.313954 3248 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7406341c-c44d-4a35-a784-a85760c61b26" containerName="apply-sysctl-overwrites" Jul 2 09:00:29.315301 kubelet[3248]: E0702 09:00:29.313978 3248 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7406341c-c44d-4a35-a784-a85760c61b26" containerName="clean-cilium-state" Jul 2 09:00:29.315301 kubelet[3248]: I0702 09:00:29.315160 3248 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee1e216-d403-44de-8b44-339179cf3083" containerName="cilium-operator" Jul 2 09:00:29.315301 kubelet[3248]: I0702 09:00:29.315227 3248 memory_manager.go:354] "RemoveStaleState removing state" podUID="7406341c-c44d-4a35-a784-a85760c61b26" containerName="cilium-agent" Jul 2 09:00:29.337442 systemd[1]: Started sshd@27-172.31.30.172:22-147.75.109.163:34682.service - OpenSSH per-connection server daemon (147.75.109.163:34682). Jul 2 09:00:29.339497 systemd-logind[1994]: Removed session 27. 
Jul 2 09:00:29.356684 kubelet[3248]: I0702 09:00:29.356496 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05819ed1-1e90-42d6-b449-a32682811dbd-hostproc\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.357348 kubelet[3248]: I0702 09:00:29.357293 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05819ed1-1e90-42d6-b449-a32682811dbd-cilium-cgroup\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.357476 kubelet[3248]: I0702 09:00:29.357395 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05819ed1-1e90-42d6-b449-a32682811dbd-bpf-maps\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.357476 kubelet[3248]: I0702 09:00:29.357449 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05819ed1-1e90-42d6-b449-a32682811dbd-xtables-lock\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.357581 kubelet[3248]: I0702 09:00:29.357493 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05819ed1-1e90-42d6-b449-a32682811dbd-cni-path\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.357581 kubelet[3248]: I0702 09:00:29.357544 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05819ed1-1e90-42d6-b449-a32682811dbd-etc-cni-netd\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.357674 kubelet[3248]: I0702 09:00:29.357587 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05819ed1-1e90-42d6-b449-a32682811dbd-clustermesh-secrets\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.357674 kubelet[3248]: I0702 09:00:29.357629 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05819ed1-1e90-42d6-b449-a32682811dbd-cilium-config-path\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.357811 kubelet[3248]: I0702 09:00:29.357704 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05819ed1-1e90-42d6-b449-a32682811dbd-host-proc-sys-kernel\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.357811 kubelet[3248]: I0702 09:00:29.357750 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/05819ed1-1e90-42d6-b449-a32682811dbd-hubble-tls\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.357811 kubelet[3248]: I0702 09:00:29.357793 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t69bn\" (UniqueName: \"kubernetes.io/projected/05819ed1-1e90-42d6-b449-a32682811dbd-kube-api-access-t69bn\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.357960 kubelet[3248]: I0702 09:00:29.357840 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05819ed1-1e90-42d6-b449-a32682811dbd-lib-modules\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.357960 kubelet[3248]: I0702 09:00:29.357890 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05819ed1-1e90-42d6-b449-a32682811dbd-cilium-ipsec-secrets\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.357960 kubelet[3248]: I0702 09:00:29.357932 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05819ed1-1e90-42d6-b449-a32682811dbd-host-proc-sys-net\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.358143 kubelet[3248]: I0702 09:00:29.357979 3248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05819ed1-1e90-42d6-b449-a32682811dbd-cilium-run\") pod \"cilium-bb4xj\" (UID: \"05819ed1-1e90-42d6-b449-a32682811dbd\") " pod="kube-system/cilium-bb4xj" Jul 2 09:00:29.361664 systemd[1]: Created slice kubepods-burstable-pod05819ed1_1e90_42d6_b449_a32682811dbd.slice - libcontainer container kubepods-burstable-pod05819ed1_1e90_42d6_b449_a32682811dbd.slice. 
Jul 2 09:00:29.376371 kubelet[3248]: W0702 09:00:29.375725 3248 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-30-172" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-172' and this object Jul 2 09:00:29.376818 kubelet[3248]: E0702 09:00:29.376685 3248 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-30-172" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-172' and this object Jul 2 09:00:29.377238 kubelet[3248]: W0702 09:00:29.376039 3248 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-30-172" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-172' and this object Jul 2 09:00:29.377569 kubelet[3248]: W0702 09:00:29.376327 3248 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-30-172" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-172' and this object Jul 2 09:00:29.377836 kubelet[3248]: W0702 09:00:29.376459 3248 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-30-172" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-172' and this object Jul 2 09:00:29.379629 kubelet[3248]: E0702 09:00:29.379583 3248 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-30-172" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-172' and this object Jul 2 09:00:29.379921 kubelet[3248]: E0702 09:00:29.379772 3248 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-30-172" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-172' and this object Jul 2 09:00:29.380064 kubelet[3248]: E0702 09:00:29.378031 3248 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-30-172" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-172' and this object Jul 2 09:00:29.519268 sshd[5140]: Accepted publickey for core from 147.75.109.163 port 34682 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:29.522993 sshd[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:29.536718 systemd-logind[1994]: New session 28 of user core. 
Jul 2 09:00:29.542902 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 2 09:00:29.667659 sshd[5140]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:29.673236 systemd[1]: sshd@27-172.31.30.172:22-147.75.109.163:34682.service: Deactivated successfully. Jul 2 09:00:29.678429 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 09:00:29.682880 systemd-logind[1994]: Session 28 logged out. Waiting for processes to exit. Jul 2 09:00:29.685134 systemd-logind[1994]: Removed session 28. Jul 2 09:00:29.707612 systemd[1]: Started sshd@28-172.31.30.172:22-147.75.109.163:34696.service - OpenSSH per-connection server daemon (147.75.109.163:34696). Jul 2 09:00:29.878488 sshd[5149]: Accepted publickey for core from 147.75.109.163 port 34696 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:29.882921 sshd[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:29.893452 systemd-logind[1994]: New session 29 of user core. Jul 2 09:00:29.902369 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 2 09:00:30.461103 kubelet[3248]: E0702 09:00:30.460452 3248 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 09:00:30.461103 kubelet[3248]: E0702 09:00:30.460492 3248 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-bb4xj: failed to sync secret cache: timed out waiting for the condition Jul 2 09:00:30.461103 kubelet[3248]: E0702 09:00:30.460594 3248 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/05819ed1-1e90-42d6-b449-a32682811dbd-hubble-tls podName:05819ed1-1e90-42d6-b449-a32682811dbd nodeName:}" failed. No retries permitted until 2024-07-02 09:00:30.960563679 +0000 UTC m=+110.783266469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/05819ed1-1e90-42d6-b449-a32682811dbd-hubble-tls") pod "cilium-bb4xj" (UID: "05819ed1-1e90-42d6-b449-a32682811dbd") : failed to sync secret cache: timed out waiting for the condition Jul 2 09:00:30.461756 kubelet[3248]: E0702 09:00:30.461113 3248 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jul 2 09:00:30.461756 kubelet[3248]: E0702 09:00:30.461202 3248 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05819ed1-1e90-42d6-b449-a32682811dbd-cilium-ipsec-secrets podName:05819ed1-1e90-42d6-b449-a32682811dbd nodeName:}" failed. No retries permitted until 2024-07-02 09:00:30.961178811 +0000 UTC m=+110.783881589 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/05819ed1-1e90-42d6-b449-a32682811dbd-cilium-ipsec-secrets") pod "cilium-bb4xj" (UID: "05819ed1-1e90-42d6-b449-a32682811dbd") : failed to sync secret cache: timed out waiting for the condition Jul 2 09:00:30.703650 kubelet[3248]: E0702 09:00:30.703594 3248 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 09:00:31.174377 containerd[2019]: time="2024-07-02T09:00:31.174317550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bb4xj,Uid:05819ed1-1e90-42d6-b449-a32682811dbd,Namespace:kube-system,Attempt:0,}" Jul 2 09:00:31.214379 containerd[2019]: time="2024-07-02T09:00:31.214242378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:00:31.214745 containerd[2019]: time="2024-07-02T09:00:31.214587054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:00:31.214745 containerd[2019]: time="2024-07-02T09:00:31.214661886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:00:31.215065 containerd[2019]: time="2024-07-02T09:00:31.214722294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:00:31.251532 systemd[1]: run-containerd-runc-k8s.io-4abcd6b4cfe7c1f16c65d8b35e3ebca1f245fe6e0054165dd6811bcc1aed79f8-runc.MQ9WcO.mount: Deactivated successfully. Jul 2 09:00:31.264370 systemd[1]: Started cri-containerd-4abcd6b4cfe7c1f16c65d8b35e3ebca1f245fe6e0054165dd6811bcc1aed79f8.scope - libcontainer container 4abcd6b4cfe7c1f16c65d8b35e3ebca1f245fe6e0054165dd6811bcc1aed79f8. Jul 2 09:00:31.303922 containerd[2019]: time="2024-07-02T09:00:31.303842935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bb4xj,Uid:05819ed1-1e90-42d6-b449-a32682811dbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4abcd6b4cfe7c1f16c65d8b35e3ebca1f245fe6e0054165dd6811bcc1aed79f8\"" Jul 2 09:00:31.311514 containerd[2019]: time="2024-07-02T09:00:31.311410807Z" level=info msg="CreateContainer within sandbox \"4abcd6b4cfe7c1f16c65d8b35e3ebca1f245fe6e0054165dd6811bcc1aed79f8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 09:00:31.333166 containerd[2019]: time="2024-07-02T09:00:31.332994343Z" level=info msg="CreateContainer within sandbox \"4abcd6b4cfe7c1f16c65d8b35e3ebca1f245fe6e0054165dd6811bcc1aed79f8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9154a47257946261d65f9ec5525eecdd1f53e216862b02b4c398643647bb056b\"" Jul 2 09:00:31.333901 containerd[2019]: time="2024-07-02T09:00:31.333859807Z" level=info msg="StartContainer for \"9154a47257946261d65f9ec5525eecdd1f53e216862b02b4c398643647bb056b\"" Jul 2 09:00:31.376387 systemd[1]: Started cri-containerd-9154a47257946261d65f9ec5525eecdd1f53e216862b02b4c398643647bb056b.scope - libcontainer container 9154a47257946261d65f9ec5525eecdd1f53e216862b02b4c398643647bb056b. 
Jul 2 09:00:31.420619 containerd[2019]: time="2024-07-02T09:00:31.420550135Z" level=info msg="StartContainer for \"9154a47257946261d65f9ec5525eecdd1f53e216862b02b4c398643647bb056b\" returns successfully"
Jul 2 09:00:31.436998 systemd[1]: cri-containerd-9154a47257946261d65f9ec5525eecdd1f53e216862b02b4c398643647bb056b.scope: Deactivated successfully.
Jul 2 09:00:31.493135 containerd[2019]: time="2024-07-02T09:00:31.493021760Z" level=info msg="shim disconnected" id=9154a47257946261d65f9ec5525eecdd1f53e216862b02b4c398643647bb056b namespace=k8s.io
Jul 2 09:00:31.493135 containerd[2019]: time="2024-07-02T09:00:31.493131284Z" level=warning msg="cleaning up after shim disconnected" id=9154a47257946261d65f9ec5525eecdd1f53e216862b02b4c398643647bb056b namespace=k8s.io
Jul 2 09:00:31.493536 containerd[2019]: time="2024-07-02T09:00:31.493155212Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:00:31.971862 containerd[2019]: time="2024-07-02T09:00:31.971694790Z" level=info msg="CreateContainer within sandbox \"4abcd6b4cfe7c1f16c65d8b35e3ebca1f245fe6e0054165dd6811bcc1aed79f8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 09:00:31.998712 containerd[2019]: time="2024-07-02T09:00:31.998226490Z" level=info msg="CreateContainer within sandbox \"4abcd6b4cfe7c1f16c65d8b35e3ebca1f245fe6e0054165dd6811bcc1aed79f8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e27b1a8fe8f374372ed96e969a953a039ea1e49dde81bb71a9684b15f9d751e9\""
Jul 2 09:00:32.003325 containerd[2019]: time="2024-07-02T09:00:32.003212046Z" level=info msg="StartContainer for \"e27b1a8fe8f374372ed96e969a953a039ea1e49dde81bb71a9684b15f9d751e9\""
Jul 2 09:00:32.078393 systemd[1]: Started cri-containerd-e27b1a8fe8f374372ed96e969a953a039ea1e49dde81bb71a9684b15f9d751e9.scope - libcontainer container e27b1a8fe8f374372ed96e969a953a039ea1e49dde81bb71a9684b15f9d751e9.
Jul 2 09:00:32.129595 containerd[2019]: time="2024-07-02T09:00:32.129485203Z" level=info msg="StartContainer for \"e27b1a8fe8f374372ed96e969a953a039ea1e49dde81bb71a9684b15f9d751e9\" returns successfully"
Jul 2 09:00:32.142436 systemd[1]: cri-containerd-e27b1a8fe8f374372ed96e969a953a039ea1e49dde81bb71a9684b15f9d751e9.scope: Deactivated successfully.
Jul 2 09:00:32.205457 containerd[2019]: time="2024-07-02T09:00:32.205337635Z" level=info msg="shim disconnected" id=e27b1a8fe8f374372ed96e969a953a039ea1e49dde81bb71a9684b15f9d751e9 namespace=k8s.io
Jul 2 09:00:32.205457 containerd[2019]: time="2024-07-02T09:00:32.205415011Z" level=warning msg="cleaning up after shim disconnected" id=e27b1a8fe8f374372ed96e969a953a039ea1e49dde81bb71a9684b15f9d751e9 namespace=k8s.io
Jul 2 09:00:32.205457 containerd[2019]: time="2024-07-02T09:00:32.205435543Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:00:32.281942 kubelet[3248]: I0702 09:00:32.279291 3248 setters.go:568] "Node became not ready" node="ip-172-31-30-172" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T09:00:32Z","lastTransitionTime":"2024-07-02T09:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 09:00:32.982103 containerd[2019]: time="2024-07-02T09:00:32.977688623Z" level=info msg="CreateContainer within sandbox \"4abcd6b4cfe7c1f16c65d8b35e3ebca1f245fe6e0054165dd6811bcc1aed79f8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 09:00:32.986719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e27b1a8fe8f374372ed96e969a953a039ea1e49dde81bb71a9684b15f9d751e9-rootfs.mount: Deactivated successfully.
Jul 2 09:00:33.015922 containerd[2019]: time="2024-07-02T09:00:33.015729811Z" level=info msg="CreateContainer within sandbox \"4abcd6b4cfe7c1f16c65d8b35e3ebca1f245fe6e0054165dd6811bcc1aed79f8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5d89ea20aedb7bc156e5b953428138cf0049a2f70280713366ec8c17626fceb7\""
Jul 2 09:00:33.018368 containerd[2019]: time="2024-07-02T09:00:33.018298231Z" level=info msg="StartContainer for \"5d89ea20aedb7bc156e5b953428138cf0049a2f70280713366ec8c17626fceb7\""
Jul 2 09:00:33.087786 systemd[1]: Started cri-containerd-5d89ea20aedb7bc156e5b953428138cf0049a2f70280713366ec8c17626fceb7.scope - libcontainer container 5d89ea20aedb7bc156e5b953428138cf0049a2f70280713366ec8c17626fceb7.
Jul 2 09:00:33.139532 containerd[2019]: time="2024-07-02T09:00:33.139466000Z" level=info msg="StartContainer for \"5d89ea20aedb7bc156e5b953428138cf0049a2f70280713366ec8c17626fceb7\" returns successfully"
Jul 2 09:00:33.144637 systemd[1]: cri-containerd-5d89ea20aedb7bc156e5b953428138cf0049a2f70280713366ec8c17626fceb7.scope: Deactivated successfully.
Jul 2 09:00:33.193687 containerd[2019]: time="2024-07-02T09:00:33.193533044Z" level=info msg="shim disconnected" id=5d89ea20aedb7bc156e5b953428138cf0049a2f70280713366ec8c17626fceb7 namespace=k8s.io
Jul 2 09:00:33.193687 containerd[2019]: time="2024-07-02T09:00:33.193627688Z" level=warning msg="cleaning up after shim disconnected" id=5d89ea20aedb7bc156e5b953428138cf0049a2f70280713366ec8c17626fceb7 namespace=k8s.io
Jul 2 09:00:33.193687 containerd[2019]: time="2024-07-02T09:00:33.193648664Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:00:33.989188 systemd[1]: run-containerd-runc-k8s.io-5d89ea20aedb7bc156e5b953428138cf0049a2f70280713366ec8c17626fceb7-runc.fVKtPs.mount: Deactivated successfully.
Jul 2 09:00:33.991154 containerd[2019]: time="2024-07-02T09:00:33.989398548Z" level=info msg="CreateContainer within sandbox \"4abcd6b4cfe7c1f16c65d8b35e3ebca1f245fe6e0054165dd6811bcc1aed79f8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 09:00:33.989405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d89ea20aedb7bc156e5b953428138cf0049a2f70280713366ec8c17626fceb7-rootfs.mount: Deactivated successfully.
Jul 2 09:00:34.019748 containerd[2019]: time="2024-07-02T09:00:34.019404824Z" level=info msg="CreateContainer within sandbox \"4abcd6b4cfe7c1f16c65d8b35e3ebca1f245fe6e0054165dd6811bcc1aed79f8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7d1e2b3d998ffcf16395caf1ef5f2ddda89ae12e09fc2119ca4549fd28cfdf66\""
Jul 2 09:00:34.021181 containerd[2019]: time="2024-07-02T09:00:34.020227892Z" level=info msg="StartContainer for \"7d1e2b3d998ffcf16395caf1ef5f2ddda89ae12e09fc2119ca4549fd28cfdf66\""
Jul 2 09:00:34.072547 systemd[1]: run-containerd-runc-k8s.io-7d1e2b3d998ffcf16395caf1ef5f2ddda89ae12e09fc2119ca4549fd28cfdf66-runc.1839aB.mount: Deactivated successfully.
Jul 2 09:00:34.082398 systemd[1]: Started cri-containerd-7d1e2b3d998ffcf16395caf1ef5f2ddda89ae12e09fc2119ca4549fd28cfdf66.scope - libcontainer container 7d1e2b3d998ffcf16395caf1ef5f2ddda89ae12e09fc2119ca4549fd28cfdf66.
Jul 2 09:00:34.126897 systemd[1]: cri-containerd-7d1e2b3d998ffcf16395caf1ef5f2ddda89ae12e09fc2119ca4549fd28cfdf66.scope: Deactivated successfully.
Jul 2 09:00:34.132851 containerd[2019]: time="2024-07-02T09:00:34.132540129Z" level=info msg="StartContainer for \"7d1e2b3d998ffcf16395caf1ef5f2ddda89ae12e09fc2119ca4549fd28cfdf66\" returns successfully"
Jul 2 09:00:34.174922 containerd[2019]: time="2024-07-02T09:00:34.174811929Z" level=info msg="shim disconnected" id=7d1e2b3d998ffcf16395caf1ef5f2ddda89ae12e09fc2119ca4549fd28cfdf66 namespace=k8s.io
Jul 2 09:00:34.174922 containerd[2019]: time="2024-07-02T09:00:34.174903789Z" level=warning msg="cleaning up after shim disconnected" id=7d1e2b3d998ffcf16395caf1ef5f2ddda89ae12e09fc2119ca4549fd28cfdf66 namespace=k8s.io
Jul 2 09:00:34.174922 containerd[2019]: time="2024-07-02T09:00:34.174926349Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:00:34.984381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d1e2b3d998ffcf16395caf1ef5f2ddda89ae12e09fc2119ca4549fd28cfdf66-rootfs.mount: Deactivated successfully.
Jul 2 09:00:35.002805 containerd[2019]: time="2024-07-02T09:00:35.002546181Z" level=info msg="CreateContainer within sandbox \"4abcd6b4cfe7c1f16c65d8b35e3ebca1f245fe6e0054165dd6811bcc1aed79f8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 09:00:35.037806 containerd[2019]: time="2024-07-02T09:00:35.037725261Z" level=info msg="CreateContainer within sandbox \"4abcd6b4cfe7c1f16c65d8b35e3ebca1f245fe6e0054165dd6811bcc1aed79f8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"df375983d0098a051e7625ebcc93323fe8820c254d932b8cc27c4e5c9568c4bd\""
Jul 2 09:00:35.038934 containerd[2019]: time="2024-07-02T09:00:35.038769693Z" level=info msg="StartContainer for \"df375983d0098a051e7625ebcc93323fe8820c254d932b8cc27c4e5c9568c4bd\""
Jul 2 09:00:35.093394 systemd[1]: Started cri-containerd-df375983d0098a051e7625ebcc93323fe8820c254d932b8cc27c4e5c9568c4bd.scope - libcontainer container df375983d0098a051e7625ebcc93323fe8820c254d932b8cc27c4e5c9568c4bd.
Jul 2 09:00:35.147913 containerd[2019]: time="2024-07-02T09:00:35.147832054Z" level=info msg="StartContainer for \"df375983d0098a051e7625ebcc93323fe8820c254d932b8cc27c4e5c9568c4bd\" returns successfully"
Jul 2 09:00:35.943113 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 2 09:00:36.038117 kubelet[3248]: I0702 09:00:36.037531 3248 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bb4xj" podStartSLOduration=7.037469194 podStartE2EDuration="7.037469194s" podCreationTimestamp="2024-07-02 09:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:00:36.036104914 +0000 UTC m=+115.858807728" watchObservedRunningTime="2024-07-02 09:00:36.037469194 +0000 UTC m=+115.860171984"
Jul 2 09:00:38.661706 systemd[1]: run-containerd-runc-k8s.io-df375983d0098a051e7625ebcc93323fe8820c254d932b8cc27c4e5c9568c4bd-runc.53jvdA.mount: Deactivated successfully.
Jul 2 09:00:40.001305 systemd-networkd[1935]: lxc_health: Link UP
Jul 2 09:00:40.013897 (udev-worker)[5984]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 09:00:40.034188 systemd-networkd[1935]: lxc_health: Gained carrier
Jul 2 09:00:40.525548 containerd[2019]: time="2024-07-02T09:00:40.523718681Z" level=info msg="StopPodSandbox for \"e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453\""
Jul 2 09:00:40.525548 containerd[2019]: time="2024-07-02T09:00:40.525337661Z" level=info msg="TearDown network for sandbox \"e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453\" successfully"
Jul 2 09:00:40.525548 containerd[2019]: time="2024-07-02T09:00:40.525418169Z" level=info msg="StopPodSandbox for \"e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453\" returns successfully"
Jul 2 09:00:40.528137 containerd[2019]: time="2024-07-02T09:00:40.526757489Z" level=info msg="RemovePodSandbox for \"e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453\""
Jul 2 09:00:40.528137 containerd[2019]: time="2024-07-02T09:00:40.526817957Z" level=info msg="Forcibly stopping sandbox \"e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453\""
Jul 2 09:00:40.528137 containerd[2019]: time="2024-07-02T09:00:40.526959293Z" level=info msg="TearDown network for sandbox \"e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453\" successfully"
Jul 2 09:00:40.533352 containerd[2019]: time="2024-07-02T09:00:40.533269853Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 2 09:00:40.533507 containerd[2019]: time="2024-07-02T09:00:40.533384657Z" level=info msg="RemovePodSandbox \"e0dca9128eed0eea519dc68b2f6361ef71e8ed981fdd8836cbf4bc72deb5a453\" returns successfully"
Jul 2 09:00:40.535429 containerd[2019]: time="2024-07-02T09:00:40.535364501Z" level=info msg="StopPodSandbox for \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\""
Jul 2 09:00:40.535634 containerd[2019]: time="2024-07-02T09:00:40.535524689Z" level=info msg="TearDown network for sandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" successfully"
Jul 2 09:00:40.535634 containerd[2019]: time="2024-07-02T09:00:40.535593605Z" level=info msg="StopPodSandbox for \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" returns successfully"
Jul 2 09:00:40.539821 containerd[2019]: time="2024-07-02T09:00:40.536273753Z" level=info msg="RemovePodSandbox for \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\""
Jul 2 09:00:40.539821 containerd[2019]: time="2024-07-02T09:00:40.538210169Z" level=info msg="Forcibly stopping sandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\""
Jul 2 09:00:40.539821 containerd[2019]: time="2024-07-02T09:00:40.538502957Z" level=info msg="TearDown network for sandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" successfully"
Jul 2 09:00:40.546751 containerd[2019]: time="2024-07-02T09:00:40.546495665Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 2 09:00:40.546751 containerd[2019]: time="2024-07-02T09:00:40.546599369Z" level=info msg="RemovePodSandbox \"8a186c4c92c7ca686c6f7c36200238a7c078bd423087b58e19d28ac76b1ad3be\" returns successfully"
Jul 2 09:00:41.133455 systemd-networkd[1935]: lxc_health: Gained IPv6LL
Jul 2 09:00:43.693392 ntpd[1989]: Listen normally on 15 lxc_health [fe80::702f:ccff:fe31:906b%14]:123
Jul 2 09:00:43.695609 ntpd[1989]: 2 Jul 09:00:43 ntpd[1989]: Listen normally on 15 lxc_health [fe80::702f:ccff:fe31:906b%14]:123
Jul 2 09:00:45.568108 systemd[1]: run-containerd-runc-k8s.io-df375983d0098a051e7625ebcc93323fe8820c254d932b8cc27c4e5c9568c4bd-runc.z8IH0n.mount: Deactivated successfully.
Jul 2 09:00:45.712432 sshd[5149]: pam_unix(sshd:session): session closed for user core
Jul 2 09:00:45.722847 systemd[1]: sshd@28-172.31.30.172:22-147.75.109.163:34696.service: Deactivated successfully.
Jul 2 09:00:45.728718 systemd[1]: session-29.scope: Deactivated successfully.
Jul 2 09:00:45.731337 systemd-logind[1994]: Session 29 logged out. Waiting for processes to exit.
Jul 2 09:00:45.734885 systemd-logind[1994]: Removed session 29.
Jul 2 09:01:00.375201 systemd[1]: cri-containerd-03c6a2dc34988def880118e07123bce8c044074755b16fe01bd5ef140ec1bfa3.scope: Deactivated successfully.
Jul 2 09:01:00.376523 systemd[1]: cri-containerd-03c6a2dc34988def880118e07123bce8c044074755b16fe01bd5ef140ec1bfa3.scope: Consumed 5.515s CPU time, 22.5M memory peak, 0B memory swap peak.
Jul 2 09:01:00.415986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03c6a2dc34988def880118e07123bce8c044074755b16fe01bd5ef140ec1bfa3-rootfs.mount: Deactivated successfully.
Jul 2 09:01:00.419934 containerd[2019]: time="2024-07-02T09:01:00.419840963Z" level=info msg="shim disconnected" id=03c6a2dc34988def880118e07123bce8c044074755b16fe01bd5ef140ec1bfa3 namespace=k8s.io
Jul 2 09:01:00.419934 containerd[2019]: time="2024-07-02T09:01:00.419920031Z" level=warning msg="cleaning up after shim disconnected" id=03c6a2dc34988def880118e07123bce8c044074755b16fe01bd5ef140ec1bfa3 namespace=k8s.io
Jul 2 09:01:00.420902 containerd[2019]: time="2024-07-02T09:01:00.419942471Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:01:01.081481 kubelet[3248]: I0702 09:01:01.080570 3248 scope.go:117] "RemoveContainer" containerID="03c6a2dc34988def880118e07123bce8c044074755b16fe01bd5ef140ec1bfa3"
Jul 2 09:01:01.085167 containerd[2019]: time="2024-07-02T09:01:01.085109051Z" level=info msg="CreateContainer within sandbox \"09fb10830ec8b873a4f5c06581f1bd7bd55bfdb0a1e1b09c751a36c7bfe9f0fe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 2 09:01:01.105247 containerd[2019]: time="2024-07-02T09:01:01.105166955Z" level=info msg="CreateContainer within sandbox \"09fb10830ec8b873a4f5c06581f1bd7bd55bfdb0a1e1b09c751a36c7bfe9f0fe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3e2f99e276035bd1319aeb2490c1b8882c93f20fbc6788d95339e861a0324840\""
Jul 2 09:01:01.105887 containerd[2019]: time="2024-07-02T09:01:01.105830567Z" level=info msg="StartContainer for \"3e2f99e276035bd1319aeb2490c1b8882c93f20fbc6788d95339e861a0324840\""
Jul 2 09:01:01.165426 systemd[1]: Started cri-containerd-3e2f99e276035bd1319aeb2490c1b8882c93f20fbc6788d95339e861a0324840.scope - libcontainer container 3e2f99e276035bd1319aeb2490c1b8882c93f20fbc6788d95339e861a0324840.
Jul 2 09:01:01.242468 containerd[2019]: time="2024-07-02T09:01:01.242393363Z" level=info msg="StartContainer for \"3e2f99e276035bd1319aeb2490c1b8882c93f20fbc6788d95339e861a0324840\" returns successfully"
Jul 2 09:01:01.417652 systemd[1]: run-containerd-runc-k8s.io-3e2f99e276035bd1319aeb2490c1b8882c93f20fbc6788d95339e861a0324840-runc.mCY841.mount: Deactivated successfully.
Jul 2 09:01:03.491095 kubelet[3248]: E0702 09:01:03.489950 3248 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-172?timeout=10s\": context deadline exceeded"
Jul 2 09:01:05.365885 systemd[1]: cri-containerd-7ac10a88f624d10af43254b5ed4f0687e3bd0ab5d2a93351614f3c7123594802.scope: Deactivated successfully.
Jul 2 09:01:05.367028 systemd[1]: cri-containerd-7ac10a88f624d10af43254b5ed4f0687e3bd0ab5d2a93351614f3c7123594802.scope: Consumed 2.504s CPU time, 17.0M memory peak, 0B memory swap peak.
Jul 2 09:01:05.408907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ac10a88f624d10af43254b5ed4f0687e3bd0ab5d2a93351614f3c7123594802-rootfs.mount: Deactivated successfully.
Jul 2 09:01:05.424614 containerd[2019]: time="2024-07-02T09:01:05.423196168Z" level=info msg="shim disconnected" id=7ac10a88f624d10af43254b5ed4f0687e3bd0ab5d2a93351614f3c7123594802 namespace=k8s.io
Jul 2 09:01:05.425246 containerd[2019]: time="2024-07-02T09:01:05.424631320Z" level=warning msg="cleaning up after shim disconnected" id=7ac10a88f624d10af43254b5ed4f0687e3bd0ab5d2a93351614f3c7123594802 namespace=k8s.io
Jul 2 09:01:05.425246 containerd[2019]: time="2024-07-02T09:01:05.424665784Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:01:06.100209 kubelet[3248]: I0702 09:01:06.099658 3248 scope.go:117] "RemoveContainer" containerID="7ac10a88f624d10af43254b5ed4f0687e3bd0ab5d2a93351614f3c7123594802"
Jul 2 09:01:06.103663 containerd[2019]: time="2024-07-02T09:01:06.103609672Z" level=info msg="CreateContainer within sandbox \"26e8d0ba3cf9db19aea29d0cfbf5f3114a8cc2017c9ddc6b41cd1f350dee20b8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 2 09:01:06.128977 containerd[2019]: time="2024-07-02T09:01:06.128845444Z" level=info msg="CreateContainer within sandbox \"26e8d0ba3cf9db19aea29d0cfbf5f3114a8cc2017c9ddc6b41cd1f350dee20b8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"77a1ce2d72decda9ecb0d2f09bf406b4edd8311464cae2e8edc5fd472fceaa2e\""
Jul 2 09:01:06.129842 containerd[2019]: time="2024-07-02T09:01:06.129515656Z" level=info msg="StartContainer for \"77a1ce2d72decda9ecb0d2f09bf406b4edd8311464cae2e8edc5fd472fceaa2e\""
Jul 2 09:01:06.181393 systemd[1]: Started cri-containerd-77a1ce2d72decda9ecb0d2f09bf406b4edd8311464cae2e8edc5fd472fceaa2e.scope - libcontainer container 77a1ce2d72decda9ecb0d2f09bf406b4edd8311464cae2e8edc5fd472fceaa2e.
Jul 2 09:01:06.248197 containerd[2019]: time="2024-07-02T09:01:06.248092960Z" level=info msg="StartContainer for \"77a1ce2d72decda9ecb0d2f09bf406b4edd8311464cae2e8edc5fd472fceaa2e\" returns successfully"
Jul 2 09:01:13.490954 kubelet[3248]: E0702 09:01:13.490634 3248 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-172?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"