Jul 1 23:59:06.183231 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 1 23:59:06.183276 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024 Jul 1 23:59:06.183346 kernel: KASLR disabled due to lack of seed Jul 1 23:59:06.183365 kernel: efi: EFI v2.7 by EDK II Jul 1 23:59:06.183381 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7852ee18 Jul 1 23:59:06.183397 kernel: ACPI: Early table checksum verification disabled Jul 1 23:59:06.183415 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 1 23:59:06.183431 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 1 23:59:06.183447 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 1 23:59:06.183463 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 1 23:59:06.183485 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 1 23:59:06.183500 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 1 23:59:06.183516 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 1 23:59:06.183532 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 1 23:59:06.183551 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 1 23:59:06.183573 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 1 23:59:06.183591 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 1 23:59:06.183608 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 1 23:59:06.183625 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 1 23:59:06.183641 kernel: printk: bootconsole [uart0] enabled Jul 1 23:59:06.185377 kernel: NUMA: Failed to initialise from firmware Jul 1 23:59:06.185401 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 1 23:59:06.185419 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jul 1 23:59:06.185436 kernel: Zone ranges: Jul 1 23:59:06.185453 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 1 23:59:06.185470 kernel: DMA32 empty Jul 1 23:59:06.185496 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 1 23:59:06.185514 kernel: Movable zone start for each node Jul 1 23:59:06.185531 kernel: Early memory node ranges Jul 1 23:59:06.185548 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 1 23:59:06.185564 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 1 23:59:06.185580 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 1 23:59:06.185597 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 1 23:59:06.185614 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 1 23:59:06.185630 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 1 23:59:06.185647 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 1 23:59:06.185664 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jul 1 23:59:06.185680 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jul 1 23:59:06.185702 kernel: On node 0, zone Normal: 8192 pages in 
unavailable ranges Jul 1 23:59:06.185719 kernel: psci: probing for conduit method from ACPI. Jul 1 23:59:06.185745 kernel: psci: PSCIv1.0 detected in firmware. Jul 1 23:59:06.185763 kernel: psci: Using standard PSCI v0.2 function IDs Jul 1 23:59:06.185780 kernel: psci: Trusted OS migration not required Jul 1 23:59:06.185806 kernel: psci: SMC Calling Convention v1.1 Jul 1 23:59:06.185824 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jul 1 23:59:06.185841 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jul 1 23:59:06.185859 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 1 23:59:06.185878 kernel: Detected PIPT I-cache on CPU0 Jul 1 23:59:06.185896 kernel: CPU features: detected: GIC system register CPU interface Jul 1 23:59:06.185914 kernel: CPU features: detected: Spectre-v2 Jul 1 23:59:06.185931 kernel: CPU features: detected: Spectre-v3a Jul 1 23:59:06.185948 kernel: CPU features: detected: Spectre-BHB Jul 1 23:59:06.185966 kernel: CPU features: detected: ARM erratum 1742098 Jul 1 23:59:06.185984 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 1 23:59:06.186006 kernel: alternatives: applying boot alternatives Jul 1 23:59:06.186026 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295 Jul 1 23:59:06.186045 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 1 23:59:06.186063 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 1 23:59:06.186082 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 1 23:59:06.186101 kernel: Fallback order for Node 0: 0 Jul 1 23:59:06.186119 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jul 1 23:59:06.186137 kernel: Policy zone: Normal Jul 1 23:59:06.186156 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 1 23:59:06.186173 kernel: software IO TLB: area num 2. Jul 1 23:59:06.186191 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jul 1 23:59:06.186215 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved) Jul 1 23:59:06.186233 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 1 23:59:06.186251 kernel: trace event string verifier disabled Jul 1 23:59:06.186268 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 1 23:59:06.186311 kernel: rcu: RCU event tracing is enabled. Jul 1 23:59:06.186333 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 1 23:59:06.186352 kernel: Trampoline variant of Tasks RCU enabled. Jul 1 23:59:06.186371 kernel: Tracing variant of Tasks RCU enabled. Jul 1 23:59:06.186395 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
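The kernel command line logged above carries Flatcar's usr-verity and root parameters (mount.usr=/dev/mapper/usr, verity.usr=PARTUUID=..., verity.usrhash=..., root=LABEL=ROOT, flatcar.first_boot=detected). As a rough illustration of how such a line splits into bare flags and key/value pairs, here is a minimal Python sketch; the parameter names come from the log, while the helper name and the abbreviated example string are illustrative only.

import shlex

def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {key: value}; bare words map to True."""
    params = {}
    for token in shlex.split(cmdline):
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

# A few of the parameters reported by the kernel above (abbreviated).
example = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "root=LABEL=ROOT flatcar.first_boot=detected earlycon")
for key, value in parse_cmdline(example).items():
    print(key, "=>", value)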
Jul 1 23:59:06.186417 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 1 23:59:06.186436 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 1 23:59:06.186462 kernel: GICv3: 96 SPIs implemented Jul 1 23:59:06.186484 kernel: GICv3: 0 Extended SPIs implemented Jul 1 23:59:06.186502 kernel: Root IRQ handler: gic_handle_irq Jul 1 23:59:06.186522 kernel: GICv3: GICv3 features: 16 PPIs Jul 1 23:59:06.186540 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 1 23:59:06.186558 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 1 23:59:06.186576 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1) Jul 1 23:59:06.186597 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1) Jul 1 23:59:06.186616 kernel: GICv3: using LPI property table @0x00000004000e0000 Jul 1 23:59:06.186637 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 1 23:59:06.186655 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000 Jul 1 23:59:06.186674 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 1 23:59:06.186698 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 1 23:59:06.186717 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 1 23:59:06.186735 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 1 23:59:06.186754 kernel: Console: colour dummy device 80x25 Jul 1 23:59:06.186773 kernel: printk: console [tty1] enabled Jul 1 23:59:06.186792 kernel: ACPI: Core revision 20230628 Jul 1 23:59:06.186810 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jul 1 23:59:06.186829 kernel: pid_max: default: 32768 minimum: 301 Jul 1 23:59:06.186848 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 1 23:59:06.186867 kernel: SELinux: Initializing. Jul 1 23:59:06.186895 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 1 23:59:06.186913 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 1 23:59:06.186933 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 1 23:59:06.186955 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 1 23:59:06.186973 kernel: rcu: Hierarchical SRCU implementation. Jul 1 23:59:06.186993 kernel: rcu: Max phase no-delay instances is 400. Jul 1 23:59:06.187011 kernel: Platform MSI: ITS@0x10080000 domain created Jul 1 23:59:06.187029 kernel: PCI/MSI: ITS@0x10080000 domain created Jul 1 23:59:06.187047 kernel: Remapping and enabling EFI services. Jul 1 23:59:06.187070 kernel: smp: Bringing up secondary CPUs ... Jul 1 23:59:06.187088 kernel: Detected PIPT I-cache on CPU1 Jul 1 23:59:06.187106 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 1 23:59:06.187124 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000 Jul 1 23:59:06.187142 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 1 23:59:06.187160 kernel: smp: Brought up 1 node, 2 CPUs Jul 1 23:59:06.187179 kernel: SMP: Total of 2 processors activated. 
Jul 1 23:59:06.187198 kernel: CPU features: detected: 32-bit EL0 Support Jul 1 23:59:06.187216 kernel: CPU features: detected: 32-bit EL1 Support Jul 1 23:59:06.187240 kernel: CPU features: detected: CRC32 instructions Jul 1 23:59:06.187258 kernel: CPU: All CPU(s) started at EL1 Jul 1 23:59:06.191378 kernel: alternatives: applying system-wide alternatives Jul 1 23:59:06.191418 kernel: devtmpfs: initialized Jul 1 23:59:06.191438 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 1 23:59:06.191459 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 1 23:59:06.191480 kernel: pinctrl core: initialized pinctrl subsystem Jul 1 23:59:06.191498 kernel: SMBIOS 3.0.0 present. Jul 1 23:59:06.191518 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 1 23:59:06.191543 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 1 23:59:06.191562 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 1 23:59:06.191581 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 1 23:59:06.191601 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 1 23:59:06.191620 kernel: audit: initializing netlink subsys (disabled) Jul 1 23:59:06.191639 kernel: audit: type=2000 audit(0.294:1): state=initialized audit_enabled=0 res=1 Jul 1 23:59:06.191660 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 1 23:59:06.191685 kernel: cpuidle: using governor menu Jul 1 23:59:06.191706 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 1 23:59:06.191727 kernel: ASID allocator initialised with 65536 entries Jul 1 23:59:06.191749 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 1 23:59:06.191770 kernel: Serial: AMBA PL011 UART driver Jul 1 23:59:06.191790 kernel: Modules: 17600 pages in range for non-PLT usage Jul 1 23:59:06.191810 kernel: Modules: 509120 pages in range for PLT usage Jul 1 23:59:06.191829 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 1 23:59:06.191847 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 1 23:59:06.191872 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 1 23:59:06.191891 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 1 23:59:06.191910 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 1 23:59:06.191929 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 1 23:59:06.191947 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 1 23:59:06.191966 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 1 23:59:06.191984 kernel: ACPI: Added _OSI(Module Device) Jul 1 23:59:06.192003 kernel: ACPI: Added _OSI(Processor Device) Jul 1 23:59:06.192021 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 1 23:59:06.192045 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 1 23:59:06.192064 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 1 23:59:06.192083 kernel: ACPI: Interpreter enabled Jul 1 23:59:06.192101 kernel: ACPI: Using GIC for interrupt routing Jul 1 23:59:06.192120 kernel: ACPI: MCFG table detected, 1 entries Jul 1 23:59:06.192138 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 1 23:59:06.192524 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 1 23:59:06.192746 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] 
Jul 1 23:59:06.192968 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 1 23:59:06.193184 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 1 23:59:06.195536 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 1 23:59:06.195585 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 1 23:59:06.195605 kernel: acpiphp: Slot [1] registered Jul 1 23:59:06.195624 kernel: acpiphp: Slot [2] registered Jul 1 23:59:06.195642 kernel: acpiphp: Slot [3] registered Jul 1 23:59:06.195661 kernel: acpiphp: Slot [4] registered Jul 1 23:59:06.195679 kernel: acpiphp: Slot [5] registered Jul 1 23:59:06.195708 kernel: acpiphp: Slot [6] registered Jul 1 23:59:06.195727 kernel: acpiphp: Slot [7] registered Jul 1 23:59:06.195745 kernel: acpiphp: Slot [8] registered Jul 1 23:59:06.195763 kernel: acpiphp: Slot [9] registered Jul 1 23:59:06.195781 kernel: acpiphp: Slot [10] registered Jul 1 23:59:06.195799 kernel: acpiphp: Slot [11] registered Jul 1 23:59:06.195818 kernel: acpiphp: Slot [12] registered Jul 1 23:59:06.195836 kernel: acpiphp: Slot [13] registered Jul 1 23:59:06.195855 kernel: acpiphp: Slot [14] registered Jul 1 23:59:06.195878 kernel: acpiphp: Slot [15] registered Jul 1 23:59:06.195897 kernel: acpiphp: Slot [16] registered Jul 1 23:59:06.195915 kernel: acpiphp: Slot [17] registered Jul 1 23:59:06.195934 kernel: acpiphp: Slot [18] registered Jul 1 23:59:06.195952 kernel: acpiphp: Slot [19] registered Jul 1 23:59:06.195970 kernel: acpiphp: Slot [20] registered Jul 1 23:59:06.195988 kernel: acpiphp: Slot [21] registered Jul 1 23:59:06.196007 kernel: acpiphp: Slot [22] registered Jul 1 23:59:06.196025 kernel: acpiphp: Slot [23] registered Jul 1 23:59:06.196043 kernel: acpiphp: Slot [24] registered Jul 1 23:59:06.196067 kernel: acpiphp: Slot [25] registered Jul 1 23:59:06.196085 kernel: acpiphp: Slot [26] registered Jul 1 23:59:06.196104 kernel: acpiphp: Slot [27] registered Jul 1 23:59:06.196122 kernel: acpiphp: Slot [28] registered Jul 1 23:59:06.196141 kernel: acpiphp: Slot [29] registered Jul 1 23:59:06.196159 kernel: acpiphp: Slot [30] registered Jul 1 23:59:06.196178 kernel: acpiphp: Slot [31] registered Jul 1 23:59:06.196213 kernel: PCI host bridge to bus 0000:00 Jul 1 23:59:06.197584 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 1 23:59:06.200628 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 1 23:59:06.200815 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 1 23:59:06.200998 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 1 23:59:06.201247 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jul 1 23:59:06.201516 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jul 1 23:59:06.201735 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jul 1 23:59:06.201966 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 1 23:59:06.202182 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jul 1 23:59:06.204498 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 1 23:59:06.204751 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 1 23:59:06.204968 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jul 1 23:59:06.205185 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jul 1 23:59:06.205426 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] 
Jul 1 23:59:06.205653 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 1 23:59:06.205861 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jul 1 23:59:06.206069 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jul 1 23:59:06.208386 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jul 1 23:59:06.208654 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jul 1 23:59:06.208870 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jul 1 23:59:06.209062 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 1 23:59:06.209252 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 1 23:59:06.209495 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 1 23:59:06.209522 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 1 23:59:06.209542 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 1 23:59:06.209562 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 1 23:59:06.209580 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 1 23:59:06.209599 kernel: iommu: Default domain type: Translated Jul 1 23:59:06.209618 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 1 23:59:06.209644 kernel: efivars: Registered efivars operations Jul 1 23:59:06.209662 kernel: vgaarb: loaded Jul 1 23:59:06.209681 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 1 23:59:06.209699 kernel: VFS: Disk quotas dquot_6.6.0 Jul 1 23:59:06.209718 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 1 23:59:06.209737 kernel: pnp: PnP ACPI init Jul 1 23:59:06.209951 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 1 23:59:06.209978 kernel: pnp: PnP ACPI: found 1 devices Jul 1 23:59:06.210003 kernel: NET: Registered PF_INET protocol family Jul 1 23:59:06.210022 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 1 23:59:06.210041 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 1 23:59:06.210059 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 1 23:59:06.210078 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 1 23:59:06.210097 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 1 23:59:06.210116 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 1 23:59:06.210134 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 1 23:59:06.210153 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 1 23:59:06.210176 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 1 23:59:06.210196 kernel: PCI: CLS 0 bytes, default 64 Jul 1 23:59:06.210214 kernel: kvm [1]: HYP mode not available Jul 1 23:59:06.210233 kernel: Initialise system trusted keyrings Jul 1 23:59:06.210252 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 1 23:59:06.210270 kernel: Key type asymmetric registered Jul 1 23:59:06.213363 kernel: Asymmetric key parser 'x509' registered Jul 1 23:59:06.213394 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 1 23:59:06.213418 kernel: io scheduler mq-deadline registered Jul 1 23:59:06.213449 kernel: io scheduler kyber registered Jul 1 23:59:06.213469 kernel: io scheduler bfq registered Jul 1 23:59:06.213737 kernel: pl061_gpio 
ARMH0061:00: PL061 GPIO chip registered Jul 1 23:59:06.213766 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 1 23:59:06.213786 kernel: ACPI: button: Power Button [PWRB] Jul 1 23:59:06.213806 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 1 23:59:06.213825 kernel: ACPI: button: Sleep Button [SLPB] Jul 1 23:59:06.213843 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 1 23:59:06.213869 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 1 23:59:06.214082 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 1 23:59:06.214109 kernel: printk: console [ttyS0] disabled Jul 1 23:59:06.214128 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 1 23:59:06.214147 kernel: printk: console [ttyS0] enabled Jul 1 23:59:06.214166 kernel: printk: bootconsole [uart0] disabled Jul 1 23:59:06.214184 kernel: thunder_xcv, ver 1.0 Jul 1 23:59:06.214202 kernel: thunder_bgx, ver 1.0 Jul 1 23:59:06.214221 kernel: nicpf, ver 1.0 Jul 1 23:59:06.214239 kernel: nicvf, ver 1.0 Jul 1 23:59:06.214548 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 1 23:59:06.214748 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-01T23:59:05 UTC (1719878345) Jul 1 23:59:06.214775 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 1 23:59:06.214794 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jul 1 23:59:06.214813 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 1 23:59:06.214832 kernel: watchdog: Hard watchdog permanently disabled Jul 1 23:59:06.214851 kernel: NET: Registered PF_INET6 protocol family Jul 1 23:59:06.214870 kernel: Segment Routing with IPv6 Jul 1 23:59:06.214896 kernel: In-situ OAM (IOAM) with IPv6 Jul 1 23:59:06.214915 kernel: NET: Registered PF_PACKET protocol family Jul 1 23:59:06.214934 kernel: Key type dns_resolver registered Jul 1 23:59:06.214952 kernel: registered taskstats version 1 Jul 1 23:59:06.214970 kernel: Loading compiled-in X.509 certificates Jul 1 23:59:06.214989 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf' Jul 1 23:59:06.215008 kernel: Key type .fscrypt registered Jul 1 23:59:06.215026 kernel: Key type fscrypt-provisioning registered Jul 1 23:59:06.215044 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 1 23:59:06.215068 kernel: ima: Allocated hash algorithm: sha1 Jul 1 23:59:06.215086 kernel: ima: No architecture policies found Jul 1 23:59:06.215105 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 1 23:59:06.215124 kernel: clk: Disabling unused clocks Jul 1 23:59:06.215143 kernel: Freeing unused kernel memory: 39040K Jul 1 23:59:06.215163 kernel: Run /init as init process Jul 1 23:59:06.215181 kernel: with arguments: Jul 1 23:59:06.215200 kernel: /init Jul 1 23:59:06.215218 kernel: with environment: Jul 1 23:59:06.215241 kernel: HOME=/ Jul 1 23:59:06.215261 kernel: TERM=linux Jul 1 23:59:06.215836 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 1 23:59:06.215872 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 1 23:59:06.215896 systemd[1]: Detected virtualization amazon. 
Jul 1 23:59:06.215917 systemd[1]: Detected architecture arm64. Jul 1 23:59:06.215937 systemd[1]: Running in initrd. Jul 1 23:59:06.215956 systemd[1]: No hostname configured, using default hostname. Jul 1 23:59:06.215987 systemd[1]: Hostname set to . Jul 1 23:59:06.216009 systemd[1]: Initializing machine ID from VM UUID. Jul 1 23:59:06.216029 systemd[1]: Queued start job for default target initrd.target. Jul 1 23:59:06.216050 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 1 23:59:06.216071 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 1 23:59:06.216094 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 1 23:59:06.216115 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 1 23:59:06.216144 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 1 23:59:06.216167 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 1 23:59:06.216191 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 1 23:59:06.216237 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 1 23:59:06.216260 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 1 23:59:06.216308 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 1 23:59:06.216334 systemd[1]: Reached target paths.target - Path Units. Jul 1 23:59:06.216361 systemd[1]: Reached target slices.target - Slice Units. Jul 1 23:59:06.216382 systemd[1]: Reached target swap.target - Swaps. Jul 1 23:59:06.216402 systemd[1]: Reached target timers.target - Timer Units. Jul 1 23:59:06.216422 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 1 23:59:06.216442 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 1 23:59:06.216462 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 1 23:59:06.216481 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 1 23:59:06.216502 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 1 23:59:06.216522 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 1 23:59:06.216548 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 1 23:59:06.216568 systemd[1]: Reached target sockets.target - Socket Units. Jul 1 23:59:06.216587 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 1 23:59:06.216608 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 1 23:59:06.216628 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 1 23:59:06.216648 systemd[1]: Starting systemd-fsck-usr.service... Jul 1 23:59:06.216668 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 1 23:59:06.216688 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 1 23:59:06.216713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 23:59:06.216734 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 1 23:59:06.216754 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 1 23:59:06.216774 systemd[1]: Finished systemd-fsck-usr.service. Jul 1 23:59:06.216849 systemd-journald[249]: Collecting audit messages is disabled. Jul 1 23:59:06.216900 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 1 23:59:06.216922 systemd-journald[249]: Journal started Jul 1 23:59:06.216967 systemd-journald[249]: Runtime Journal (/run/log/journal/ec28bed08340daee39c08c464b9bd41f) is 8.0M, max 75.3M, 67.3M free. Jul 1 23:59:06.194673 systemd-modules-load[251]: Inserted module 'overlay' Jul 1 23:59:06.222390 systemd[1]: Started systemd-journald.service - Journal Service. Jul 1 23:59:06.222432 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 1 23:59:06.228135 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 23:59:06.235270 kernel: Bridge firewalling registered Jul 1 23:59:06.231268 systemd-modules-load[251]: Inserted module 'br_netfilter' Jul 1 23:59:06.237651 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 1 23:59:06.241932 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 1 23:59:06.264862 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 1 23:59:06.270316 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 1 23:59:06.274502 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 1 23:59:06.278493 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 1 23:59:06.326762 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 1 23:59:06.339685 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 1 23:59:06.343692 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 1 23:59:06.354393 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 1 23:59:06.360164 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 1 23:59:06.384530 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 1 23:59:06.405723 dracut-cmdline[286]: dracut-dracut-053 Jul 1 23:59:06.414474 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295 Jul 1 23:59:06.461455 systemd-resolved[295]: Positive Trust Anchors: Jul 1 23:59:06.461483 systemd-resolved[295]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 1 23:59:06.461544 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 1 23:59:06.580607 kernel: SCSI subsystem initialized Jul 1 23:59:06.588407 kernel: Loading iSCSI transport class v2.0-870. Jul 1 23:59:06.600400 kernel: iscsi: registered transport (tcp) Jul 1 23:59:06.622407 kernel: iscsi: registered transport (qla4xxx) Jul 1 23:59:06.622480 kernel: QLogic iSCSI HBA Driver Jul 1 23:59:06.675417 kernel: random: crng init done Jul 1 23:59:06.675741 systemd-resolved[295]: Defaulting to hostname 'linux'. Jul 1 23:59:06.679176 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 1 23:59:06.683212 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 1 23:59:06.706131 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 1 23:59:06.724529 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 1 23:59:06.754009 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 1 23:59:06.754087 kernel: device-mapper: uevent: version 1.0.3 Jul 1 23:59:06.754115 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 1 23:59:06.821327 kernel: raid6: neonx8 gen() 6776 MB/s Jul 1 23:59:06.838313 kernel: raid6: neonx4 gen() 6590 MB/s Jul 1 23:59:06.855313 kernel: raid6: neonx2 gen() 5478 MB/s Jul 1 23:59:06.872312 kernel: raid6: neonx1 gen() 3971 MB/s Jul 1 23:59:06.889313 kernel: raid6: int64x8 gen() 3832 MB/s Jul 1 23:59:06.906313 kernel: raid6: int64x4 gen() 3726 MB/s Jul 1 23:59:06.923313 kernel: raid6: int64x2 gen() 3618 MB/s Jul 1 23:59:06.940986 kernel: raid6: int64x1 gen() 2758 MB/s Jul 1 23:59:06.941019 kernel: raid6: using algorithm neonx8 gen() 6776 MB/s Jul 1 23:59:06.958967 kernel: raid6: .... xor() 4783 MB/s, rmw enabled Jul 1 23:59:06.959003 kernel: raid6: using neon recovery algorithm Jul 1 23:59:06.966318 kernel: xor: measuring software checksum speed Jul 1 23:59:06.968316 kernel: 8regs : 11032 MB/sec Jul 1 23:59:06.970312 kernel: 32regs : 11923 MB/sec Jul 1 23:59:06.972316 kernel: arm64_neon : 9581 MB/sec Jul 1 23:59:06.972349 kernel: xor: using function: 32regs (11923 MB/sec) Jul 1 23:59:07.057336 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 1 23:59:07.075824 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 1 23:59:07.085608 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 1 23:59:07.136099 systemd-udevd[473]: Using default interface naming scheme 'v255'. Jul 1 23:59:07.144580 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 1 23:59:07.157692 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 1 23:59:07.192655 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation Jul 1 23:59:07.249044 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 1 23:59:07.259569 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 1 23:59:07.385573 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 1 23:59:07.399713 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 1 23:59:07.439648 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 1 23:59:07.442915 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 1 23:59:07.448052 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 1 23:59:07.451497 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 1 23:59:07.462600 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 1 23:59:07.523535 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 1 23:59:07.574003 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 1 23:59:07.574078 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 1 23:59:07.601517 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 1 23:59:07.601789 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 1 23:59:07.602025 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:f3:a3:f7:97:91 Jul 1 23:59:07.604256 (udev-worker)[541]: Network interface NamePolicy= disabled on kernel command line. Jul 1 23:59:07.613509 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 1 23:59:07.613759 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 1 23:59:07.627229 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 1 23:59:07.631915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 1 23:59:07.633930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 23:59:07.638458 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 23:59:07.657309 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 1 23:59:07.659402 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 1 23:59:07.661876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 23:59:07.671941 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 1 23:59:07.678734 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 1 23:59:07.678811 kernel: GPT:9289727 != 16777215 Jul 1 23:59:07.678837 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 1 23:59:07.678863 kernel: GPT:9289727 != 16777215 Jul 1 23:59:07.678888 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 1 23:59:07.679700 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 1 23:59:07.690395 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 23:59:07.702565 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 1 23:59:07.741209 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 1 23:59:07.790629 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
Jul 1 23:59:07.806355 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (519) Jul 1 23:59:07.843388 kernel: BTRFS: device fsid 2e7aff7f-b51e-4094-8f16-54690a62fb17 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (545) Jul 1 23:59:07.873876 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 1 23:59:07.916986 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 1 23:59:07.932828 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 1 23:59:07.935156 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 1 23:59:07.950597 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 1 23:59:07.965751 disk-uuid[662]: Primary Header is updated. Jul 1 23:59:07.965751 disk-uuid[662]: Secondary Entries is updated. Jul 1 23:59:07.965751 disk-uuid[662]: Secondary Header is updated. Jul 1 23:59:07.973342 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 1 23:59:07.980363 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 1 23:59:07.990336 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 1 23:59:08.987321 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 1 23:59:08.989803 disk-uuid[663]: The operation has completed successfully. Jul 1 23:59:09.174773 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 1 23:59:09.176401 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 1 23:59:09.210583 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 1 23:59:09.229989 sh[1006]: Success Jul 1 23:59:09.249931 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 1 23:59:09.344833 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 1 23:59:09.358488 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 1 23:59:09.367706 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 1 23:59:09.399168 kernel: BTRFS info (device dm-0): first mount of filesystem 2e7aff7f-b51e-4094-8f16-54690a62fb17 Jul 1 23:59:09.399242 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 1 23:59:09.399270 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 1 23:59:09.402125 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 1 23:59:09.402160 kernel: BTRFS info (device dm-0): using free space tree Jul 1 23:59:09.465326 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 1 23:59:09.506444 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 1 23:59:09.509789 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 1 23:59:09.519631 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 1 23:59:09.528644 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 1 23:59:09.558339 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 1 23:59:09.558420 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 1 23:59:09.559938 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 1 23:59:09.564321 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 1 23:59:09.582471 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 1 23:59:09.584364 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 1 23:59:09.606365 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 1 23:59:09.619652 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 1 23:59:09.705348 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 1 23:59:09.729660 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 1 23:59:09.787763 systemd-networkd[1199]: lo: Link UP Jul 1 23:59:09.787785 systemd-networkd[1199]: lo: Gained carrier Jul 1 23:59:09.792495 systemd-networkd[1199]: Enumeration completed Jul 1 23:59:09.792671 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 1 23:59:09.795246 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 23:59:09.795254 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 1 23:59:09.803341 systemd[1]: Reached target network.target - Network. Jul 1 23:59:09.806885 systemd-networkd[1199]: eth0: Link UP Jul 1 23:59:09.806894 systemd-networkd[1199]: eth0: Gained carrier Jul 1 23:59:09.806917 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 23:59:09.821373 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.30.222/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 1 23:59:09.917215 ignition[1125]: Ignition 2.18.0 Jul 1 23:59:09.918847 ignition[1125]: Stage: fetch-offline Jul 1 23:59:09.920706 ignition[1125]: no configs at "/usr/lib/ignition/base.d" Jul 1 23:59:09.922406 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 1 23:59:09.924783 ignition[1125]: Ignition finished successfully Jul 1 23:59:09.926621 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 1 23:59:09.942575 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 1 23:59:09.965195 ignition[1208]: Ignition 2.18.0 Jul 1 23:59:09.965224 ignition[1208]: Stage: fetch Jul 1 23:59:09.966144 ignition[1208]: no configs at "/usr/lib/ignition/base.d" Jul 1 23:59:09.966170 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 1 23:59:09.966792 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 1 23:59:09.975785 ignition[1208]: PUT result: OK Jul 1 23:59:09.978618 ignition[1208]: parsed url from cmdline: "" Jul 1 23:59:09.978741 ignition[1208]: no config URL provided Jul 1 23:59:09.979250 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign" Jul 1 23:59:09.979838 ignition[1208]: no config at "/usr/lib/ignition/user.ign" Jul 1 23:59:09.979993 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 1 23:59:09.982440 ignition[1208]: PUT result: OK Jul 1 23:59:09.984079 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 1 23:59:09.989686 ignition[1208]: GET result: OK Jul 1 23:59:09.989849 ignition[1208]: parsing config with SHA512: b9a74cfaa8f5b7fd6784596e90505dd4f074f015ae61d2606408ca6f340a6dbf2a548ded5fc2711d2d5ba27704848a011495ddbe6aaee64910123b7477f18803 Jul 1 23:59:09.998195 unknown[1208]: fetched base config from "system" Jul 1 23:59:09.998230 unknown[1208]: fetched base config from "system" Jul 1 23:59:09.998245 unknown[1208]: fetched user config from "aws" Jul 1 23:59:10.000879 ignition[1208]: fetch: fetch complete Jul 1 23:59:10.006733 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 1 23:59:10.000891 ignition[1208]: fetch: fetch passed Jul 1 23:59:10.000979 ignition[1208]: Ignition finished successfully Jul 1 23:59:10.018648 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 1 23:59:10.044771 ignition[1215]: Ignition 2.18.0 Jul 1 23:59:10.044800 ignition[1215]: Stage: kargs Jul 1 23:59:10.045700 ignition[1215]: no configs at "/usr/lib/ignition/base.d" Jul 1 23:59:10.045725 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 1 23:59:10.045859 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 1 23:59:10.047989 ignition[1215]: PUT result: OK Jul 1 23:59:10.056507 ignition[1215]: kargs: kargs passed Jul 1 23:59:10.056742 ignition[1215]: Ignition finished successfully Jul 1 23:59:10.062369 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 1 23:59:10.070601 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 1 23:59:10.098785 ignition[1222]: Ignition 2.18.0 Jul 1 23:59:10.099269 ignition[1222]: Stage: disks Jul 1 23:59:10.099897 ignition[1222]: no configs at "/usr/lib/ignition/base.d" Jul 1 23:59:10.099922 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 1 23:59:10.100094 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 1 23:59:10.103885 ignition[1222]: PUT result: OK Jul 1 23:59:10.111432 ignition[1222]: disks: disks passed Jul 1 23:59:10.111536 ignition[1222]: Ignition finished successfully Jul 1 23:59:10.116265 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 1 23:59:10.120588 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 1 23:59:10.123046 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 1 23:59:10.129235 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 1 23:59:10.131041 systemd[1]: Reached target sysinit.target - System Initialization. 
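The fetch stage above shows Ignition retrieving its config through EC2 IMDSv2: a PUT to http://169.254.169.254/latest/api/token for a session token, then a GET of the user-data URL with that token. Below is a minimal sketch of the same request sequence, assuming the standard IMDSv2 headers (X-aws-ec2-metadata-token-ttl-seconds and X-aws-ec2-metadata-token) and execution from inside the instance; it is not Ignition's own code.

import urllib.request

IMDS = "http://169.254.169.254"

def fetch_user_data(ttl_seconds: int = 21600) -> bytes:
    # Step 1: request a session token, as in "PUT http://169.254.169.254/latest/api/token" above.
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    token = urllib.request.urlopen(token_req, timeout=5).read().decode()

    # Step 2: fetch the user data with that token, as in
    # "GET http://169.254.169.254/2019-10-01/user-data" above.
    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(data_req, timeout=5).read()

if __name__ == "__main__":
    print(fetch_user_data()[:200])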
Jul 1 23:59:10.132869 systemd[1]: Reached target basic.target - Basic System. Jul 1 23:59:10.150665 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 1 23:59:10.194724 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 1 23:59:10.203521 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 1 23:59:10.216217 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 1 23:59:10.303552 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 95038baa-e9f1-4207-86a5-38a4ce3cff7d r/w with ordered data mode. Quota mode: none. Jul 1 23:59:10.304541 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 1 23:59:10.308008 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 1 23:59:10.328464 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 1 23:59:10.334535 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 1 23:59:10.337161 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 1 23:59:10.337241 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 1 23:59:10.337396 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 1 23:59:10.352339 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1250) Jul 1 23:59:10.356197 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 1 23:59:10.356267 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 1 23:59:10.356330 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 1 23:59:10.361332 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 1 23:59:10.364467 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 1 23:59:10.371865 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 1 23:59:10.381582 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 1 23:59:10.731631 initrd-setup-root[1274]: cut: /sysroot/etc/passwd: No such file or directory Jul 1 23:59:10.739988 initrd-setup-root[1281]: cut: /sysroot/etc/group: No such file or directory Jul 1 23:59:10.747910 initrd-setup-root[1288]: cut: /sysroot/etc/shadow: No such file or directory Jul 1 23:59:10.755907 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory Jul 1 23:59:11.046064 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 1 23:59:11.053503 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 1 23:59:11.064681 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 1 23:59:11.082100 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 1 23:59:11.086321 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 1 23:59:11.118358 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 1 23:59:11.127869 ignition[1364]: INFO : Ignition 2.18.0 Jul 1 23:59:11.127869 ignition[1364]: INFO : Stage: mount Jul 1 23:59:11.131569 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 1 23:59:11.131569 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 1 23:59:11.131569 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 1 23:59:11.137658 ignition[1364]: INFO : PUT result: OK Jul 1 23:59:11.142731 ignition[1364]: INFO : mount: mount passed Jul 1 23:59:11.144243 ignition[1364]: INFO : Ignition finished successfully Jul 1 23:59:11.148117 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 1 23:59:11.167458 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 1 23:59:11.186627 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 1 23:59:11.221314 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1375) Jul 1 23:59:11.225319 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 1 23:59:11.225363 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 1 23:59:11.225390 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 1 23:59:11.229323 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 1 23:59:11.233640 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 1 23:59:11.271298 ignition[1392]: INFO : Ignition 2.18.0 Jul 1 23:59:11.271298 ignition[1392]: INFO : Stage: files Jul 1 23:59:11.274421 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 1 23:59:11.274421 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 1 23:59:11.274421 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 1 23:59:11.281357 ignition[1392]: INFO : PUT result: OK Jul 1 23:59:11.286036 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping Jul 1 23:59:11.289100 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 1 23:59:11.289100 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 1 23:59:11.321885 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 1 23:59:11.324456 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 1 23:59:11.327262 unknown[1392]: wrote ssh authorized keys file for user: core Jul 1 23:59:11.329521 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 1 23:59:11.341921 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 1 23:59:11.345138 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 1 23:59:11.348235 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 1 23:59:11.352125 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 1 23:59:11.404548 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 1 23:59:11.484777 systemd-networkd[1199]: eth0: Gained IPv6LL Jul 1 23:59:11.500476 ignition[1392]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 1 23:59:11.504337 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 1 23:59:11.504337 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 1 23:59:11.971655 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jul 1 23:59:12.098193 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 1 23:59:12.098193 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jul 1 23:59:12.104503 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jul 1 23:59:12.104503 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 1 23:59:12.104503 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 1 23:59:12.104503 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 1 23:59:12.104503 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 1 23:59:12.104503 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 1 23:59:12.104503 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 1 23:59:12.104503 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 1 23:59:12.104503 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 1 23:59:12.104503 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 1 23:59:12.104503 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 1 23:59:12.104503 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 1 23:59:12.104503 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Jul 1 23:59:12.475432 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jul 1 23:59:12.799421 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 1 23:59:12.799421 ignition[1392]: INFO : files: op(d): [started] processing unit "containerd.service" Jul 1 23:59:12.805889 ignition[1392]: INFO : files: 
op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 1 23:59:12.805889 ignition[1392]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 1 23:59:12.805889 ignition[1392]: INFO : files: op(d): [finished] processing unit "containerd.service" Jul 1 23:59:12.805889 ignition[1392]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jul 1 23:59:12.805889 ignition[1392]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 1 23:59:12.805889 ignition[1392]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 1 23:59:12.805889 ignition[1392]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jul 1 23:59:12.805889 ignition[1392]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 1 23:59:12.805889 ignition[1392]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 1 23:59:12.805889 ignition[1392]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 1 23:59:12.805889 ignition[1392]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 1 23:59:12.805889 ignition[1392]: INFO : files: files passed Jul 1 23:59:12.805889 ignition[1392]: INFO : Ignition finished successfully Jul 1 23:59:12.821550 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 1 23:59:12.851653 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 1 23:59:12.862800 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 1 23:59:12.870083 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 1 23:59:12.870401 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 1 23:59:12.901351 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 1 23:59:12.901351 initrd-setup-root-after-ignition[1421]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 1 23:59:12.907635 initrd-setup-root-after-ignition[1425]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 1 23:59:12.914057 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 1 23:59:12.920720 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 1 23:59:12.930572 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 1 23:59:12.989103 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 1 23:59:12.989338 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 1 23:59:12.991936 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 1 23:59:12.994347 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 1 23:59:13.000036 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 1 23:59:13.013628 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Jul 1 23:59:13.049812 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 1 23:59:13.064584 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 1 23:59:13.087166 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 1 23:59:13.091598 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 1 23:59:13.094125 systemd[1]: Stopped target timers.target - Timer Units. Jul 1 23:59:13.099789 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 1 23:59:13.100027 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 1 23:59:13.102820 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 1 23:59:13.110207 systemd[1]: Stopped target basic.target - Basic System. Jul 1 23:59:13.112039 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 1 23:59:13.114170 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 1 23:59:13.121356 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 1 23:59:13.123978 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 1 23:59:13.129235 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 1 23:59:13.131742 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 1 23:59:13.133749 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 1 23:59:13.135671 systemd[1]: Stopped target swap.target - Swaps. Jul 1 23:59:13.137323 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 1 23:59:13.138094 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 1 23:59:13.141752 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 1 23:59:13.143796 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 1 23:59:13.146060 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 1 23:59:13.146275 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 1 23:59:13.151029 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 1 23:59:13.151249 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 1 23:59:13.152443 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 1 23:59:13.152655 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 1 23:59:13.153184 systemd[1]: ignition-files.service: Deactivated successfully. Jul 1 23:59:13.153810 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 1 23:59:13.190712 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 1 23:59:13.199030 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 1 23:59:13.200722 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 1 23:59:13.201535 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 1 23:59:13.205531 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 1 23:59:13.206722 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 1 23:59:13.223497 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 1 23:59:13.223834 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 1 23:59:13.254506 ignition[1445]: INFO : Ignition 2.18.0 Jul 1 23:59:13.254506 ignition[1445]: INFO : Stage: umount Jul 1 23:59:13.258894 ignition[1445]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 1 23:59:13.258894 ignition[1445]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 1 23:59:13.258894 ignition[1445]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 1 23:59:13.258894 ignition[1445]: INFO : PUT result: OK Jul 1 23:59:13.270184 ignition[1445]: INFO : umount: umount passed Jul 1 23:59:13.270184 ignition[1445]: INFO : Ignition finished successfully Jul 1 23:59:13.275922 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 1 23:59:13.276114 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 1 23:59:13.286753 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 1 23:59:13.288945 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 1 23:59:13.289049 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 1 23:59:13.292641 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 1 23:59:13.292733 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 1 23:59:13.294701 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 1 23:59:13.294800 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 1 23:59:13.298187 systemd[1]: Stopped target network.target - Network. Jul 1 23:59:13.299761 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 1 23:59:13.299868 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 1 23:59:13.302069 systemd[1]: Stopped target paths.target - Path Units. Jul 1 23:59:13.303696 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 1 23:59:13.307577 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 1 23:59:13.311579 systemd[1]: Stopped target slices.target - Slice Units. Jul 1 23:59:13.314610 systemd[1]: Stopped target sockets.target - Socket Units. Jul 1 23:59:13.316426 systemd[1]: iscsid.socket: Deactivated successfully. Jul 1 23:59:13.316504 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 1 23:59:13.318528 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 1 23:59:13.318598 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 1 23:59:13.320479 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 1 23:59:13.320565 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 1 23:59:13.322392 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 1 23:59:13.322467 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 1 23:59:13.324639 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 1 23:59:13.326623 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 1 23:59:13.329102 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 1 23:59:13.329944 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 1 23:59:13.332812 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 1 23:59:13.332982 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 1 23:59:13.339106 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 1 23:59:13.339402 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jul 1 23:59:13.340930 systemd-networkd[1199]: eth0: DHCPv6 lease lost Jul 1 23:59:13.347700 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 1 23:59:13.347826 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 1 23:59:13.352426 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 1 23:59:13.354876 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 1 23:59:13.367457 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 1 23:59:13.367551 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 1 23:59:13.393604 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 1 23:59:13.410530 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 1 23:59:13.410640 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 1 23:59:13.417270 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 1 23:59:13.417379 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 1 23:59:13.419321 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 1 23:59:13.419401 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 1 23:59:13.428045 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 1 23:59:13.458743 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 1 23:59:13.461362 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 1 23:59:13.462771 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 1 23:59:13.462860 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 1 23:59:13.463015 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 1 23:59:13.463077 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 1 23:59:13.463261 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 1 23:59:13.463582 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 1 23:59:13.464274 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 1 23:59:13.464464 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 1 23:59:13.465028 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 1 23:59:13.465101 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 1 23:59:13.480476 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 1 23:59:13.489024 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 1 23:59:13.489155 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 1 23:59:13.492141 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 1 23:59:13.494576 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 1 23:59:13.499376 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 1 23:59:13.499478 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 1 23:59:13.502447 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 1 23:59:13.502539 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 23:59:13.506171 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jul 1 23:59:13.506369 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 1 23:59:13.515702 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 1 23:59:13.515999 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 1 23:59:13.523142 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 1 23:59:13.561817 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 1 23:59:13.598499 systemd[1]: Switching root. Jul 1 23:59:13.634681 systemd-journald[249]: Journal stopped Jul 1 23:59:16.267985 systemd-journald[249]: Received SIGTERM from PID 1 (systemd). Jul 1 23:59:16.268109 kernel: SELinux: policy capability network_peer_controls=1 Jul 1 23:59:16.268174 kernel: SELinux: policy capability open_perms=1 Jul 1 23:59:16.268208 kernel: SELinux: policy capability extended_socket_class=1 Jul 1 23:59:16.268238 kernel: SELinux: policy capability always_check_network=0 Jul 1 23:59:16.268269 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 1 23:59:16.268321 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 1 23:59:16.268355 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 1 23:59:16.268390 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 1 23:59:16.268426 kernel: audit: type=1403 audit(1719878354.759:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 1 23:59:16.268466 systemd[1]: Successfully loaded SELinux policy in 62.410ms. Jul 1 23:59:16.268518 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.850ms. Jul 1 23:59:16.268554 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 1 23:59:16.268584 systemd[1]: Detected virtualization amazon. Jul 1 23:59:16.268614 systemd[1]: Detected architecture arm64. Jul 1 23:59:16.268644 systemd[1]: Detected first boot. Jul 1 23:59:16.268678 systemd[1]: Initializing machine ID from VM UUID. Jul 1 23:59:16.268710 zram_generator::config[1504]: No configuration found. Jul 1 23:59:16.268757 systemd[1]: Populated /etc with preset unit settings. Jul 1 23:59:16.268790 systemd[1]: Queued start job for default target multi-user.target. Jul 1 23:59:16.268823 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 1 23:59:16.268855 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 1 23:59:16.268885 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 1 23:59:16.268918 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 1 23:59:16.268954 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 1 23:59:16.268984 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 1 23:59:16.269020 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 1 23:59:16.269054 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 1 23:59:16.269086 systemd[1]: Created slice user.slice - User and Session Slice. Jul 1 23:59:16.269120 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 1 23:59:16.269149 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 1 23:59:16.269182 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 1 23:59:16.269212 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 1 23:59:16.269245 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 1 23:59:16.269303 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 1 23:59:16.269371 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 1 23:59:16.269402 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 1 23:59:16.269435 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 1 23:59:16.269469 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 1 23:59:16.269500 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 1 23:59:16.269534 systemd[1]: Reached target slices.target - Slice Units. Jul 1 23:59:16.269566 systemd[1]: Reached target swap.target - Swaps. Jul 1 23:59:16.269595 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 1 23:59:16.269639 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 1 23:59:16.269669 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 1 23:59:16.269701 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 1 23:59:16.269731 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 1 23:59:16.269760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 1 23:59:16.269790 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 1 23:59:16.269821 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 1 23:59:16.269856 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 1 23:59:16.269886 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 1 23:59:16.269919 systemd[1]: Mounting media.mount - External Media Directory... Jul 1 23:59:16.269949 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 1 23:59:16.269981 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 1 23:59:16.270013 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 1 23:59:16.270045 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 1 23:59:16.270077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 23:59:16.270107 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 1 23:59:16.270137 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 1 23:59:16.270169 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 1 23:59:16.270204 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 1 23:59:16.270234 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 1 23:59:16.270263 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 1 23:59:16.270345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 1 23:59:16.270384 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 1 23:59:16.270417 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 1 23:59:16.270452 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 1 23:59:16.270481 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 1 23:59:16.270516 kernel: loop: module loaded Jul 1 23:59:16.270550 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 1 23:59:16.270581 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 1 23:59:16.270609 kernel: fuse: init (API version 7.39) Jul 1 23:59:16.270639 kernel: ACPI: bus type drm_connector registered Jul 1 23:59:16.270668 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 1 23:59:16.270699 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 1 23:59:16.270787 systemd-journald[1603]: Collecting audit messages is disabled. Jul 1 23:59:16.270852 systemd-journald[1603]: Journal started Jul 1 23:59:16.270899 systemd-journald[1603]: Runtime Journal (/run/log/journal/ec28bed08340daee39c08c464b9bd41f) is 8.0M, max 75.3M, 67.3M free. Jul 1 23:59:16.282351 systemd[1]: Started systemd-journald.service - Journal Service. Jul 1 23:59:16.287274 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 1 23:59:16.292700 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 1 23:59:16.297520 systemd[1]: Mounted media.mount - External Media Directory. Jul 1 23:59:16.304015 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 1 23:59:16.308929 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 1 23:59:16.313888 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 1 23:59:16.318729 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 1 23:59:16.324741 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 1 23:59:16.325104 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 1 23:59:16.330651 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 1 23:59:16.331023 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 1 23:59:16.338802 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 1 23:59:16.339178 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 1 23:59:16.345334 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 1 23:59:16.345692 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 1 23:59:16.351333 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 1 23:59:16.351687 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 1 23:59:16.357180 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 1 23:59:16.357630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 1 23:59:16.363755 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 1 23:59:16.369202 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jul 1 23:59:16.375182 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 1 23:59:16.381036 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 1 23:59:16.407067 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 1 23:59:16.421469 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 1 23:59:16.434475 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 1 23:59:16.439627 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 1 23:59:16.462704 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 1 23:59:16.477852 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 1 23:59:16.485020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 1 23:59:16.491548 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 1 23:59:16.497550 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 1 23:59:16.509601 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 1 23:59:16.528581 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 1 23:59:16.549691 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 1 23:59:16.555130 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 1 23:59:16.561713 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 1 23:59:16.583785 systemd-journald[1603]: Time spent on flushing to /var/log/journal/ec28bed08340daee39c08c464b9bd41f is 62.461ms for 901 entries. Jul 1 23:59:16.583785 systemd-journald[1603]: System Journal (/var/log/journal/ec28bed08340daee39c08c464b9bd41f) is 8.0M, max 195.6M, 187.6M free. Jul 1 23:59:16.660447 systemd-journald[1603]: Received client request to flush runtime journal. Jul 1 23:59:16.588673 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 1 23:59:16.594213 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 1 23:59:16.603565 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 1 23:59:16.641570 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 1 23:59:16.657028 udevadm[1662]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 1 23:59:16.667606 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 1 23:59:16.679936 systemd-tmpfiles[1655]: ACLs are not supported, ignoring. Jul 1 23:59:16.679968 systemd-tmpfiles[1655]: ACLs are not supported, ignoring. Jul 1 23:59:16.690544 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 1 23:59:16.703552 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 1 23:59:16.756630 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jul 1 23:59:16.770684 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 1 23:59:16.801181 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Jul 1 23:59:16.801800 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Jul 1 23:59:16.812626 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 1 23:59:17.538736 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 1 23:59:17.559712 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 1 23:59:17.607630 systemd-udevd[1683]: Using default interface naming scheme 'v255'. Jul 1 23:59:17.653853 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 1 23:59:17.681523 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 1 23:59:17.711592 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 1 23:59:17.818324 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1688) Jul 1 23:59:17.837121 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 1 23:59:17.884177 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jul 1 23:59:17.895749 (udev-worker)[1704]: Network interface NamePolicy= disabled on kernel command line. Jul 1 23:59:17.999727 systemd-networkd[1691]: lo: Link UP Jul 1 23:59:17.999750 systemd-networkd[1691]: lo: Gained carrier Jul 1 23:59:18.007448 systemd-networkd[1691]: Enumeration completed Jul 1 23:59:18.008447 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 1 23:59:18.015771 systemd-networkd[1691]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 23:59:18.015789 systemd-networkd[1691]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 1 23:59:18.019183 systemd-networkd[1691]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 23:59:18.019252 systemd-networkd[1691]: eth0: Link UP Jul 1 23:59:18.019558 systemd-networkd[1691]: eth0: Gained carrier Jul 1 23:59:18.019581 systemd-networkd[1691]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 23:59:18.034780 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 1 23:59:18.035467 systemd-networkd[1691]: eth0: DHCPv4 address 172.31.30.222/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 1 23:59:18.088344 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1693) Jul 1 23:59:18.195770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 23:59:18.320670 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 1 23:59:18.339521 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 1 23:59:18.354550 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 1 23:59:18.384356 lvm[1808]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 1 23:59:18.423913 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Jul 1 23:59:18.427814 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 1 23:59:18.439786 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 1 23:59:18.450120 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 23:59:18.454551 lvm[1812]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 1 23:59:18.491790 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 1 23:59:18.494837 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 1 23:59:18.497734 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 1 23:59:18.497937 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 1 23:59:18.499958 systemd[1]: Reached target machines.target - Containers. Jul 1 23:59:18.503681 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 1 23:59:18.514629 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 1 23:59:18.524621 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 1 23:59:18.527801 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 1 23:59:18.533797 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 1 23:59:18.558305 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 1 23:59:18.564331 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 1 23:59:18.570121 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 1 23:59:18.616355 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 1 23:59:18.623353 kernel: loop0: detected capacity change from 0 to 113672 Jul 1 23:59:18.623479 kernel: block loop0: the capability attribute has been deprecated. Jul 1 23:59:18.629074 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 1 23:59:18.630623 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 1 23:59:18.707347 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 1 23:59:18.739377 kernel: loop1: detected capacity change from 0 to 59672 Jul 1 23:59:18.799329 kernel: loop2: detected capacity change from 0 to 51896 Jul 1 23:59:18.873353 kernel: loop3: detected capacity change from 0 to 193208 Jul 1 23:59:18.919334 kernel: loop4: detected capacity change from 0 to 113672 Jul 1 23:59:18.931328 kernel: loop5: detected capacity change from 0 to 59672 Jul 1 23:59:18.942338 kernel: loop6: detected capacity change from 0 to 51896 Jul 1 23:59:18.953343 kernel: loop7: detected capacity change from 0 to 193208 Jul 1 23:59:18.966021 (sd-merge)[1836]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 1 23:59:18.968354 (sd-merge)[1836]: Merged extensions into '/usr'. Jul 1 23:59:18.976407 systemd[1]: Reloading requested from client PID 1823 ('systemd-sysext') (unit systemd-sysext.service)... Jul 1 23:59:18.976441 systemd[1]: Reloading... Jul 1 23:59:19.118579 zram_generator::config[1862]: No configuration found. 
Jul 1 23:59:19.227532 systemd-networkd[1691]: eth0: Gained IPv6LL Jul 1 23:59:19.387136 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 23:59:19.478575 ldconfig[1819]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 1 23:59:19.530524 systemd[1]: Reloading finished in 553 ms. Jul 1 23:59:19.563199 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 1 23:59:19.567000 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 1 23:59:19.569898 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 1 23:59:19.586605 systemd[1]: Starting ensure-sysext.service... Jul 1 23:59:19.598642 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 1 23:59:19.614493 systemd[1]: Reloading requested from client PID 1923 ('systemctl') (unit ensure-sysext.service)... Jul 1 23:59:19.614518 systemd[1]: Reloading... Jul 1 23:59:19.655910 systemd-tmpfiles[1924]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 1 23:59:19.656639 systemd-tmpfiles[1924]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 1 23:59:19.658394 systemd-tmpfiles[1924]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 1 23:59:19.658935 systemd-tmpfiles[1924]: ACLs are not supported, ignoring. Jul 1 23:59:19.659089 systemd-tmpfiles[1924]: ACLs are not supported, ignoring. Jul 1 23:59:19.665483 systemd-tmpfiles[1924]: Detected autofs mount point /boot during canonicalization of boot. Jul 1 23:59:19.665510 systemd-tmpfiles[1924]: Skipping /boot Jul 1 23:59:19.687247 systemd-tmpfiles[1924]: Detected autofs mount point /boot during canonicalization of boot. Jul 1 23:59:19.687296 systemd-tmpfiles[1924]: Skipping /boot Jul 1 23:59:19.733333 zram_generator::config[1950]: No configuration found. Jul 1 23:59:20.002872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 23:59:20.143755 systemd[1]: Reloading finished in 528 ms. Jul 1 23:59:20.175495 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 1 23:59:20.191612 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 1 23:59:20.197825 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 1 23:59:20.212661 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 1 23:59:20.228701 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 1 23:59:20.237572 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 1 23:59:20.266252 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 23:59:20.278515 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 1 23:59:20.300740 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 1 23:59:20.313838 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 1 23:59:20.317225 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 1 23:59:20.339322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 23:59:20.339814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 1 23:59:20.356191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 1 23:59:20.356658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 1 23:59:20.370867 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 1 23:59:20.386568 systemd[1]: Finished ensure-sysext.service. Jul 1 23:59:20.388997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 1 23:59:20.396064 augenrules[2034]: No rules Jul 1 23:59:20.389405 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 1 23:59:20.392408 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 1 23:59:20.392822 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 1 23:59:20.397209 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 1 23:59:20.408787 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 1 23:59:20.417088 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 23:59:20.426693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 1 23:59:20.428810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 1 23:59:20.428878 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 1 23:59:20.428979 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 1 23:59:20.429041 systemd[1]: Reached target time-set.target - System Time Set. Jul 1 23:59:20.445186 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 1 23:59:20.460031 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 1 23:59:20.462796 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 1 23:59:20.497662 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 1 23:59:20.503226 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 1 23:59:20.508071 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 1 23:59:20.541622 systemd-resolved[2014]: Positive Trust Anchors: Jul 1 23:59:20.541660 systemd-resolved[2014]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 1 23:59:20.541724 systemd-resolved[2014]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 1 23:59:20.549165 systemd-resolved[2014]: Defaulting to hostname 'linux'. Jul 1 23:59:20.552575 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 1 23:59:20.554821 systemd[1]: Reached target network.target - Network. Jul 1 23:59:20.556471 systemd[1]: Reached target network-online.target - Network is Online. Jul 1 23:59:20.558554 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 1 23:59:20.560776 systemd[1]: Reached target sysinit.target - System Initialization. Jul 1 23:59:20.562964 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 1 23:59:20.565201 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 1 23:59:20.567649 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 1 23:59:20.569789 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 1 23:59:20.571976 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 1 23:59:20.574195 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 1 23:59:20.574252 systemd[1]: Reached target paths.target - Path Units. Jul 1 23:59:20.575845 systemd[1]: Reached target timers.target - Timer Units. Jul 1 23:59:20.579124 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 1 23:59:20.583929 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 1 23:59:20.588455 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 1 23:59:20.593434 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 1 23:59:20.595508 systemd[1]: Reached target sockets.target - Socket Units. Jul 1 23:59:20.598404 systemd[1]: Reached target basic.target - Basic System. Jul 1 23:59:20.600790 systemd[1]: System is tainted: cgroupsv1 Jul 1 23:59:20.600880 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 1 23:59:20.600929 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 1 23:59:20.609530 systemd[1]: Starting containerd.service - containerd container runtime... Jul 1 23:59:20.616922 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 1 23:59:20.628685 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 1 23:59:20.643401 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 1 23:59:20.651115 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jul 1 23:59:20.653839 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 1 23:59:20.671318 jq[2070]: false Jul 1 23:59:20.667512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 23:59:20.677586 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 1 23:59:20.698783 systemd[1]: Started ntpd.service - Network Time Service. Jul 1 23:59:20.715608 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 1 23:59:20.730486 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 1 23:59:20.744623 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 1 23:59:20.754513 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 1 23:59:20.770192 dbus-daemon[2069]: [system] SELinux support is enabled Jul 1 23:59:20.770792 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 1 23:59:20.792555 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 1 23:59:20.798446 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 1 23:59:20.800744 dbus-daemon[2069]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1691 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 1 23:59:20.811332 extend-filesystems[2071]: Found loop4 Jul 1 23:59:20.811332 extend-filesystems[2071]: Found loop5 Jul 1 23:59:20.811332 extend-filesystems[2071]: Found loop6 Jul 1 23:59:20.811332 extend-filesystems[2071]: Found loop7 Jul 1 23:59:20.811332 extend-filesystems[2071]: Found nvme0n1 Jul 1 23:59:20.811332 extend-filesystems[2071]: Found nvme0n1p1 Jul 1 23:59:20.811332 extend-filesystems[2071]: Found nvme0n1p2 Jul 1 23:59:20.811332 extend-filesystems[2071]: Found nvme0n1p3 Jul 1 23:59:20.811332 extend-filesystems[2071]: Found usr Jul 1 23:59:20.811332 extend-filesystems[2071]: Found nvme0n1p4 Jul 1 23:59:20.811332 extend-filesystems[2071]: Found nvme0n1p6 Jul 1 23:59:20.811332 extend-filesystems[2071]: Found nvme0n1p7 Jul 1 23:59:20.811332 extend-filesystems[2071]: Found nvme0n1p9 Jul 1 23:59:20.811332 extend-filesystems[2071]: Checking size of /dev/nvme0n1p9 Jul 1 23:59:20.817847 systemd[1]: Starting update-engine.service - Update Engine... Jul 1 23:59:20.839022 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 1 23:59:20.843993 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 1 23:59:20.859752 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 1 23:59:20.860266 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 1 23:59:20.903884 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 1 23:59:20.904467 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 1 23:59:20.930530 ntpd[2075]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 1 23:59:20.930878 ntpd[2075]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: ---------------------------------------------------- Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: ntp-4 is maintained by Network Time Foundation, Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: corporation. Support and training for ntp-4 are Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: available at https://www.nwtime.org/support Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: ---------------------------------------------------- Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: proto: precision = 0.108 usec (-23) Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: basedate set to 2024-06-19 Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: gps base set to 2024-06-23 (week 2320) Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: Listen and drop on 0 v6wildcard [::]:123 Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: Listen normally on 2 lo 127.0.0.1:123 Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: Listen normally on 3 eth0 172.31.30.222:123 Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: Listen normally on 4 lo [::1]:123 Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: Listen normally on 5 eth0 [fe80::4f3:a3ff:fef7:9791%2]:123 Jul 1 23:59:20.949474 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: Listening on routing socket on fd #22 for interface updates Jul 1 23:59:20.930900 ntpd[2075]: ---------------------------------------------------- Jul 1 23:59:20.976681 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 1 23:59:20.976681 ntpd[2075]: 1 Jul 23:59:20 ntpd[2075]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 1 23:59:20.976797 extend-filesystems[2071]: Resized partition /dev/nvme0n1p9 Jul 1 23:59:20.993635 jq[2094]: true Jul 1 23:59:20.930918 ntpd[2075]: ntp-4 is maintained by Network Time Foundation, Jul 1 23:59:20.994141 extend-filesystems[2120]: resize2fs 1.47.0 (5-Feb-2023) Jul 1 23:59:21.078676 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 1 23:59:20.930937 ntpd[2075]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 1 23:59:21.078971 update_engine[2090]: I0701 23:59:21.042865 2090 main.cc:92] Flatcar Update Engine starting Jul 1 23:59:21.078971 update_engine[2090]: I0701 23:59:21.057550 2090 update_check_scheduler.cc:74] Next update check in 6m34s Jul 1 23:59:20.930956 ntpd[2075]: corporation. Support and training for ntp-4 are Jul 1 23:59:21.082115 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 1 23:59:20.930974 ntpd[2075]: available at https://www.nwtime.org/support Jul 1 23:59:21.085883 (ntainerd)[2116]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 1 23:59:20.930992 ntpd[2075]: ---------------------------------------------------- Jul 1 23:59:21.098680 systemd[1]: motdgen.service: Deactivated successfully. Jul 1 23:59:20.936775 ntpd[2075]: proto: precision = 0.108 usec (-23) Jul 1 23:59:21.126575 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 1 23:59:21.099171 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 1 23:59:21.157240 tar[2099]: linux-arm64/helm Jul 1 23:59:20.937198 ntpd[2075]: basedate set to 2024-06-19 Jul 1 23:59:21.159018 coreos-metadata[2067]: Jul 01 23:59:21.156 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 1 23:59:21.163861 jq[2123]: true Jul 1 23:59:21.120170 systemd[1]: Started update-engine.service - Update Engine. Jul 1 23:59:20.937222 ntpd[2075]: gps base set to 2024-06-23 (week 2320) Jul 1 23:59:21.173023 extend-filesystems[2120]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 1 23:59:21.173023 extend-filesystems[2120]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 1 23:59:21.173023 extend-filesystems[2120]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 1 23:59:21.188085 coreos-metadata[2067]: Jul 01 23:59:21.168 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 1 23:59:21.139474 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 1 23:59:20.941839 ntpd[2075]: Listen and drop on 0 v6wildcard [::]:123 Jul 1 23:59:21.223961 extend-filesystems[2071]: Resized filesystem in /dev/nvme0n1p9 Jul 1 23:59:21.227472 coreos-metadata[2067]: Jul 01 23:59:21.194 INFO Fetch successful Jul 1 23:59:21.227472 coreos-metadata[2067]: Jul 01 23:59:21.214 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 1 23:59:21.227472 coreos-metadata[2067]: Jul 01 23:59:21.226 INFO Fetch successful Jul 1 23:59:21.227472 coreos-metadata[2067]: Jul 01 23:59:21.226 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 1 23:59:21.139548 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 1 23:59:20.941924 ntpd[2075]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 1 23:59:21.242220 coreos-metadata[2067]: Jul 01 23:59:21.227 INFO Fetch successful Jul 1 23:59:21.242220 coreos-metadata[2067]: Jul 01 23:59:21.227 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 1 23:59:21.242220 coreos-metadata[2067]: Jul 01 23:59:21.234 INFO Fetch successful Jul 1 23:59:21.242220 coreos-metadata[2067]: Jul 01 23:59:21.234 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 1 23:59:21.242220 coreos-metadata[2067]: Jul 01 23:59:21.241 INFO Fetch failed with 404: resource not found Jul 1 23:59:21.242220 coreos-metadata[2067]: Jul 01 23:59:21.241 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 1 23:59:21.161682 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jul 1 23:59:20.942209 ntpd[2075]: Listen normally on 2 lo 127.0.0.1:123 Jul 1 23:59:21.164493 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 1 23:59:20.942274 ntpd[2075]: Listen normally on 3 eth0 172.31.30.222:123 Jul 1 23:59:21.164533 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 1 23:59:20.942377 ntpd[2075]: Listen normally on 4 lo [::1]:123 Jul 1 23:59:21.172562 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 1 23:59:20.942448 ntpd[2075]: Listen normally on 5 eth0 [fe80::4f3:a3ff:fef7:9791%2]:123 Jul 1 23:59:21.179610 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 1 23:59:20.942508 ntpd[2075]: Listening on routing socket on fd #22 for interface updates Jul 1 23:59:21.205758 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 1 23:59:20.954904 ntpd[2075]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 1 23:59:21.208671 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 1 23:59:21.246750 coreos-metadata[2067]: Jul 01 23:59:21.246 INFO Fetch successful Jul 1 23:59:21.246750 coreos-metadata[2067]: Jul 01 23:59:21.246 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 1 23:59:20.954952 ntpd[2075]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 1 23:59:21.103006 dbus-daemon[2069]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 1 23:59:21.256520 coreos-metadata[2067]: Jul 01 23:59:21.250 INFO Fetch successful Jul 1 23:59:21.256520 coreos-metadata[2067]: Jul 01 23:59:21.250 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 1 23:59:21.256520 coreos-metadata[2067]: Jul 01 23:59:21.253 INFO Fetch successful Jul 1 23:59:21.256520 coreos-metadata[2067]: Jul 01 23:59:21.253 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 1 23:59:21.258716 coreos-metadata[2067]: Jul 01 23:59:21.256 INFO Fetch successful Jul 1 23:59:21.258716 coreos-metadata[2067]: Jul 01 23:59:21.257 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 1 23:59:21.265327 coreos-metadata[2067]: Jul 01 23:59:21.260 INFO Fetch successful Jul 1 23:59:21.334681 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 1 23:59:21.340867 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 1 23:59:21.545141 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 1 23:59:21.547775 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 1 23:59:21.553001 bash[2186]: Updated "/home/core/.ssh/authorized_keys" Jul 1 23:59:21.554191 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 1 23:59:21.563157 systemd[1]: Starting sshkeys.service... Jul 1 23:59:21.577923 systemd-logind[2087]: Watching system buttons on /dev/input/event0 (Power Button) Jul 1 23:59:21.577974 systemd-logind[2087]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 1 23:59:21.579572 systemd-logind[2087]: New seat seat0. Jul 1 23:59:21.617264 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 1 23:59:21.635901 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2182) Jul 1 23:59:21.701255 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 1 23:59:21.720064 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 1 23:59:21.794569 locksmithd[2141]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 1 23:59:21.856105 amazon-ssm-agent[2155]: Initializing new seelog logger Jul 1 23:59:21.859399 amazon-ssm-agent[2155]: New Seelog Logger Creation Complete Jul 1 23:59:21.859399 amazon-ssm-agent[2155]: 2024/07/01 23:59:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 1 23:59:21.859399 amazon-ssm-agent[2155]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 1 23:59:21.874379 amazon-ssm-agent[2155]: 2024/07/01 23:59:21 processing appconfig overrides Jul 1 23:59:21.874379 amazon-ssm-agent[2155]: 2024-07-01 23:59:21 INFO Proxy environment variables: Jul 1 23:59:21.874379 amazon-ssm-agent[2155]: 2024/07/01 23:59:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 1 23:59:21.874379 amazon-ssm-agent[2155]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 1 23:59:21.874379 amazon-ssm-agent[2155]: 2024/07/01 23:59:21 processing appconfig overrides Jul 1 23:59:21.887350 amazon-ssm-agent[2155]: 2024/07/01 23:59:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 1 23:59:21.887350 amazon-ssm-agent[2155]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 1 23:59:21.887350 amazon-ssm-agent[2155]: 2024/07/01 23:59:21 processing appconfig overrides Jul 1 23:59:21.896040 amazon-ssm-agent[2155]: 2024/07/01 23:59:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 1 23:59:21.896040 amazon-ssm-agent[2155]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 1 23:59:21.896040 amazon-ssm-agent[2155]: 2024/07/01 23:59:21 processing appconfig overrides Jul 1 23:59:21.975306 amazon-ssm-agent[2155]: 2024-07-01 23:59:21 INFO https_proxy: Jul 1 23:59:22.078302 amazon-ssm-agent[2155]: 2024-07-01 23:59:21 INFO http_proxy: Jul 1 23:59:22.182270 amazon-ssm-agent[2155]: 2024-07-01 23:59:21 INFO no_proxy: Jul 1 23:59:22.228611 coreos-metadata[2217]: Jul 01 23:59:22.228 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 1 23:59:22.231956 coreos-metadata[2217]: Jul 01 23:59:22.231 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 1 23:59:22.234985 coreos-metadata[2217]: Jul 01 23:59:22.234 INFO Fetch successful Jul 1 23:59:22.234985 coreos-metadata[2217]: Jul 01 23:59:22.234 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 1 23:59:22.235882 coreos-metadata[2217]: Jul 01 23:59:22.235 INFO Fetch successful Jul 1 23:59:22.246434 unknown[2217]: wrote ssh authorized keys file for user: core Jul 1 23:59:22.293300 amazon-ssm-agent[2155]: 2024-07-01 23:59:21 INFO Checking if agent identity type OnPrem can be assumed Jul 1 23:59:22.300071 containerd[2116]: time="2024-07-01T23:59:22.299940923Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 1 23:59:22.369987 update-ssh-keys[2303]: Updated "/home/core/.ssh/authorized_keys" Jul 1 23:59:22.351213 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 1 23:59:22.374551 systemd[1]: Finished sshkeys.service. Jul 1 23:59:22.382562 dbus-daemon[2069]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 1 23:59:22.384861 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 1 23:59:22.395156 amazon-ssm-agent[2155]: 2024-07-01 23:59:21 INFO Checking if agent identity type EC2 can be assumed Jul 1 23:59:22.409506 dbus-daemon[2069]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2138 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 1 23:59:22.421039 systemd[1]: Starting polkit.service - Authorization Manager... Jul 1 23:59:22.484267 polkitd[2316]: Started polkitd version 121 Jul 1 23:59:22.493806 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO Agent will take identity from EC2 Jul 1 23:59:22.533261 polkitd[2316]: Loading rules from directory /etc/polkit-1/rules.d Jul 1 23:59:22.536413 polkitd[2316]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 1 23:59:22.538682 polkitd[2316]: Finished loading, compiling and executing 2 rules Jul 1 23:59:22.542116 dbus-daemon[2069]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 1 23:59:22.542435 systemd[1]: Started polkit.service - Authorization Manager. Jul 1 23:59:22.549379 polkitd[2316]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 1 23:59:22.590396 systemd-hostnamed[2138]: Hostname set to (transient) Jul 1 23:59:22.591079 systemd-resolved[2014]: System hostname changed to 'ip-172-31-30-222'. Jul 1 23:59:22.592691 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 1 23:59:22.620834 containerd[2116]: time="2024-07-01T23:59:22.620483917Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jul 1 23:59:22.620834 containerd[2116]: time="2024-07-01T23:59:22.620557201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 1 23:59:22.629310 containerd[2116]: time="2024-07-01T23:59:22.627603865Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 1 23:59:22.629310 containerd[2116]: time="2024-07-01T23:59:22.627675409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 1 23:59:22.629310 containerd[2116]: time="2024-07-01T23:59:22.628077589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 1 23:59:22.629310 containerd[2116]: time="2024-07-01T23:59:22.628128589Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 1 23:59:22.629310 containerd[2116]: time="2024-07-01T23:59:22.628308661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 1 23:59:22.629310 containerd[2116]: time="2024-07-01T23:59:22.628435933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 1 23:59:22.629310 containerd[2116]: time="2024-07-01T23:59:22.628466125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 1 23:59:22.629310 containerd[2116]: time="2024-07-01T23:59:22.628617097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 1 23:59:22.629310 containerd[2116]: time="2024-07-01T23:59:22.628993981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 1 23:59:22.629310 containerd[2116]: time="2024-07-01T23:59:22.629027653Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 1 23:59:22.629310 containerd[2116]: time="2024-07-01T23:59:22.629052145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 1 23:59:22.631814 containerd[2116]: time="2024-07-01T23:59:22.631349101Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 1 23:59:22.631814 containerd[2116]: time="2024-07-01T23:59:22.631403125Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 1 23:59:22.631814 containerd[2116]: time="2024-07-01T23:59:22.631552333Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 1 23:59:22.631814 containerd[2116]: time="2024-07-01T23:59:22.631577521Z" level=info msg="metadata content store policy set" policy=shared Jul 1 23:59:22.642346 containerd[2116]: time="2024-07-01T23:59:22.640602901Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 1 23:59:22.642346 containerd[2116]: time="2024-07-01T23:59:22.640678069Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 1 23:59:22.642346 containerd[2116]: time="2024-07-01T23:59:22.640712869Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 1 23:59:22.642346 containerd[2116]: time="2024-07-01T23:59:22.641200477Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 1 23:59:22.642346 containerd[2116]: time="2024-07-01T23:59:22.641427361Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 1 23:59:22.642346 containerd[2116]: time="2024-07-01T23:59:22.641586493Z" level=info msg="NRI interface is disabled by configuration." Jul 1 23:59:22.642346 containerd[2116]: time="2024-07-01T23:59:22.641626705Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 1 23:59:22.642346 containerd[2116]: time="2024-07-01T23:59:22.641873257Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 1 23:59:22.642346 containerd[2116]: time="2024-07-01T23:59:22.641907085Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 1 23:59:22.642346 containerd[2116]: time="2024-07-01T23:59:22.641940193Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 1 23:59:22.642346 containerd[2116]: time="2024-07-01T23:59:22.642094273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 1 23:59:22.642346 containerd[2116]: time="2024-07-01T23:59:22.642127237Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 1 23:59:22.642346 containerd[2116]: time="2024-07-01T23:59:22.642172633Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 1 23:59:22.642346 containerd[2116]: time="2024-07-01T23:59:22.642205345Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 1 23:59:22.643003 containerd[2116]: time="2024-07-01T23:59:22.642237409Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 1 23:59:22.643003 containerd[2116]: time="2024-07-01T23:59:22.642268693Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 1 23:59:22.643003 containerd[2116]: time="2024-07-01T23:59:22.642330925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jul 1 23:59:22.643003 containerd[2116]: time="2024-07-01T23:59:22.642363229Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 1 23:59:22.643184 containerd[2116]: time="2024-07-01T23:59:22.643028005Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 1 23:59:22.644323 containerd[2116]: time="2024-07-01T23:59:22.643333897Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 1 23:59:22.647303 containerd[2116]: time="2024-07-01T23:59:22.646074205Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 1 23:59:22.647303 containerd[2116]: time="2024-07-01T23:59:22.646152901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647303 containerd[2116]: time="2024-07-01T23:59:22.646186693Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 1 23:59:22.647303 containerd[2116]: time="2024-07-01T23:59:22.646238545Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 1 23:59:22.647303 containerd[2116]: time="2024-07-01T23:59:22.646389661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647303 containerd[2116]: time="2024-07-01T23:59:22.646423669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647303 containerd[2116]: time="2024-07-01T23:59:22.646453621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647303 containerd[2116]: time="2024-07-01T23:59:22.646481401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647303 containerd[2116]: time="2024-07-01T23:59:22.646513957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647303 containerd[2116]: time="2024-07-01T23:59:22.646543501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647303 containerd[2116]: time="2024-07-01T23:59:22.646571377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647303 containerd[2116]: time="2024-07-01T23:59:22.646602613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647303 containerd[2116]: time="2024-07-01T23:59:22.646633429Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 1 23:59:22.647303 containerd[2116]: time="2024-07-01T23:59:22.646927465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647949 containerd[2116]: time="2024-07-01T23:59:22.646963033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647949 containerd[2116]: time="2024-07-01T23:59:22.646991869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jul 1 23:59:22.647949 containerd[2116]: time="2024-07-01T23:59:22.647021209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647949 containerd[2116]: time="2024-07-01T23:59:22.647049769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647949 containerd[2116]: time="2024-07-01T23:59:22.647082697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647949 containerd[2116]: time="2024-07-01T23:59:22.647114365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.647949 containerd[2116]: time="2024-07-01T23:59:22.647142529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 1 23:59:22.653352 containerd[2116]: time="2024-07-01T23:59:22.649515169Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 1 23:59:22.653352 containerd[2116]: 
time="2024-07-01T23:59:22.651262885Z" level=info msg="Connect containerd service" Jul 1 23:59:22.653352 containerd[2116]: time="2024-07-01T23:59:22.651678769Z" level=info msg="using legacy CRI server" Jul 1 23:59:22.653352 containerd[2116]: time="2024-07-01T23:59:22.651704497Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 1 23:59:22.655845 containerd[2116]: time="2024-07-01T23:59:22.655776313Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 1 23:59:22.663346 containerd[2116]: time="2024-07-01T23:59:22.662728129Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 1 23:59:22.663346 containerd[2116]: time="2024-07-01T23:59:22.662830585Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 1 23:59:22.663346 containerd[2116]: time="2024-07-01T23:59:22.662884393Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 1 23:59:22.663346 containerd[2116]: time="2024-07-01T23:59:22.662922745Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 1 23:59:22.663346 containerd[2116]: time="2024-07-01T23:59:22.662966149Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 1 23:59:22.663895 containerd[2116]: time="2024-07-01T23:59:22.663828745Z" level=info msg="Start subscribing containerd event" Jul 1 23:59:22.670203 containerd[2116]: time="2024-07-01T23:59:22.668383573Z" level=info msg="Start recovering state" Jul 1 23:59:22.670203 containerd[2116]: time="2024-07-01T23:59:22.664192741Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 1 23:59:22.670203 containerd[2116]: time="2024-07-01T23:59:22.668586565Z" level=info msg="Start event monitor" Jul 1 23:59:22.670203 containerd[2116]: time="2024-07-01T23:59:22.668617921Z" level=info msg="Start snapshots syncer" Jul 1 23:59:22.670203 containerd[2116]: time="2024-07-01T23:59:22.668590501Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 1 23:59:22.670203 containerd[2116]: time="2024-07-01T23:59:22.668640913Z" level=info msg="Start cni network conf syncer for default" Jul 1 23:59:22.670203 containerd[2116]: time="2024-07-01T23:59:22.668691421Z" level=info msg="Start streaming server" Jul 1 23:59:22.670203 containerd[2116]: time="2024-07-01T23:59:22.669447817Z" level=info msg="containerd successfully booted in 0.381291s" Jul 1 23:59:22.668987 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 1 23:59:22.694305 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 1 23:59:22.792952 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 1 23:59:22.800758 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 1 23:59:22.800758 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 1 23:59:22.800758 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO [amazon-ssm-agent] Starting Core Agent Jul 1 23:59:22.800758 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 1 23:59:22.800758 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO [Registrar] Starting registrar module Jul 1 23:59:22.800758 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 1 23:59:22.800758 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO [EC2Identity] EC2 registration was successful. Jul 1 23:59:22.800758 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO [CredentialRefresher] credentialRefresher has started Jul 1 23:59:22.800758 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO [CredentialRefresher] Starting credentials refresher loop Jul 1 23:59:22.800758 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 1 23:59:22.894451 amazon-ssm-agent[2155]: 2024-07-01 23:59:22 INFO [CredentialRefresher] Next credential rotation will be in 30.008324741633334 minutes Jul 1 23:59:23.225535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 23:59:23.243505 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 23:59:23.393490 tar[2099]: linux-arm64/LICENSE Jul 1 23:59:23.393490 tar[2099]: linux-arm64/README.md Jul 1 23:59:23.433092 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 1 23:59:23.851728 amazon-ssm-agent[2155]: 2024-07-01 23:59:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 1 23:59:23.953511 amazon-ssm-agent[2155]: 2024-07-01 23:59:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2351) started Jul 1 23:59:24.055152 amazon-ssm-agent[2155]: 2024-07-01 23:59:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 1 23:59:24.102996 kubelet[2339]: E0701 23:59:24.102838 2339 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 23:59:24.109625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 23:59:24.110809 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 1 23:59:24.451530 sshd_keygen[2119]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 1 23:59:24.491729 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 1 23:59:24.509055 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jul 1 23:59:24.522883 systemd[1]: issuegen.service: Deactivated successfully. Jul 1 23:59:24.523554 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 1 23:59:24.539227 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 1 23:59:24.558895 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 1 23:59:24.570853 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 1 23:59:24.586864 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 1 23:59:24.589676 systemd[1]: Reached target getty.target - Login Prompts. Jul 1 23:59:24.592046 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 1 23:59:24.594811 systemd[1]: Startup finished in 10.094s (kernel) + 9.895s (userspace) = 19.990s. Jul 1 23:59:28.406226 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 1 23:59:28.414784 systemd[1]: Started sshd@0-172.31.30.222:22-147.75.109.163:49832.service - OpenSSH per-connection server daemon (147.75.109.163:49832). Jul 1 23:59:28.590317 sshd[2388]: Accepted publickey for core from 147.75.109.163 port 49832 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 1 23:59:28.593613 sshd[2388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 1 23:59:28.607958 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 1 23:59:28.619693 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 1 23:59:28.625391 systemd-logind[2087]: New session 1 of user core. Jul 1 23:59:28.644768 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 1 23:59:28.656393 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 1 23:59:28.673455 (systemd)[2394]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 1 23:59:28.879996 systemd[2394]: Queued start job for default target default.target. Jul 1 23:59:28.881149 systemd[2394]: Created slice app.slice - User Application Slice. Jul 1 23:59:28.881622 systemd[2394]: Reached target paths.target - Paths. Jul 1 23:59:28.881654 systemd[2394]: Reached target timers.target - Timers. Jul 1 23:59:28.893723 systemd[2394]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 1 23:59:28.907567 systemd[2394]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 1 23:59:28.907692 systemd[2394]: Reached target sockets.target - Sockets. Jul 1 23:59:28.907724 systemd[2394]: Reached target basic.target - Basic System. Jul 1 23:59:28.907821 systemd[2394]: Reached target default.target - Main User Target. Jul 1 23:59:28.907884 systemd[2394]: Startup finished in 222ms. Jul 1 23:59:28.909085 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 1 23:59:28.913916 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 1 23:59:29.066895 systemd[1]: Started sshd@1-172.31.30.222:22-147.75.109.163:49838.service - OpenSSH per-connection server daemon (147.75.109.163:49838). Jul 1 23:59:29.236062 sshd[2406]: Accepted publickey for core from 147.75.109.163 port 49838 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 1 23:59:29.238605 sshd[2406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 1 23:59:29.246723 systemd-logind[2087]: New session 2 of user core. Jul 1 23:59:29.254863 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 1 23:59:29.383775 sshd[2406]: pam_unix(sshd:session): session closed for user core Jul 1 23:59:29.388759 systemd-logind[2087]: Session 2 logged out. Waiting for processes to exit. Jul 1 23:59:29.390227 systemd[1]: sshd@1-172.31.30.222:22-147.75.109.163:49838.service: Deactivated successfully. Jul 1 23:59:29.396984 systemd[1]: session-2.scope: Deactivated successfully. Jul 1 23:59:29.398688 systemd-logind[2087]: Removed session 2. Jul 1 23:59:29.416754 systemd[1]: Started sshd@2-172.31.30.222:22-147.75.109.163:49852.service - OpenSSH per-connection server daemon (147.75.109.163:49852). Jul 1 23:59:29.579316 sshd[2414]: Accepted publickey for core from 147.75.109.163 port 49852 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 1 23:59:29.581721 sshd[2414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 1 23:59:29.589162 systemd-logind[2087]: New session 3 of user core. Jul 1 23:59:29.601744 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 1 23:59:29.721236 sshd[2414]: pam_unix(sshd:session): session closed for user core Jul 1 23:59:29.728438 systemd[1]: sshd@2-172.31.30.222:22-147.75.109.163:49852.service: Deactivated successfully. Jul 1 23:59:29.733504 systemd[1]: session-3.scope: Deactivated successfully. Jul 1 23:59:29.733509 systemd-logind[2087]: Session 3 logged out. Waiting for processes to exit. Jul 1 23:59:29.737101 systemd-logind[2087]: Removed session 3. Jul 1 23:59:29.750822 systemd[1]: Started sshd@3-172.31.30.222:22-147.75.109.163:49858.service - OpenSSH per-connection server daemon (147.75.109.163:49858). Jul 1 23:59:29.925124 sshd[2422]: Accepted publickey for core from 147.75.109.163 port 49858 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 1 23:59:29.927642 sshd[2422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 1 23:59:29.934954 systemd-logind[2087]: New session 4 of user core. Jul 1 23:59:29.945862 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 1 23:59:30.073617 sshd[2422]: pam_unix(sshd:session): session closed for user core Jul 1 23:59:30.080469 systemd-logind[2087]: Session 4 logged out. Waiting for processes to exit. Jul 1 23:59:30.082036 systemd[1]: sshd@3-172.31.30.222:22-147.75.109.163:49858.service: Deactivated successfully. Jul 1 23:59:30.086915 systemd[1]: session-4.scope: Deactivated successfully. Jul 1 23:59:30.088424 systemd-logind[2087]: Removed session 4. Jul 1 23:59:30.101798 systemd[1]: Started sshd@4-172.31.30.222:22-147.75.109.163:49870.service - OpenSSH per-connection server daemon (147.75.109.163:49870). Jul 1 23:59:30.284607 sshd[2430]: Accepted publickey for core from 147.75.109.163 port 49870 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 1 23:59:30.287043 sshd[2430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 1 23:59:30.295521 systemd-logind[2087]: New session 5 of user core. Jul 1 23:59:30.302759 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 1 23:59:30.417671 sudo[2434]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 1 23:59:30.418186 sudo[2434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 1 23:59:30.433355 sudo[2434]: pam_unix(sudo:session): session closed for user root Jul 1 23:59:30.457707 sshd[2430]: pam_unix(sshd:session): session closed for user core Jul 1 23:59:30.466159 systemd[1]: sshd@4-172.31.30.222:22-147.75.109.163:49870.service: Deactivated successfully. 
Jul 1 23:59:30.467576 systemd-logind[2087]: Session 5 logged out. Waiting for processes to exit. Jul 1 23:59:30.472271 systemd[1]: session-5.scope: Deactivated successfully. Jul 1 23:59:30.474387 systemd-logind[2087]: Removed session 5. Jul 1 23:59:30.490766 systemd[1]: Started sshd@5-172.31.30.222:22-147.75.109.163:49886.service - OpenSSH per-connection server daemon (147.75.109.163:49886). Jul 1 23:59:30.661850 sshd[2439]: Accepted publickey for core from 147.75.109.163 port 49886 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 1 23:59:30.664370 sshd[2439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 1 23:59:30.673377 systemd-logind[2087]: New session 6 of user core. Jul 1 23:59:30.679910 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 1 23:59:30.785222 sudo[2444]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 1 23:59:30.786386 sudo[2444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 1 23:59:30.792750 sudo[2444]: pam_unix(sudo:session): session closed for user root Jul 1 23:59:30.802568 sudo[2443]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 1 23:59:30.803109 sudo[2443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 1 23:59:30.831730 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 1 23:59:30.835406 auditctl[2447]: No rules Jul 1 23:59:30.836199 systemd[1]: audit-rules.service: Deactivated successfully. Jul 1 23:59:30.836781 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 1 23:59:30.852954 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 1 23:59:30.891772 augenrules[2466]: No rules Jul 1 23:59:30.893955 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 1 23:59:30.897642 sudo[2443]: pam_unix(sudo:session): session closed for user root Jul 1 23:59:30.922266 sshd[2439]: pam_unix(sshd:session): session closed for user core Jul 1 23:59:30.930781 systemd[1]: sshd@5-172.31.30.222:22-147.75.109.163:49886.service: Deactivated successfully. Jul 1 23:59:30.936520 systemd[1]: session-6.scope: Deactivated successfully. Jul 1 23:59:30.938205 systemd-logind[2087]: Session 6 logged out. Waiting for processes to exit. Jul 1 23:59:30.939898 systemd-logind[2087]: Removed session 6. Jul 1 23:59:30.951764 systemd[1]: Started sshd@6-172.31.30.222:22-147.75.109.163:49896.service - OpenSSH per-connection server daemon (147.75.109.163:49896). Jul 1 23:59:31.114464 sshd[2475]: Accepted publickey for core from 147.75.109.163 port 49896 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 1 23:59:31.116875 sshd[2475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 1 23:59:31.125161 systemd-logind[2087]: New session 7 of user core. Jul 1 23:59:31.131752 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 1 23:59:31.235589 sudo[2479]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 1 23:59:31.236159 sudo[2479]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 1 23:59:31.397226 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 1 23:59:31.410956 (dockerd)[2488]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 1 23:59:31.730209 dockerd[2488]: time="2024-07-01T23:59:31.729716731Z" level=info msg="Starting up" Jul 1 23:59:31.763774 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2375722431-merged.mount: Deactivated successfully. Jul 1 23:59:32.345511 dockerd[2488]: time="2024-07-01T23:59:32.344997942Z" level=info msg="Loading containers: start." Jul 1 23:59:32.491322 kernel: Initializing XFRM netlink socket Jul 1 23:59:32.523506 (udev-worker)[2500]: Network interface NamePolicy= disabled on kernel command line. Jul 1 23:59:32.607316 systemd-networkd[1691]: docker0: Link UP Jul 1 23:59:32.625072 dockerd[2488]: time="2024-07-01T23:59:32.624840528Z" level=info msg="Loading containers: done." Jul 1 23:59:32.709561 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck114857421-merged.mount: Deactivated successfully. Jul 1 23:59:32.713331 dockerd[2488]: time="2024-07-01T23:59:32.712209919Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 1 23:59:32.713331 dockerd[2488]: time="2024-07-01T23:59:32.712525617Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 1 23:59:32.713331 dockerd[2488]: time="2024-07-01T23:59:32.712718793Z" level=info msg="Daemon has completed initialization" Jul 1 23:59:32.763014 dockerd[2488]: time="2024-07-01T23:59:32.762930563Z" level=info msg="API listen on /run/docker.sock" Jul 1 23:59:32.764518 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 1 23:59:33.690029 containerd[2116]: time="2024-07-01T23:59:33.689584205Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 1 23:59:34.313218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 1 23:59:34.320880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 23:59:34.387863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount29736508.mount: Deactivated successfully. Jul 1 23:59:34.795153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 23:59:34.800925 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 23:59:34.966826 kubelet[2646]: E0701 23:59:34.965691 2646 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 23:59:34.977978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 23:59:34.979127 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 1 23:59:36.318627 containerd[2116]: time="2024-07-01T23:59:36.318545919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:36.320708 containerd[2116]: time="2024-07-01T23:59:36.320638745Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671538" Jul 1 23:59:36.321846 containerd[2116]: time="2024-07-01T23:59:36.321758545Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:36.327563 containerd[2116]: time="2024-07-01T23:59:36.327512180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:36.330164 containerd[2116]: time="2024-07-01T23:59:36.329879127Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 2.640237353s" Jul 1 23:59:36.330164 containerd[2116]: time="2024-07-01T23:59:36.329943695Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jul 1 23:59:36.367895 containerd[2116]: time="2024-07-01T23:59:36.367839794Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 1 23:59:38.205861 containerd[2116]: time="2024-07-01T23:59:38.205170801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:38.207326 containerd[2116]: time="2024-07-01T23:59:38.207258332Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893118" Jul 1 23:59:38.208425 containerd[2116]: time="2024-07-01T23:59:38.208333266Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:38.213972 containerd[2116]: time="2024-07-01T23:59:38.213920822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:38.216558 containerd[2116]: time="2024-07-01T23:59:38.216361414Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 1.848459153s" Jul 1 23:59:38.216558 containerd[2116]: time="2024-07-01T23:59:38.216425826Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jul 1 23:59:38.258278 
containerd[2116]: time="2024-07-01T23:59:38.257962924Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 1 23:59:39.359071 containerd[2116]: time="2024-07-01T23:59:39.358994161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:39.361143 containerd[2116]: time="2024-07-01T23:59:39.361073828Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358438" Jul 1 23:59:39.362341 containerd[2116]: time="2024-07-01T23:59:39.362272063Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:39.368065 containerd[2116]: time="2024-07-01T23:59:39.367963975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:39.371906 containerd[2116]: time="2024-07-01T23:59:39.370894916Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 1.112866715s" Jul 1 23:59:39.371906 containerd[2116]: time="2024-07-01T23:59:39.370969497Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jul 1 23:59:39.416056 containerd[2116]: time="2024-07-01T23:59:39.416009106Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 1 23:59:40.717828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3064974338.mount: Deactivated successfully. 
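[Editor's aside] The pull entries above report both the bytes fetched ("bytes read") and the wall-clock duration for each image (kube-apiserver: 31,671,538 bytes in 2.640237353s; kube-controller-manager: 28,893,118 bytes in 1.848459153s; kube-scheduler: 15,358,438 bytes in 1.112866715s), which works out to roughly 11-15 MiB/s. A small back-of-the-envelope sketch using those figures copied from the log:

    # Effective pull throughput from the containerd "bytes read" / duration pairs above.
    pulls = {
        "kube-apiserver:v1.28.11": (31_671_538, 2.640237353),
        "kube-controller-manager:v1.28.11": (28_893_118, 1.848459153),
        "kube-scheduler:v1.28.11": (15_358_438, 1.112866715),
    }
    for image, (bytes_read, seconds) in pulls.items():
        mib_per_s = bytes_read / seconds / (1024 * 1024)
        print(f"{image}: {mib_per_s:.1f} MiB/s")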
Jul 1 23:59:41.266004 containerd[2116]: time="2024-07-01T23:59:41.265383932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:41.267223 containerd[2116]: time="2024-07-01T23:59:41.267173523Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772461" Jul 1 23:59:41.269315 containerd[2116]: time="2024-07-01T23:59:41.269245278Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:41.272789 containerd[2116]: time="2024-07-01T23:59:41.272709465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:41.274202 containerd[2116]: time="2024-07-01T23:59:41.274030786Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 1.857795751s" Jul 1 23:59:41.274202 containerd[2116]: time="2024-07-01T23:59:41.274081847Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jul 1 23:59:41.313975 containerd[2116]: time="2024-07-01T23:59:41.313916747Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 1 23:59:41.823991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2026168470.mount: Deactivated successfully. 
Jul 1 23:59:41.833449 containerd[2116]: time="2024-07-01T23:59:41.832811676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:41.834533 containerd[2116]: time="2024-07-01T23:59:41.834479046Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jul 1 23:59:41.836013 containerd[2116]: time="2024-07-01T23:59:41.835930439Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:41.843313 containerd[2116]: time="2024-07-01T23:59:41.842183920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:41.844364 containerd[2116]: time="2024-07-01T23:59:41.843823267Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 529.841976ms" Jul 1 23:59:41.844364 containerd[2116]: time="2024-07-01T23:59:41.843881377Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 1 23:59:41.881685 containerd[2116]: time="2024-07-01T23:59:41.881637798Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 1 23:59:42.443746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount148454017.mount: Deactivated successfully. Jul 1 23:59:45.063904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 1 23:59:45.078919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 23:59:45.578369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 23:59:45.593183 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 23:59:45.690046 kubelet[2796]: E0701 23:59:45.689811 2796 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 23:59:45.697995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 23:59:45.698785 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
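[Editor's aside] kubelet.service fails the same way on every attempt (no /var/lib/kubelet/config.yaml yet; that file is typically written later, e.g. by kubeadm, so these early failures are expected) and systemd keeps scheduling restarts, with the counter visible in the log ("restart counter is at 1", then 2, then 3 below). A quick way to pull that pattern out of a saved journal like this one is sketched below; "journal.txt" is a placeholder filename.

    # Summarize kubelet crash-loop attempts from a saved journal capture.
    # "journal.txt" is a placeholder for a text dump like the log above.
    import re

    restart_re = re.compile(
        r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)"
    )

    with open("journal.txt", encoding="utf-8") as fh:
        for line in fh:
            m = restart_re.search(line)
            if m:
                print(f"kubelet restart attempt #{m.group(1)}")
            elif '"command failed" err=' in line:
                # keep just the start of the error for a compact summary
                print("  " + line.split("err=", 1)[1][:100].rstrip())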
Jul 1 23:59:46.070446 containerd[2116]: time="2024-07-01T23:59:46.070367225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:46.072724 containerd[2116]: time="2024-07-01T23:59:46.072665473Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jul 1 23:59:46.073709 containerd[2116]: time="2024-07-01T23:59:46.073619086Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:46.079916 containerd[2116]: time="2024-07-01T23:59:46.079815226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:46.082535 containerd[2116]: time="2024-07-01T23:59:46.082478589Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 4.200607215s" Jul 1 23:59:46.082892 containerd[2116]: time="2024-07-01T23:59:46.082707412Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jul 1 23:59:46.123139 containerd[2116]: time="2024-07-01T23:59:46.123010089Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 1 23:59:46.917495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1210606806.mount: Deactivated successfully. 
Jul 1 23:59:47.497074 containerd[2116]: time="2024-07-01T23:59:47.496371024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:47.498338 containerd[2116]: time="2024-07-01T23:59:47.498130888Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462" Jul 1 23:59:47.499915 containerd[2116]: time="2024-07-01T23:59:47.499820000Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:47.505331 containerd[2116]: time="2024-07-01T23:59:47.505183176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:47.507521 containerd[2116]: time="2024-07-01T23:59:47.507269711Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.384193372s" Jul 1 23:59:47.507521 containerd[2116]: time="2024-07-01T23:59:47.507367884Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jul 1 23:59:52.600246 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 1 23:59:55.813367 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 1 23:59:55.825343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 23:59:56.275655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 23:59:56.296102 (kubelet)[2894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 23:59:56.329797 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 23:59:56.334729 systemd[1]: kubelet.service: Deactivated successfully. Jul 1 23:59:56.335395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 23:59:56.361058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 23:59:56.397818 systemd[1]: Reloading requested from client PID 2908 ('systemctl') (unit session-7.scope)... Jul 1 23:59:56.397851 systemd[1]: Reloading... Jul 1 23:59:56.620347 zram_generator::config[2949]: No configuration found. Jul 1 23:59:56.931390 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 23:59:57.103734 systemd[1]: Reloading finished in 704 ms. Jul 1 23:59:57.174692 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 1 23:59:57.174909 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 1 23:59:57.175651 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 23:59:57.184164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 1 23:59:57.638675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 23:59:57.656215 (kubelet)[3018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 1 23:59:57.753276 kubelet[3018]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 23:59:57.753276 kubelet[3018]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 1 23:59:57.753276 kubelet[3018]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 23:59:57.754044 kubelet[3018]: I0701 23:59:57.753450 3018 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 1 23:59:59.313315 kubelet[3018]: I0701 23:59:59.313195 3018 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 1 23:59:59.313315 kubelet[3018]: I0701 23:59:59.313269 3018 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 1 23:59:59.314040 kubelet[3018]: I0701 23:59:59.313682 3018 server.go:895] "Client rotation is on, will bootstrap in background" Jul 1 23:59:59.340140 kubelet[3018]: I0701 23:59:59.340082 3018 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 1 23:59:59.342000 kubelet[3018]: E0701 23:59:59.341816 3018 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.222:6443: connect: connection refused Jul 1 23:59:59.356051 kubelet[3018]: W0701 23:59:59.355990 3018 machine.go:65] Cannot read vendor id correctly, set empty. Jul 1 23:59:59.357454 kubelet[3018]: I0701 23:59:59.357400 3018 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 1 23:59:59.358322 kubelet[3018]: I0701 23:59:59.358254 3018 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 1 23:59:59.358742 kubelet[3018]: I0701 23:59:59.358683 3018 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 1 23:59:59.358959 kubelet[3018]: I0701 23:59:59.358762 3018 topology_manager.go:138] "Creating topology manager with none policy" Jul 1 23:59:59.358959 kubelet[3018]: I0701 23:59:59.358786 3018 container_manager_linux.go:301] "Creating device plugin manager" Jul 1 23:59:59.359104 kubelet[3018]: I0701 23:59:59.359016 3018 state_mem.go:36] "Initialized new in-memory state store" Jul 1 23:59:59.362305 kubelet[3018]: I0701 23:59:59.362226 3018 kubelet.go:393] "Attempting to sync node with API server" Jul 1 23:59:59.362440 kubelet[3018]: I0701 23:59:59.362348 3018 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 1 23:59:59.362440 kubelet[3018]: I0701 23:59:59.362433 3018 kubelet.go:309] "Adding apiserver pod source" Jul 1 23:59:59.364395 kubelet[3018]: I0701 23:59:59.362461 3018 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 1 23:59:59.366200 kubelet[3018]: I0701 23:59:59.366146 3018 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 1 23:59:59.371651 kubelet[3018]: W0701 23:59:59.371567 3018 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 1 23:59:59.372714 kubelet[3018]: I0701 23:59:59.372654 3018 server.go:1232] "Started kubelet" Jul 1 23:59:59.372959 kubelet[3018]: W0701 23:59:59.372883 3018 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.30.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 1 23:59:59.373048 kubelet[3018]: E0701 23:59:59.372994 3018 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 1 23:59:59.373710 kubelet[3018]: W0701 23:59:59.373631 3018 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.30.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-222&limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 1 23:59:59.373902 kubelet[3018]: E0701 23:59:59.373875 3018 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-222&limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 1 23:59:59.374571 kubelet[3018]: I0701 23:59:59.374527 3018 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 1 23:59:59.382349 kubelet[3018]: I0701 23:59:59.381701 3018 server.go:462] "Adding debug handlers to kubelet server" Jul 1 23:59:59.385174 kubelet[3018]: I0701 23:59:59.385107 3018 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 1 23:59:59.385791 kubelet[3018]: I0701 23:59:59.385725 3018 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 1 23:59:59.386192 kubelet[3018]: I0701 23:59:59.386125 3018 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 1 23:59:59.389890 kubelet[3018]: E0701 23:59:59.389632 3018 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-30-222.17de3c4abb78e2ee", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-30-222", UID:"ip-172-31-30-222", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-30-222"}, FirstTimestamp:time.Date(2024, time.July, 1, 23, 59, 59, 372616430, time.Local), LastTimestamp:time.Date(2024, time.July, 1, 23, 59, 59, 372616430, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-30-222"}': 'Post "https://172.31.30.222:6443/api/v1/namespaces/default/events": dial tcp 172.31.30.222:6443: connect: connection 
refused'(may retry after sleeping) Jul 1 23:59:59.392581 kubelet[3018]: I0701 23:59:59.392498 3018 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 1 23:59:59.393510 kubelet[3018]: I0701 23:59:59.393458 3018 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 1 23:59:59.393697 kubelet[3018]: I0701 23:59:59.393631 3018 reconciler_new.go:29] "Reconciler: start to sync state" Jul 1 23:59:59.395987 kubelet[3018]: W0701 23:59:59.395892 3018 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.30.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 1 23:59:59.395987 kubelet[3018]: E0701 23:59:59.395993 3018 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 1 23:59:59.396434 kubelet[3018]: E0701 23:59:59.396385 3018 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-222?timeout=10s\": dial tcp 172.31.30.222:6443: connect: connection refused" interval="200ms" Jul 1 23:59:59.400981 kubelet[3018]: E0701 23:59:59.400475 3018 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 1 23:59:59.400981 kubelet[3018]: E0701 23:59:59.400540 3018 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 1 23:59:59.425792 kubelet[3018]: I0701 23:59:59.425712 3018 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 1 23:59:59.428688 kubelet[3018]: I0701 23:59:59.428613 3018 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 1 23:59:59.428688 kubelet[3018]: I0701 23:59:59.428666 3018 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 1 23:59:59.428688 kubelet[3018]: I0701 23:59:59.428701 3018 kubelet.go:2303] "Starting kubelet main sync loop" Jul 1 23:59:59.428929 kubelet[3018]: E0701 23:59:59.428796 3018 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 1 23:59:59.447495 kubelet[3018]: W0701 23:59:59.446457 3018 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.30.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 1 23:59:59.447495 kubelet[3018]: E0701 23:59:59.447050 3018 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 1 23:59:59.497762 kubelet[3018]: I0701 23:59:59.497720 3018 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-222" Jul 1 23:59:59.498815 kubelet[3018]: E0701 23:59:59.498765 3018 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.222:6443/api/v1/nodes\": dial tcp 172.31.30.222:6443: connect: connection refused" node="ip-172-31-30-222" Jul 1 23:59:59.522911 kubelet[3018]: I0701 23:59:59.522868 3018 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 1 23:59:59.522911 kubelet[3018]: I0701 23:59:59.522908 3018 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 1 23:59:59.523133 kubelet[3018]: I0701 23:59:59.522943 3018 state_mem.go:36] "Initialized new in-memory state store" Jul 1 23:59:59.526213 kubelet[3018]: I0701 23:59:59.526135 3018 policy_none.go:49] "None policy: Start" Jul 1 23:59:59.527622 kubelet[3018]: I0701 23:59:59.527564 3018 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 1 23:59:59.527622 kubelet[3018]: I0701 23:59:59.527625 3018 state_mem.go:35] "Initializing new in-memory state store" Jul 1 23:59:59.529741 kubelet[3018]: E0701 23:59:59.529673 3018 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 1 23:59:59.537000 kubelet[3018]: I0701 23:59:59.536863 3018 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 1 23:59:59.537499 kubelet[3018]: I0701 23:59:59.537351 3018 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 1 23:59:59.541880 kubelet[3018]: E0701 23:59:59.541791 3018 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-222\" not found" Jul 1 23:59:59.598378 kubelet[3018]: E0701 23:59:59.597606 3018 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-222?timeout=10s\": dial tcp 172.31.30.222:6443: connect: connection refused" interval="400ms" Jul 1 23:59:59.701545 kubelet[3018]: I0701 23:59:59.701504 3018 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-222" Jul 1 23:59:59.702327 kubelet[3018]: E0701 23:59:59.702245 3018 kubelet_node_status.go:92] 
"Unable to register node with API server" err="Post \"https://172.31.30.222:6443/api/v1/nodes\": dial tcp 172.31.30.222:6443: connect: connection refused" node="ip-172-31-30-222" Jul 1 23:59:59.730718 kubelet[3018]: I0701 23:59:59.730646 3018 topology_manager.go:215] "Topology Admit Handler" podUID="a62a206019ca3a978530db3543011dcc" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-222" Jul 1 23:59:59.733242 kubelet[3018]: I0701 23:59:59.733183 3018 topology_manager.go:215] "Topology Admit Handler" podUID="f5f0ff12e727ddf362560b7154f2f04e" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-222" Jul 1 23:59:59.736108 kubelet[3018]: I0701 23:59:59.735756 3018 topology_manager.go:215] "Topology Admit Handler" podUID="3bfdc56480057871847a070e881a5bec" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-222" Jul 1 23:59:59.798779 kubelet[3018]: I0701 23:59:59.798714 3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a62a206019ca3a978530db3543011dcc-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-222\" (UID: \"a62a206019ca3a978530db3543011dcc\") " pod="kube-system/kube-apiserver-ip-172-31-30-222" Jul 1 23:59:59.798916 kubelet[3018]: I0701 23:59:59.798800 3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a62a206019ca3a978530db3543011dcc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-222\" (UID: \"a62a206019ca3a978530db3543011dcc\") " pod="kube-system/kube-apiserver-ip-172-31-30-222" Jul 1 23:59:59.798916 kubelet[3018]: I0701 23:59:59.798870 3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5f0ff12e727ddf362560b7154f2f04e-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"f5f0ff12e727ddf362560b7154f2f04e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jul 1 23:59:59.798916 kubelet[3018]: I0701 23:59:59.798914 3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5f0ff12e727ddf362560b7154f2f04e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"f5f0ff12e727ddf362560b7154f2f04e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jul 1 23:59:59.799122 kubelet[3018]: I0701 23:59:59.798958 3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5f0ff12e727ddf362560b7154f2f04e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"f5f0ff12e727ddf362560b7154f2f04e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jul 1 23:59:59.799122 kubelet[3018]: I0701 23:59:59.799004 3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3bfdc56480057871847a070e881a5bec-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-222\" (UID: \"3bfdc56480057871847a070e881a5bec\") " pod="kube-system/kube-scheduler-ip-172-31-30-222" Jul 1 23:59:59.799122 kubelet[3018]: I0701 23:59:59.799046 3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/a62a206019ca3a978530db3543011dcc-ca-certs\") pod \"kube-apiserver-ip-172-31-30-222\" (UID: \"a62a206019ca3a978530db3543011dcc\") " pod="kube-system/kube-apiserver-ip-172-31-30-222" Jul 1 23:59:59.799122 kubelet[3018]: I0701 23:59:59.799090 3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5f0ff12e727ddf362560b7154f2f04e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"f5f0ff12e727ddf362560b7154f2f04e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jul 1 23:59:59.799367 kubelet[3018]: I0701 23:59:59.799140 3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5f0ff12e727ddf362560b7154f2f04e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"f5f0ff12e727ddf362560b7154f2f04e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jul 1 23:59:59.998410 kubelet[3018]: E0701 23:59:59.998206 3018 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-222?timeout=10s\": dial tcp 172.31.30.222:6443: connect: connection refused" interval="800ms" Jul 2 00:00:00.051424 containerd[2116]: time="2024-07-02T00:00:00.050935241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-222,Uid:a62a206019ca3a978530db3543011dcc,Namespace:kube-system,Attempt:0,}" Jul 2 00:00:00.052413 containerd[2116]: time="2024-07-02T00:00:00.052334240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-222,Uid:f5f0ff12e727ddf362560b7154f2f04e,Namespace:kube-system,Attempt:0,}" Jul 2 00:00:00.054359 containerd[2116]: time="2024-07-02T00:00:00.054159308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-222,Uid:3bfdc56480057871847a070e881a5bec,Namespace:kube-system,Attempt:0,}" Jul 2 00:00:00.104569 kubelet[3018]: I0702 00:00:00.104530 3018 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-222" Jul 2 00:00:00.105690 kubelet[3018]: E0702 00:00:00.105634 3018 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.222:6443/api/v1/nodes\": dial tcp 172.31.30.222:6443: connect: connection refused" node="ip-172-31-30-222" Jul 2 00:00:00.272011 kubelet[3018]: W0702 00:00:00.271807 3018 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.30.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-222&limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 2 00:00:00.272011 kubelet[3018]: E0702 00:00:00.271908 3018 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-222&limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 2 00:00:00.439229 kubelet[3018]: W0702 00:00:00.439112 3018 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.30.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: 
connection refused Jul 2 00:00:00.439229 kubelet[3018]: E0702 00:00:00.439181 3018 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 2 00:00:00.464531 kubelet[3018]: W0702 00:00:00.464435 3018 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.30.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 2 00:00:00.464531 kubelet[3018]: E0702 00:00:00.464537 3018 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 2 00:00:00.572528 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Jul 2 00:00:00.582173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3238604934.mount: Deactivated successfully. Jul 2 00:00:00.592545 containerd[2116]: time="2024-07-02T00:00:00.592448033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:00:00.593419 systemd[1]: logrotate.service: Deactivated successfully. Jul 2 00:00:00.599493 containerd[2116]: time="2024-07-02T00:00:00.599412330Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 2 00:00:00.602159 containerd[2116]: time="2024-07-02T00:00:00.601925258Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:00:00.606741 containerd[2116]: time="2024-07-02T00:00:00.606569467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:00:00.606967 containerd[2116]: time="2024-07-02T00:00:00.606900940Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:00:00.609193 containerd[2116]: time="2024-07-02T00:00:00.608946619Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:00:00.609193 containerd[2116]: time="2024-07-02T00:00:00.609127789Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:00:00.616272 containerd[2116]: time="2024-07-02T00:00:00.616077871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:00:00.622000 containerd[2116]: time="2024-07-02T00:00:00.621494642Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.127462ms" Jul 2 00:00:00.627344 containerd[2116]: time="2024-07-02T00:00:00.626319217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.185289ms" Jul 2 00:00:00.628509 containerd[2116]: time="2024-07-02T00:00:00.628431673Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.935064ms" Jul 2 00:00:00.800836 kubelet[3018]: E0702 00:00:00.800171 3018 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-222?timeout=10s\": dial tcp 172.31.30.222:6443: connect: connection refused" interval="1.6s" Jul 2 00:00:00.847987 containerd[2116]: time="2024-07-02T00:00:00.846666777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:00.847987 containerd[2116]: time="2024-07-02T00:00:00.846793620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:00.847987 containerd[2116]: time="2024-07-02T00:00:00.846847143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:00.847987 containerd[2116]: time="2024-07-02T00:00:00.846882393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:00.853750 kubelet[3018]: W0702 00:00:00.853658 3018 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.30.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 2 00:00:00.853917 kubelet[3018]: E0702 00:00:00.853761 3018 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jul 2 00:00:00.856247 containerd[2116]: time="2024-07-02T00:00:00.856083155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:00.857405 containerd[2116]: time="2024-07-02T00:00:00.856478477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:00.857405 containerd[2116]: time="2024-07-02T00:00:00.856739535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:00.857405 containerd[2116]: time="2024-07-02T00:00:00.856774460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:00.861888 containerd[2116]: time="2024-07-02T00:00:00.861657481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:00.861888 containerd[2116]: time="2024-07-02T00:00:00.861753169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:00.861888 containerd[2116]: time="2024-07-02T00:00:00.861786317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:00.861888 containerd[2116]: time="2024-07-02T00:00:00.861817329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:00.917931 kubelet[3018]: I0702 00:00:00.917874 3018 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-222" Jul 2 00:00:00.921378 kubelet[3018]: E0702 00:00:00.921060 3018 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.222:6443/api/v1/nodes\": dial tcp 172.31.30.222:6443: connect: connection refused" node="ip-172-31-30-222" Jul 2 00:00:01.018659 containerd[2116]: time="2024-07-02T00:00:01.018589709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-222,Uid:f5f0ff12e727ddf362560b7154f2f04e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a903339053ce04d4e1c25fb8eec9bfc4a9f41dd2abaa0be9860e3d41d89b7e7f\"" Jul 2 00:00:01.033876 containerd[2116]: time="2024-07-02T00:00:01.033788561Z" level=info msg="CreateContainer within sandbox \"a903339053ce04d4e1c25fb8eec9bfc4a9f41dd2abaa0be9860e3d41d89b7e7f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:00:01.053526 containerd[2116]: time="2024-07-02T00:00:01.053468760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-222,Uid:a62a206019ca3a978530db3543011dcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb1a7dab33701389d50c26d8ce91773cd08e1a84ee53b44b89740b26065f5f60\"" Jul 2 00:00:01.069740 containerd[2116]: time="2024-07-02T00:00:01.069597380Z" level=info msg="CreateContainer within sandbox \"fb1a7dab33701389d50c26d8ce91773cd08e1a84ee53b44b89740b26065f5f60\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:00:01.074125 containerd[2116]: time="2024-07-02T00:00:01.074068439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-222,Uid:3bfdc56480057871847a070e881a5bec,Namespace:kube-system,Attempt:0,} returns sandbox id \"b290a4053a32f3b70982d14b10745884b3d56cd2848871e80a0776730ae4098b\"" Jul 2 00:00:01.083148 containerd[2116]: time="2024-07-02T00:00:01.082953647Z" level=info msg="CreateContainer within sandbox \"b290a4053a32f3b70982d14b10745884b3d56cd2848871e80a0776730ae4098b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:00:01.095067 containerd[2116]: time="2024-07-02T00:00:01.094974066Z" level=info msg="CreateContainer within sandbox \"a903339053ce04d4e1c25fb8eec9bfc4a9f41dd2abaa0be9860e3d41d89b7e7f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bea2924c97bfa3b0f3f0c385bbad263fac92b32d45e05c74d6bae3ca390bdd97\"" Jul 2 00:00:01.096653 containerd[2116]: time="2024-07-02T00:00:01.096596581Z" level=info msg="StartContainer for 
\"bea2924c97bfa3b0f3f0c385bbad263fac92b32d45e05c74d6bae3ca390bdd97\"" Jul 2 00:00:01.117501 containerd[2116]: time="2024-07-02T00:00:01.116147560Z" level=info msg="CreateContainer within sandbox \"fb1a7dab33701389d50c26d8ce91773cd08e1a84ee53b44b89740b26065f5f60\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"31158ebec7a687f7cca123e825345c7862575c0a7b72d82946c7a4619e56c1fc\"" Jul 2 00:00:01.118210 containerd[2116]: time="2024-07-02T00:00:01.117754575Z" level=info msg="StartContainer for \"31158ebec7a687f7cca123e825345c7862575c0a7b72d82946c7a4619e56c1fc\"" Jul 2 00:00:01.122048 containerd[2116]: time="2024-07-02T00:00:01.121959389Z" level=info msg="CreateContainer within sandbox \"b290a4053a32f3b70982d14b10745884b3d56cd2848871e80a0776730ae4098b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"58d91d1bb82359258c68c5d4dd39c1f289cd2fc562fb108512bd6328fea9962b\"" Jul 2 00:00:01.122865 containerd[2116]: time="2024-07-02T00:00:01.122778690Z" level=info msg="StartContainer for \"58d91d1bb82359258c68c5d4dd39c1f289cd2fc562fb108512bd6328fea9962b\"" Jul 2 00:00:01.317926 containerd[2116]: time="2024-07-02T00:00:01.317787311Z" level=info msg="StartContainer for \"bea2924c97bfa3b0f3f0c385bbad263fac92b32d45e05c74d6bae3ca390bdd97\" returns successfully" Jul 2 00:00:01.348730 containerd[2116]: time="2024-07-02T00:00:01.348642959Z" level=info msg="StartContainer for \"58d91d1bb82359258c68c5d4dd39c1f289cd2fc562fb108512bd6328fea9962b\" returns successfully" Jul 2 00:00:01.388729 containerd[2116]: time="2024-07-02T00:00:01.388540362Z" level=info msg="StartContainer for \"31158ebec7a687f7cca123e825345c7862575c0a7b72d82946c7a4619e56c1fc\" returns successfully" Jul 2 00:00:01.442778 kubelet[3018]: E0702 00:00:01.442705 3018 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.222:6443: connect: connection refused Jul 2 00:00:02.527312 kubelet[3018]: I0702 00:00:02.527258 3018 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-222" Jul 2 00:00:06.586079 kubelet[3018]: E0702 00:00:06.585997 3018 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-222\" not found" node="ip-172-31-30-222" Jul 2 00:00:06.618421 update_engine[2090]: I0702 00:00:06.618340 2090 update_attempter.cc:509] Updating boot flags... 
Jul 2 00:00:06.632705 kubelet[3018]: I0702 00:00:06.632376 3018 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-30-222" Jul 2 00:00:06.710346 kubelet[3018]: E0702 00:00:06.710069 3018 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-30-222.17de3c4abb78e2ee", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-30-222", UID:"ip-172-31-30-222", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-30-222"}, FirstTimestamp:time.Date(2024, time.July, 1, 23, 59, 59, 372616430, time.Local), LastTimestamp:time.Date(2024, time.July, 1, 23, 59, 59, 372616430, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-30-222"}': 'namespaces "default" not found' (will not retry!) Jul 2 00:00:06.786318 kubelet[3018]: E0702 00:00:06.784477 3018 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-30-222.17de3c4abd22a3be", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-30-222", UID:"ip-172-31-30-222", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-30-222"}, FirstTimestamp:time.Date(2024, time.July, 1, 23, 59, 59, 400518590, time.Local), LastTimestamp:time.Date(2024, time.July, 1, 23, 59, 59, 400518590, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-30-222"}': 'namespaces "default" not found' (will not retry!) 
Jul 2 00:00:06.902385 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3312) Jul 2 00:00:07.369875 kubelet[3018]: I0702 00:00:07.369797 3018 apiserver.go:52] "Watching apiserver" Jul 2 00:00:07.394590 kubelet[3018]: I0702 00:00:07.394511 3018 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:00:07.529179 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3311) Jul 2 00:00:08.029379 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3311) Jul 2 00:00:09.795221 systemd[1]: Reloading requested from client PID 3566 ('systemctl') (unit session-7.scope)... Jul 2 00:00:09.795249 systemd[1]: Reloading... Jul 2 00:00:09.972948 zram_generator::config[3613]: No configuration found. Jul 2 00:00:10.206553 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:00:10.390549 systemd[1]: Reloading finished in 594 ms. Jul 2 00:00:10.447796 kubelet[3018]: I0702 00:00:10.447755 3018 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:00:10.448004 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:00:10.463263 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:00:10.464078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:00:10.474022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:00:10.912627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:00:10.925061 (kubelet)[3674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:00:11.072504 kubelet[3674]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:00:11.073647 kubelet[3674]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:00:11.073647 kubelet[3674]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:00:11.073647 kubelet[3674]: I0702 00:00:11.073160 3674 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:00:11.085323 kubelet[3674]: I0702 00:00:11.083900 3674 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:00:11.085323 kubelet[3674]: I0702 00:00:11.084001 3674 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:00:11.085323 kubelet[3674]: I0702 00:00:11.084476 3674 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:00:11.089996 kubelet[3674]: I0702 00:00:11.089947 3674 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 2 00:00:11.093135 kubelet[3674]: I0702 00:00:11.093102 3674 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:00:11.116209 kubelet[3674]: W0702 00:00:11.116177 3674 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 00:00:11.117800 kubelet[3674]: I0702 00:00:11.117756 3674 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 00:00:11.118851 kubelet[3674]: I0702 00:00:11.118822 3674 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:00:11.119121 sudo[3687]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 00:00:11.119802 sudo[3687]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 00:00:11.120330 kubelet[3674]: I0702 00:00:11.119902 3674 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:00:11.120330 kubelet[3674]: I0702 00:00:11.119967 3674 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:00:11.120330 kubelet[3674]: I0702 00:00:11.119987 3674 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:00:11.120330 kubelet[3674]: I0702 00:00:11.120049 3674 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:00:11.121418 kubelet[3674]: I0702 00:00:11.120763 3674 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:00:11.123246 kubelet[3674]: I0702 00:00:11.123212 3674 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:00:11.123441 kubelet[3674]: I0702 00:00:11.123421 3674 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:00:11.123569 kubelet[3674]: I0702 00:00:11.123550 3674 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:00:11.130321 kubelet[3674]: I0702 00:00:11.129669 3674 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:00:11.132720 kubelet[3674]: I0702 00:00:11.132683 3674 server.go:1232] "Started kubelet" Jul 2 
00:00:11.135307 kubelet[3674]: I0702 00:00:11.134265 3674 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:00:11.136847 kubelet[3674]: I0702 00:00:11.136770 3674 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:00:11.139358 kubelet[3674]: I0702 00:00:11.137109 3674 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:00:11.142916 kubelet[3674]: I0702 00:00:11.139968 3674 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:00:11.154303 kubelet[3674]: I0702 00:00:11.152131 3674 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:00:11.167232 kubelet[3674]: I0702 00:00:11.167091 3674 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:00:11.174540 kubelet[3674]: I0702 00:00:11.168365 3674 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:00:11.175607 kubelet[3674]: E0702 00:00:11.175576 3674 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:00:11.176197 kubelet[3674]: E0702 00:00:11.176150 3674 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:00:11.178360 kubelet[3674]: I0702 00:00:11.177761 3674 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:00:11.235321 kubelet[3674]: I0702 00:00:11.233438 3674 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:00:11.243033 kubelet[3674]: I0702 00:00:11.242602 3674 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:00:11.248723 kubelet[3674]: I0702 00:00:11.248679 3674 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:00:11.257378 kubelet[3674]: I0702 00:00:11.257312 3674 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:00:11.257511 kubelet[3674]: E0702 00:00:11.257461 3674 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:00:11.286449 kubelet[3674]: E0702 00:00:11.284913 3674 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Jul 2 00:00:11.294631 kubelet[3674]: I0702 00:00:11.294581 3674 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-222" Jul 2 00:00:11.314857 kubelet[3674]: I0702 00:00:11.314806 3674 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-30-222" Jul 2 00:00:11.314986 kubelet[3674]: I0702 00:00:11.314925 3674 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-30-222" Jul 2 00:00:11.358340 kubelet[3674]: E0702 00:00:11.358294 3674 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:00:11.532617 kubelet[3674]: I0702 00:00:11.530509 3674 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:00:11.532617 kubelet[3674]: I0702 00:00:11.530551 3674 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:00:11.532617 kubelet[3674]: I0702 00:00:11.530586 3674 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:00:11.532617 kubelet[3674]: I0702 00:00:11.530864 3674 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:00:11.532617 kubelet[3674]: I0702 00:00:11.530905 3674 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:00:11.532617 kubelet[3674]: I0702 00:00:11.530922 3674 policy_none.go:49] "None policy: Start" Jul 2 00:00:11.532617 kubelet[3674]: I0702 00:00:11.531982 3674 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:00:11.532617 kubelet[3674]: I0702 00:00:11.532021 3674 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:00:11.532617 kubelet[3674]: I0702 00:00:11.532525 3674 state_mem.go:75] "Updated machine memory state" Jul 2 00:00:11.537608 kubelet[3674]: I0702 00:00:11.536917 3674 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:00:11.544070 kubelet[3674]: I0702 00:00:11.541508 3674 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:00:11.561108 kubelet[3674]: I0702 00:00:11.560310 3674 topology_manager.go:215] "Topology Admit Handler" podUID="a62a206019ca3a978530db3543011dcc" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-222" Jul 2 00:00:11.561108 kubelet[3674]: I0702 00:00:11.560480 3674 topology_manager.go:215] "Topology Admit Handler" podUID="f5f0ff12e727ddf362560b7154f2f04e" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-222" Jul 2 00:00:11.561108 kubelet[3674]: I0702 00:00:11.560549 3674 topology_manager.go:215] "Topology Admit Handler" podUID="3bfdc56480057871847a070e881a5bec" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-222" Jul 2 00:00:11.576623 kubelet[3674]: E0702 00:00:11.576421 3674 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-222\" already exists" 
pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jul 2 00:00:11.592078 kubelet[3674]: I0702 00:00:11.590335 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a62a206019ca3a978530db3543011dcc-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-222\" (UID: \"a62a206019ca3a978530db3543011dcc\") " pod="kube-system/kube-apiserver-ip-172-31-30-222" Jul 2 00:00:11.592078 kubelet[3674]: I0702 00:00:11.590421 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a62a206019ca3a978530db3543011dcc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-222\" (UID: \"a62a206019ca3a978530db3543011dcc\") " pod="kube-system/kube-apiserver-ip-172-31-30-222" Jul 2 00:00:11.592078 kubelet[3674]: I0702 00:00:11.590485 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5f0ff12e727ddf362560b7154f2f04e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"f5f0ff12e727ddf362560b7154f2f04e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jul 2 00:00:11.592078 kubelet[3674]: I0702 00:00:11.590540 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5f0ff12e727ddf362560b7154f2f04e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"f5f0ff12e727ddf362560b7154f2f04e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jul 2 00:00:11.592078 kubelet[3674]: I0702 00:00:11.590591 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3bfdc56480057871847a070e881a5bec-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-222\" (UID: \"3bfdc56480057871847a070e881a5bec\") " pod="kube-system/kube-scheduler-ip-172-31-30-222" Jul 2 00:00:11.592451 kubelet[3674]: I0702 00:00:11.590633 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a62a206019ca3a978530db3543011dcc-ca-certs\") pod \"kube-apiserver-ip-172-31-30-222\" (UID: \"a62a206019ca3a978530db3543011dcc\") " pod="kube-system/kube-apiserver-ip-172-31-30-222" Jul 2 00:00:11.592451 kubelet[3674]: I0702 00:00:11.590681 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5f0ff12e727ddf362560b7154f2f04e-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"f5f0ff12e727ddf362560b7154f2f04e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jul 2 00:00:11.592451 kubelet[3674]: I0702 00:00:11.590725 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5f0ff12e727ddf362560b7154f2f04e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"f5f0ff12e727ddf362560b7154f2f04e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jul 2 00:00:11.592451 kubelet[3674]: I0702 00:00:11.590768 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f5f0ff12e727ddf362560b7154f2f04e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"f5f0ff12e727ddf362560b7154f2f04e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jul 2 00:00:12.127553 kubelet[3674]: I0702 00:00:12.127483 3674 apiserver.go:52] "Watching apiserver" Jul 2 00:00:12.145057 sudo[3687]: pam_unix(sudo:session): session closed for user root Jul 2 00:00:12.175057 kubelet[3674]: I0702 00:00:12.175002 3674 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:00:12.374623 kubelet[3674]: I0702 00:00:12.374260 3674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-222" podStartSLOduration=1.3741456 podCreationTimestamp="2024-07-02 00:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:00:12.360707296 +0000 UTC m=+1.419404270" watchObservedRunningTime="2024-07-02 00:00:12.3741456 +0000 UTC m=+1.432842574" Jul 2 00:00:12.398162 kubelet[3674]: I0702 00:00:12.397563 3674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-222" podStartSLOduration=1.397506995 podCreationTimestamp="2024-07-02 00:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:00:12.374976019 +0000 UTC m=+1.433672981" watchObservedRunningTime="2024-07-02 00:00:12.397506995 +0000 UTC m=+1.456203969" Jul 2 00:00:15.395926 sudo[2479]: pam_unix(sudo:session): session closed for user root Jul 2 00:00:15.419259 sshd[2475]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:15.428545 systemd[1]: sshd@6-172.31.30.222:22-147.75.109.163:49896.service: Deactivated successfully. Jul 2 00:00:15.436785 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:00:15.437108 systemd-logind[2087]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:00:15.442446 systemd-logind[2087]: Removed session 7. Jul 2 00:00:23.832764 kubelet[3674]: I0702 00:00:23.832729 3674 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:00:23.834891 containerd[2116]: time="2024-07-02T00:00:23.834726379Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 00:00:23.835869 kubelet[3674]: I0702 00:00:23.835300 3674 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:00:24.686489 kubelet[3674]: I0702 00:00:24.683300 3674 topology_manager.go:215] "Topology Admit Handler" podUID="0873cd74-b572-4a8d-a1de-31b13a14803f" podNamespace="kube-system" podName="kube-proxy-46tst" Jul 2 00:00:24.702212 kubelet[3674]: I0702 00:00:24.700995 3674 topology_manager.go:215] "Topology Admit Handler" podUID="7661c1b6-d579-418c-a2e4-2344a2f3f75d" podNamespace="kube-system" podName="cilium-kjt8f" Jul 2 00:00:24.770479 kubelet[3674]: I0702 00:00:24.770404 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cilium-run\") pod \"cilium-kjt8f\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " pod="kube-system/cilium-kjt8f" Jul 2 00:00:24.770657 kubelet[3674]: I0702 00:00:24.770514 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-xtables-lock\") pod \"cilium-kjt8f\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " pod="kube-system/cilium-kjt8f" Jul 2 00:00:24.770657 kubelet[3674]: I0702 00:00:24.770567 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94j8w\" (UniqueName: \"kubernetes.io/projected/0873cd74-b572-4a8d-a1de-31b13a14803f-kube-api-access-94j8w\") pod \"kube-proxy-46tst\" (UID: \"0873cd74-b572-4a8d-a1de-31b13a14803f\") " pod="kube-system/kube-proxy-46tst" Jul 2 00:00:24.770657 kubelet[3674]: I0702 00:00:24.770622 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0873cd74-b572-4a8d-a1de-31b13a14803f-kube-proxy\") pod \"kube-proxy-46tst\" (UID: \"0873cd74-b572-4a8d-a1de-31b13a14803f\") " pod="kube-system/kube-proxy-46tst" Jul 2 00:00:24.770838 kubelet[3674]: I0702 00:00:24.770669 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0873cd74-b572-4a8d-a1de-31b13a14803f-lib-modules\") pod \"kube-proxy-46tst\" (UID: \"0873cd74-b572-4a8d-a1de-31b13a14803f\") " pod="kube-system/kube-proxy-46tst" Jul 2 00:00:24.770838 kubelet[3674]: I0702 00:00:24.770712 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-hostproc\") pod \"cilium-kjt8f\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " pod="kube-system/cilium-kjt8f" Jul 2 00:00:24.770838 kubelet[3674]: I0702 00:00:24.770758 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0873cd74-b572-4a8d-a1de-31b13a14803f-xtables-lock\") pod \"kube-proxy-46tst\" (UID: \"0873cd74-b572-4a8d-a1de-31b13a14803f\") " pod="kube-system/kube-proxy-46tst" Jul 2 00:00:24.770838 kubelet[3674]: I0702 00:00:24.770802 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cni-path\") pod \"cilium-kjt8f\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " 
pod="kube-system/cilium-kjt8f" Jul 2 00:00:24.771049 kubelet[3674]: I0702 00:00:24.770846 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7661c1b6-d579-418c-a2e4-2344a2f3f75d-clustermesh-secrets\") pod \"cilium-kjt8f\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " pod="kube-system/cilium-kjt8f" Jul 2 00:00:24.771049 kubelet[3674]: I0702 00:00:24.770900 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jmsp\" (UniqueName: \"kubernetes.io/projected/7661c1b6-d579-418c-a2e4-2344a2f3f75d-kube-api-access-9jmsp\") pod \"cilium-kjt8f\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " pod="kube-system/cilium-kjt8f" Jul 2 00:00:24.771049 kubelet[3674]: I0702 00:00:24.770945 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-host-proc-sys-kernel\") pod \"cilium-kjt8f\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " pod="kube-system/cilium-kjt8f" Jul 2 00:00:24.771049 kubelet[3674]: I0702 00:00:24.770989 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7661c1b6-d579-418c-a2e4-2344a2f3f75d-hubble-tls\") pod \"cilium-kjt8f\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " pod="kube-system/cilium-kjt8f" Jul 2 00:00:24.771049 kubelet[3674]: I0702 00:00:24.771030 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-bpf-maps\") pod \"cilium-kjt8f\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " pod="kube-system/cilium-kjt8f" Jul 2 00:00:24.771321 kubelet[3674]: I0702 00:00:24.771074 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-etc-cni-netd\") pod \"cilium-kjt8f\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " pod="kube-system/cilium-kjt8f" Jul 2 00:00:24.771321 kubelet[3674]: I0702 00:00:24.771119 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cilium-config-path\") pod \"cilium-kjt8f\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " pod="kube-system/cilium-kjt8f" Jul 2 00:00:24.771321 kubelet[3674]: I0702 00:00:24.771165 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cilium-cgroup\") pod \"cilium-kjt8f\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " pod="kube-system/cilium-kjt8f" Jul 2 00:00:24.771321 kubelet[3674]: I0702 00:00:24.771217 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-lib-modules\") pod \"cilium-kjt8f\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " pod="kube-system/cilium-kjt8f" Jul 2 00:00:24.771321 kubelet[3674]: I0702 00:00:24.771265 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-host-proc-sys-net\") pod \"cilium-kjt8f\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " pod="kube-system/cilium-kjt8f" Jul 2 00:00:24.850786 kubelet[3674]: I0702 00:00:24.850637 3674 topology_manager.go:215] "Topology Admit Handler" podUID="503f6603-be85-4ce9-96b0-1118864d9105" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-j9scj" Jul 2 00:00:24.979411 kubelet[3674]: I0702 00:00:24.974308 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/503f6603-be85-4ce9-96b0-1118864d9105-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-j9scj\" (UID: \"503f6603-be85-4ce9-96b0-1118864d9105\") " pod="kube-system/cilium-operator-6bc8ccdb58-j9scj" Jul 2 00:00:24.979411 kubelet[3674]: I0702 00:00:24.974418 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsclp\" (UniqueName: \"kubernetes.io/projected/503f6603-be85-4ce9-96b0-1118864d9105-kube-api-access-zsclp\") pod \"cilium-operator-6bc8ccdb58-j9scj\" (UID: \"503f6603-be85-4ce9-96b0-1118864d9105\") " pod="kube-system/cilium-operator-6bc8ccdb58-j9scj" Jul 2 00:00:25.036331 containerd[2116]: time="2024-07-02T00:00:25.035770057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kjt8f,Uid:7661c1b6-d579-418c-a2e4-2344a2f3f75d,Namespace:kube-system,Attempt:0,}" Jul 2 00:00:25.116753 containerd[2116]: time="2024-07-02T00:00:25.116408332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:25.116753 containerd[2116]: time="2024-07-02T00:00:25.116569765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:25.116753 containerd[2116]: time="2024-07-02T00:00:25.116668622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:25.116753 containerd[2116]: time="2024-07-02T00:00:25.116738882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:25.170598 containerd[2116]: time="2024-07-02T00:00:25.170383419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-j9scj,Uid:503f6603-be85-4ce9-96b0-1118864d9105,Namespace:kube-system,Attempt:0,}" Jul 2 00:00:25.184324 containerd[2116]: time="2024-07-02T00:00:25.183987225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kjt8f,Uid:7661c1b6-d579-418c-a2e4-2344a2f3f75d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\"" Jul 2 00:00:25.189062 containerd[2116]: time="2024-07-02T00:00:25.188692617Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 00:00:25.216553 containerd[2116]: time="2024-07-02T00:00:25.216369473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:25.216789 containerd[2116]: time="2024-07-02T00:00:25.216499894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:25.216789 containerd[2116]: time="2024-07-02T00:00:25.216568028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:25.216789 containerd[2116]: time="2024-07-02T00:00:25.216606231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:25.296931 containerd[2116]: time="2024-07-02T00:00:25.296656801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-46tst,Uid:0873cd74-b572-4a8d-a1de-31b13a14803f,Namespace:kube-system,Attempt:0,}" Jul 2 00:00:25.303046 containerd[2116]: time="2024-07-02T00:00:25.302972953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-j9scj,Uid:503f6603-be85-4ce9-96b0-1118864d9105,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80\"" Jul 2 00:00:25.335555 containerd[2116]: time="2024-07-02T00:00:25.335123341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:25.335555 containerd[2116]: time="2024-07-02T00:00:25.335213746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:25.335555 containerd[2116]: time="2024-07-02T00:00:25.335245154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:25.335555 containerd[2116]: time="2024-07-02T00:00:25.335349162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:25.400811 containerd[2116]: time="2024-07-02T00:00:25.400655919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-46tst,Uid:0873cd74-b572-4a8d-a1de-31b13a14803f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6c2fabbc84e90810419d4d15466ffc8d3969b71cca80d0b3d3c4933ad189723\"" Jul 2 00:00:25.409106 containerd[2116]: time="2024-07-02T00:00:25.408733952Z" level=info msg="CreateContainer within sandbox \"d6c2fabbc84e90810419d4d15466ffc8d3969b71cca80d0b3d3c4933ad189723\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:00:25.436947 containerd[2116]: time="2024-07-02T00:00:25.436871154Z" level=info msg="CreateContainer within sandbox \"d6c2fabbc84e90810419d4d15466ffc8d3969b71cca80d0b3d3c4933ad189723\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"25b54c0aa23cdc793cdb884aacc4ff3a2230e564d1f71089e060d919c4b58f8c\"" Jul 2 00:00:25.438118 containerd[2116]: time="2024-07-02T00:00:25.437948392Z" level=info msg="StartContainer for \"25b54c0aa23cdc793cdb884aacc4ff3a2230e564d1f71089e060d919c4b58f8c\"" Jul 2 00:00:25.540477 containerd[2116]: time="2024-07-02T00:00:25.540406625Z" level=info msg="StartContainer for \"25b54c0aa23cdc793cdb884aacc4ff3a2230e564d1f71089e060d919c4b58f8c\" returns successfully" Jul 2 00:00:26.400727 kubelet[3674]: I0702 00:00:26.400634 3674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-46tst" podStartSLOduration=2.400547206 podCreationTimestamp="2024-07-02 00:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:00:26.400369661 +0000 UTC m=+15.459066707" watchObservedRunningTime="2024-07-02 00:00:26.400547206 +0000 UTC m=+15.459268240" Jul 2 00:00:30.010443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851134981.mount: Deactivated successfully. 
Jul 2 00:00:32.753720 containerd[2116]: time="2024-07-02T00:00:32.753657638Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:32.756547 containerd[2116]: time="2024-07-02T00:00:32.756480609Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651486" Jul 2 00:00:32.758054 containerd[2116]: time="2024-07-02T00:00:32.757984240Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:32.764082 containerd[2116]: time="2024-07-02T00:00:32.763789141Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.574466642s" Jul 2 00:00:32.764082 containerd[2116]: time="2024-07-02T00:00:32.763893833Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 2 00:00:32.765838 containerd[2116]: time="2024-07-02T00:00:32.765180024Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 00:00:32.769509 containerd[2116]: time="2024-07-02T00:00:32.769199045Z" level=info msg="CreateContainer within sandbox \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:00:32.794010 containerd[2116]: time="2024-07-02T00:00:32.793941850Z" level=info msg="CreateContainer within sandbox \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e\"" Jul 2 00:00:32.794992 containerd[2116]: time="2024-07-02T00:00:32.794928179Z" level=info msg="StartContainer for \"bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e\"" Jul 2 00:00:32.893612 containerd[2116]: time="2024-07-02T00:00:32.893539317Z" level=info msg="StartContainer for \"bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e\" returns successfully" Jul 2 00:00:32.952189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e-rootfs.mount: Deactivated successfully. 
Jul 2 00:00:33.815494 containerd[2116]: time="2024-07-02T00:00:33.815168051Z" level=info msg="shim disconnected" id=bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e namespace=k8s.io Jul 2 00:00:33.815494 containerd[2116]: time="2024-07-02T00:00:33.815240555Z" level=warning msg="cleaning up after shim disconnected" id=bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e namespace=k8s.io Jul 2 00:00:33.815494 containerd[2116]: time="2024-07-02T00:00:33.815260954Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:00:34.414953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2743754799.mount: Deactivated successfully. Jul 2 00:00:34.445792 containerd[2116]: time="2024-07-02T00:00:34.443779896Z" level=info msg="CreateContainer within sandbox \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:00:34.492069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1487528862.mount: Deactivated successfully. Jul 2 00:00:34.499816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1350347377.mount: Deactivated successfully. Jul 2 00:00:34.503084 containerd[2116]: time="2024-07-02T00:00:34.502723198Z" level=info msg="CreateContainer within sandbox \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460\"" Jul 2 00:00:34.504391 containerd[2116]: time="2024-07-02T00:00:34.504313705Z" level=info msg="StartContainer for \"c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460\"" Jul 2 00:00:34.654981 containerd[2116]: time="2024-07-02T00:00:34.654905060Z" level=info msg="StartContainer for \"c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460\" returns successfully" Jul 2 00:00:34.667830 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:00:34.668488 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:00:34.668610 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:00:34.688455 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:00:34.735014 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 2 00:00:34.792136 containerd[2116]: time="2024-07-02T00:00:34.791968330Z" level=info msg="shim disconnected" id=c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460 namespace=k8s.io Jul 2 00:00:34.792136 containerd[2116]: time="2024-07-02T00:00:34.792043788Z" level=warning msg="cleaning up after shim disconnected" id=c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460 namespace=k8s.io Jul 2 00:00:34.792136 containerd[2116]: time="2024-07-02T00:00:34.792065086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:00:35.245822 containerd[2116]: time="2024-07-02T00:00:35.245766511Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:35.247891 containerd[2116]: time="2024-07-02T00:00:35.247812057Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138266" Jul 2 00:00:35.249261 containerd[2116]: time="2024-07-02T00:00:35.249201992Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:35.253954 containerd[2116]: time="2024-07-02T00:00:35.253812392Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.488549934s" Jul 2 00:00:35.253954 containerd[2116]: time="2024-07-02T00:00:35.253873983Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 2 00:00:35.257207 containerd[2116]: time="2024-07-02T00:00:35.256969694Z" level=info msg="CreateContainer within sandbox \"0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 00:00:35.275814 containerd[2116]: time="2024-07-02T00:00:35.275736741Z" level=info msg="CreateContainer within sandbox \"0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b32d7986fd7c41f455ee5fa7e8d1a2df65193ff8bdd307768d4fce2023635e6a\"" Jul 2 00:00:35.279001 containerd[2116]: time="2024-07-02T00:00:35.277976256Z" level=info msg="StartContainer for \"b32d7986fd7c41f455ee5fa7e8d1a2df65193ff8bdd307768d4fce2023635e6a\"" Jul 2 00:00:35.370619 containerd[2116]: time="2024-07-02T00:00:35.370410955Z" level=info msg="StartContainer for \"b32d7986fd7c41f455ee5fa7e8d1a2df65193ff8bdd307768d4fce2023635e6a\" returns successfully" Jul 2 00:00:35.458103 containerd[2116]: time="2024-07-02T00:00:35.457867330Z" level=info msg="CreateContainer within sandbox \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:00:35.502931 containerd[2116]: time="2024-07-02T00:00:35.500996508Z" level=info msg="CreateContainer within sandbox 
\"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d\"" Jul 2 00:00:35.510511 containerd[2116]: time="2024-07-02T00:00:35.504203647Z" level=info msg="StartContainer for \"f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d\"" Jul 2 00:00:35.522537 kubelet[3674]: I0702 00:00:35.522149 3674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-j9scj" podStartSLOduration=1.573309692 podCreationTimestamp="2024-07-02 00:00:24 +0000 UTC" firstStartedPulling="2024-07-02 00:00:25.305661205 +0000 UTC m=+14.364358167" lastFinishedPulling="2024-07-02 00:00:35.254445396 +0000 UTC m=+24.313142346" observedRunningTime="2024-07-02 00:00:35.459406331 +0000 UTC m=+24.518103329" watchObservedRunningTime="2024-07-02 00:00:35.522093871 +0000 UTC m=+24.580790857" Jul 2 00:00:35.698392 containerd[2116]: time="2024-07-02T00:00:35.698251391Z" level=info msg="StartContainer for \"f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d\" returns successfully" Jul 2 00:00:35.921906 containerd[2116]: time="2024-07-02T00:00:35.921833032Z" level=info msg="shim disconnected" id=f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d namespace=k8s.io Jul 2 00:00:35.923657 containerd[2116]: time="2024-07-02T00:00:35.922353516Z" level=warning msg="cleaning up after shim disconnected" id=f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d namespace=k8s.io Jul 2 00:00:35.923657 containerd[2116]: time="2024-07-02T00:00:35.923367987Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:00:36.405227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d-rootfs.mount: Deactivated successfully. 
Jul 2 00:00:36.470007 containerd[2116]: time="2024-07-02T00:00:36.469662373Z" level=info msg="CreateContainer within sandbox \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:00:36.513761 containerd[2116]: time="2024-07-02T00:00:36.513681244Z" level=info msg="CreateContainer within sandbox \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14\"" Jul 2 00:00:36.520325 containerd[2116]: time="2024-07-02T00:00:36.517159803Z" level=info msg="StartContainer for \"c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14\"" Jul 2 00:00:36.847334 containerd[2116]: time="2024-07-02T00:00:36.846568883Z" level=info msg="StartContainer for \"c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14\" returns successfully" Jul 2 00:00:36.936746 containerd[2116]: time="2024-07-02T00:00:36.936615805Z" level=info msg="shim disconnected" id=c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14 namespace=k8s.io Jul 2 00:00:36.937003 containerd[2116]: time="2024-07-02T00:00:36.936756923Z" level=warning msg="cleaning up after shim disconnected" id=c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14 namespace=k8s.io Jul 2 00:00:36.937003 containerd[2116]: time="2024-07-02T00:00:36.936781476Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:00:37.400021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14-rootfs.mount: Deactivated successfully. Jul 2 00:00:37.473058 containerd[2116]: time="2024-07-02T00:00:37.472913731Z" level=info msg="CreateContainer within sandbox \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:00:37.536655 containerd[2116]: time="2024-07-02T00:00:37.536581994Z" level=info msg="CreateContainer within sandbox \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6\"" Jul 2 00:00:37.538610 containerd[2116]: time="2024-07-02T00:00:37.538520135Z" level=info msg="StartContainer for \"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6\"" Jul 2 00:00:37.686127 containerd[2116]: time="2024-07-02T00:00:37.685111871Z" level=info msg="StartContainer for \"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6\" returns successfully" Jul 2 00:00:37.856742 kubelet[3674]: I0702 00:00:37.856685 3674 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 00:00:37.902443 kubelet[3674]: I0702 00:00:37.902367 3674 topology_manager.go:215] "Topology Admit Handler" podUID="58e10f85-4b61-46a8-b804-dfeddb5f02c6" podNamespace="kube-system" podName="coredns-5dd5756b68-rnfxq" Jul 2 00:00:37.910025 kubelet[3674]: I0702 00:00:37.909796 3674 topology_manager.go:215] "Topology Admit Handler" podUID="4c5c65b8-9ed7-4548-bbc7-9249b91de94e" podNamespace="kube-system" podName="coredns-5dd5756b68-mx7fq" Jul 2 00:00:37.992563 kubelet[3674]: I0702 00:00:37.992101 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs8wk\" (UniqueName: 
\"kubernetes.io/projected/58e10f85-4b61-46a8-b804-dfeddb5f02c6-kube-api-access-hs8wk\") pod \"coredns-5dd5756b68-rnfxq\" (UID: \"58e10f85-4b61-46a8-b804-dfeddb5f02c6\") " pod="kube-system/coredns-5dd5756b68-rnfxq" Jul 2 00:00:37.992563 kubelet[3674]: I0702 00:00:37.992203 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c5c65b8-9ed7-4548-bbc7-9249b91de94e-config-volume\") pod \"coredns-5dd5756b68-mx7fq\" (UID: \"4c5c65b8-9ed7-4548-bbc7-9249b91de94e\") " pod="kube-system/coredns-5dd5756b68-mx7fq" Jul 2 00:00:37.992563 kubelet[3674]: I0702 00:00:37.992257 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58e10f85-4b61-46a8-b804-dfeddb5f02c6-config-volume\") pod \"coredns-5dd5756b68-rnfxq\" (UID: \"58e10f85-4b61-46a8-b804-dfeddb5f02c6\") " pod="kube-system/coredns-5dd5756b68-rnfxq" Jul 2 00:00:37.992563 kubelet[3674]: I0702 00:00:37.992334 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-425rl\" (UniqueName: \"kubernetes.io/projected/4c5c65b8-9ed7-4548-bbc7-9249b91de94e-kube-api-access-425rl\") pod \"coredns-5dd5756b68-mx7fq\" (UID: \"4c5c65b8-9ed7-4548-bbc7-9249b91de94e\") " pod="kube-system/coredns-5dd5756b68-mx7fq" Jul 2 00:00:38.227585 containerd[2116]: time="2024-07-02T00:00:38.226674237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rnfxq,Uid:58e10f85-4b61-46a8-b804-dfeddb5f02c6,Namespace:kube-system,Attempt:0,}" Jul 2 00:00:38.242486 containerd[2116]: time="2024-07-02T00:00:38.242430083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-mx7fq,Uid:4c5c65b8-9ed7-4548-bbc7-9249b91de94e,Namespace:kube-system,Attempt:0,}" Jul 2 00:00:38.522330 kubelet[3674]: I0702 00:00:38.522196 3674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kjt8f" podStartSLOduration=6.943719244 podCreationTimestamp="2024-07-02 00:00:24 +0000 UTC" firstStartedPulling="2024-07-02 00:00:25.186493538 +0000 UTC m=+14.245190488" lastFinishedPulling="2024-07-02 00:00:32.764425387 +0000 UTC m=+21.823122361" observedRunningTime="2024-07-02 00:00:38.520223436 +0000 UTC m=+27.578920410" watchObservedRunningTime="2024-07-02 00:00:38.521651117 +0000 UTC m=+27.580348103" Jul 2 00:00:40.619455 systemd-networkd[1691]: cilium_host: Link UP Jul 2 00:00:40.619728 systemd-networkd[1691]: cilium_net: Link UP Jul 2 00:00:40.619736 systemd-networkd[1691]: cilium_net: Gained carrier Jul 2 00:00:40.620148 systemd-networkd[1691]: cilium_host: Gained carrier Jul 2 00:00:40.624996 (udev-worker)[4465]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:00:40.627876 (udev-worker)[4463]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 00:00:40.801890 systemd-networkd[1691]: cilium_vxlan: Link UP Jul 2 00:00:40.801905 systemd-networkd[1691]: cilium_vxlan: Gained carrier Jul 2 00:00:40.851849 systemd-networkd[1691]: cilium_host: Gained IPv6LL Jul 2 00:00:40.947880 systemd-networkd[1691]: cilium_net: Gained IPv6LL Jul 2 00:00:41.287512 kernel: NET: Registered PF_ALG protocol family Jul 2 00:00:42.299929 systemd-networkd[1691]: cilium_vxlan: Gained IPv6LL Jul 2 00:00:42.639122 systemd-networkd[1691]: lxc_health: Link UP Jul 2 00:00:42.643003 systemd-networkd[1691]: lxc_health: Gained carrier Jul 2 00:00:43.346221 systemd-networkd[1691]: lxcbaaea0d90a31: Link UP Jul 2 00:00:43.364151 kernel: eth0: renamed from tmp8a8b7 Jul 2 00:00:43.377363 systemd-networkd[1691]: lxcbaaea0d90a31: Gained carrier Jul 2 00:00:43.385051 systemd-networkd[1691]: lxcf98ef910689a: Link UP Jul 2 00:00:43.404461 kernel: eth0: renamed from tmpb3900 Jul 2 00:00:43.405814 (udev-worker)[4507]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:00:43.410950 systemd-networkd[1691]: lxcf98ef910689a: Gained carrier Jul 2 00:00:43.519324 kubelet[3674]: I0702 00:00:43.515099 3674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:00:43.771552 systemd-networkd[1691]: lxc_health: Gained IPv6LL Jul 2 00:00:45.180416 systemd-networkd[1691]: lxcf98ef910689a: Gained IPv6LL Jul 2 00:00:45.243549 systemd-networkd[1691]: lxcbaaea0d90a31: Gained IPv6LL Jul 2 00:00:47.931718 ntpd[2075]: Listen normally on 6 cilium_host 192.168.0.148:123 Jul 2 00:00:47.933006 ntpd[2075]: 2 Jul 00:00:47 ntpd[2075]: Listen normally on 6 cilium_host 192.168.0.148:123 Jul 2 00:00:47.933006 ntpd[2075]: 2 Jul 00:00:47 ntpd[2075]: Listen normally on 7 cilium_net [fe80::804a:4fff:feb4:806e%4]:123 Jul 2 00:00:47.933006 ntpd[2075]: 2 Jul 00:00:47 ntpd[2075]: Listen normally on 8 cilium_host [fe80::b0a6:5fff:fec6:d56a%5]:123 Jul 2 00:00:47.933006 ntpd[2075]: 2 Jul 00:00:47 ntpd[2075]: Listen normally on 9 cilium_vxlan [fe80::9805:6cff:fe2e:698b%6]:123 Jul 2 00:00:47.933006 ntpd[2075]: 2 Jul 00:00:47 ntpd[2075]: Listen normally on 10 lxc_health [fe80::88c7:74ff:feac:1cb7%8]:123 Jul 2 00:00:47.933006 ntpd[2075]: 2 Jul 00:00:47 ntpd[2075]: Listen normally on 11 lxcbaaea0d90a31 [fe80::2484:11ff:fec2:ece9%10]:123 Jul 2 00:00:47.933006 ntpd[2075]: 2 Jul 00:00:47 ntpd[2075]: Listen normally on 12 lxcf98ef910689a [fe80::9c2b:4ff:fecc:3f14%12]:123 Jul 2 00:00:47.931854 ntpd[2075]: Listen normally on 7 cilium_net [fe80::804a:4fff:feb4:806e%4]:123 Jul 2 00:00:47.931937 ntpd[2075]: Listen normally on 8 cilium_host [fe80::b0a6:5fff:fec6:d56a%5]:123 Jul 2 00:00:47.932003 ntpd[2075]: Listen normally on 9 cilium_vxlan [fe80::9805:6cff:fe2e:698b%6]:123 Jul 2 00:00:47.932070 ntpd[2075]: Listen normally on 10 lxc_health [fe80::88c7:74ff:feac:1cb7%8]:123 Jul 2 00:00:47.932149 ntpd[2075]: Listen normally on 11 lxcbaaea0d90a31 [fe80::2484:11ff:fec2:ece9%10]:123 Jul 2 00:00:47.932214 ntpd[2075]: Listen normally on 12 lxcf98ef910689a [fe80::9c2b:4ff:fecc:3f14%12]:123 Jul 2 00:00:51.926462 containerd[2116]: time="2024-07-02T00:00:51.924336768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:51.927174 containerd[2116]: time="2024-07-02T00:00:51.926677434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:51.927174 containerd[2116]: time="2024-07-02T00:00:51.926739841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:51.927174 containerd[2116]: time="2024-07-02T00:00:51.926774575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:52.034505 containerd[2116]: time="2024-07-02T00:00:52.033925532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:52.034505 containerd[2116]: time="2024-07-02T00:00:52.034023453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:52.034505 containerd[2116]: time="2024-07-02T00:00:52.034064226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:52.034505 containerd[2116]: time="2024-07-02T00:00:52.034098335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:52.159649 containerd[2116]: time="2024-07-02T00:00:52.159349526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rnfxq,Uid:58e10f85-4b61-46a8-b804-dfeddb5f02c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a8b7971c49c6768bdb772c4e1f5954c21a5161dba06cb9e1f95d525f7d680a9\"" Jul 2 00:00:52.174907 containerd[2116]: time="2024-07-02T00:00:52.174135383Z" level=info msg="CreateContainer within sandbox \"8a8b7971c49c6768bdb772c4e1f5954c21a5161dba06cb9e1f95d525f7d680a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:00:52.212505 containerd[2116]: time="2024-07-02T00:00:52.211100501Z" level=info msg="CreateContainer within sandbox \"8a8b7971c49c6768bdb772c4e1f5954c21a5161dba06cb9e1f95d525f7d680a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b90e26d275d24a51ee96b0843552f66f58c6fdd9c1e4394692d8aaaae24ebe5\"" Jul 2 00:00:52.214781 containerd[2116]: time="2024-07-02T00:00:52.212900020Z" level=info msg="StartContainer for \"6b90e26d275d24a51ee96b0843552f66f58c6fdd9c1e4394692d8aaaae24ebe5\"" Jul 2 00:00:52.289057 containerd[2116]: time="2024-07-02T00:00:52.288988466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-mx7fq,Uid:4c5c65b8-9ed7-4548-bbc7-9249b91de94e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3900038bb7b3bcd045e607055d9f435a1c12c232f26880c2404214a36e10a98\"" Jul 2 00:00:52.301357 containerd[2116]: time="2024-07-02T00:00:52.301269415Z" level=info msg="CreateContainer within sandbox \"b3900038bb7b3bcd045e607055d9f435a1c12c232f26880c2404214a36e10a98\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:00:52.332185 containerd[2116]: time="2024-07-02T00:00:52.332102516Z" level=info msg="CreateContainer within sandbox \"b3900038bb7b3bcd045e607055d9f435a1c12c232f26880c2404214a36e10a98\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f814ea3accd4bf48834b257fc1fc544723e5f1d7c6e938b2a5ec68e767b180c1\"" Jul 2 00:00:52.336339 containerd[2116]: time="2024-07-02T00:00:52.333547691Z" level=info msg="StartContainer for \"f814ea3accd4bf48834b257fc1fc544723e5f1d7c6e938b2a5ec68e767b180c1\"" Jul 2 00:00:52.385830 containerd[2116]: 
time="2024-07-02T00:00:52.385625169Z" level=info msg="StartContainer for \"6b90e26d275d24a51ee96b0843552f66f58c6fdd9c1e4394692d8aaaae24ebe5\" returns successfully" Jul 2 00:00:52.471031 containerd[2116]: time="2024-07-02T00:00:52.470837982Z" level=info msg="StartContainer for \"f814ea3accd4bf48834b257fc1fc544723e5f1d7c6e938b2a5ec68e767b180c1\" returns successfully" Jul 2 00:00:52.561358 kubelet[3674]: I0702 00:00:52.560816 3674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-mx7fq" podStartSLOduration=28.560751001 podCreationTimestamp="2024-07-02 00:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:00:52.556764625 +0000 UTC m=+41.615461587" watchObservedRunningTime="2024-07-02 00:00:52.560751001 +0000 UTC m=+41.619447987" Jul 2 00:00:53.561660 kubelet[3674]: I0702 00:00:53.560606 3674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rnfxq" podStartSLOduration=29.560547392 podCreationTimestamp="2024-07-02 00:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:00:52.614407532 +0000 UTC m=+41.673104890" watchObservedRunningTime="2024-07-02 00:00:53.560547392 +0000 UTC m=+42.619244366" Jul 2 00:00:55.477727 systemd[1]: Started sshd@7-172.31.30.222:22-147.75.109.163:56318.service - OpenSSH per-connection server daemon (147.75.109.163:56318). Jul 2 00:00:55.658671 sshd[5037]: Accepted publickey for core from 147.75.109.163 port 56318 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:55.661839 sshd[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:55.669631 systemd-logind[2087]: New session 8 of user core. Jul 2 00:00:55.680893 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 00:00:55.948456 sshd[5037]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:55.955827 systemd[1]: sshd@7-172.31.30.222:22-147.75.109.163:56318.service: Deactivated successfully. Jul 2 00:00:55.965631 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:00:55.966077 systemd-logind[2087]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:00:55.974750 systemd-logind[2087]: Removed session 8. Jul 2 00:01:00.980737 systemd[1]: Started sshd@8-172.31.30.222:22-147.75.109.163:56322.service - OpenSSH per-connection server daemon (147.75.109.163:56322). Jul 2 00:01:01.163443 sshd[5055]: Accepted publickey for core from 147.75.109.163 port 56322 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:01.166170 sshd[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:01.174440 systemd-logind[2087]: New session 9 of user core. Jul 2 00:01:01.185334 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:01:01.424245 sshd[5055]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:01.431775 systemd[1]: sshd@8-172.31.30.222:22-147.75.109.163:56322.service: Deactivated successfully. Jul 2 00:01:01.438239 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:01:01.438651 systemd-logind[2087]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:01:01.443129 systemd-logind[2087]: Removed session 9. 
Jul 2 00:01:06.458772 systemd[1]: Started sshd@9-172.31.30.222:22-147.75.109.163:44060.service - OpenSSH per-connection server daemon (147.75.109.163:44060). Jul 2 00:01:06.645330 sshd[5070]: Accepted publickey for core from 147.75.109.163 port 44060 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:06.651596 sshd[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:06.660403 systemd-logind[2087]: New session 10 of user core. Jul 2 00:01:06.666179 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 00:01:06.909895 sshd[5070]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:06.916411 systemd[1]: sshd@9-172.31.30.222:22-147.75.109.163:44060.service: Deactivated successfully. Jul 2 00:01:06.924689 systemd-logind[2087]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:01:06.926074 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:01:06.929646 systemd-logind[2087]: Removed session 10. Jul 2 00:01:11.949010 systemd[1]: Started sshd@10-172.31.30.222:22-147.75.109.163:44074.service - OpenSSH per-connection server daemon (147.75.109.163:44074). Jul 2 00:01:12.128706 sshd[5087]: Accepted publickey for core from 147.75.109.163 port 44074 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:12.131634 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:12.140414 systemd-logind[2087]: New session 11 of user core. Jul 2 00:01:12.149861 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:01:12.391224 sshd[5087]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:12.405396 systemd-logind[2087]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:01:12.408058 systemd[1]: sshd@10-172.31.30.222:22-147.75.109.163:44074.service: Deactivated successfully. Jul 2 00:01:12.417943 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:01:12.421942 systemd-logind[2087]: Removed session 11. Jul 2 00:01:17.424792 systemd[1]: Started sshd@11-172.31.30.222:22-147.75.109.163:43534.service - OpenSSH per-connection server daemon (147.75.109.163:43534). Jul 2 00:01:17.602480 sshd[5102]: Accepted publickey for core from 147.75.109.163 port 43534 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:17.605997 sshd[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:17.615746 systemd-logind[2087]: New session 12 of user core. Jul 2 00:01:17.622829 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:01:17.855433 sshd[5102]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:17.863697 systemd-logind[2087]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:01:17.865023 systemd[1]: sshd@11-172.31.30.222:22-147.75.109.163:43534.service: Deactivated successfully. Jul 2 00:01:17.871722 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:01:17.874446 systemd-logind[2087]: Removed session 12. Jul 2 00:01:17.886804 systemd[1]: Started sshd@12-172.31.30.222:22-147.75.109.163:43544.service - OpenSSH per-connection server daemon (147.75.109.163:43544). Jul 2 00:01:18.067543 sshd[5117]: Accepted publickey for core from 147.75.109.163 port 43544 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:18.070081 sshd[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:18.077762 systemd-logind[2087]: New session 13 of user core. 
Jul 2 00:01:18.085727 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:01:19.643223 sshd[5117]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:19.658980 systemd-logind[2087]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:01:19.659389 systemd[1]: sshd@12-172.31.30.222:22-147.75.109.163:43544.service: Deactivated successfully. Jul 2 00:01:19.675444 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:01:19.695861 systemd[1]: Started sshd@13-172.31.30.222:22-147.75.109.163:43548.service - OpenSSH per-connection server daemon (147.75.109.163:43548). Jul 2 00:01:19.697900 systemd-logind[2087]: Removed session 13. Jul 2 00:01:19.870724 sshd[5130]: Accepted publickey for core from 147.75.109.163 port 43548 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:19.873351 sshd[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:19.880891 systemd-logind[2087]: New session 14 of user core. Jul 2 00:01:19.888936 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:01:20.140628 sshd[5130]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:20.146570 systemd[1]: sshd@13-172.31.30.222:22-147.75.109.163:43548.service: Deactivated successfully. Jul 2 00:01:20.154110 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:01:20.157686 systemd-logind[2087]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:01:20.160201 systemd-logind[2087]: Removed session 14. Jul 2 00:01:25.173804 systemd[1]: Started sshd@14-172.31.30.222:22-147.75.109.163:60728.service - OpenSSH per-connection server daemon (147.75.109.163:60728). Jul 2 00:01:25.355416 sshd[5143]: Accepted publickey for core from 147.75.109.163 port 60728 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:25.357917 sshd[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:25.366998 systemd-logind[2087]: New session 15 of user core. Jul 2 00:01:25.373787 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:01:25.618744 sshd[5143]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:25.625598 systemd-logind[2087]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:01:25.626850 systemd[1]: sshd@14-172.31.30.222:22-147.75.109.163:60728.service: Deactivated successfully. Jul 2 00:01:25.635183 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:01:25.638533 systemd-logind[2087]: Removed session 15. Jul 2 00:01:30.656800 systemd[1]: Started sshd@15-172.31.30.222:22-147.75.109.163:60740.service - OpenSSH per-connection server daemon (147.75.109.163:60740). Jul 2 00:01:30.828879 sshd[5159]: Accepted publickey for core from 147.75.109.163 port 60740 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:30.832509 sshd[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:30.842597 systemd-logind[2087]: New session 16 of user core. Jul 2 00:01:30.846842 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:01:31.093917 sshd[5159]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:31.099960 systemd[1]: sshd@15-172.31.30.222:22-147.75.109.163:60740.service: Deactivated successfully. Jul 2 00:01:31.108531 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:01:31.111922 systemd-logind[2087]: Session 16 logged out. Waiting for processes to exit. 
Jul 2 00:01:31.114616 systemd-logind[2087]: Removed session 16. Jul 2 00:01:36.130769 systemd[1]: Started sshd@16-172.31.30.222:22-147.75.109.163:38980.service - OpenSSH per-connection server daemon (147.75.109.163:38980). Jul 2 00:01:36.309548 sshd[5175]: Accepted publickey for core from 147.75.109.163 port 38980 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:36.312086 sshd[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:36.319968 systemd-logind[2087]: New session 17 of user core. Jul 2 00:01:36.328884 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:01:36.568883 sshd[5175]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:36.575221 systemd[1]: sshd@16-172.31.30.222:22-147.75.109.163:38980.service: Deactivated successfully. Jul 2 00:01:36.576603 systemd-logind[2087]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:01:36.586151 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:01:36.597769 systemd-logind[2087]: Removed session 17. Jul 2 00:01:36.608521 systemd[1]: Started sshd@17-172.31.30.222:22-147.75.109.163:38982.service - OpenSSH per-connection server daemon (147.75.109.163:38982). Jul 2 00:01:36.783326 sshd[5189]: Accepted publickey for core from 147.75.109.163 port 38982 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:36.785891 sshd[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:36.794647 systemd-logind[2087]: New session 18 of user core. Jul 2 00:01:36.803901 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 00:01:37.111633 sshd[5189]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:37.119676 systemd[1]: sshd@17-172.31.30.222:22-147.75.109.163:38982.service: Deactivated successfully. Jul 2 00:01:37.128354 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:01:37.130658 systemd-logind[2087]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:01:37.137408 systemd-logind[2087]: Removed session 18. Jul 2 00:01:37.141825 systemd[1]: Started sshd@18-172.31.30.222:22-147.75.109.163:38994.service - OpenSSH per-connection server daemon (147.75.109.163:38994). Jul 2 00:01:37.324858 sshd[5201]: Accepted publickey for core from 147.75.109.163 port 38994 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:37.327458 sshd[5201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:37.336337 systemd-logind[2087]: New session 19 of user core. Jul 2 00:01:37.340926 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 00:01:38.633984 sshd[5201]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:38.647208 systemd[1]: sshd@18-172.31.30.222:22-147.75.109.163:38994.service: Deactivated successfully. Jul 2 00:01:38.666810 systemd-logind[2087]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:01:38.677559 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:01:38.692854 systemd[1]: Started sshd@19-172.31.30.222:22-147.75.109.163:39004.service - OpenSSH per-connection server daemon (147.75.109.163:39004). Jul 2 00:01:38.695002 systemd-logind[2087]: Removed session 19. 
Jul 2 00:01:38.868831 sshd[5220]: Accepted publickey for core from 147.75.109.163 port 39004 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:38.871339 sshd[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:38.883361 systemd-logind[2087]: New session 20 of user core. Jul 2 00:01:38.898763 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 00:01:39.503316 sshd[5220]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:39.508682 systemd[1]: sshd@19-172.31.30.222:22-147.75.109.163:39004.service: Deactivated successfully. Jul 2 00:01:39.518720 systemd-logind[2087]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:01:39.518820 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:01:39.523147 systemd-logind[2087]: Removed session 20. Jul 2 00:01:39.532754 systemd[1]: Started sshd@20-172.31.30.222:22-147.75.109.163:39014.service - OpenSSH per-connection server daemon (147.75.109.163:39014). Jul 2 00:01:39.715694 sshd[5233]: Accepted publickey for core from 147.75.109.163 port 39014 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:39.718467 sshd[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:39.725990 systemd-logind[2087]: New session 21 of user core. Jul 2 00:01:39.734854 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 00:01:39.967605 sshd[5233]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:39.975084 systemd[1]: sshd@20-172.31.30.222:22-147.75.109.163:39014.service: Deactivated successfully. Jul 2 00:01:39.982499 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:01:39.984897 systemd-logind[2087]: Session 21 logged out. Waiting for processes to exit. Jul 2 00:01:39.986672 systemd-logind[2087]: Removed session 21. Jul 2 00:01:45.002721 systemd[1]: Started sshd@21-172.31.30.222:22-147.75.109.163:60516.service - OpenSSH per-connection server daemon (147.75.109.163:60516). Jul 2 00:01:45.184543 sshd[5249]: Accepted publickey for core from 147.75.109.163 port 60516 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:45.187321 sshd[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:45.196688 systemd-logind[2087]: New session 22 of user core. Jul 2 00:01:45.204816 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 00:01:45.445738 sshd[5249]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:45.451499 systemd[1]: sshd@21-172.31.30.222:22-147.75.109.163:60516.service: Deactivated successfully. Jul 2 00:01:45.459206 systemd-logind[2087]: Session 22 logged out. Waiting for processes to exit. Jul 2 00:01:45.459446 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 00:01:45.463173 systemd-logind[2087]: Removed session 22. Jul 2 00:01:50.476776 systemd[1]: Started sshd@22-172.31.30.222:22-147.75.109.163:60520.service - OpenSSH per-connection server daemon (147.75.109.163:60520). Jul 2 00:01:50.648717 sshd[5267]: Accepted publickey for core from 147.75.109.163 port 60520 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:50.651874 sshd[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:50.660363 systemd-logind[2087]: New session 23 of user core. Jul 2 00:01:50.664264 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 2 00:01:50.902901 sshd[5267]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:50.909947 systemd[1]: sshd@22-172.31.30.222:22-147.75.109.163:60520.service: Deactivated successfully. Jul 2 00:01:50.915604 systemd-logind[2087]: Session 23 logged out. Waiting for processes to exit. Jul 2 00:01:50.915980 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 00:01:50.920706 systemd-logind[2087]: Removed session 23. Jul 2 00:01:55.937774 systemd[1]: Started sshd@23-172.31.30.222:22-147.75.109.163:60554.service - OpenSSH per-connection server daemon (147.75.109.163:60554). Jul 2 00:01:56.119033 sshd[5283]: Accepted publickey for core from 147.75.109.163 port 60554 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:56.121699 sshd[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:56.131446 systemd-logind[2087]: New session 24 of user core. Jul 2 00:01:56.138768 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 00:01:56.379986 sshd[5283]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:56.387526 systemd[1]: sshd@23-172.31.30.222:22-147.75.109.163:60554.service: Deactivated successfully. Jul 2 00:01:56.395596 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:01:56.398511 systemd-logind[2087]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:01:56.400858 systemd-logind[2087]: Removed session 24. Jul 2 00:02:01.414222 systemd[1]: Started sshd@24-172.31.30.222:22-147.75.109.163:60566.service - OpenSSH per-connection server daemon (147.75.109.163:60566). Jul 2 00:02:01.595268 sshd[5297]: Accepted publickey for core from 147.75.109.163 port 60566 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:01.598465 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:01.610047 systemd-logind[2087]: New session 25 of user core. Jul 2 00:02:01.616068 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 00:02:01.859235 sshd[5297]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:01.864819 systemd[1]: sshd@24-172.31.30.222:22-147.75.109.163:60566.service: Deactivated successfully. Jul 2 00:02:01.875020 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:02:01.878663 systemd-logind[2087]: Session 25 logged out. Waiting for processes to exit. Jul 2 00:02:01.884846 systemd-logind[2087]: Removed session 25. Jul 2 00:02:01.891769 systemd[1]: Started sshd@25-172.31.30.222:22-147.75.109.163:60578.service - OpenSSH per-connection server daemon (147.75.109.163:60578). Jul 2 00:02:02.061861 sshd[5311]: Accepted publickey for core from 147.75.109.163 port 60578 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:02.064401 sshd[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:02.072116 systemd-logind[2087]: New session 26 of user core. Jul 2 00:02:02.081778 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 2 00:02:04.520789 containerd[2116]: time="2024-07-02T00:02:04.520411398Z" level=info msg="StopContainer for \"b32d7986fd7c41f455ee5fa7e8d1a2df65193ff8bdd307768d4fce2023635e6a\" with timeout 30 (s)" Jul 2 00:02:04.525073 containerd[2116]: time="2024-07-02T00:02:04.522849133Z" level=info msg="Stop container \"b32d7986fd7c41f455ee5fa7e8d1a2df65193ff8bdd307768d4fce2023635e6a\" with signal terminated" Jul 2 00:02:04.555874 containerd[2116]: time="2024-07-02T00:02:04.555794857Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:02:04.576985 containerd[2116]: time="2024-07-02T00:02:04.576661178Z" level=info msg="StopContainer for \"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6\" with timeout 2 (s)" Jul 2 00:02:04.578016 containerd[2116]: time="2024-07-02T00:02:04.577946972Z" level=info msg="Stop container \"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6\" with signal terminated" Jul 2 00:02:04.610859 systemd-networkd[1691]: lxc_health: Link DOWN Jul 2 00:02:04.610875 systemd-networkd[1691]: lxc_health: Lost carrier Jul 2 00:02:04.644411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b32d7986fd7c41f455ee5fa7e8d1a2df65193ff8bdd307768d4fce2023635e6a-rootfs.mount: Deactivated successfully. Jul 2 00:02:04.665565 containerd[2116]: time="2024-07-02T00:02:04.665487953Z" level=info msg="shim disconnected" id=b32d7986fd7c41f455ee5fa7e8d1a2df65193ff8bdd307768d4fce2023635e6a namespace=k8s.io Jul 2 00:02:04.666204 containerd[2116]: time="2024-07-02T00:02:04.665718084Z" level=warning msg="cleaning up after shim disconnected" id=b32d7986fd7c41f455ee5fa7e8d1a2df65193ff8bdd307768d4fce2023635e6a namespace=k8s.io Jul 2 00:02:04.666204 containerd[2116]: time="2024-07-02T00:02:04.665742120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:02:04.697261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6-rootfs.mount: Deactivated successfully. 
Jul 2 00:02:04.700815 containerd[2116]: time="2024-07-02T00:02:04.700388063Z" level=info msg="shim disconnected" id=2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6 namespace=k8s.io Jul 2 00:02:04.700815 containerd[2116]: time="2024-07-02T00:02:04.700580879Z" level=warning msg="cleaning up after shim disconnected" id=2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6 namespace=k8s.io Jul 2 00:02:04.700815 containerd[2116]: time="2024-07-02T00:02:04.700602694Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:02:04.707931 containerd[2116]: time="2024-07-02T00:02:04.707727271Z" level=info msg="StopContainer for \"b32d7986fd7c41f455ee5fa7e8d1a2df65193ff8bdd307768d4fce2023635e6a\" returns successfully" Jul 2 00:02:04.709211 containerd[2116]: time="2024-07-02T00:02:04.708935495Z" level=info msg="StopPodSandbox for \"0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80\"" Jul 2 00:02:04.709211 containerd[2116]: time="2024-07-02T00:02:04.709003845Z" level=info msg="Container to stop \"b32d7986fd7c41f455ee5fa7e8d1a2df65193ff8bdd307768d4fce2023635e6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:02:04.715512 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80-shm.mount: Deactivated successfully. Jul 2 00:02:04.756528 containerd[2116]: time="2024-07-02T00:02:04.756380277Z" level=info msg="StopContainer for \"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6\" returns successfully" Jul 2 00:02:04.757561 containerd[2116]: time="2024-07-02T00:02:04.757463291Z" level=info msg="StopPodSandbox for \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\"" Jul 2 00:02:04.758222 containerd[2116]: time="2024-07-02T00:02:04.758012769Z" level=info msg="Container to stop \"c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:02:04.758222 containerd[2116]: time="2024-07-02T00:02:04.758139673Z" level=info msg="Container to stop \"f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:02:04.758222 containerd[2116]: time="2024-07-02T00:02:04.758171501Z" level=info msg="Container to stop \"c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:02:04.758543 containerd[2116]: time="2024-07-02T00:02:04.758195104Z" level=info msg="Container to stop \"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:02:04.758790 containerd[2116]: time="2024-07-02T00:02:04.758710594Z" level=info msg="Container to stop \"bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:02:04.764437 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0-shm.mount: Deactivated successfully. 
Jul 2 00:02:04.794369 containerd[2116]: time="2024-07-02T00:02:04.793868593Z" level=info msg="shim disconnected" id=0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80 namespace=k8s.io Jul 2 00:02:04.794369 containerd[2116]: time="2024-07-02T00:02:04.793953307Z" level=warning msg="cleaning up after shim disconnected" id=0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80 namespace=k8s.io Jul 2 00:02:04.794369 containerd[2116]: time="2024-07-02T00:02:04.793988905Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:02:04.832034 containerd[2116]: time="2024-07-02T00:02:04.831954302Z" level=info msg="TearDown network for sandbox \"0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80\" successfully" Jul 2 00:02:04.832034 containerd[2116]: time="2024-07-02T00:02:04.832010838Z" level=info msg="StopPodSandbox for \"0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80\" returns successfully" Jul 2 00:02:04.850368 containerd[2116]: time="2024-07-02T00:02:04.850250714Z" level=info msg="shim disconnected" id=6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0 namespace=k8s.io Jul 2 00:02:04.850368 containerd[2116]: time="2024-07-02T00:02:04.850363114Z" level=warning msg="cleaning up after shim disconnected" id=6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0 namespace=k8s.io Jul 2 00:02:04.850677 containerd[2116]: time="2024-07-02T00:02:04.850388807Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:02:04.879228 containerd[2116]: time="2024-07-02T00:02:04.879046925Z" level=info msg="TearDown network for sandbox \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" successfully" Jul 2 00:02:04.879228 containerd[2116]: time="2024-07-02T00:02:04.879110857Z" level=info msg="StopPodSandbox for \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" returns successfully" Jul 2 00:02:04.925078 kubelet[3674]: I0702 00:02:04.925009 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jmsp\" (UniqueName: \"kubernetes.io/projected/7661c1b6-d579-418c-a2e4-2344a2f3f75d-kube-api-access-9jmsp\") pod \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " Jul 2 00:02:04.925078 kubelet[3674]: I0702 00:02:04.925087 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-host-proc-sys-kernel\") pod \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " Jul 2 00:02:04.926042 kubelet[3674]: I0702 00:02:04.925130 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cilium-run\") pod \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " Jul 2 00:02:04.926042 kubelet[3674]: I0702 00:02:04.925172 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cni-path\") pod \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " Jul 2 00:02:04.926042 kubelet[3674]: I0702 00:02:04.925216 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/7661c1b6-d579-418c-a2e4-2344a2f3f75d-hubble-tls\") pod \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " Jul 2 00:02:04.926042 kubelet[3674]: I0702 00:02:04.925261 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsclp\" (UniqueName: \"kubernetes.io/projected/503f6603-be85-4ce9-96b0-1118864d9105-kube-api-access-zsclp\") pod \"503f6603-be85-4ce9-96b0-1118864d9105\" (UID: \"503f6603-be85-4ce9-96b0-1118864d9105\") " Jul 2 00:02:04.930869 kubelet[3674]: I0702 00:02:04.928087 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7661c1b6-d579-418c-a2e4-2344a2f3f75d" (UID: "7661c1b6-d579-418c-a2e4-2344a2f3f75d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:02:04.930869 kubelet[3674]: I0702 00:02:04.928183 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7661c1b6-d579-418c-a2e4-2344a2f3f75d" (UID: "7661c1b6-d579-418c-a2e4-2344a2f3f75d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:02:04.930869 kubelet[3674]: I0702 00:02:04.928394 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-xtables-lock\") pod \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " Jul 2 00:02:04.930869 kubelet[3674]: I0702 00:02:04.928445 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-bpf-maps\") pod \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " Jul 2 00:02:04.930869 kubelet[3674]: I0702 00:02:04.928490 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-hostproc\") pod \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " Jul 2 00:02:04.935493 kubelet[3674]: I0702 00:02:04.928537 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7661c1b6-d579-418c-a2e4-2344a2f3f75d-clustermesh-secrets\") pod \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " Jul 2 00:02:04.935493 kubelet[3674]: I0702 00:02:04.928580 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-etc-cni-netd\") pod \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " Jul 2 00:02:04.935493 kubelet[3674]: I0702 00:02:04.928625 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/503f6603-be85-4ce9-96b0-1118864d9105-cilium-config-path\") pod \"503f6603-be85-4ce9-96b0-1118864d9105\" (UID: \"503f6603-be85-4ce9-96b0-1118864d9105\") " Jul 2 00:02:04.935493 kubelet[3674]: I0702 
00:02:04.928696 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-host-proc-sys-net\") pod \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " Jul 2 00:02:04.935493 kubelet[3674]: I0702 00:02:04.928742 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cilium-config-path\") pod \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " Jul 2 00:02:04.935493 kubelet[3674]: I0702 00:02:04.928781 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cilium-cgroup\") pod \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " Jul 2 00:02:04.935845 kubelet[3674]: I0702 00:02:04.928820 3674 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-lib-modules\") pod \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\" (UID: \"7661c1b6-d579-418c-a2e4-2344a2f3f75d\") " Jul 2 00:02:04.935845 kubelet[3674]: I0702 00:02:04.928890 3674 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-host-proc-sys-kernel\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:04.935845 kubelet[3674]: I0702 00:02:04.928917 3674 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cilium-run\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:04.935845 kubelet[3674]: I0702 00:02:04.928962 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7661c1b6-d579-418c-a2e4-2344a2f3f75d" (UID: "7661c1b6-d579-418c-a2e4-2344a2f3f75d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:02:04.935845 kubelet[3674]: I0702 00:02:04.929009 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7661c1b6-d579-418c-a2e4-2344a2f3f75d" (UID: "7661c1b6-d579-418c-a2e4-2344a2f3f75d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:02:04.935845 kubelet[3674]: I0702 00:02:04.929048 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7661c1b6-d579-418c-a2e4-2344a2f3f75d" (UID: "7661c1b6-d579-418c-a2e4-2344a2f3f75d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:02:04.936167 kubelet[3674]: I0702 00:02:04.929085 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-hostproc" (OuterVolumeSpecName: "hostproc") pod "7661c1b6-d579-418c-a2e4-2344a2f3f75d" (UID: "7661c1b6-d579-418c-a2e4-2344a2f3f75d"). 
InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:02:04.936167 kubelet[3674]: I0702 00:02:04.929447 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7661c1b6-d579-418c-a2e4-2344a2f3f75d" (UID: "7661c1b6-d579-418c-a2e4-2344a2f3f75d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:02:04.939642 kubelet[3674]: I0702 00:02:04.939573 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7661c1b6-d579-418c-a2e4-2344a2f3f75d" (UID: "7661c1b6-d579-418c-a2e4-2344a2f3f75d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:02:04.939772 kubelet[3674]: I0702 00:02:04.939708 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cni-path" (OuterVolumeSpecName: "cni-path") pod "7661c1b6-d579-418c-a2e4-2344a2f3f75d" (UID: "7661c1b6-d579-418c-a2e4-2344a2f3f75d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:02:04.939855 kubelet[3674]: I0702 00:02:04.939822 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7661c1b6-d579-418c-a2e4-2344a2f3f75d" (UID: "7661c1b6-d579-418c-a2e4-2344a2f3f75d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:02:04.957332 kubelet[3674]: I0702 00:02:04.952628 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7661c1b6-d579-418c-a2e4-2344a2f3f75d-kube-api-access-9jmsp" (OuterVolumeSpecName: "kube-api-access-9jmsp") pod "7661c1b6-d579-418c-a2e4-2344a2f3f75d" (UID: "7661c1b6-d579-418c-a2e4-2344a2f3f75d"). InnerVolumeSpecName "kube-api-access-9jmsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:02:04.984530 kubelet[3674]: I0702 00:02:04.984477 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7661c1b6-d579-418c-a2e4-2344a2f3f75d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7661c1b6-d579-418c-a2e4-2344a2f3f75d" (UID: "7661c1b6-d579-418c-a2e4-2344a2f3f75d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:02:04.989903 kubelet[3674]: I0702 00:02:04.989627 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7661c1b6-d579-418c-a2e4-2344a2f3f75d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7661c1b6-d579-418c-a2e4-2344a2f3f75d" (UID: "7661c1b6-d579-418c-a2e4-2344a2f3f75d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:02:04.991496 kubelet[3674]: I0702 00:02:04.991440 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/503f6603-be85-4ce9-96b0-1118864d9105-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "503f6603-be85-4ce9-96b0-1118864d9105" (UID: "503f6603-be85-4ce9-96b0-1118864d9105"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:02:04.995687 kubelet[3674]: I0702 00:02:04.995591 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/503f6603-be85-4ce9-96b0-1118864d9105-kube-api-access-zsclp" (OuterVolumeSpecName: "kube-api-access-zsclp") pod "503f6603-be85-4ce9-96b0-1118864d9105" (UID: "503f6603-be85-4ce9-96b0-1118864d9105"). InnerVolumeSpecName "kube-api-access-zsclp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:02:04.996209 kubelet[3674]: I0702 00:02:04.996165 3674 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7661c1b6-d579-418c-a2e4-2344a2f3f75d" (UID: "7661c1b6-d579-418c-a2e4-2344a2f3f75d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:02:05.029266 kubelet[3674]: I0702 00:02:05.029201 3674 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-etc-cni-netd\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:05.029266 kubelet[3674]: I0702 00:02:05.029269 3674 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cilium-cgroup\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:05.029538 kubelet[3674]: I0702 00:02:05.029319 3674 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-lib-modules\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:05.029538 kubelet[3674]: I0702 00:02:05.029351 3674 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/503f6603-be85-4ce9-96b0-1118864d9105-cilium-config-path\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:05.029538 kubelet[3674]: I0702 00:02:05.029377 3674 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-host-proc-sys-net\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:05.029538 kubelet[3674]: I0702 00:02:05.029408 3674 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cilium-config-path\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:05.029538 kubelet[3674]: I0702 00:02:05.029432 3674 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9jmsp\" (UniqueName: \"kubernetes.io/projected/7661c1b6-d579-418c-a2e4-2344a2f3f75d-kube-api-access-9jmsp\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:05.029538 kubelet[3674]: I0702 00:02:05.029475 3674 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7661c1b6-d579-418c-a2e4-2344a2f3f75d-hubble-tls\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:05.029538 kubelet[3674]: I0702 00:02:05.029503 3674 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zsclp\" (UniqueName: \"kubernetes.io/projected/503f6603-be85-4ce9-96b0-1118864d9105-kube-api-access-zsclp\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:05.029538 kubelet[3674]: I0702 00:02:05.029526 3674 
reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-cni-path\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:05.029985 kubelet[3674]: I0702 00:02:05.029551 3674 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-xtables-lock\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:05.029985 kubelet[3674]: I0702 00:02:05.029575 3674 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7661c1b6-d579-418c-a2e4-2344a2f3f75d-clustermesh-secrets\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:05.029985 kubelet[3674]: I0702 00:02:05.029597 3674 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-bpf-maps\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:05.029985 kubelet[3674]: I0702 00:02:05.029620 3674 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7661c1b6-d579-418c-a2e4-2344a2f3f75d-hostproc\") on node \"ip-172-31-30-222\" DevicePath \"\"" Jul 2 00:02:05.516191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80-rootfs.mount: Deactivated successfully. Jul 2 00:02:05.516871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0-rootfs.mount: Deactivated successfully. Jul 2 00:02:05.517100 systemd[1]: var-lib-kubelet-pods-503f6603\x2dbe85\x2d4ce9\x2d96b0\x2d1118864d9105-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzsclp.mount: Deactivated successfully. Jul 2 00:02:05.517640 systemd[1]: var-lib-kubelet-pods-7661c1b6\x2dd579\x2d418c\x2da2e4\x2d2344a2f3f75d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9jmsp.mount: Deactivated successfully. Jul 2 00:02:05.518004 systemd[1]: var-lib-kubelet-pods-7661c1b6\x2dd579\x2d418c\x2da2e4\x2d2344a2f3f75d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:02:05.518403 systemd[1]: var-lib-kubelet-pods-7661c1b6\x2dd579\x2d418c\x2da2e4\x2d2344a2f3f75d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 00:02:05.755862 kubelet[3674]: I0702 00:02:05.755554 3674 scope.go:117] "RemoveContainer" containerID="b32d7986fd7c41f455ee5fa7e8d1a2df65193ff8bdd307768d4fce2023635e6a" Jul 2 00:02:05.759530 containerd[2116]: time="2024-07-02T00:02:05.759467225Z" level=info msg="RemoveContainer for \"b32d7986fd7c41f455ee5fa7e8d1a2df65193ff8bdd307768d4fce2023635e6a\"" Jul 2 00:02:05.768934 containerd[2116]: time="2024-07-02T00:02:05.768583213Z" level=info msg="RemoveContainer for \"b32d7986fd7c41f455ee5fa7e8d1a2df65193ff8bdd307768d4fce2023635e6a\" returns successfully" Jul 2 00:02:05.773174 kubelet[3674]: I0702 00:02:05.773125 3674 scope.go:117] "RemoveContainer" containerID="2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6" Jul 2 00:02:05.779529 containerd[2116]: time="2024-07-02T00:02:05.779353820Z" level=info msg="RemoveContainer for \"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6\"" Jul 2 00:02:05.787700 containerd[2116]: time="2024-07-02T00:02:05.787589191Z" level=info msg="RemoveContainer for \"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6\" returns successfully" Jul 2 00:02:05.789047 kubelet[3674]: I0702 00:02:05.788659 3674 scope.go:117] "RemoveContainer" containerID="c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14" Jul 2 00:02:05.792517 containerd[2116]: time="2024-07-02T00:02:05.792469750Z" level=info msg="RemoveContainer for \"c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14\"" Jul 2 00:02:05.801732 containerd[2116]: time="2024-07-02T00:02:05.801658783Z" level=info msg="RemoveContainer for \"c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14\" returns successfully" Jul 2 00:02:05.803196 kubelet[3674]: I0702 00:02:05.803128 3674 scope.go:117] "RemoveContainer" containerID="f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d" Jul 2 00:02:05.808535 containerd[2116]: time="2024-07-02T00:02:05.808437418Z" level=info msg="RemoveContainer for \"f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d\"" Jul 2 00:02:05.813096 containerd[2116]: time="2024-07-02T00:02:05.813037397Z" level=info msg="RemoveContainer for \"f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d\" returns successfully" Jul 2 00:02:05.813606 kubelet[3674]: I0702 00:02:05.813576 3674 scope.go:117] "RemoveContainer" containerID="c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460" Jul 2 00:02:05.816614 containerd[2116]: time="2024-07-02T00:02:05.816419584Z" level=info msg="RemoveContainer for \"c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460\"" Jul 2 00:02:05.821771 containerd[2116]: time="2024-07-02T00:02:05.821711096Z" level=info msg="RemoveContainer for \"c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460\" returns successfully" Jul 2 00:02:05.822347 kubelet[3674]: I0702 00:02:05.822220 3674 scope.go:117] "RemoveContainer" containerID="bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e" Jul 2 00:02:05.824797 containerd[2116]: time="2024-07-02T00:02:05.824587458Z" level=info msg="RemoveContainer for \"bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e\"" Jul 2 00:02:05.829643 containerd[2116]: time="2024-07-02T00:02:05.829570248Z" level=info msg="RemoveContainer for \"bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e\" returns successfully" Jul 2 00:02:05.829979 kubelet[3674]: I0702 00:02:05.829941 3674 scope.go:117] "RemoveContainer" 
containerID="2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6" Jul 2 00:02:05.830746 containerd[2116]: time="2024-07-02T00:02:05.830568030Z" level=error msg="ContainerStatus for \"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6\": not found" Jul 2 00:02:05.830923 kubelet[3674]: E0702 00:02:05.830867 3674 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6\": not found" containerID="2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6" Jul 2 00:02:05.831055 kubelet[3674]: I0702 00:02:05.831013 3674 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6"} err="failed to get container status \"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c030ca041439ed3fc9a1ce7eddf087f6f911eeb3d03826c3352149005fd5ae6\": not found" Jul 2 00:02:05.831055 kubelet[3674]: I0702 00:02:05.831040 3674 scope.go:117] "RemoveContainer" containerID="c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14" Jul 2 00:02:05.831412 containerd[2116]: time="2024-07-02T00:02:05.831354736Z" level=error msg="ContainerStatus for \"c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14\": not found" Jul 2 00:02:05.831971 kubelet[3674]: E0702 00:02:05.831731 3674 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14\": not found" containerID="c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14" Jul 2 00:02:05.831971 kubelet[3674]: I0702 00:02:05.831783 3674 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14"} err="failed to get container status \"c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2d3ebf48d120fafdc9a7066c16e5d6bfda99bf5629c6b45362d58d41397da14\": not found" Jul 2 00:02:05.831971 kubelet[3674]: I0702 00:02:05.831805 3674 scope.go:117] "RemoveContainer" containerID="f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d" Jul 2 00:02:05.832747 kubelet[3674]: E0702 00:02:05.832647 3674 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d\": not found" containerID="f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d" Jul 2 00:02:05.832747 kubelet[3674]: I0702 00:02:05.832695 3674 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d"} err="failed to get container status 
\"f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d\": rpc error: code = NotFound desc = an error occurred when try to find container \"f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d\": not found" Jul 2 00:02:05.832747 kubelet[3674]: I0702 00:02:05.832718 3674 scope.go:117] "RemoveContainer" containerID="c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460" Jul 2 00:02:05.832991 containerd[2116]: time="2024-07-02T00:02:05.832426091Z" level=error msg="ContainerStatus for \"f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4a734ff7d86b3dc62b9496d4cbad6de3caa0274716c0fbe7ae524bd4a6bcc6d\": not found" Jul 2 00:02:05.833124 containerd[2116]: time="2024-07-02T00:02:05.833016030Z" level=error msg="ContainerStatus for \"c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460\": not found" Jul 2 00:02:05.833429 kubelet[3674]: E0702 00:02:05.833379 3674 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460\": not found" containerID="c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460" Jul 2 00:02:05.833495 kubelet[3674]: I0702 00:02:05.833473 3674 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460"} err="failed to get container status \"c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460\": rpc error: code = NotFound desc = an error occurred when try to find container \"c146bbd2d12fa48a5a6367b42c7f9d91744bd594239c8310da397008eca9d460\": not found" Jul 2 00:02:05.833557 kubelet[3674]: I0702 00:02:05.833500 3674 scope.go:117] "RemoveContainer" containerID="bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e" Jul 2 00:02:05.833908 containerd[2116]: time="2024-07-02T00:02:05.833850771Z" level=error msg="ContainerStatus for \"bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e\": not found" Jul 2 00:02:05.834216 kubelet[3674]: E0702 00:02:05.834192 3674 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e\": not found" containerID="bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e" Jul 2 00:02:05.834501 kubelet[3674]: I0702 00:02:05.834452 3674 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e"} err="failed to get container status \"bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e\": rpc error: code = NotFound desc = an error occurred when try to find container \"bfdc311a97fdd8571e44a8707b575fa7c8df5851998277f4dff3a905321e939e\": not found" Jul 2 00:02:06.433956 sshd[5311]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:06.439620 systemd[1]: 
sshd@25-172.31.30.222:22-147.75.109.163:60578.service: Deactivated successfully. Jul 2 00:02:06.450990 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:02:06.451216 systemd-logind[2087]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:02:06.459590 systemd-logind[2087]: Removed session 26. Jul 2 00:02:06.464858 systemd[1]: Started sshd@26-172.31.30.222:22-147.75.109.163:60128.service - OpenSSH per-connection server daemon (147.75.109.163:60128). Jul 2 00:02:06.573503 kubelet[3674]: E0702 00:02:06.573411 3674 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:02:06.641444 sshd[5477]: Accepted publickey for core from 147.75.109.163 port 60128 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:06.643995 sshd[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:06.651396 systemd-logind[2087]: New session 27 of user core. Jul 2 00:02:06.662875 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 2 00:02:06.931783 ntpd[2075]: Deleting interface #10 lxc_health, fe80::88c7:74ff:feac:1cb7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Jul 2 00:02:06.932906 ntpd[2075]: 2 Jul 00:02:06 ntpd[2075]: Deleting interface #10 lxc_health, fe80::88c7:74ff:feac:1cb7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Jul 2 00:02:07.263276 kubelet[3674]: I0702 00:02:07.262719 3674 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="503f6603-be85-4ce9-96b0-1118864d9105" path="/var/lib/kubelet/pods/503f6603-be85-4ce9-96b0-1118864d9105/volumes" Jul 2 00:02:07.264060 kubelet[3674]: I0702 00:02:07.264035 3674 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7661c1b6-d579-418c-a2e4-2344a2f3f75d" path="/var/lib/kubelet/pods/7661c1b6-d579-418c-a2e4-2344a2f3f75d/volumes" Jul 2 00:02:08.755014 sshd[5477]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:08.767171 systemd[1]: sshd@26-172.31.30.222:22-147.75.109.163:60128.service: Deactivated successfully. Jul 2 00:02:08.780143 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 00:02:08.782037 systemd-logind[2087]: Session 27 logged out. Waiting for processes to exit. 
Jul 2 00:02:08.802023 kubelet[3674]: I0702 00:02:08.800768 3674 topology_manager.go:215] "Topology Admit Handler" podUID="f10a8d37-bf13-4298-92fd-2bdb9b6ab757" podNamespace="kube-system" podName="cilium-zzb5d" Jul 2 00:02:08.802023 kubelet[3674]: E0702 00:02:08.800913 3674 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7661c1b6-d579-418c-a2e4-2344a2f3f75d" containerName="mount-cgroup" Jul 2 00:02:08.802023 kubelet[3674]: E0702 00:02:08.800936 3674 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7661c1b6-d579-418c-a2e4-2344a2f3f75d" containerName="apply-sysctl-overwrites" Jul 2 00:02:08.802023 kubelet[3674]: E0702 00:02:08.800977 3674 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7661c1b6-d579-418c-a2e4-2344a2f3f75d" containerName="clean-cilium-state" Jul 2 00:02:08.802023 kubelet[3674]: E0702 00:02:08.801001 3674 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="503f6603-be85-4ce9-96b0-1118864d9105" containerName="cilium-operator" Jul 2 00:02:08.802023 kubelet[3674]: E0702 00:02:08.801019 3674 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7661c1b6-d579-418c-a2e4-2344a2f3f75d" containerName="mount-bpf-fs" Jul 2 00:02:08.802023 kubelet[3674]: E0702 00:02:08.801037 3674 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7661c1b6-d579-418c-a2e4-2344a2f3f75d" containerName="cilium-agent" Jul 2 00:02:08.802023 kubelet[3674]: I0702 00:02:08.801117 3674 memory_manager.go:346] "RemoveStaleState removing state" podUID="503f6603-be85-4ce9-96b0-1118864d9105" containerName="cilium-operator" Jul 2 00:02:08.802023 kubelet[3674]: I0702 00:02:08.801943 3674 memory_manager.go:346] "RemoveStaleState removing state" podUID="7661c1b6-d579-418c-a2e4-2344a2f3f75d" containerName="cilium-agent" Jul 2 00:02:08.807120 systemd[1]: Started sshd@27-172.31.30.222:22-147.75.109.163:60132.service - OpenSSH per-connection server daemon (147.75.109.163:60132). Jul 2 00:02:08.813692 systemd-logind[2087]: Removed session 27. 
Jul 2 00:02:08.953673 kubelet[3674]: I0702 00:02:08.953591 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-etc-cni-netd\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:08.953848 kubelet[3674]: I0702 00:02:08.953706 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-cilium-run\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:08.953848 kubelet[3674]: I0702 00:02:08.953763 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-cni-path\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:08.953848 kubelet[3674]: I0702 00:02:08.953813 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-xtables-lock\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:08.954044 kubelet[3674]: I0702 00:02:08.953860 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-hostproc\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:08.954044 kubelet[3674]: I0702 00:02:08.953907 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7v9w\" (UniqueName: \"kubernetes.io/projected/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-kube-api-access-f7v9w\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:08.954044 kubelet[3674]: I0702 00:02:08.953953 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-cilium-cgroup\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:08.954044 kubelet[3674]: I0702 00:02:08.953997 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-cilium-config-path\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:08.954044 kubelet[3674]: I0702 00:02:08.954041 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-host-proc-sys-kernel\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:08.954359 kubelet[3674]: I0702 00:02:08.954091 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-lib-modules\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:08.954359 kubelet[3674]: I0702 00:02:08.954140 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-host-proc-sys-net\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:08.954359 kubelet[3674]: I0702 00:02:08.954187 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-hubble-tls\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:08.954359 kubelet[3674]: I0702 00:02:08.954236 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-bpf-maps\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:08.954359 kubelet[3674]: I0702 00:02:08.954322 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-clustermesh-secrets\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:08.954626 kubelet[3674]: I0702 00:02:08.954376 3674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f10a8d37-bf13-4298-92fd-2bdb9b6ab757-cilium-ipsec-secrets\") pod \"cilium-zzb5d\" (UID: \"f10a8d37-bf13-4298-92fd-2bdb9b6ab757\") " pod="kube-system/cilium-zzb5d" Jul 2 00:02:09.007713 sshd[5490]: Accepted publickey for core from 147.75.109.163 port 60132 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:09.010320 sshd[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:09.019397 systemd-logind[2087]: New session 28 of user core. Jul 2 00:02:09.037157 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 2 00:02:09.133378 containerd[2116]: time="2024-07-02T00:02:09.133314826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zzb5d,Uid:f10a8d37-bf13-4298-92fd-2bdb9b6ab757,Namespace:kube-system,Attempt:0,}" Jul 2 00:02:09.162729 sshd[5490]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:09.170144 systemd-logind[2087]: Session 28 logged out. Waiting for processes to exit. Jul 2 00:02:09.171302 systemd[1]: sshd@27-172.31.30.222:22-147.75.109.163:60132.service: Deactivated successfully. Jul 2 00:02:09.182025 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 00:02:09.195043 systemd-logind[2087]: Removed session 28. Jul 2 00:02:09.200787 systemd[1]: Started sshd@28-172.31.30.222:22-147.75.109.163:60138.service - OpenSSH per-connection server daemon (147.75.109.163:60138). Jul 2 00:02:09.202838 containerd[2116]: time="2024-07-02T00:02:09.202196153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:02:09.202838 containerd[2116]: time="2024-07-02T00:02:09.202336552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:09.202838 containerd[2116]: time="2024-07-02T00:02:09.202386028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:02:09.202838 containerd[2116]: time="2024-07-02T00:02:09.202423247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:09.281879 containerd[2116]: time="2024-07-02T00:02:09.280724651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zzb5d,Uid:f10a8d37-bf13-4298-92fd-2bdb9b6ab757,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbd57473d8c543573757db5d1a4c5285edbf73da2877f97701dc99b905b290bd\"" Jul 2 00:02:09.290429 containerd[2116]: time="2024-07-02T00:02:09.290365625Z" level=info msg="CreateContainer within sandbox \"bbd57473d8c543573757db5d1a4c5285edbf73da2877f97701dc99b905b290bd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:02:09.312325 containerd[2116]: time="2024-07-02T00:02:09.312172207Z" level=info msg="CreateContainer within sandbox \"bbd57473d8c543573757db5d1a4c5285edbf73da2877f97701dc99b905b290bd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"073f94a02d2ff1bcbfa58a7a12a6ba196b218ced546c8ac829d3da935d014ed6\"" Jul 2 00:02:09.313563 containerd[2116]: time="2024-07-02T00:02:09.313511128Z" level=info msg="StartContainer for \"073f94a02d2ff1bcbfa58a7a12a6ba196b218ced546c8ac829d3da935d014ed6\"" Jul 2 00:02:09.397689 sshd[5517]: Accepted publickey for core from 147.75.109.163 port 60138 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:09.402586 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:09.405029 containerd[2116]: time="2024-07-02T00:02:09.404714179Z" level=info msg="StartContainer for \"073f94a02d2ff1bcbfa58a7a12a6ba196b218ced546c8ac829d3da935d014ed6\" returns successfully" Jul 2 00:02:09.427254 systemd-logind[2087]: New session 29 of user core. Jul 2 00:02:09.435561 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jul 2 00:02:09.502150 containerd[2116]: time="2024-07-02T00:02:09.502022738Z" level=info msg="shim disconnected" id=073f94a02d2ff1bcbfa58a7a12a6ba196b218ced546c8ac829d3da935d014ed6 namespace=k8s.io Jul 2 00:02:09.502150 containerd[2116]: time="2024-07-02T00:02:09.502103742Z" level=warning msg="cleaning up after shim disconnected" id=073f94a02d2ff1bcbfa58a7a12a6ba196b218ced546c8ac829d3da935d014ed6 namespace=k8s.io Jul 2 00:02:09.502150 containerd[2116]: time="2024-07-02T00:02:09.502126710Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:02:09.817430 containerd[2116]: time="2024-07-02T00:02:09.817363127Z" level=info msg="CreateContainer within sandbox \"bbd57473d8c543573757db5d1a4c5285edbf73da2877f97701dc99b905b290bd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:02:09.840357 containerd[2116]: time="2024-07-02T00:02:09.840228291Z" level=info msg="CreateContainer within sandbox \"bbd57473d8c543573757db5d1a4c5285edbf73da2877f97701dc99b905b290bd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8be2e88178545d02d02af7367064ea606b36bfbda3df2a6e51326a9fc8d3aa91\"" Jul 2 00:02:09.841704 containerd[2116]: time="2024-07-02T00:02:09.841551880Z" level=info msg="StartContainer for \"8be2e88178545d02d02af7367064ea606b36bfbda3df2a6e51326a9fc8d3aa91\"" Jul 2 00:02:09.922870 containerd[2116]: time="2024-07-02T00:02:09.922790335Z" level=info msg="StartContainer for \"8be2e88178545d02d02af7367064ea606b36bfbda3df2a6e51326a9fc8d3aa91\" returns successfully" Jul 2 00:02:09.982922 containerd[2116]: time="2024-07-02T00:02:09.982780956Z" level=info msg="shim disconnected" id=8be2e88178545d02d02af7367064ea606b36bfbda3df2a6e51326a9fc8d3aa91 namespace=k8s.io Jul 2 00:02:09.983184 containerd[2116]: time="2024-07-02T00:02:09.983001771Z" level=warning msg="cleaning up after shim disconnected" id=8be2e88178545d02d02af7367064ea606b36bfbda3df2a6e51326a9fc8d3aa91 namespace=k8s.io Jul 2 00:02:09.983184 containerd[2116]: time="2024-07-02T00:02:09.983052316Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:02:10.822723 containerd[2116]: time="2024-07-02T00:02:10.822659167Z" level=info msg="CreateContainer within sandbox \"bbd57473d8c543573757db5d1a4c5285edbf73da2877f97701dc99b905b290bd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:02:10.855350 containerd[2116]: time="2024-07-02T00:02:10.855263586Z" level=info msg="CreateContainer within sandbox \"bbd57473d8c543573757db5d1a4c5285edbf73da2877f97701dc99b905b290bd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ea7ff5e4a3a7a66a36a8cb9a7e139f6d87886b8f5a4203447760976cb6ca550f\"" Jul 2 00:02:10.856418 containerd[2116]: time="2024-07-02T00:02:10.856370935Z" level=info msg="StartContainer for \"ea7ff5e4a3a7a66a36a8cb9a7e139f6d87886b8f5a4203447760976cb6ca550f\"" Jul 2 00:02:10.974372 containerd[2116]: time="2024-07-02T00:02:10.973088045Z" level=info msg="StartContainer for \"ea7ff5e4a3a7a66a36a8cb9a7e139f6d87886b8f5a4203447760976cb6ca550f\" returns successfully" Jul 2 00:02:11.023845 containerd[2116]: time="2024-07-02T00:02:11.023759176Z" level=info msg="shim disconnected" id=ea7ff5e4a3a7a66a36a8cb9a7e139f6d87886b8f5a4203447760976cb6ca550f namespace=k8s.io Jul 2 00:02:11.023845 containerd[2116]: time="2024-07-02T00:02:11.023840589Z" level=warning msg="cleaning up after shim disconnected" id=ea7ff5e4a3a7a66a36a8cb9a7e139f6d87886b8f5a4203447760976cb6ca550f namespace=k8s.io Jul 2 00:02:11.024202 containerd[2116]: 
time="2024-07-02T00:02:11.023864373Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:02:11.067435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea7ff5e4a3a7a66a36a8cb9a7e139f6d87886b8f5a4203447760976cb6ca550f-rootfs.mount: Deactivated successfully. Jul 2 00:02:11.218074 containerd[2116]: time="2024-07-02T00:02:11.217866543Z" level=info msg="StopPodSandbox for \"0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80\"" Jul 2 00:02:11.218717 containerd[2116]: time="2024-07-02T00:02:11.218566949Z" level=info msg="TearDown network for sandbox \"0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80\" successfully" Jul 2 00:02:11.218717 containerd[2116]: time="2024-07-02T00:02:11.218692484Z" level=info msg="StopPodSandbox for \"0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80\" returns successfully" Jul 2 00:02:11.219585 containerd[2116]: time="2024-07-02T00:02:11.219526949Z" level=info msg="RemovePodSandbox for \"0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80\"" Jul 2 00:02:11.219728 containerd[2116]: time="2024-07-02T00:02:11.219578887Z" level=info msg="Forcibly stopping sandbox \"0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80\"" Jul 2 00:02:11.219728 containerd[2116]: time="2024-07-02T00:02:11.219712009Z" level=info msg="TearDown network for sandbox \"0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80\" successfully" Jul 2 00:02:11.224577 containerd[2116]: time="2024-07-02T00:02:11.224503280Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:02:11.224748 containerd[2116]: time="2024-07-02T00:02:11.224599424Z" level=info msg="RemovePodSandbox \"0e372398198d6e1fc7f482f1cdce06b16688305cb8bd9fca817a998c75ff8b80\" returns successfully" Jul 2 00:02:11.225673 containerd[2116]: time="2024-07-02T00:02:11.225355045Z" level=info msg="StopPodSandbox for \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\"" Jul 2 00:02:11.225673 containerd[2116]: time="2024-07-02T00:02:11.225501927Z" level=info msg="TearDown network for sandbox \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" successfully" Jul 2 00:02:11.225673 containerd[2116]: time="2024-07-02T00:02:11.225563326Z" level=info msg="StopPodSandbox for \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" returns successfully" Jul 2 00:02:11.226529 containerd[2116]: time="2024-07-02T00:02:11.226102215Z" level=info msg="RemovePodSandbox for \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\"" Jul 2 00:02:11.226529 containerd[2116]: time="2024-07-02T00:02:11.226148186Z" level=info msg="Forcibly stopping sandbox \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\"" Jul 2 00:02:11.226529 containerd[2116]: time="2024-07-02T00:02:11.226319199Z" level=info msg="TearDown network for sandbox \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" successfully" Jul 2 00:02:11.231743 containerd[2116]: time="2024-07-02T00:02:11.231661221Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:02:11.232272 containerd[2116]: time="2024-07-02T00:02:11.231752755Z" level=info msg="RemovePodSandbox \"6c9d288ee6393551d529ae16fa61071c60466ce7e545db6d260e9661ad5d34e0\" returns successfully" Jul 2 00:02:11.258167 kubelet[3674]: E0702 00:02:11.257991 3674 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-rnfxq" podUID="58e10f85-4b61-46a8-b804-dfeddb5f02c6" Jul 2 00:02:11.574857 kubelet[3674]: E0702 00:02:11.574775 3674 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:02:11.836229 containerd[2116]: time="2024-07-02T00:02:11.835836534Z" level=info msg="CreateContainer within sandbox \"bbd57473d8c543573757db5d1a4c5285edbf73da2877f97701dc99b905b290bd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:02:11.866392 containerd[2116]: time="2024-07-02T00:02:11.866313453Z" level=info msg="CreateContainer within sandbox \"bbd57473d8c543573757db5d1a4c5285edbf73da2877f97701dc99b905b290bd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"40dcc5c18cd1c72441b953a588e14a5512af4d4b3bd97788d0a0577f08432d7e\"" Jul 2 00:02:11.867066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount301563048.mount: Deactivated successfully. Jul 2 00:02:11.869575 containerd[2116]: time="2024-07-02T00:02:11.867222307Z" level=info msg="StartContainer for \"40dcc5c18cd1c72441b953a588e14a5512af4d4b3bd97788d0a0577f08432d7e\"" Jul 2 00:02:11.972555 containerd[2116]: time="2024-07-02T00:02:11.972145947Z" level=info msg="StartContainer for \"40dcc5c18cd1c72441b953a588e14a5512af4d4b3bd97788d0a0577f08432d7e\" returns successfully" Jul 2 00:02:12.022155 containerd[2116]: time="2024-07-02T00:02:12.022072995Z" level=info msg="shim disconnected" id=40dcc5c18cd1c72441b953a588e14a5512af4d4b3bd97788d0a0577f08432d7e namespace=k8s.io Jul 2 00:02:12.022155 containerd[2116]: time="2024-07-02T00:02:12.022151502Z" level=warning msg="cleaning up after shim disconnected" id=40dcc5c18cd1c72441b953a588e14a5512af4d4b3bd97788d0a0577f08432d7e namespace=k8s.io Jul 2 00:02:12.022537 containerd[2116]: time="2024-07-02T00:02:12.022173797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:02:12.067509 systemd[1]: run-containerd-runc-k8s.io-40dcc5c18cd1c72441b953a588e14a5512af4d4b3bd97788d0a0577f08432d7e-runc.QBRwyz.mount: Deactivated successfully. Jul 2 00:02:12.067779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40dcc5c18cd1c72441b953a588e14a5512af4d4b3bd97788d0a0577f08432d7e-rootfs.mount: Deactivated successfully. 
Jul 2 00:02:12.259239 kubelet[3674]: E0702 00:02:12.258697 3674 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-mx7fq" podUID="4c5c65b8-9ed7-4548-bbc7-9249b91de94e"
Jul 2 00:02:12.833857 containerd[2116]: time="2024-07-02T00:02:12.833524083Z" level=info msg="CreateContainer within sandbox \"bbd57473d8c543573757db5d1a4c5285edbf73da2877f97701dc99b905b290bd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:02:12.870072 containerd[2116]: time="2024-07-02T00:02:12.870002297Z" level=info msg="CreateContainer within sandbox \"bbd57473d8c543573757db5d1a4c5285edbf73da2877f97701dc99b905b290bd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"65b3971cc27843a44d8a41ca5cd06d7de41130f3e62b3da7d52ce1d1ab01980d\""
Jul 2 00:02:12.872395 containerd[2116]: time="2024-07-02T00:02:12.871492038Z" level=info msg="StartContainer for \"65b3971cc27843a44d8a41ca5cd06d7de41130f3e62b3da7d52ce1d1ab01980d\""
Jul 2 00:02:12.978671 containerd[2116]: time="2024-07-02T00:02:12.978342413Z" level=info msg="StartContainer for \"65b3971cc27843a44d8a41ca5cd06d7de41130f3e62b3da7d52ce1d1ab01980d\" returns successfully"
Jul 2 00:02:13.259328 kubelet[3674]: E0702 00:02:13.258308 3674 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-rnfxq" podUID="58e10f85-4b61-46a8-b804-dfeddb5f02c6"
Jul 2 00:02:13.773459 kubelet[3674]: I0702 00:02:13.770063 3674 setters.go:552] "Node became not ready" node="ip-172-31-30-222" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T00:02:13Z","lastTransitionTime":"2024-07-02T00:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 00:02:13.825328 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 2 00:02:13.895224 kubelet[3674]: I0702 00:02:13.893431 3674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zzb5d" podStartSLOduration=5.893374043 podCreationTimestamp="2024-07-02 00:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:02:13.892934923 +0000 UTC m=+122.951631909" watchObservedRunningTime="2024-07-02 00:02:13.893374043 +0000 UTC m=+122.952071041"
Jul 2 00:02:14.257863 kubelet[3674]: E0702 00:02:14.257783 3674 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-mx7fq" podUID="4c5c65b8-9ed7-4548-bbc7-9249b91de94e"
Jul 2 00:02:15.259268 kubelet[3674]: E0702 00:02:15.258797 3674 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-rnfxq" podUID="58e10f85-4b61-46a8-b804-dfeddb5f02c6"
Jul 2 00:02:16.258669 kubelet[3674]: E0702 00:02:16.258600 3674 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-mx7fq" podUID="4c5c65b8-9ed7-4548-bbc7-9249b91de94e"
Jul 2 00:02:17.315639 systemd[1]: Started sshd@29-172.31.30.222:22-183.220.241.197:57040.service - OpenSSH per-connection server daemon (183.220.241.197:57040).
Jul 2 00:02:17.921689 systemd-networkd[1691]: lxc_health: Link UP
Jul 2 00:02:17.939826 systemd-networkd[1691]: lxc_health: Gained carrier
Jul 2 00:02:17.947793 (udev-worker)[6323]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:02:18.500960 kubelet[3674]: E0702 00:02:18.500672 3674 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46704->127.0.0.1:42171: write tcp 127.0.0.1:46704->127.0.0.1:42171: write: broken pipe
Jul 2 00:02:19.580094 systemd-networkd[1691]: lxc_health: Gained IPv6LL
Jul 2 00:02:20.556898 sshd[6184]: Invalid user centos from 183.220.241.197 port 57040
Jul 2 00:02:20.763544 systemd[1]: run-containerd-runc-k8s.io-65b3971cc27843a44d8a41ca5cd06d7de41130f3e62b3da7d52ce1d1ab01980d-runc.mz3w8M.mount: Deactivated successfully.
Jul 2 00:02:21.261586 sshd[6393]: pam_faillock(sshd:auth): User unknown
Jul 2 00:02:21.270102 sshd[6184]: Postponed keyboard-interactive for invalid user centos from 183.220.241.197 port 57040 ssh2 [preauth]
Jul 2 00:02:21.931859 ntpd[2075]: Listen normally on 13 lxc_health [fe80::5031:c6ff:feb1:6d3e%14]:123
Jul 2 00:02:21.933757 ntpd[2075]: 2 Jul 00:02:21 ntpd[2075]: Listen normally on 13 lxc_health [fe80::5031:c6ff:feb1:6d3e%14]:123
Jul 2 00:02:21.981625 sshd[6393]: pam_unix(sshd:auth): check pass; user unknown
Jul 2 00:02:21.981775 sshd[6393]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=183.220.241.197
Jul 2 00:02:21.984195 sshd[6393]: pam_faillock(sshd:auth): User unknown
Jul 2 00:02:23.212617 kubelet[3674]: E0702 00:02:23.212351 3674 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46726->127.0.0.1:42171: write tcp 127.0.0.1:46726->127.0.0.1:42171: write: connection reset by peer
Jul 2 00:02:23.769779 sshd[6184]: PAM: Permission denied for illegal user centos from 183.220.241.197
Jul 2 00:02:23.771607 sshd[6184]: Failed keyboard-interactive/pam for invalid user centos from 183.220.241.197 port 57040 ssh2
Jul 2 00:02:24.474872 sshd[6184]: Connection closed by invalid user centos 183.220.241.197 port 57040 [preauth]
Jul 2 00:02:24.478219 systemd[1]: sshd@29-172.31.30.222:22-183.220.241.197:57040.service: Deactivated successfully.
Jul 2 00:02:25.555153 sshd[5517]: pam_unix(sshd:session): session closed for user core
Jul 2 00:02:25.563519 systemd-logind[2087]: Session 29 logged out. Waiting for processes to exit.
Jul 2 00:02:25.563824 systemd[1]: sshd@28-172.31.30.222:22-147.75.109.163:60138.service: Deactivated successfully.
Jul 2 00:02:25.581365 systemd[1]: session-29.scope: Deactivated successfully.
Jul 2 00:02:25.583959 systemd-logind[2087]: Removed session 29.
Jul 2 00:02:39.564486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bea2924c97bfa3b0f3f0c385bbad263fac92b32d45e05c74d6bae3ca390bdd97-rootfs.mount: Deactivated successfully.
Jul 2 00:02:39.614435 containerd[2116]: time="2024-07-02T00:02:39.614241400Z" level=info msg="shim disconnected" id=bea2924c97bfa3b0f3f0c385bbad263fac92b32d45e05c74d6bae3ca390bdd97 namespace=k8s.io
Jul 2 00:02:39.614435 containerd[2116]: time="2024-07-02T00:02:39.614361388Z" level=warning msg="cleaning up after shim disconnected" id=bea2924c97bfa3b0f3f0c385bbad263fac92b32d45e05c74d6bae3ca390bdd97 namespace=k8s.io
Jul 2 00:02:39.614435 containerd[2116]: time="2024-07-02T00:02:39.614383479Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:02:39.633892 containerd[2116]: time="2024-07-02T00:02:39.633825960Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:02:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 2 00:02:39.926118 kubelet[3674]: I0702 00:02:39.925968 3674 scope.go:117] "RemoveContainer" containerID="bea2924c97bfa3b0f3f0c385bbad263fac92b32d45e05c74d6bae3ca390bdd97"
Jul 2 00:02:39.930885 containerd[2116]: time="2024-07-02T00:02:39.930684061Z" level=info msg="CreateContainer within sandbox \"a903339053ce04d4e1c25fb8eec9bfc4a9f41dd2abaa0be9860e3d41d89b7e7f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 2 00:02:39.953097 containerd[2116]: time="2024-07-02T00:02:39.952874390Z" level=info msg="CreateContainer within sandbox \"a903339053ce04d4e1c25fb8eec9bfc4a9f41dd2abaa0be9860e3d41d89b7e7f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"eb04e84fdae79f213c58b19ff9f60ab0de9af95988dbce7a7100c60398fc475f\""
Jul 2 00:02:39.954943 containerd[2116]: time="2024-07-02T00:02:39.953813884Z" level=info msg="StartContainer for \"eb04e84fdae79f213c58b19ff9f60ab0de9af95988dbce7a7100c60398fc475f\""
Jul 2 00:02:40.076647 containerd[2116]: time="2024-07-02T00:02:40.076524690Z" level=info msg="StartContainer for \"eb04e84fdae79f213c58b19ff9f60ab0de9af95988dbce7a7100c60398fc475f\" returns successfully"
Jul 2 00:02:44.341821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58d91d1bb82359258c68c5d4dd39c1f289cd2fc562fb108512bd6328fea9962b-rootfs.mount: Deactivated successfully.
Jul 2 00:02:44.353688 containerd[2116]: time="2024-07-02T00:02:44.353561729Z" level=info msg="shim disconnected" id=58d91d1bb82359258c68c5d4dd39c1f289cd2fc562fb108512bd6328fea9962b namespace=k8s.io
Jul 2 00:02:44.353688 containerd[2116]: time="2024-07-02T00:02:44.353642301Z" level=warning msg="cleaning up after shim disconnected" id=58d91d1bb82359258c68c5d4dd39c1f289cd2fc562fb108512bd6328fea9962b namespace=k8s.io
Jul 2 00:02:44.353688 containerd[2116]: time="2024-07-02T00:02:44.353664212Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:02:44.949183 kubelet[3674]: I0702 00:02:44.949123 3674 scope.go:117] "RemoveContainer" containerID="58d91d1bb82359258c68c5d4dd39c1f289cd2fc562fb108512bd6328fea9962b"
Jul 2 00:02:44.953421 containerd[2116]: time="2024-07-02T00:02:44.953073023Z" level=info msg="CreateContainer within sandbox \"b290a4053a32f3b70982d14b10745884b3d56cd2848871e80a0776730ae4098b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 2 00:02:44.953599 kubelet[3674]: E0702 00:02:44.953385 3674 controller.go:193] "Failed to update lease" err="Put \"https://172.31.30.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-222?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 00:02:44.979209 containerd[2116]: time="2024-07-02T00:02:44.979102307Z" level=info msg="CreateContainer within sandbox \"b290a4053a32f3b70982d14b10745884b3d56cd2848871e80a0776730ae4098b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c46195e187d09195112730fb9320397f6d8656b00804d48f0f4332558d25f65d\""
Jul 2 00:02:44.980224 containerd[2116]: time="2024-07-02T00:02:44.980176412Z" level=info msg="StartContainer for \"c46195e187d09195112730fb9320397f6d8656b00804d48f0f4332558d25f65d\""
Jul 2 00:02:45.093863 containerd[2116]: time="2024-07-02T00:02:45.093771278Z" level=info msg="StartContainer for \"c46195e187d09195112730fb9320397f6d8656b00804d48f0f4332558d25f65d\" returns successfully"
Jul 2 00:02:54.954592 kubelet[3674]: E0702 00:02:54.954263 3674 controller.go:193] "Failed to update lease" err="Put \"https://172.31.30.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-222?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"