Jul 2 00:00:25.194742 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 2 00:00:25.194788 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 00:00:25.194812 kernel: KASLR disabled due to lack of seed
Jul 2 00:00:25.194829 kernel: efi: EFI v2.7 by EDK II
Jul 2 00:00:25.194845 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7852ee18
Jul 2 00:00:25.194861 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:00:25.194878 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 2 00:00:25.194894 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 2 00:00:25.194910 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 2 00:00:25.194925 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 2 00:00:25.194946 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 2 00:00:25.194962 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 2 00:00:25.194978 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 2 00:00:25.194994 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 2 00:00:25.195012 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 2 00:00:25.195033 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 2 00:00:25.195050 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 2 00:00:25.195066 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 2 00:00:25.195082 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 2 00:00:25.195098 kernel: printk: bootconsole [uart0] enabled
Jul 2 00:00:25.195115 kernel: NUMA: Failed to initialise from firmware
Jul 2 00:00:25.195131 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 2 00:00:25.195148 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jul 2 00:00:25.195163 kernel: Zone ranges:
Jul 2 00:00:25.195180 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 2 00:00:25.195196 kernel: DMA32 empty
Jul 2 00:00:25.195217 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 2 00:00:25.195233 kernel: Movable zone start for each node
Jul 2 00:00:25.195249 kernel: Early memory node ranges
Jul 2 00:00:25.195265 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 2 00:00:25.195281 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 2 00:00:25.195297 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 2 00:00:25.195313 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 2 00:00:25.195329 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 2 00:00:25.195346 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 2 00:00:25.195362 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 2 00:00:25.195378 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 2 00:00:25.195394 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 2 00:00:25.195415 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 2 00:00:25.195661 kernel: psci: probing for conduit method from ACPI.
Jul 2 00:00:25.195694 kernel: psci: PSCIv1.0 detected in firmware.
Jul 2 00:00:25.195713 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 00:00:25.195731 kernel: psci: Trusted OS migration not required
Jul 2 00:00:25.195754 kernel: psci: SMC Calling Convention v1.1
Jul 2 00:00:25.195772 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 00:00:25.195790 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 00:00:25.195808 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 2 00:00:25.195826 kernel: Detected PIPT I-cache on CPU0
Jul 2 00:00:25.195843 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 00:00:25.195861 kernel: CPU features: detected: Spectre-v2
Jul 2 00:00:25.195878 kernel: CPU features: detected: Spectre-v3a
Jul 2 00:00:25.195896 kernel: CPU features: detected: Spectre-BHB
Jul 2 00:00:25.195914 kernel: CPU features: detected: ARM erratum 1742098
Jul 2 00:00:25.195931 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 2 00:00:25.195953 kernel: alternatives: applying boot alternatives
Jul 2 00:00:25.195973 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295
Jul 2 00:00:25.195992 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:00:25.196009 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:00:25.196027 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:00:25.196044 kernel: Fallback order for Node 0: 0
Jul 2 00:00:25.196061 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jul 2 00:00:25.196078 kernel: Policy zone: Normal
Jul 2 00:00:25.196095 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:00:25.196112 kernel: software IO TLB: area num 2.
Jul 2 00:00:25.196130 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 2 00:00:25.196153 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved)
Jul 2 00:00:25.196171 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:00:25.196188 kernel: trace event string verifier disabled
Jul 2 00:00:25.196205 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:00:25.196223 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:00:25.196241 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:00:25.196258 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:00:25.196276 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:00:25.196293 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:00:25.196311 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:00:25.196328 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 00:00:25.196350 kernel: GICv3: 96 SPIs implemented
Jul 2 00:00:25.196367 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 00:00:25.196384 kernel: Root IRQ handler: gic_handle_irq
Jul 2 00:00:25.196401 kernel: GICv3: GICv3 features: 16 PPIs
Jul 2 00:00:25.196418 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 2 00:00:25.196582 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 2 00:00:25.196603 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 00:00:25.196621 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 00:00:25.196638 kernel: GICv3: using LPI property table @0x00000004000e0000
Jul 2 00:00:25.196655 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 2 00:00:25.196673 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Jul 2 00:00:25.196690 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:00:25.196714 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 2 00:00:25.196732 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 2 00:00:25.196750 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 2 00:00:25.196767 kernel: Console: colour dummy device 80x25
Jul 2 00:00:25.196785 kernel: printk: console [tty1] enabled
Jul 2 00:00:25.196802 kernel: ACPI: Core revision 20230628
Jul 2 00:00:25.196820 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 2 00:00:25.196838 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:00:25.196855 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:00:25.196873 kernel: SELinux: Initializing.
Jul 2 00:00:25.196895 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:00:25.196913 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:00:25.196931 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:00:25.196949 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:00:25.196966 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:00:25.196984 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:00:25.197002 kernel: Platform MSI: ITS@0x10080000 domain created
Jul 2 00:00:25.197019 kernel: PCI/MSI: ITS@0x10080000 domain created
Jul 2 00:00:25.197037 kernel: Remapping and enabling EFI services.
Jul 2 00:00:25.197059 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:00:25.197077 kernel: Detected PIPT I-cache on CPU1
Jul 2 00:00:25.197094 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 2 00:00:25.197112 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Jul 2 00:00:25.197129 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 2 00:00:25.197147 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:00:25.197164 kernel: SMP: Total of 2 processors activated.
Jul 2 00:00:25.197182 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 00:00:25.197199 kernel: CPU features: detected: 32-bit EL1 Support
Jul 2 00:00:25.197222 kernel: CPU features: detected: CRC32 instructions
Jul 2 00:00:25.197240 kernel: CPU: All CPU(s) started at EL1
Jul 2 00:00:25.197270 kernel: alternatives: applying system-wide alternatives
Jul 2 00:00:25.197294 kernel: devtmpfs: initialized
Jul 2 00:00:25.197313 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:00:25.197331 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:00:25.197349 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:00:25.197368 kernel: SMBIOS 3.0.0 present.
Jul 2 00:00:25.197386 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 2 00:00:25.197409 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:00:25.197462 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 00:00:25.197487 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 00:00:25.197506 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 00:00:25.197525 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:00:25.197543 kernel: audit: type=2000 audit(0.295:1): state=initialized audit_enabled=0 res=1
Jul 2 00:00:25.197561 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:00:25.197586 kernel: cpuidle: using governor menu
Jul 2 00:00:25.197605 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 00:00:25.197623 kernel: ASID allocator initialised with 65536 entries
Jul 2 00:00:25.197642 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:00:25.197660 kernel: Serial: AMBA PL011 UART driver
Jul 2 00:00:25.197678 kernel: Modules: 17600 pages in range for non-PLT usage
Jul 2 00:00:25.197696 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 00:00:25.197714 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:00:25.197732 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:00:25.197756 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 00:00:25.197774 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 00:00:25.197793 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:00:25.197811 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:00:25.197829 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 00:00:25.197848 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 00:00:25.197866 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:00:25.197884 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:00:25.197903 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:00:25.197926 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:00:25.197945 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:00:25.197963 kernel: ACPI: Interpreter enabled
Jul 2 00:00:25.197982 kernel: ACPI: Using GIC for interrupt routing
Jul 2 00:00:25.198000 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 00:00:25.198018 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 2 00:00:25.198316 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:00:25.198564 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 00:00:25.198775 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 00:00:25.198973 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 2 00:00:25.199169 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 2 00:00:25.199194 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 2 00:00:25.199213 kernel: acpiphp: Slot [1] registered
Jul 2 00:00:25.199232 kernel: acpiphp: Slot [2] registered
Jul 2 00:00:25.199250 kernel: acpiphp: Slot [3] registered
Jul 2 00:00:25.199268 kernel: acpiphp: Slot [4] registered
Jul 2 00:00:25.199286 kernel: acpiphp: Slot [5] registered
Jul 2 00:00:25.199310 kernel: acpiphp: Slot [6] registered
Jul 2 00:00:25.199328 kernel: acpiphp: Slot [7] registered
Jul 2 00:00:25.199346 kernel: acpiphp: Slot [8] registered
Jul 2 00:00:25.199364 kernel: acpiphp: Slot [9] registered
Jul 2 00:00:25.199382 kernel: acpiphp: Slot [10] registered
Jul 2 00:00:25.199401 kernel: acpiphp: Slot [11] registered
Jul 2 00:00:25.199419 kernel: acpiphp: Slot [12] registered
Jul 2 00:00:25.199461 kernel: acpiphp: Slot [13] registered
Jul 2 00:00:25.199482 kernel: acpiphp: Slot [14] registered
Jul 2 00:00:25.199507 kernel: acpiphp: Slot [15] registered
Jul 2 00:00:25.199526 kernel: acpiphp: Slot [16] registered
Jul 2 00:00:25.199544 kernel: acpiphp: Slot [17] registered
Jul 2 00:00:25.199580 kernel: acpiphp: Slot [18] registered
Jul 2 00:00:25.199936 kernel: acpiphp: Slot [19] registered
Jul 2 00:00:25.200200 kernel: acpiphp: Slot [20] registered
Jul 2 00:00:25.200222 kernel: acpiphp: Slot [21] registered
Jul 2 00:00:25.200241 kernel: acpiphp: Slot [22] registered
Jul 2 00:00:25.200259 kernel: acpiphp: Slot [23] registered
Jul 2 00:00:25.200277 kernel: acpiphp: Slot [24] registered
Jul 2 00:00:25.200302 kernel: acpiphp: Slot [25] registered
Jul 2 00:00:25.200320 kernel: acpiphp: Slot [26] registered
Jul 2 00:00:25.200338 kernel: acpiphp: Slot [27] registered
Jul 2 00:00:25.200356 kernel: acpiphp: Slot [28] registered
Jul 2 00:00:25.200374 kernel: acpiphp: Slot [29] registered
Jul 2 00:00:25.200392 kernel: acpiphp: Slot [30] registered
Jul 2 00:00:25.200411 kernel: acpiphp: Slot [31] registered
Jul 2 00:00:25.200535 kernel: PCI host bridge to bus 0000:00
Jul 2 00:00:25.200844 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 2 00:00:25.201035 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 00:00:25.201216 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 2 00:00:25.201395 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 2 00:00:25.201743 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jul 2 00:00:25.201979 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jul 2 00:00:25.202193 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jul 2 00:00:25.202423 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 2 00:00:25.204808 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jul 2 00:00:25.205018 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 2 00:00:25.205238 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 2 00:00:25.206570 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jul 2 00:00:25.206841 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 2 00:00:25.207047 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jul 2 00:00:25.207262 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 2 00:00:25.208527 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jul 2 00:00:25.208776 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jul 2 00:00:25.208986 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jul 2 00:00:25.209189 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jul 2 00:00:25.209408 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jul 2 00:00:25.211777 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 2 00:00:25.211992 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 00:00:25.212176 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 2 00:00:25.212202 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 00:00:25.212221 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 00:00:25.212240 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 00:00:25.212259 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 00:00:25.212278 kernel: iommu: Default domain type: Translated
Jul 2 00:00:25.212296 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 00:00:25.212321 kernel: efivars: Registered efivars operations
Jul 2 00:00:25.212339 kernel: vgaarb: loaded
Jul 2 00:00:25.212358 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 00:00:25.212376 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:00:25.212395 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:00:25.212413 kernel: pnp: PnP ACPI init
Jul 2 00:00:25.212652 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 2 00:00:25.212681 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 00:00:25.212706 kernel: NET: Registered PF_INET protocol family
Jul 2 00:00:25.212726 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:00:25.212745 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:00:25.212763 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:00:25.212782 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:00:25.212801 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 00:00:25.212819 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:00:25.212838 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:00:25.212856 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:00:25.212880 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:00:25.212899 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:00:25.212917 kernel: kvm [1]: HYP mode not available
Jul 2 00:00:25.212935 kernel: Initialise system trusted keyrings
Jul 2 00:00:25.212955 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:00:25.212973 kernel: Key type asymmetric registered
Jul 2 00:00:25.212991 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:00:25.213009 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 00:00:25.213028 kernel: io scheduler mq-deadline registered
Jul 2 00:00:25.213051 kernel: io scheduler kyber registered
Jul 2 00:00:25.213070 kernel: io scheduler bfq registered
Jul 2 00:00:25.213296 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 2 00:00:25.213323 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 00:00:25.213342 kernel: ACPI: button: Power Button [PWRB]
Jul 2 00:00:25.213361 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 2 00:00:25.213379 kernel: ACPI: button: Sleep Button [SLPB]
Jul 2 00:00:25.213398 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:00:25.213423 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 2 00:00:25.215892 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 2 00:00:25.215921 kernel: printk: console [ttyS0] disabled
Jul 2 00:00:25.215941 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 2 00:00:25.215960 kernel: printk: console [ttyS0] enabled
Jul 2 00:00:25.215978 kernel: printk: bootconsole [uart0] disabled
Jul 2 00:00:25.215996 kernel: thunder_xcv, ver 1.0
Jul 2 00:00:25.216015 kernel: thunder_bgx, ver 1.0
Jul 2 00:00:25.216034 kernel: nicpf, ver 1.0
Jul 2 00:00:25.216052 kernel: nicvf, ver 1.0
Jul 2 00:00:25.216285 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 00:00:25.216538 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T00:00:24 UTC (1719878424)
Jul 2 00:00:25.216566 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 00:00:25.216585 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jul 2 00:00:25.216604 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 00:00:25.216624 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 00:00:25.216642 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:00:25.216661 kernel: Segment Routing with IPv6
Jul 2 00:00:25.216689 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:00:25.216707 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:00:25.216726 kernel: Key type dns_resolver registered
Jul 2 00:00:25.216744 kernel: registered taskstats version 1
Jul 2 00:00:25.216763 kernel: Loading compiled-in X.509 certificates
Jul 2 00:00:25.216781 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 00:00:25.216799 kernel: Key type .fscrypt registered
Jul 2 00:00:25.216817 kernel: Key type fscrypt-provisioning registered
Jul 2 00:00:25.216835 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:00:25.216859 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:00:25.216877 kernel: ima: No architecture policies found
Jul 2 00:00:25.216896 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 00:00:25.216914 kernel: clk: Disabling unused clocks
Jul 2 00:00:25.216933 kernel: Freeing unused kernel memory: 39040K
Jul 2 00:00:25.216951 kernel: Run /init as init process
Jul 2 00:00:25.216970 kernel: with arguments:
Jul 2 00:00:25.216988 kernel: /init
Jul 2 00:00:25.217006 kernel: with environment:
Jul 2 00:00:25.217029 kernel: HOME=/
Jul 2 00:00:25.217047 kernel: TERM=linux
Jul 2 00:00:25.217066 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:00:25.217088 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:00:25.217111 systemd[1]: Detected virtualization amazon.
Jul 2 00:00:25.217132 systemd[1]: Detected architecture arm64.
Jul 2 00:00:25.217151 systemd[1]: Running in initrd.
Jul 2 00:00:25.217170 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:00:25.217195 systemd[1]: Hostname set to .
Jul 2 00:00:25.217215 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:00:25.217235 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:00:25.217255 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:00:25.217275 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:00:25.217296 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:00:25.217317 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:00:25.217343 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:00:25.217366 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:00:25.217390 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:00:25.217414 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:00:25.218570 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:00:25.218608 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:00:25.218629 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:00:25.218660 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:00:25.218681 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:00:25.218701 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:00:25.218721 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:00:25.218743 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:00:25.218764 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:00:25.218784 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:00:25.218804 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:00:25.218825 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:00:25.218851 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:00:25.218871 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:00:25.218891 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:00:25.218911 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:00:25.218932 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:00:25.218952 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:00:25.218974 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:00:25.218996 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:00:25.219021 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:00:25.219043 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:00:25.219063 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:00:25.219084 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:00:25.219107 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:00:25.219193 systemd-journald[250]: Collecting audit messages is disabled.
Jul 2 00:00:25.219240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:00:25.219262 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:00:25.219283 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:00:25.219308 systemd-journald[250]: Journal started
Jul 2 00:00:25.219347 systemd-journald[250]: Runtime Journal (/run/log/journal/ec21f7cf5bc58352b29d70f6a8025a81) is 8.0M, max 75.3M, 67.3M free.
Jul 2 00:00:25.168501 systemd-modules-load[251]: Inserted module 'overlay'
Jul 2 00:00:25.223046 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:00:25.227469 kernel: Bridge firewalling registered
Jul 2 00:00:25.228506 systemd-modules-load[251]: Inserted module 'br_netfilter'
Jul 2 00:00:25.232931 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:00:25.239513 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:00:25.259281 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:00:25.264814 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:00:25.282486 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:00:25.293524 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:00:25.320705 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:00:25.325490 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:00:25.340495 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:00:25.353772 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:00:25.365752 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:00:25.378584 dracut-cmdline[288]: dracut-dracut-053
Jul 2 00:00:25.385290 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295
Jul 2 00:00:25.453036 systemd-resolved[292]: Positive Trust Anchors:
Jul 2 00:00:25.453071 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:00:25.453133 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:00:25.532457 kernel: SCSI subsystem initialized
Jul 2 00:00:25.539468 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:00:25.552470 kernel: iscsi: registered transport (tcp)
Jul 2 00:00:25.575471 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:00:25.575542 kernel: QLogic iSCSI HBA Driver
Jul 2 00:00:25.670463 kernel: random: crng init done
Jul 2 00:00:25.670833 systemd-resolved[292]: Defaulting to hostname 'linux'.
Jul 2 00:00:25.674143 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:00:25.691843 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:00:25.700519 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:00:25.709810 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:00:25.753332 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:00:25.753482 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:00:25.753512 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:00:25.821494 kernel: raid6: neonx8 gen() 6691 MB/s
Jul 2 00:00:25.838468 kernel: raid6: neonx4 gen() 6546 MB/s
Jul 2 00:00:25.855464 kernel: raid6: neonx2 gen() 5463 MB/s
Jul 2 00:00:25.872465 kernel: raid6: neonx1 gen() 3962 MB/s
Jul 2 00:00:25.889464 kernel: raid6: int64x8 gen() 3820 MB/s
Jul 2 00:00:25.906463 kernel: raid6: int64x4 gen() 3719 MB/s
Jul 2 00:00:25.923461 kernel: raid6: int64x2 gen() 3580 MB/s
Jul 2 00:00:25.941156 kernel: raid6: int64x1 gen() 2768 MB/s
Jul 2 00:00:25.941198 kernel: raid6: using algorithm neonx8 gen() 6691 MB/s
Jul 2 00:00:25.959277 kernel: raid6: .... xor() 4883 MB/s, rmw enabled
Jul 2 00:00:25.959312 kernel: raid6: using neon recovery algorithm
Jul 2 00:00:25.967467 kernel: xor: measuring software checksum speed
Jul 2 00:00:25.968461 kernel: 8regs : 11021 MB/sec
Jul 2 00:00:25.970460 kernel: 32regs : 11937 MB/sec
Jul 2 00:00:25.973075 kernel: arm64_neon : 9537 MB/sec
Jul 2 00:00:25.973109 kernel: xor: using function: 32regs (11937 MB/sec)
Jul 2 00:00:26.058476 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:00:26.077598 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:00:26.089771 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:00:26.128947 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jul 2 00:00:26.137515 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:00:26.157834 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:00:26.187623 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Jul 2 00:00:26.245485 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:00:26.256797 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:00:26.381537 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:00:26.405746 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:00:26.467288 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:00:26.473296 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:00:26.478594 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:00:26.495885 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:00:26.518713 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:00:26.567611 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:00:26.579034 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 00:00:26.579097 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 2 00:00:26.599084 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 2 00:00:26.599366 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 2 00:00:26.599667 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:44:be:82:44:8d
Jul 2 00:00:26.603869 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:00:26.605944 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:00:26.624054 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:00:26.626591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:00:26.629397 (udev-worker)[534]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:00:26.639914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:00:26.645792 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:00:26.667212 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 2 00:00:26.667274 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 2 00:00:26.674042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:00:26.687469 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 2 00:00:26.692453 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:00:26.692522 kernel: GPT:9289727 != 16777215
Jul 2 00:00:26.692556 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:00:26.693050 kernel: GPT:9289727 != 16777215
Jul 2 00:00:26.693701 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:00:26.695470 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:00:26.699598 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:00:26.718845 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:00:26.748730 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:00:26.834481 kernel: BTRFS: device fsid 2e7aff7f-b51e-4094-8f16-54690a62fb17 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (525)
Jul 2 00:00:26.856492 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (516)
Jul 2 00:00:26.868021 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 2 00:00:26.944084 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 2 00:00:26.961521 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 2 00:00:26.976395 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 2 00:00:26.981050 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 2 00:00:27.001790 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:00:27.015493 disk-uuid[661]: Primary Header is updated.
Jul 2 00:00:27.015493 disk-uuid[661]: Secondary Entries is updated.
Jul 2 00:00:27.015493 disk-uuid[661]: Secondary Header is updated.
Jul 2 00:00:27.026476 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:00:27.035470 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:00:28.044667 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:00:28.046499 disk-uuid[662]: The operation has completed successfully.
Jul 2 00:00:28.215778 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:00:28.217632 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:00:28.273735 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:00:28.289135 sh[920]: Success
Jul 2 00:00:28.307473 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 00:00:28.408085 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:00:28.420641 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:00:28.430537 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:00:28.463868 kernel: BTRFS info (device dm-0): first mount of filesystem 2e7aff7f-b51e-4094-8f16-54690a62fb17
Jul 2 00:00:28.463930 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:00:28.465562 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:00:28.466763 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:00:28.467789 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:00:28.551460 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 2 00:00:28.590322 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:00:28.594093 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:00:28.606696 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:00:28.613892 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:00:28.631814 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:00:28.631883 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:00:28.631920 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:00:28.640400 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:00:28.657973 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:00:28.662506 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:00:28.687513 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:00:28.700881 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:00:28.787300 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:00:28.800878 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:00:28.859831 systemd-networkd[1112]: lo: Link UP
Jul 2 00:00:28.859847 systemd-networkd[1112]: lo: Gained carrier
Jul 2 00:00:28.862792 systemd-networkd[1112]: Enumeration completed
Jul 2 00:00:28.863232 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:00:28.864063 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:00:28.864070 systemd-networkd[1112]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:00:28.868814 systemd-networkd[1112]: eth0: Link UP
Jul 2 00:00:28.868821 systemd-networkd[1112]: eth0: Gained carrier
Jul 2 00:00:28.868840 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:00:28.872382 systemd[1]: Reached target network.target - Network.
Jul 2 00:00:28.910519 systemd-networkd[1112]: eth0: DHCPv4 address 172.31.19.149/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 2 00:00:29.002006 ignition[1037]: Ignition 2.18.0
Jul 2 00:00:29.002032 ignition[1037]: Stage: fetch-offline
Jul 2 00:00:29.002969 ignition[1037]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:00:29.002999 ignition[1037]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:00:29.003561 ignition[1037]: Ignition finished successfully
Jul 2 00:00:29.011612 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:00:29.026851 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 00:00:29.050059 ignition[1123]: Ignition 2.18.0
Jul 2 00:00:29.050081 ignition[1123]: Stage: fetch
Jul 2 00:00:29.051196 ignition[1123]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:00:29.051222 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:00:29.051352 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:00:29.060908 ignition[1123]: PUT result: OK
Jul 2 00:00:29.063450 ignition[1123]: parsed url from cmdline: ""
Jul 2 00:00:29.063474 ignition[1123]: no config URL provided
Jul 2 00:00:29.063490 ignition[1123]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:00:29.063515 ignition[1123]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:00:29.063564 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:00:29.066871 ignition[1123]: PUT result: OK
Jul 2 00:00:29.066949 ignition[1123]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 2 00:00:29.068966 ignition[1123]: GET result: OK
Jul 2 00:00:29.078669 unknown[1123]: fetched base config from "system"
Jul 2 00:00:29.069115 ignition[1123]: parsing config with SHA512: e08f82ccedd2e7527ffd9fda943f246f4c2a805c18ca7a8c4330687cc7268c9763f0e1d53c674e33c82666d74c55139641c659c71e591b93c5ca25dbdeb257b3
Jul 2 00:00:29.078686 unknown[1123]: fetched base config from "system"
Jul 2 00:00:29.080096 ignition[1123]: fetch: fetch complete
Jul 2 00:00:29.078700 unknown[1123]: fetched user config from "aws"
Jul 2 00:00:29.080109 ignition[1123]: fetch: fetch passed
Jul 2 00:00:29.087850 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 00:00:29.080200 ignition[1123]: Ignition finished successfully
Jul 2 00:00:29.100728 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:00:29.129884 ignition[1130]: Ignition 2.18.0
Jul 2 00:00:29.130575 ignition[1130]: Stage: kargs
Jul 2 00:00:29.131216 ignition[1130]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:00:29.131240 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:00:29.131380 ignition[1130]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:00:29.134291 ignition[1130]: PUT result: OK
Jul 2 00:00:29.143070 ignition[1130]: kargs: kargs passed
Jul 2 00:00:29.143355 ignition[1130]: Ignition finished successfully
Jul 2 00:00:29.154010 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:00:29.168847 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:00:29.192318 ignition[1137]: Ignition 2.18.0
Jul 2 00:00:29.192857 ignition[1137]: Stage: disks
Jul 2 00:00:29.193661 ignition[1137]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:00:29.193686 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:00:29.193816 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:00:29.196189 ignition[1137]: PUT result: OK
Jul 2 00:00:29.204894 ignition[1137]: disks: disks passed
Jul 2 00:00:29.205001 ignition[1137]: Ignition finished successfully
Jul 2 00:00:29.211153 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:00:29.215546 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:00:29.217615 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:00:29.219845 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:00:29.223097 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:00:29.223173 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:00:29.239731 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:00:29.288809 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:00:29.295709 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:00:29.306681 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:00:29.395501 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 95038baa-e9f1-4207-86a5-38a4ce3cff7d r/w with ordered data mode. Quota mode: none.
Jul 2 00:00:29.396635 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:00:29.400106 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:00:29.412616 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:00:29.417515 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:00:29.423174 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 00:00:29.423259 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:00:29.423309 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:00:29.444471 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1165)
Jul 2 00:00:29.449770 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:00:29.449836 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:00:29.449864 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:00:29.452400 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:00:29.459483 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:00:29.468821 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:00:29.474926 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:00:29.853931 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:00:29.862567 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:00:29.870211 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:00:29.878160 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:00:30.176039 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:00:30.185643 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:00:30.190734 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:00:30.226038 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:00:30.227979 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:00:30.261984 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:00:30.279075 ignition[1281]: INFO : Ignition 2.18.0
Jul 2 00:00:30.279075 ignition[1281]: INFO : Stage: mount
Jul 2 00:00:30.282344 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:00:30.282344 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:00:30.282344 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:00:30.289027 ignition[1281]: INFO : PUT result: OK
Jul 2 00:00:30.292846 ignition[1281]: INFO : mount: mount passed
Jul 2 00:00:30.294894 ignition[1281]: INFO : Ignition finished successfully
Jul 2 00:00:30.298629 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:00:30.308637 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:00:30.329763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:00:30.353474 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1292)
Jul 2 00:00:30.357310 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:00:30.357357 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:00:30.358450 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:00:30.363488 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:00:30.366626 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:00:30.399405 ignition[1309]: INFO : Ignition 2.18.0
Jul 2 00:00:30.399405 ignition[1309]: INFO : Stage: files
Jul 2 00:00:30.402585 ignition[1309]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:00:30.402585 ignition[1309]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:00:30.402585 ignition[1309]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:00:30.408997 ignition[1309]: INFO : PUT result: OK
Jul 2 00:00:30.413169 ignition[1309]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:00:30.416263 ignition[1309]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:00:30.416263 ignition[1309]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:00:30.451034 ignition[1309]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:00:30.453543 ignition[1309]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:00:30.453543 ignition[1309]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:00:30.452727 unknown[1309]: wrote ssh authorized keys file for user: core
Jul 2 00:00:30.462945 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 00:00:30.466094 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 00:00:30.466094 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:00:30.466094 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:00:30.466094 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:00:30.466094 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:00:30.466094 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 00:00:30.466094 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 00:00:30.466094 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 00:00:30.466094 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Jul 2 00:00:30.540603 systemd-networkd[1112]: eth0: Gained IPv6LL
Jul 2 00:00:30.810198 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Jul 2 00:00:31.211702 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 00:00:31.211702 ignition[1309]: INFO : files: op(8): [started] processing unit "containerd.service"
Jul 2 00:00:31.219960 ignition[1309]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:00:31.219960 ignition[1309]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:00:31.219960 ignition[1309]: INFO : files: op(8): [finished] processing unit "containerd.service"
Jul 2 00:00:31.219960 ignition[1309]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:00:31.219960 ignition[1309]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:00:31.219960 ignition[1309]: INFO : files: files passed
Jul 2 00:00:31.219960 ignition[1309]: INFO : Ignition finished successfully
Jul 2 00:00:31.241159 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:00:31.255737 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:00:31.262719 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:00:31.267691 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:00:31.268675 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:00:31.302873 initrd-setup-root-after-ignition[1338]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:00:31.302873 initrd-setup-root-after-ignition[1338]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:00:31.309071 initrd-setup-root-after-ignition[1342]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:00:31.314800 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:00:31.321557 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:00:31.336673 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:00:31.385638 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:00:31.388084 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:00:31.393018 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:00:31.395821 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:00:31.399402 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:00:31.401066 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:00:31.437510 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:00:31.457870 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:00:31.481052 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:00:31.485525 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:00:31.489928 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:00:31.492596 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:00:31.492831 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:00:31.499680 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:00:31.501888 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:00:31.506663 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:00:31.508794 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:00:31.511721 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:00:31.515036 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:00:31.517751 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:00:31.527252 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:00:31.530289 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:00:31.535201 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:00:31.536783 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:00:31.537009 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:00:31.544187 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:00:31.546482 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:00:31.552409 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:00:31.554647 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:00:31.559372 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:00:31.559639 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:00:31.561992 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:00:31.562209 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:00:31.564735 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:00:31.564931 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:00:31.580287 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:00:31.588971 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:00:31.589258 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:00:31.601644 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:00:31.605131 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:00:31.607975 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:00:31.612515 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:00:31.614330 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:00:31.631958 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:00:31.632154 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:00:31.654033 ignition[1362]: INFO : Ignition 2.18.0
Jul 2 00:00:31.656229 ignition[1362]: INFO : Stage: umount
Jul 2 00:00:31.656229 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:00:31.656229 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:00:31.656229 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:00:31.664750 ignition[1362]: INFO : PUT result: OK
Jul 2 00:00:31.666745 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:00:31.670285 ignition[1362]: INFO : umount: umount passed
Jul 2 00:00:31.671897 ignition[1362]: INFO : Ignition finished successfully
Jul 2 00:00:31.675348 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:00:31.677205 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:00:31.679527 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:00:31.680650 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:00:31.683119 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:00:31.683207 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:00:31.685711 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 00:00:31.685895 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 00:00:31.688506 systemd[1]: Stopped target network.target - Network.
Jul 2 00:00:31.690159 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:00:31.690243 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:00:31.693057 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:00:31.695565 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:00:31.710559 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:00:31.712864 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:00:31.716286 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:00:31.718084 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:00:31.718163 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:00:31.720009 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:00:31.720080 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:00:31.722363 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:00:31.722460 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:00:31.724266 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:00:31.724343 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:00:31.726481 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:00:31.729014 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:00:31.754562 systemd-networkd[1112]: eth0: DHCPv6 lease lost
Jul 2 00:00:31.760277 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:00:31.760520 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:00:31.765419 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:00:31.766925 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:00:31.775869 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:00:31.776012 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:00:31.789704 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:00:31.791638 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:00:31.791749 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:00:31.794491 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:00:31.794595 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:00:31.796635 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:00:31.796721 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:00:31.798770 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:00:31.798851 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:00:31.802175 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:00:31.834415 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:00:31.841402 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:00:31.847790 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:00:31.849083 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:00:31.855209 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:00:31.855338 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:00:31.856509 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:00:31.856584 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:00:31.856695 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:00:31.856789 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:00:31.869895 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:00:31.869995 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:00:31.875746 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:00:31.875844 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:00:31.887759 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:00:31.892927 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:00:31.893060 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:00:31.896057 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:00:31.896153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:00:31.923697 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:00:31.924079 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:00:32.548828 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:00:32.549098 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:00:32.552205 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:00:32.555381 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:00:32.555685 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:00:32.574810 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:00:32.611988 systemd[1]: Switching root.
Jul 2 00:00:32.648735 systemd-journald[250]: Journal stopped
Jul 2 00:00:36.437681 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:00:36.437816 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:00:36.437862 kernel: SELinux: policy capability open_perms=1
Jul 2 00:00:36.437893 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:00:36.437925 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:00:36.437954 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:00:36.437985 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:00:36.438015 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:00:36.438055 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:00:36.438082 kernel: audit: type=1403 audit(1719878434.483:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:00:36.438120 systemd[1]: Successfully loaded SELinux policy in 54.303ms.
Jul 2 00:00:36.438166 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.044ms.
Jul 2 00:00:36.438201 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:00:36.438238 systemd[1]: Detected virtualization amazon.
Jul 2 00:00:36.438268 systemd[1]: Detected architecture arm64.
Jul 2 00:00:36.438312 systemd[1]: Detected first boot.
Jul 2 00:00:36.438346 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:00:36.438377 zram_generator::config[1422]: No configuration found.
Jul 2 00:00:36.438412 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:00:36.438992 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:00:36.439039 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 2 00:00:36.439073 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:00:36.439105 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:00:36.439137 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:00:36.439176 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:00:36.439211 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:00:36.439243 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:00:36.439277 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:00:36.439309 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:00:36.439341 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:00:36.439370 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:00:36.439402 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:00:36.439485 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:00:36.439528 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:00:36.439562 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:00:36.439593 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:00:36.439624 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:00:36.439655 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:00:36.439688 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:00:36.439719 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:00:36.439749 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:00:36.439786 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:00:36.439827 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:00:36.439857 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:00:36.439889 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:00:36.439921 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:00:36.439951 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:00:36.439980 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:00:36.440012 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:00:36.440044 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:00:36.440079 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:00:36.440111 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:00:36.440143 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:00:36.440172 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:00:36.440201 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:00:36.440230 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:00:36.440263 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:00:36.440296 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:00:36.440331 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:00:36.440364 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:00:36.440394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:00:36.440425 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:00:36.440599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:00:36.442492 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:00:36.442546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:00:36.442577 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:00:36.442607 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 2 00:00:36.442649 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 2 00:00:36.442680 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:00:36.442709 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:00:36.442738 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:00:36.442770 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:00:36.442798 kernel: loop: module loaded
Jul 2 00:00:36.442830 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:00:36.442862 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:00:36.442891 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:00:36.442926 kernel: fuse: init (API version 7.39)
Jul 2 00:00:36.442955 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:00:36.442986 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:00:36.443015 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:00:36.443044 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:00:36.443073 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:00:36.443102 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:00:36.443131 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:00:36.443165 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:00:36.443194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:00:36.443223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:00:36.443253 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:00:36.443283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:00:36.443317 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:00:36.443346 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:00:36.443378 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:00:36.443407 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:00:36.443454 kernel: ACPI: bus type drm_connector registered
Jul 2 00:00:36.443507 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:00:36.443540 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:00:36.443570 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:00:36.443610 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:00:36.443646 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:00:36.443729 systemd-journald[1525]: Collecting audit messages is disabled.
Jul 2 00:00:36.443783 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:00:36.443812 systemd-journald[1525]: Journal started
Jul 2 00:00:36.443861 systemd-journald[1525]: Runtime Journal (/run/log/journal/ec21f7cf5bc58352b29d70f6a8025a81) is 8.0M, max 75.3M, 67.3M free.
Jul 2 00:00:36.455397 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:00:36.470465 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:00:36.480555 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:00:36.501180 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:00:36.507422 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:00:36.518515 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:00:36.523483 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:00:36.541659 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:00:36.561587 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:00:36.574244 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:00:36.580874 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:00:36.583823 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:00:36.592338 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:00:36.595313 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:00:36.641418 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:00:36.658721 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:00:36.676737 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:00:36.680063 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:00:36.695740 systemd-tmpfiles[1555]: ACLs are not supported, ignoring.
Jul 2 00:00:36.697064 systemd-tmpfiles[1555]: ACLs are not supported, ignoring.
Jul 2 00:00:36.725294 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:00:36.735605 systemd-journald[1525]: Time spent on flushing to /var/log/journal/ec21f7cf5bc58352b29d70f6a8025a81 is 37.383ms for 885 entries.
Jul 2 00:00:36.735605 systemd-journald[1525]: System Journal (/var/log/journal/ec21f7cf5bc58352b29d70f6a8025a81) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:00:36.783388 systemd-journald[1525]: Received client request to flush runtime journal.
Jul 2 00:00:36.739950 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:00:36.744299 udevadm[1584]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 00:00:36.789169 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:00:36.817131 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:00:36.832736 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:00:36.885821 systemd-tmpfiles[1600]: ACLs are not supported, ignoring.
Jul 2 00:00:36.885862 systemd-tmpfiles[1600]: ACLs are not supported, ignoring.
Jul 2 00:00:36.895000 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:00:37.636594 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:00:37.649117 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:00:37.707304 systemd-udevd[1606]: Using default interface naming scheme 'v255'.
Jul 2 00:00:37.760336 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:00:37.773811 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:00:37.809247 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:00:37.864721 (udev-worker)[1627]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:00:37.880805 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jul 2 00:00:37.919529 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1608)
Jul 2 00:00:38.000829 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:00:38.161056 systemd-networkd[1609]: lo: Link UP
Jul 2 00:00:38.161079 systemd-networkd[1609]: lo: Gained carrier
Jul 2 00:00:38.163893 systemd-networkd[1609]: Enumeration completed
Jul 2 00:00:38.164105 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:00:38.168285 systemd-networkd[1609]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:00:38.168308 systemd-networkd[1609]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:00:38.171566 systemd-networkd[1609]: eth0: Link UP
Jul 2 00:00:38.171933 systemd-networkd[1609]: eth0: Gained carrier
Jul 2 00:00:38.171968 systemd-networkd[1609]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:00:38.174985 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:00:38.178981 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1626)
Jul 2 00:00:38.181559 systemd-networkd[1609]: eth0: DHCPv4 address 172.31.19.149/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 2 00:00:38.241897 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:00:38.391697 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:00:38.433326 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 2 00:00:38.442720 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:00:38.483997 lvm[1731]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:00:38.523848 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:00:38.526982 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:00:38.535996 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:00:38.558455 lvm[1735]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:00:38.562248 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:00:38.597211 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:00:38.600722 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:00:38.603291 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:00:38.603840 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:00:38.606751 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:00:38.610405 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:00:38.618830 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:00:38.627861 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:00:38.630845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:00:38.634751 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:00:38.650837 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:00:38.659779 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:00:38.663730 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:00:38.704289 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:00:38.715146 kernel: loop0: detected capacity change from 0 to 51896
Jul 2 00:00:38.715227 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:00:38.742875 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:00:38.744242 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:00:38.822534 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:00:38.861520 kernel: loop1: detected capacity change from 0 to 113672
Jul 2 00:00:38.963490 kernel: loop2: detected capacity change from 0 to 59672
Jul 2 00:00:39.067483 kernel: loop3: detected capacity change from 0 to 193208
Jul 2 00:00:39.109476 kernel: loop4: detected capacity change from 0 to 51896
Jul 2 00:00:39.122490 kernel: loop5: detected capacity change from 0 to 113672
Jul 2 00:00:39.136465 kernel: loop6: detected capacity change from 0 to 59672
Jul 2 00:00:39.149498 kernel: loop7: detected capacity change from 0 to 193208
Jul 2 00:00:39.166066 (sd-merge)[1760]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 2 00:00:39.167084 (sd-merge)[1760]: Merged extensions into '/usr'.
Jul 2 00:00:39.176152 systemd[1]: Reloading requested from client PID 1746 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:00:39.176186 systemd[1]: Reloading...
Jul 2 00:00:39.244596 systemd-networkd[1609]: eth0: Gained IPv6LL
Jul 2 00:00:39.303600 zram_generator::config[1787]: No configuration found.
Jul 2 00:00:39.597069 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:00:39.739560 systemd[1]: Reloading finished in 562 ms.
Jul 2 00:00:39.745732 ldconfig[1742]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:00:39.764641 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 00:00:39.767894 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:00:39.770850 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:00:39.785728 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:00:39.801862 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:00:39.815981 systemd[1]: Reloading requested from client PID 1847 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:00:39.816167 systemd[1]: Reloading...
Jul 2 00:00:39.839988 systemd-tmpfiles[1848]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:00:39.840660 systemd-tmpfiles[1848]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:00:39.843032 systemd-tmpfiles[1848]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:00:39.844175 systemd-tmpfiles[1848]: ACLs are not supported, ignoring.
Jul 2 00:00:39.844412 systemd-tmpfiles[1848]: ACLs are not supported, ignoring.
Jul 2 00:00:39.852345 systemd-tmpfiles[1848]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:00:39.852376 systemd-tmpfiles[1848]: Skipping /boot
Jul 2 00:00:39.876288 systemd-tmpfiles[1848]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:00:39.876320 systemd-tmpfiles[1848]: Skipping /boot
Jul 2 00:00:39.957493 zram_generator::config[1875]: No configuration found.
Jul 2 00:00:40.198044 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:00:40.339077 systemd[1]: Reloading finished in 521 ms.
Jul 2 00:00:40.368473 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:00:40.400942 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:00:40.408757 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:00:40.420859 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:00:40.428708 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:00:40.440985 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:00:40.467260 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:00:40.479115 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:00:40.498018 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:00:40.514793 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:00:40.518812 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:00:40.527476 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:00:40.527867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:00:40.532584 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:00:40.555886 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:00:40.574675 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:00:40.578861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:00:40.587695 augenrules[1971]: No rules
Jul 2 00:00:40.592658 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:00:40.604922 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:00:40.615413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:00:40.615832 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:00:40.619497 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:00:40.619857 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:00:40.623247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:00:40.623745 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:00:40.646947 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:00:40.665536 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:00:40.688248 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:00:40.698792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:00:40.706912 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:00:40.721772 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:00:40.727654 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:00:40.735843 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:00:40.738075 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:00:40.738202 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 00:00:40.741648 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:00:40.744943 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:00:40.762976 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:00:40.763384 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:00:40.772993 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:00:40.773395 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:00:40.785837 systemd-resolved[1942]: Positive Trust Anchors:
Jul 2 00:00:40.785906 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:00:40.786149 systemd-resolved[1942]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:00:40.786213 systemd-resolved[1942]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:00:40.791826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:00:40.795846 systemd-resolved[1942]: Defaulting to hostname 'linux'.
Jul 2 00:00:40.800692 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:00:40.803106 systemd[1]: Reached target network.target - Network.
Jul 2 00:00:40.804771 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 00:00:40.806833 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:00:40.809057 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:00:40.816144 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:00:40.816808 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:00:40.819137 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:00:40.821296 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:00:40.825761 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:00:40.829064 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:00:40.832306 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:00:40.834595 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:00:40.836788 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:00:40.836843 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:00:40.838417 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:00:40.841624 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:00:40.846549 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:00:40.851028 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:00:40.853694 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:00:40.857318 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:00:40.859349 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:00:40.861105 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:00:40.863074 systemd[1]: System is tainted: cgroupsv1 Jul 2 00:00:40.863150 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:00:40.863195 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:00:40.874823 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 00:00:40.882246 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 00:00:40.896745 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 00:00:40.902735 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 00:00:40.909410 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 00:00:40.913681 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 00:00:40.931486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:00:40.950005 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 00:00:40.968163 systemd[1]: Started ntpd.service - Network Time Service. Jul 2 00:00:40.972856 jq[2010]: false Jul 2 00:00:40.986769 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 00:00:41.008632 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 2 00:00:41.034253 dbus-daemon[2009]: [system] SELinux support is enabled Jul 2 00:00:41.035878 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jul 2 00:00:41.039406 dbus-daemon[2009]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1609 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 00:00:41.051753 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 00:00:41.074996 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 00:00:41.080554 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:00:41.086638 extend-filesystems[2011]: Found loop4 Jul 2 00:00:41.086638 extend-filesystems[2011]: Found loop5 Jul 2 00:00:41.086638 extend-filesystems[2011]: Found loop6 Jul 2 00:00:41.086638 extend-filesystems[2011]: Found loop7 Jul 2 00:00:41.086638 extend-filesystems[2011]: Found nvme0n1 Jul 2 00:00:41.086638 extend-filesystems[2011]: Found nvme0n1p1 Jul 2 00:00:41.086638 extend-filesystems[2011]: Found nvme0n1p2 Jul 2 00:00:41.125681 extend-filesystems[2011]: Found nvme0n1p3 Jul 2 00:00:41.125681 extend-filesystems[2011]: Found usr Jul 2 00:00:41.125681 extend-filesystems[2011]: Found nvme0n1p4 Jul 2 00:00:41.125681 extend-filesystems[2011]: Found nvme0n1p6 Jul 2 00:00:41.125681 extend-filesystems[2011]: Found nvme0n1p7 Jul 2 00:00:41.125681 extend-filesystems[2011]: Found nvme0n1p9 Jul 2 00:00:41.125681 extend-filesystems[2011]: Checking size of /dev/nvme0n1p9 Jul 2 00:00:41.100139 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 00:00:41.134821 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 00:00:41.163314 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 00:00:41.179253 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jul 2 00:00:41.180626 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 00:00:41.180626 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 00:00:41.180626 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: ---------------------------------------------------- Jul 2 00:00:41.180626 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: ntp-4 is maintained by Network Time Foundation, Jul 2 00:00:41.180626 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 00:00:41.180626 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: corporation. Support and training for ntp-4 are Jul 2 00:00:41.180626 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: available at https://www.nwtime.org/support Jul 2 00:00:41.180626 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: ---------------------------------------------------- Jul 2 00:00:41.179728 ntpd[2016]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 00:00:41.179825 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jul 2 00:00:41.207385 jq[2039]: true Jul 2 00:00:41.179774 ntpd[2016]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 00:00:41.211401 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: proto: precision = 0.096 usec (-23) Jul 2 00:00:41.211401 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: basedate set to 2024-06-19 Jul 2 00:00:41.211401 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: gps base set to 2024-06-23 (week 2320) Jul 2 00:00:41.211401 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 00:00:41.211401 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 00:00:41.211401 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 00:00:41.211401 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: Listen normally on 3 eth0 172.31.19.149:123 Jul 2 00:00:41.211401 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: Listen normally on 4 lo [::1]:123 Jul 2 00:00:41.211401 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: Listen normally on 5 eth0 [fe80::444:beff:fe82:448d%2]:123 Jul 2 00:00:41.211401 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: Listening on routing socket on fd #22 for interface updates Jul 2 00:00:41.211401 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:00:41.211401 ntpd[2016]: 2 Jul 00:00:41 ntpd[2016]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:00:41.197126 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:00:41.179796 ntpd[2016]: ---------------------------------------------------- Jul 2 00:00:41.200968 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 00:00:41.179815 ntpd[2016]: ntp-4 is maintained by Network Time Foundation, Jul 2 00:00:41.179834 ntpd[2016]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 00:00:41.179853 ntpd[2016]: corporation. 
Support and training for ntp-4 are Jul 2 00:00:41.179872 ntpd[2016]: available at https://www.nwtime.org/support Jul 2 00:00:41.229770 coreos-metadata[2007]: Jul 02 00:00:41.227 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 00:00:41.293294 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 2 00:00:41.179889 ntpd[2016]: ---------------------------------------------------- Jul 2 00:00:41.235724 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.234 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.243 INFO Fetch successful Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.243 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.244 INFO Fetch successful Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.244 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.246 INFO Fetch successful Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.246 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.251 INFO Fetch successful Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.252 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.253 INFO Fetch failed with 404: resource not found Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.253 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.253 INFO Fetch successful Jul 2 00:00:41.295958 
coreos-metadata[2007]: Jul 02 00:00:41.253 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.255 INFO Fetch successful Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.255 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.260 INFO Fetch successful Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.260 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.264 INFO Fetch successful Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.264 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 2 00:00:41.295958 coreos-metadata[2007]: Jul 02 00:00:41.266 INFO Fetch successful Jul 2 00:00:41.296916 extend-filesystems[2011]: Resized partition /dev/nvme0n1p9 Jul 2 00:00:41.313080 update_engine[2035]: I0702 00:00:41.272071 2035 main.cc:92] Flatcar Update Engine starting Jul 2 00:00:41.313080 update_engine[2035]: I0702 00:00:41.279181 2035 update_check_scheduler.cc:74] Next update check in 7m40s Jul 2 00:00:41.183942 ntpd[2016]: proto: precision = 0.096 usec (-23) Jul 2 00:00:41.236239 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 00:00:41.332229 extend-filesystems[2053]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 00:00:41.184364 ntpd[2016]: basedate set to 2024-06-19 Jul 2 00:00:41.297795 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 2 00:00:41.184393 ntpd[2016]: gps base set to 2024-06-23 (week 2320) Jul 2 00:00:41.187006 ntpd[2016]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 00:00:41.187088 ntpd[2016]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 00:00:41.187362 ntpd[2016]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 00:00:41.187424 ntpd[2016]: Listen normally on 3 eth0 172.31.19.149:123 Jul 2 00:00:41.189599 ntpd[2016]: Listen normally on 4 lo [::1]:123 Jul 2 00:00:41.189680 ntpd[2016]: Listen normally on 5 eth0 [fe80::444:beff:fe82:448d%2]:123 Jul 2 00:00:41.189751 ntpd[2016]: Listening on routing socket on fd #22 for interface updates Jul 2 00:00:41.194176 ntpd[2016]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:00:41.194238 ntpd[2016]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:00:41.364389 (ntainerd)[2064]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 00:00:41.374605 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 2 00:00:41.402853 dbus-daemon[2009]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 00:00:41.405150 systemd[1]: Started update-engine.service - Update Engine. Jul 2 00:00:41.407736 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:00:41.423120 jq[2058]: true Jul 2 00:00:41.407794 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 00:00:41.410233 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:00:41.410270 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 2 00:00:41.414855 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:00:41.427984 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 00:00:41.441177 extend-filesystems[2053]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 2 00:00:41.441177 extend-filesystems[2053]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 00:00:41.441177 extend-filesystems[2053]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 2 00:00:41.455655 extend-filesystems[2011]: Resized filesystem in /dev/nvme0n1p9 Jul 2 00:00:41.454727 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:00:41.455274 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 00:00:41.569822 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 2 00:00:41.585835 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 2 00:00:41.589499 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 2 00:00:41.605942 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 2 00:00:41.608181 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 00:00:41.645841 bash[2105]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:00:41.639029 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 00:00:41.650834 systemd[1]: Starting sshkeys.service... Jul 2 00:00:41.707560 systemd-logind[2029]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 00:00:41.707617 systemd-logind[2029]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 2 00:00:41.712607 systemd-logind[2029]: New seat seat0. Jul 2 00:00:41.720026 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 2 00:00:41.753129 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 2 00:00:41.772103 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 2 00:00:41.956995 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2133) Jul 2 00:00:41.967564 amazon-ssm-agent[2112]: Initializing new seelog logger Jul 2 00:00:41.967564 amazon-ssm-agent[2112]: New Seelog Logger Creation Complete Jul 2 00:00:41.967564 amazon-ssm-agent[2112]: 2024/07/02 00:00:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:00:41.967564 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:00:41.967564 amazon-ssm-agent[2112]: 2024/07/02 00:00:41 processing appconfig overrides Jul 2 00:00:41.982474 amazon-ssm-agent[2112]: 2024/07/02 00:00:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:00:41.982474 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:00:41.982474 amazon-ssm-agent[2112]: 2024/07/02 00:00:41 processing appconfig overrides Jul 2 00:00:41.982474 amazon-ssm-agent[2112]: 2024/07/02 00:00:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:00:41.982474 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:00:41.982474 amazon-ssm-agent[2112]: 2024/07/02 00:00:41 processing appconfig overrides Jul 2 00:00:41.982474 amazon-ssm-agent[2112]: 2024-07-02 00:00:41 INFO Proxy environment variables: Jul 2 00:00:41.998368 amazon-ssm-agent[2112]: 2024/07/02 00:00:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:00:41.998368 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 2 00:00:41.998619 amazon-ssm-agent[2112]: 2024/07/02 00:00:41 processing appconfig overrides Jul 2 00:00:42.101019 amazon-ssm-agent[2112]: 2024-07-02 00:00:41 INFO http_proxy: Jul 2 00:00:42.179060 containerd[2064]: time="2024-07-02T00:00:42.178892315Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 00:00:42.196868 amazon-ssm-agent[2112]: 2024-07-02 00:00:41 INFO no_proxy: Jul 2 00:00:42.292126 locksmithd[2078]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:00:42.295398 coreos-metadata[2129]: Jul 02 00:00:42.293 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 00:00:42.297146 amazon-ssm-agent[2112]: 2024-07-02 00:00:41 INFO https_proxy: Jul 2 00:00:42.301049 coreos-metadata[2129]: Jul 02 00:00:42.300 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 2 00:00:42.305114 coreos-metadata[2129]: Jul 02 00:00:42.304 INFO Fetch successful Jul 2 00:00:42.305114 coreos-metadata[2129]: Jul 02 00:00:42.304 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 00:00:42.306573 coreos-metadata[2129]: Jul 02 00:00:42.306 INFO Fetch successful Jul 2 00:00:42.313662 unknown[2129]: wrote ssh authorized keys file for user: core Jul 2 00:00:42.398372 amazon-ssm-agent[2112]: 2024-07-02 00:00:41 INFO Checking if agent identity type OnPrem can be assumed Jul 2 00:00:42.417737 dbus-daemon[2009]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 00:00:42.417989 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 2 00:00:42.426100 dbus-daemon[2009]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2106 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 00:00:42.442202 systemd[1]: Starting polkit.service - Authorization Manager... 
Jul 2 00:00:42.446945 update-ssh-keys[2219]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:00:42.454807 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 2 00:00:42.472930 systemd[1]: Finished sshkeys.service. Jul 2 00:00:42.498638 polkitd[2224]: Started polkitd version 121 Jul 2 00:00:42.501667 amazon-ssm-agent[2112]: 2024-07-02 00:00:41 INFO Checking if agent identity type EC2 can be assumed Jul 2 00:00:42.544591 containerd[2064]: time="2024-07-02T00:00:42.544372849Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 00:00:42.544591 containerd[2064]: time="2024-07-02T00:00:42.544530829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:00:42.547902 containerd[2064]: time="2024-07-02T00:00:42.547817029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:00:42.547902 containerd[2064]: time="2024-07-02T00:00:42.547891465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:00:42.548374 containerd[2064]: time="2024-07-02T00:00:42.548323129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:00:42.548501 containerd[2064]: time="2024-07-02T00:00:42.548369449Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jul 2 00:00:42.548625 containerd[2064]: time="2024-07-02T00:00:42.548580025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 00:00:42.548765 containerd[2064]: time="2024-07-02T00:00:42.548712517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:00:42.548838 containerd[2064]: time="2024-07-02T00:00:42.548761537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:00:42.548960 containerd[2064]: time="2024-07-02T00:00:42.548923165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:00:42.549365 containerd[2064]: time="2024-07-02T00:00:42.549324073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:00:42.549450 containerd[2064]: time="2024-07-02T00:00:42.549371245Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:00:42.549450 containerd[2064]: time="2024-07-02T00:00:42.549398101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:00:42.556386 polkitd[2224]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 00:00:42.559324 containerd[2064]: time="2024-07-02T00:00:42.557400889Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:00:42.559324 containerd[2064]: time="2024-07-02T00:00:42.557536105Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 2 00:00:42.559324 containerd[2064]: time="2024-07-02T00:00:42.557695909Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:00:42.559324 containerd[2064]: time="2024-07-02T00:00:42.557725741Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:00:42.557626 polkitd[2224]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 00:00:42.561946 polkitd[2224]: Finished loading, compiling and executing 2 rules Jul 2 00:00:42.564374 dbus-daemon[2009]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 00:00:42.565906 systemd[1]: Started polkit.service - Authorization Manager. Jul 2 00:00:42.569421 containerd[2064]: time="2024-07-02T00:00:42.568316437Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:00:42.569421 containerd[2064]: time="2024-07-02T00:00:42.568381969Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:00:42.569421 containerd[2064]: time="2024-07-02T00:00:42.568414729Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:00:42.569421 containerd[2064]: time="2024-07-02T00:00:42.568507561Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 00:00:42.569421 containerd[2064]: time="2024-07-02T00:00:42.568544425Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:00:42.569421 containerd[2064]: time="2024-07-02T00:00:42.568573969Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:00:42.569421 containerd[2064]: time="2024-07-02T00:00:42.568606189Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 2 00:00:42.569421 containerd[2064]: time="2024-07-02T00:00:42.568846525Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:00:42.570682 polkitd[2224]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 00:00:42.571979 containerd[2064]: time="2024-07-02T00:00:42.571277701Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:00:42.571979 containerd[2064]: time="2024-07-02T00:00:42.571330801Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:00:42.571979 containerd[2064]: time="2024-07-02T00:00:42.571389241Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:00:42.571979 containerd[2064]: time="2024-07-02T00:00:42.571462777Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:00:42.571979 containerd[2064]: time="2024-07-02T00:00:42.571507957Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:00:42.571979 containerd[2064]: time="2024-07-02T00:00:42.571656157Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:00:42.576539 containerd[2064]: time="2024-07-02T00:00:42.571691485Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:00:42.576539 containerd[2064]: time="2024-07-02T00:00:42.572423545Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:00:42.576539 containerd[2064]: time="2024-07-02T00:00:42.573845821Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jul 2 00:00:42.576539 containerd[2064]: time="2024-07-02T00:00:42.573904909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:00:42.576539 containerd[2064]: time="2024-07-02T00:00:42.573938317Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:00:42.577067 containerd[2064]: time="2024-07-02T00:00:42.577027753Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:00:42.579640 containerd[2064]: time="2024-07-02T00:00:42.579557317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:00:42.579841 containerd[2064]: time="2024-07-02T00:00:42.579810841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.579988 containerd[2064]: time="2024-07-02T00:00:42.579959629Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 00:00:42.580531 containerd[2064]: time="2024-07-02T00:00:42.580110049Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:00:42.580908 containerd[2064]: time="2024-07-02T00:00:42.580760893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.584746 containerd[2064]: time="2024-07-02T00:00:42.584659933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.586893 containerd[2064]: time="2024-07-02T00:00:42.585944605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.586893 containerd[2064]: time="2024-07-02T00:00:42.586023685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jul 2 00:00:42.586893 containerd[2064]: time="2024-07-02T00:00:42.586135969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.586893 containerd[2064]: time="2024-07-02T00:00:42.586168585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.586893 containerd[2064]: time="2024-07-02T00:00:42.586224625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.586893 containerd[2064]: time="2024-07-02T00:00:42.586256545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.586893 containerd[2064]: time="2024-07-02T00:00:42.586316749Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:00:42.586893 containerd[2064]: time="2024-07-02T00:00:42.586761037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.586893 containerd[2064]: time="2024-07-02T00:00:42.586828069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.590218 containerd[2064]: time="2024-07-02T00:00:42.586866001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.590218 containerd[2064]: time="2024-07-02T00:00:42.587415241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.590218 containerd[2064]: time="2024-07-02T00:00:42.587710777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.590218 containerd[2064]: time="2024-07-02T00:00:42.588786685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1
Jul 2 00:00:42.590218 containerd[2064]: time="2024-07-02T00:00:42.589998193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 00:00:42.590218 containerd[2064]: time="2024-07-02T00:00:42.590072173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 00:00:42.595718 containerd[2064]: time="2024-07-02T00:00:42.595041661Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 00:00:42.595718 containerd[2064]: time="2024-07-02T00:00:42.595217365Z" level=info msg="Connect containerd service"
Jul 2 00:00:42.595718 containerd[2064]: time="2024-07-02T00:00:42.595304437Z" level=info msg="using legacy CRI server"
Jul 2 00:00:42.595718 containerd[2064]: time="2024-07-02T00:00:42.595348381Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 2 00:00:42.595718 containerd[2064]: time="2024-07-02T00:00:42.595605025Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 00:00:42.601403 containerd[2064]: time="2024-07-02T00:00:42.600635497Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:00:42.601403 containerd[2064]: time="2024-07-02T00:00:42.600735925Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 00:00:42.601403 containerd[2064]: time="2024-07-02T00:00:42.601180609Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 2 00:00:42.601403 containerd[2064]: time="2024-07-02T00:00:42.601216057Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 00:00:42.601403 containerd[2064]: time="2024-07-02T00:00:42.601246285Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 2 00:00:42.601754 containerd[2064]: time="2024-07-02T00:00:42.601691557Z" level=info msg="Start subscribing containerd event"
Jul 2 00:00:42.601805 containerd[2064]: time="2024-07-02T00:00:42.601771309Z" level=info msg="Start recovering state"
Jul 2 00:00:42.602284 containerd[2064]: time="2024-07-02T00:00:42.601895785Z" level=info msg="Start event monitor"
Jul 2 00:00:42.602284 containerd[2064]: time="2024-07-02T00:00:42.601934173Z" level=info msg="Start snapshots syncer"
Jul 2 00:00:42.602284 containerd[2064]: time="2024-07-02T00:00:42.601958365Z" level=info msg="Start cni network conf syncer for default"
Jul 2 00:00:42.602284 containerd[2064]: time="2024-07-02T00:00:42.601977541Z" level=info msg="Start streaming server"
Jul 2 00:00:42.605332 containerd[2064]: time="2024-07-02T00:00:42.602837269Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 00:00:42.605332 containerd[2064]: time="2024-07-02T00:00:42.602938681Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 00:00:42.605332 containerd[2064]: time="2024-07-02T00:00:42.603048709Z" level=info msg="containerd successfully booted in 0.433282s"
Jul 2 00:00:42.603244 systemd[1]: Started containerd.service - containerd container runtime.
Jul 2 00:00:42.607803 amazon-ssm-agent[2112]: 2024-07-02 00:00:42 INFO Agent will take identity from EC2
Jul 2 00:00:42.621535 systemd-hostnamed[2106]: Hostname set to (transient)
Jul 2 00:00:42.621908 systemd-resolved[1942]: System hostname changed to 'ip-172-31-19-149'.
Jul 2 00:00:42.707587 amazon-ssm-agent[2112]: 2024-07-02 00:00:42 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 2 00:00:42.810893 amazon-ssm-agent[2112]: 2024-07-02 00:00:42 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 2 00:00:42.877742 sshd_keygen[2055]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 00:00:42.910517 amazon-ssm-agent[2112]: 2024-07-02 00:00:42 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 2 00:00:42.968731 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 00:00:42.986028 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 00:00:43.009709 amazon-ssm-agent[2112]: 2024-07-02 00:00:42 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jul 2 00:00:43.021722 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:00:43.023008 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 00:00:43.040992 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 00:00:43.078195 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 00:00:43.090953 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 00:00:43.110499 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 2 00:00:43.113728 amazon-ssm-agent[2112]: 2024-07-02 00:00:42 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Jul 2 00:00:43.114993 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 00:00:43.214026 amazon-ssm-agent[2112]: 2024-07-02 00:00:42 INFO [amazon-ssm-agent] Starting Core Agent
Jul 2 00:00:43.314660 amazon-ssm-agent[2112]: 2024-07-02 00:00:42 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jul 2 00:00:43.369767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:00:43.375221 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 00:00:43.380534 systemd[1]: Startup finished in 10.833s (kernel) + 8.949s (userspace) = 19.783s.
Jul 2 00:00:43.387369 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:00:43.416181 amazon-ssm-agent[2112]: 2024-07-02 00:00:42 INFO [Registrar] Starting registrar module
Jul 2 00:00:43.515980 amazon-ssm-agent[2112]: 2024-07-02 00:00:42 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jul 2 00:00:43.901222 amazon-ssm-agent[2112]: 2024-07-02 00:00:43 INFO [EC2Identity] EC2 registration was successful.
Jul 2 00:00:43.930489 amazon-ssm-agent[2112]: 2024-07-02 00:00:43 INFO [CredentialRefresher] credentialRefresher has started
Jul 2 00:00:43.930489 amazon-ssm-agent[2112]: 2024-07-02 00:00:43 INFO [CredentialRefresher] Starting credentials refresher loop
Jul 2 00:00:43.930489 amazon-ssm-agent[2112]: 2024-07-02 00:00:43 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jul 2 00:00:44.001994 amazon-ssm-agent[2112]: 2024-07-02 00:00:43 INFO [CredentialRefresher] Next credential rotation will be in 31.7249932181 minutes
Jul 2 00:00:44.156223 kubelet[2290]: E0702 00:00:44.156028 2290 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:00:44.161320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:00:44.161758 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:00:44.964248 amazon-ssm-agent[2112]: 2024-07-02 00:00:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jul 2 00:00:45.064538 amazon-ssm-agent[2112]: 2024-07-02 00:00:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2304) started
Jul 2 00:00:45.165325 amazon-ssm-agent[2112]: 2024-07-02 00:00:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jul 2 00:00:48.619365 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 00:00:48.631882 systemd[1]: Started sshd@0-172.31.19.149:22-147.75.109.163:49396.service - OpenSSH per-connection server daemon (147.75.109.163:49396).
Jul 2 00:00:48.811315 sshd[2314]: Accepted publickey for core from 147.75.109.163 port 49396 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM
Jul 2 00:00:48.814783 sshd[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:00:48.830479 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 00:00:48.837925 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 00:00:48.843140 systemd-logind[2029]: New session 1 of user core.
Jul 2 00:00:48.870130 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 00:00:48.880982 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 2 00:00:48.897801 (systemd)[2320]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:00:49.042349 systemd[1]: Started sshd@1-172.31.19.149:22-116.48.150.115:50656.service - OpenSSH per-connection server daemon (116.48.150.115:50656).
Jul 2 00:00:49.111128 systemd[2320]: Queued start job for default target default.target.
Jul 2 00:00:49.111864 systemd[2320]: Created slice app.slice - User Application Slice.
Jul 2 00:00:49.111919 systemd[2320]: Reached target paths.target - Paths.
Jul 2 00:00:49.111950 systemd[2320]: Reached target timers.target - Timers.
Jul 2 00:00:49.118609 systemd[2320]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 2 00:00:49.146965 systemd[2320]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 2 00:00:49.147086 systemd[2320]: Reached target sockets.target - Sockets.
Jul 2 00:00:49.147118 systemd[2320]: Reached target basic.target - Basic System.
Jul 2 00:00:49.147206 systemd[2320]: Reached target default.target - Main User Target.
Jul 2 00:00:49.147266 systemd[2320]: Startup finished in 237ms.
Jul 2 00:00:49.148077 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 2 00:00:49.160125 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 00:00:49.314396 systemd[1]: Started sshd@2-172.31.19.149:22-147.75.109.163:49410.service - OpenSSH per-connection server daemon (147.75.109.163:49410).
Jul 2 00:00:49.327562 sshd[2326]: Connection closed by 116.48.150.115 port 50656
Jul 2 00:00:49.330756 systemd[1]: sshd@1-172.31.19.149:22-116.48.150.115:50656.service: Deactivated successfully.
Jul 2 00:00:49.489307 sshd[2333]: Accepted publickey for core from 147.75.109.163 port 49410 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM
Jul 2 00:00:49.491902 sshd[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:00:49.499672 systemd-logind[2029]: New session 2 of user core.
Jul 2 00:00:49.507940 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 00:00:49.636318 sshd[2333]: pam_unix(sshd:session): session closed for user core
Jul 2 00:00:49.642964 systemd[1]: sshd@2-172.31.19.149:22-147.75.109.163:49410.service: Deactivated successfully.
Jul 2 00:00:49.644787 systemd-logind[2029]: Session 2 logged out. Waiting for processes to exit.
Jul 2 00:00:49.649356 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 00:00:49.651335 systemd-logind[2029]: Removed session 2.
Jul 2 00:00:49.665934 systemd[1]: Started sshd@3-172.31.19.149:22-147.75.109.163:49424.service - OpenSSH per-connection server daemon (147.75.109.163:49424).
Jul 2 00:00:49.845732 sshd[2344]: Accepted publickey for core from 147.75.109.163 port 49424 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM
Jul 2 00:00:49.848169 sshd[2344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:00:49.856808 systemd-logind[2029]: New session 3 of user core.
Jul 2 00:00:49.862952 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 00:00:49.983704 sshd[2344]: pam_unix(sshd:session): session closed for user core
Jul 2 00:00:49.989875 systemd[1]: sshd@3-172.31.19.149:22-147.75.109.163:49424.service: Deactivated successfully.
Jul 2 00:00:49.994412 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 00:00:49.994939 systemd-logind[2029]: Session 3 logged out. Waiting for processes to exit.
Jul 2 00:00:49.997545 systemd-logind[2029]: Removed session 3.
Jul 2 00:00:50.014905 systemd[1]: Started sshd@4-172.31.19.149:22-147.75.109.163:49432.service - OpenSSH per-connection server daemon (147.75.109.163:49432).
Jul 2 00:00:50.184697 sshd[2352]: Accepted publickey for core from 147.75.109.163 port 49432 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM
Jul 2 00:00:50.187142 sshd[2352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:00:50.195886 systemd-logind[2029]: New session 4 of user core.
Jul 2 00:00:50.198912 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 00:00:50.329791 sshd[2352]: pam_unix(sshd:session): session closed for user core
Jul 2 00:00:50.334711 systemd[1]: sshd@4-172.31.19.149:22-147.75.109.163:49432.service: Deactivated successfully.
Jul 2 00:00:50.342342 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 00:00:50.344305 systemd-logind[2029]: Session 4 logged out. Waiting for processes to exit.
Jul 2 00:00:50.346227 systemd-logind[2029]: Removed session 4.
Jul 2 00:00:50.357929 systemd[1]: Started sshd@5-172.31.19.149:22-147.75.109.163:49434.service - OpenSSH per-connection server daemon (147.75.109.163:49434).
Jul 2 00:00:50.532973 sshd[2360]: Accepted publickey for core from 147.75.109.163 port 49434 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM
Jul 2 00:00:50.535489 sshd[2360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:00:50.542829 systemd-logind[2029]: New session 5 of user core.
Jul 2 00:00:50.554006 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 00:00:50.672049 sudo[2364]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 00:00:50.672653 sudo[2364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:00:50.688534 sudo[2364]: pam_unix(sudo:session): session closed for user root
Jul 2 00:00:50.711663 sshd[2360]: pam_unix(sshd:session): session closed for user core
Jul 2 00:00:50.718803 systemd[1]: sshd@5-172.31.19.149:22-147.75.109.163:49434.service: Deactivated successfully.
Jul 2 00:00:50.724416 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 00:00:50.724518 systemd-logind[2029]: Session 5 logged out. Waiting for processes to exit.
Jul 2 00:00:50.728248 systemd-logind[2029]: Removed session 5.
Jul 2 00:00:50.743924 systemd[1]: Started sshd@6-172.31.19.149:22-147.75.109.163:49446.service - OpenSSH per-connection server daemon (147.75.109.163:49446).
Jul 2 00:00:50.917759 sshd[2369]: Accepted publickey for core from 147.75.109.163 port 49446 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM
Jul 2 00:00:50.920916 sshd[2369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:00:50.929866 systemd-logind[2029]: New session 6 of user core.
Jul 2 00:00:50.935960 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 00:00:51.042061 sudo[2374]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 00:00:51.042660 sudo[2374]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:00:51.049298 sudo[2374]: pam_unix(sudo:session): session closed for user root
Jul 2 00:00:51.059262 sudo[2373]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 00:00:51.059871 sudo[2373]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:00:51.080942 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 00:00:51.097333 auditctl[2377]: No rules
Jul 2 00:00:51.100098 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 00:00:51.100711 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 00:00:51.115997 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:00:51.154875 augenrules[2396]: No rules
Jul 2 00:00:51.158086 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:00:51.163139 sudo[2373]: pam_unix(sudo:session): session closed for user root
Jul 2 00:00:51.187765 sshd[2369]: pam_unix(sshd:session): session closed for user core
Jul 2 00:00:51.193607 systemd[1]: sshd@6-172.31.19.149:22-147.75.109.163:49446.service: Deactivated successfully.
Jul 2 00:00:51.198849 systemd-logind[2029]: Session 6 logged out. Waiting for processes to exit.
Jul 2 00:00:51.199822 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 00:00:51.203987 systemd-logind[2029]: Removed session 6.
Jul 2 00:00:51.217941 systemd[1]: Started sshd@7-172.31.19.149:22-147.75.109.163:49458.service - OpenSSH per-connection server daemon (147.75.109.163:49458).
Jul 2 00:00:51.398170 sshd[2405]: Accepted publickey for core from 147.75.109.163 port 49458 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM
Jul 2 00:00:51.400645 sshd[2405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:00:51.409710 systemd-logind[2029]: New session 7 of user core.
Jul 2 00:00:51.417987 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 00:00:51.523990 sudo[2409]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 00:00:51.524588 sudo[2409]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:00:52.549330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:00:52.558037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:00:52.606549 systemd[1]: Reloading requested from client PID 2448 ('systemctl') (unit session-7.scope)...
Jul 2 00:00:52.606711 systemd[1]: Reloading...
Jul 2 00:00:52.822541 zram_generator::config[2489]: No configuration found.
Jul 2 00:00:53.071588 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:00:53.229029 systemd[1]: Reloading finished in 621 ms.
Jul 2 00:00:53.301878 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:00:53.302076 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:00:53.302848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:00:53.310171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:00:53.624851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:00:53.646046 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:00:53.730569 kubelet[2558]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:00:53.730569 kubelet[2558]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:00:53.730569 kubelet[2558]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:00:53.731144 kubelet[2558]: I0702 00:00:53.730587 2558 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:00:54.913575 kubelet[2558]: I0702 00:00:54.913406 2558 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 00:00:54.913575 kubelet[2558]: I0702 00:00:54.913475 2558 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:00:54.914215 kubelet[2558]: I0702 00:00:54.913822 2558 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 00:00:54.940562 kubelet[2558]: I0702 00:00:54.940510 2558 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:00:54.954474 kubelet[2558]: W0702 00:00:54.954213 2558 machine.go:65] Cannot read vendor id correctly, set empty.
Jul 2 00:00:54.956819 kubelet[2558]: I0702 00:00:54.956768 2558 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:00:54.957492 kubelet[2558]: I0702 00:00:54.957462 2558 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:00:54.957798 kubelet[2558]: I0702 00:00:54.957767 2558 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:00:54.957997 kubelet[2558]: I0702 00:00:54.957823 2558 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:00:54.957997 kubelet[2558]: I0702 00:00:54.957848 2558 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:00:54.958090 kubelet[2558]: I0702 00:00:54.958040 2558 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:00:54.959997 kubelet[2558]: I0702 00:00:54.959938 2558 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 00:00:54.959997 kubelet[2558]: I0702 00:00:54.959990 2558 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:00:54.960118 kubelet[2558]: I0702 00:00:54.960062 2558 kubelet.go:309] "Adding apiserver pod source"
Jul 2 00:00:54.960118 kubelet[2558]: I0702 00:00:54.960086 2558 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:00:54.962301 kubelet[2558]: E0702 00:00:54.962219 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:00:54.962301 kubelet[2558]: E0702 00:00:54.962281 2558 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:00:54.962951 kubelet[2558]: I0702 00:00:54.962806 2558 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:00:54.970423 kubelet[2558]: W0702 00:00:54.970355 2558 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 00:00:54.971510 kubelet[2558]: I0702 00:00:54.971425 2558 server.go:1232] "Started kubelet"
Jul 2 00:00:54.972774 kubelet[2558]: I0702 00:00:54.972234 2558 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:00:54.974042 kubelet[2558]: I0702 00:00:54.973764 2558 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 00:00:54.975361 kubelet[2558]: I0702 00:00:54.975329 2558 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 00:00:54.975893 kubelet[2558]: I0702 00:00:54.975864 2558 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:00:54.979479 kubelet[2558]: E0702 00:00:54.979240 2558 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 00:00:54.979479 kubelet[2558]: E0702 00:00:54.979289 2558 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:00:54.980053 kubelet[2558]: I0702 00:00:54.979994 2558 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:00:54.994491 kubelet[2558]: W0702 00:00:54.992770 2558 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.19.149" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jul 2 00:00:54.994491 kubelet[2558]: E0702 00:00:54.992822 2558 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.19.149" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jul 2 00:00:54.994491 kubelet[2558]: E0702 00:00:54.992926 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57ad6accf3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 54, 971387123, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 54, 971387123, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.149"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Jul 2 00:00:54.994811 kubelet[2558]: W0702 00:00:54.993288 2558 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jul 2 00:00:54.994811 kubelet[2558]: E0702 00:00:54.993318 2558 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jul 2 00:00:54.994811 kubelet[2558]: E0702 00:00:54.993426 2558 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.149\" not found"
Jul 2 00:00:54.994811 kubelet[2558]: I0702 00:00:54.993496 2558 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:00:54.994811 kubelet[2558]: I0702 00:00:54.993642 2558 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:00:54.994811 kubelet[2558]: I0702 00:00:54.993729 2558 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:00:54.995088 kubelet[2558]: E0702 00:00:54.994309 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57ade31983", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 54, 979271043, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 54, 979271043, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.149"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Jul 2 00:00:54.997488 kubelet[2558]: W0702 00:00:54.997423 2558 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jul 2 00:00:54.997488 kubelet[2558]: E0702 00:00:54.997494 2558 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jul 2 00:00:54.997694 kubelet[2558]: E0702 00:00:54.997576 2558 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.19.149\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jul 2 00:00:55.096546 kubelet[2558]: I0702 00:00:55.095757 2558 kubelet_node_status.go:70] "Attempting to register node" node="172.31.19.149"
Jul 2 00:00:55.097773 kubelet[2558]: E0702 00:00:55.097724 2558 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.19.149"
Jul 2 00:00:55.097929 kubelet[2558]: E0702 00:00:55.097809 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57b4d39b7b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.19.149 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95696251, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95696251, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.149"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Jul 2 00:00:55.099318 kubelet[2558]: E0702 00:00:55.098994 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57b4d3b88f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.19.149 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95703695, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95703695, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.149"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:00:55.100395 kubelet[2558]: E0702 00:00:55.100219 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57b4d3d79b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.19.149 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95711643, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95711643, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.149"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:00:55.115498 kubelet[2558]: I0702 00:00:55.115085 2558 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:00:55.119118 kubelet[2558]: I0702 00:00:55.119047 2558 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:00:55.119118 kubelet[2558]: I0702 00:00:55.119100 2558 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:00:55.119317 kubelet[2558]: I0702 00:00:55.119134 2558 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:00:55.119317 kubelet[2558]: E0702 00:00:55.119280 2558 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:00:55.125630 kubelet[2558]: E0702 00:00:55.124268 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57b4d39b7b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.19.149 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95696251, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 114509053, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.149"}': 'events "172.31.19.149.17de3c57b4d39b7b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not 
retry!) Jul 2 00:00:55.126252 kubelet[2558]: I0702 00:00:55.126211 2558 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:00:55.126252 kubelet[2558]: I0702 00:00:55.126250 2558 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:00:55.126383 kubelet[2558]: I0702 00:00:55.126283 2558 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:00:55.128501 kubelet[2558]: W0702 00:00:55.127840 2558 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jul 2 00:00:55.128501 kubelet[2558]: E0702 00:00:55.127888 2558 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jul 2 00:00:55.128692 kubelet[2558]: E0702 00:00:55.127979 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57b4d3b88f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.19.149 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, 
FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95703695, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 114517409, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.149"}': 'events "172.31.19.149.17de3c57b4d3b88f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:00:55.129256 kubelet[2558]: I0702 00:00:55.129095 2558 policy_none.go:49] "None policy: Start" Jul 2 00:00:55.129473 kubelet[2558]: E0702 00:00:55.129330 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57b4d3d79b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.19.149 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95711643, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 114526558, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.149"}': 
'events "172.31.19.149.17de3c57b4d3d79b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:00:55.130370 kubelet[2558]: I0702 00:00:55.130304 2558 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:00:55.130370 kubelet[2558]: I0702 00:00:55.130361 2558 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:00:55.142496 kubelet[2558]: I0702 00:00:55.140962 2558 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:00:55.142496 kubelet[2558]: I0702 00:00:55.141337 2558 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:00:55.147289 kubelet[2558]: E0702 00:00:55.147102 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57b7b0d913", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 143749907, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 143749907, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"kubelet", ReportingInstance:"172.31.19.149"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:00:55.147839 kubelet[2558]: E0702 00:00:55.147798 2558 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.19.149\" not found" Jul 2 00:00:55.200157 kubelet[2558]: E0702 00:00:55.200035 2558 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.19.149\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Jul 2 00:00:55.300395 kubelet[2558]: I0702 00:00:55.300253 2558 kubelet_node_status.go:70] "Attempting to register node" node="172.31.19.149" Jul 2 00:00:55.302406 kubelet[2558]: E0702 00:00:55.302242 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57b4d39b7b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.19.149 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95696251, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 300068076, time.Local), 
Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.149"}': 'events "172.31.19.149.17de3c57b4d39b7b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:00:55.302843 kubelet[2558]: E0702 00:00:55.302814 2558 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.19.149" Jul 2 00:00:55.304626 kubelet[2558]: E0702 00:00:55.304253 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57b4d3b88f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.19.149 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95703695, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 300203323, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", 
ReportingInstance:"172.31.19.149"}': 'events "172.31.19.149.17de3c57b4d3b88f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:00:55.306672 kubelet[2558]: E0702 00:00:55.306345 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57b4d3d79b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.19.149 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95711643, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 300213156, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.149"}': 'events "172.31.19.149.17de3c57b4d3d79b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:00:55.602489 kubelet[2558]: E0702 00:00:55.602405 2558 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.19.149\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Jul 2 00:00:55.704989 kubelet[2558]: I0702 00:00:55.704934 2558 kubelet_node_status.go:70] "Attempting to register node" node="172.31.19.149" Jul 2 00:00:55.707226 kubelet[2558]: E0702 00:00:55.707180 2558 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.19.149" Jul 2 00:00:55.707628 kubelet[2558]: E0702 00:00:55.707157 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57b4d39b7b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.19.149 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95696251, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 704867165, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.149"}': 'events "172.31.19.149.17de3c57b4d39b7b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:00:55.712000 kubelet[2558]: E0702 00:00:55.711894 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57b4d3b88f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.19.149 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95703695, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 704875065, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.149"}': 'events "172.31.19.149.17de3c57b4d3b88f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:00:55.713712 kubelet[2558]: E0702 00:00:55.713613 2558 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149.17de3c57b4d3d79b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.149", UID:"172.31.19.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.19.149 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.149"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 95711643, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 0, 55, 704895115, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.149"}': 'events "172.31.19.149.17de3c57b4d3d79b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:00:55.926751 kubelet[2558]: I0702 00:00:55.926525 2558 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 2 00:00:55.963194 kubelet[2558]: E0702 00:00:55.963145 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:00:56.375237 kubelet[2558]: E0702 00:00:56.375182 2558 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.19.149" not found Jul 2 00:00:56.415745 kubelet[2558]: E0702 00:00:56.415625 2558 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.19.149\" not found" node="172.31.19.149" Jul 2 00:00:56.509498 kubelet[2558]: I0702 00:00:56.508772 2558 kubelet_node_status.go:70] "Attempting to register node" node="172.31.19.149" Jul 2 00:00:56.515770 kubelet[2558]: I0702 00:00:56.515708 2558 kubelet_node_status.go:73] "Successfully registered node" node="172.31.19.149" Jul 2 00:00:56.586963 kubelet[2558]: I0702 00:00:56.586771 2558 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.2.0/24" Jul 2 00:00:56.587528 containerd[2064]: time="2024-07-02T00:00:56.587368585Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 00:00:56.588064 kubelet[2558]: I0702 00:00:56.587919 2558 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.2.0/24" Jul 2 00:00:56.675384 sudo[2409]: pam_unix(sudo:session): session closed for user root Jul 2 00:00:56.699609 sshd[2405]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:56.707277 systemd-logind[2029]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:00:56.708473 systemd[1]: sshd@7-172.31.19.149:22-147.75.109.163:49458.service: Deactivated successfully. 
Jul 2 00:00:56.716996 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:00:56.719412 systemd-logind[2029]: Removed session 7. Jul 2 00:00:56.963002 kubelet[2558]: I0702 00:00:56.962615 2558 apiserver.go:52] "Watching apiserver" Jul 2 00:00:56.963935 kubelet[2558]: E0702 00:00:56.963548 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:00:56.968403 kubelet[2558]: I0702 00:00:56.968343 2558 topology_manager.go:215] "Topology Admit Handler" podUID="f1cba04b-ed30-4318-ba8f-c76188671085" podNamespace="calico-system" podName="calico-node-cm7z4" Jul 2 00:00:56.969208 kubelet[2558]: I0702 00:00:56.968900 2558 topology_manager.go:215] "Topology Admit Handler" podUID="30f2d765-6f0e-4f62-97b8-cf9269464124" podNamespace="calico-system" podName="csi-node-driver-j69rn" Jul 2 00:00:56.969208 kubelet[2558]: I0702 00:00:56.969060 2558 topology_manager.go:215] "Topology Admit Handler" podUID="1a17091f-8500-4bfb-891f-81f42bbfe535" podNamespace="kube-system" podName="kube-proxy-tmxk6" Jul 2 00:00:56.970813 kubelet[2558]: E0702 00:00:56.970592 2558 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j69rn" podUID="30f2d765-6f0e-4f62-97b8-cf9269464124" Jul 2 00:00:56.994683 kubelet[2558]: I0702 00:00:56.994627 2558 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:00:57.002184 kubelet[2558]: I0702 00:00:57.002123 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrf5r\" (UniqueName: \"kubernetes.io/projected/30f2d765-6f0e-4f62-97b8-cf9269464124-kube-api-access-lrf5r\") pod \"csi-node-driver-j69rn\" (UID: \"30f2d765-6f0e-4f62-97b8-cf9269464124\") " 
pod="calico-system/csi-node-driver-j69rn" Jul 2 00:00:57.002314 kubelet[2558]: I0702 00:00:57.002200 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1a17091f-8500-4bfb-891f-81f42bbfe535-kube-proxy\") pod \"kube-proxy-tmxk6\" (UID: \"1a17091f-8500-4bfb-891f-81f42bbfe535\") " pod="kube-system/kube-proxy-tmxk6" Jul 2 00:00:57.002314 kubelet[2558]: I0702 00:00:57.002252 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a17091f-8500-4bfb-891f-81f42bbfe535-xtables-lock\") pod \"kube-proxy-tmxk6\" (UID: \"1a17091f-8500-4bfb-891f-81f42bbfe535\") " pod="kube-system/kube-proxy-tmxk6" Jul 2 00:00:57.002314 kubelet[2558]: I0702 00:00:57.002296 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlp8b\" (UniqueName: \"kubernetes.io/projected/1a17091f-8500-4bfb-891f-81f42bbfe535-kube-api-access-xlp8b\") pod \"kube-proxy-tmxk6\" (UID: \"1a17091f-8500-4bfb-891f-81f42bbfe535\") " pod="kube-system/kube-proxy-tmxk6" Jul 2 00:00:57.002517 kubelet[2558]: I0702 00:00:57.002378 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28t79\" (UniqueName: \"kubernetes.io/projected/f1cba04b-ed30-4318-ba8f-c76188671085-kube-api-access-28t79\") pod \"calico-node-cm7z4\" (UID: \"f1cba04b-ed30-4318-ba8f-c76188671085\") " pod="calico-system/calico-node-cm7z4" Jul 2 00:00:57.002517 kubelet[2558]: I0702 00:00:57.002425 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/30f2d765-6f0e-4f62-97b8-cf9269464124-varrun\") pod \"csi-node-driver-j69rn\" (UID: \"30f2d765-6f0e-4f62-97b8-cf9269464124\") " pod="calico-system/csi-node-driver-j69rn" Jul 2 00:00:57.002517 
kubelet[2558]: I0702 00:00:57.002492 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30f2d765-6f0e-4f62-97b8-cf9269464124-kubelet-dir\") pod \"csi-node-driver-j69rn\" (UID: \"30f2d765-6f0e-4f62-97b8-cf9269464124\") " pod="calico-system/csi-node-driver-j69rn" Jul 2 00:00:57.002656 kubelet[2558]: I0702 00:00:57.002537 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f1cba04b-ed30-4318-ba8f-c76188671085-flexvol-driver-host\") pod \"calico-node-cm7z4\" (UID: \"f1cba04b-ed30-4318-ba8f-c76188671085\") " pod="calico-system/calico-node-cm7z4" Jul 2 00:00:57.002656 kubelet[2558]: I0702 00:00:57.002638 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f1cba04b-ed30-4318-ba8f-c76188671085-var-run-calico\") pod \"calico-node-cm7z4\" (UID: \"f1cba04b-ed30-4318-ba8f-c76188671085\") " pod="calico-system/calico-node-cm7z4" Jul 2 00:00:57.002762 kubelet[2558]: I0702 00:00:57.002683 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f1cba04b-ed30-4318-ba8f-c76188671085-var-lib-calico\") pod \"calico-node-cm7z4\" (UID: \"f1cba04b-ed30-4318-ba8f-c76188671085\") " pod="calico-system/calico-node-cm7z4" Jul 2 00:00:57.002762 kubelet[2558]: I0702 00:00:57.002726 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f1cba04b-ed30-4318-ba8f-c76188671085-cni-bin-dir\") pod \"calico-node-cm7z4\" (UID: \"f1cba04b-ed30-4318-ba8f-c76188671085\") " pod="calico-system/calico-node-cm7z4" Jul 2 00:00:57.002864 kubelet[2558]: I0702 00:00:57.002767 2558 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a17091f-8500-4bfb-891f-81f42bbfe535-lib-modules\") pod \"kube-proxy-tmxk6\" (UID: \"1a17091f-8500-4bfb-891f-81f42bbfe535\") " pod="kube-system/kube-proxy-tmxk6" Jul 2 00:00:57.002864 kubelet[2558]: I0702 00:00:57.002835 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1cba04b-ed30-4318-ba8f-c76188671085-xtables-lock\") pod \"calico-node-cm7z4\" (UID: \"f1cba04b-ed30-4318-ba8f-c76188671085\") " pod="calico-system/calico-node-cm7z4" Jul 2 00:00:57.002966 kubelet[2558]: I0702 00:00:57.002881 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f1cba04b-ed30-4318-ba8f-c76188671085-policysync\") pod \"calico-node-cm7z4\" (UID: \"f1cba04b-ed30-4318-ba8f-c76188671085\") " pod="calico-system/calico-node-cm7z4" Jul 2 00:00:57.002966 kubelet[2558]: I0702 00:00:57.002927 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/30f2d765-6f0e-4f62-97b8-cf9269464124-registration-dir\") pod \"csi-node-driver-j69rn\" (UID: \"30f2d765-6f0e-4f62-97b8-cf9269464124\") " pod="calico-system/csi-node-driver-j69rn" Jul 2 00:00:57.003058 kubelet[2558]: I0702 00:00:57.002968 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f1cba04b-ed30-4318-ba8f-c76188671085-cni-net-dir\") pod \"calico-node-cm7z4\" (UID: \"f1cba04b-ed30-4318-ba8f-c76188671085\") " pod="calico-system/calico-node-cm7z4" Jul 2 00:00:57.003058 kubelet[2558]: I0702 00:00:57.003008 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f1cba04b-ed30-4318-ba8f-c76188671085-cni-log-dir\") pod \"calico-node-cm7z4\" (UID: \"f1cba04b-ed30-4318-ba8f-c76188671085\") " pod="calico-system/calico-node-cm7z4" Jul 2 00:00:57.003148 kubelet[2558]: I0702 00:00:57.003054 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/30f2d765-6f0e-4f62-97b8-cf9269464124-socket-dir\") pod \"csi-node-driver-j69rn\" (UID: \"30f2d765-6f0e-4f62-97b8-cf9269464124\") " pod="calico-system/csi-node-driver-j69rn" Jul 2 00:00:57.003148 kubelet[2558]: I0702 00:00:57.003103 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1cba04b-ed30-4318-ba8f-c76188671085-lib-modules\") pod \"calico-node-cm7z4\" (UID: \"f1cba04b-ed30-4318-ba8f-c76188671085\") " pod="calico-system/calico-node-cm7z4" Jul 2 00:00:57.003148 kubelet[2558]: I0702 00:00:57.003146 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1cba04b-ed30-4318-ba8f-c76188671085-tigera-ca-bundle\") pod \"calico-node-cm7z4\" (UID: \"f1cba04b-ed30-4318-ba8f-c76188671085\") " pod="calico-system/calico-node-cm7z4" Jul 2 00:00:57.003301 kubelet[2558]: I0702 00:00:57.003187 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f1cba04b-ed30-4318-ba8f-c76188671085-node-certs\") pod \"calico-node-cm7z4\" (UID: \"f1cba04b-ed30-4318-ba8f-c76188671085\") " pod="calico-system/calico-node-cm7z4" Jul 2 00:00:57.107930 kubelet[2558]: E0702 00:00:57.107789 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.107930 kubelet[2558]: W0702 00:00:57.107857 
2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.108266 kubelet[2558]: E0702 00:00:57.108106 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.108877 kubelet[2558]: E0702 00:00:57.108813 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.108877 kubelet[2558]: W0702 00:00:57.108840 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.112134 kubelet[2558]: E0702 00:00:57.111919 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.112134 kubelet[2558]: W0702 00:00:57.111977 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.112134 kubelet[2558]: E0702 00:00:57.112012 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.113605 kubelet[2558]: E0702 00:00:57.112043 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:57.113605 kubelet[2558]: E0702 00:00:57.112862 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.113605 kubelet[2558]: W0702 00:00:57.112946 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.113605 kubelet[2558]: E0702 00:00:57.112978 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.114811 kubelet[2558]: E0702 00:00:57.114096 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.114811 kubelet[2558]: W0702 00:00:57.114125 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.114811 kubelet[2558]: E0702 00:00:57.114156 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:57.114811 kubelet[2558]: E0702 00:00:57.114669 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.114811 kubelet[2558]: W0702 00:00:57.114686 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.114811 kubelet[2558]: E0702 00:00:57.114710 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.116776 kubelet[2558]: E0702 00:00:57.115666 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.116776 kubelet[2558]: W0702 00:00:57.115693 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.116776 kubelet[2558]: E0702 00:00:57.115726 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:57.116776 kubelet[2558]: E0702 00:00:57.116633 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.116776 kubelet[2558]: W0702 00:00:57.116649 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.116776 kubelet[2558]: E0702 00:00:57.116675 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.117620 kubelet[2558]: E0702 00:00:57.117554 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.117620 kubelet[2558]: W0702 00:00:57.117586 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.117620 kubelet[2558]: E0702 00:00:57.117622 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:57.118355 kubelet[2558]: E0702 00:00:57.118295 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.118355 kubelet[2558]: W0702 00:00:57.118315 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.118355 kubelet[2558]: E0702 00:00:57.118357 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.121463 kubelet[2558]: E0702 00:00:57.118929 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.121463 kubelet[2558]: W0702 00:00:57.118964 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.121463 kubelet[2558]: E0702 00:00:57.119246 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.121463 kubelet[2558]: W0702 00:00:57.119262 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.121463 kubelet[2558]: E0702 00:00:57.119560 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.121463 kubelet[2558]: W0702 00:00:57.119576 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Jul 2 00:00:57.121463 kubelet[2558]: E0702 00:00:57.119810 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.121463 kubelet[2558]: W0702 00:00:57.119824 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.121463 kubelet[2558]: E0702 00:00:57.120072 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.121463 kubelet[2558]: W0702 00:00:57.120086 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.121463 kubelet[2558]: E0702 00:00:57.120423 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.122047 kubelet[2558]: W0702 00:00:57.120486 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.122047 kubelet[2558]: E0702 00:00:57.120994 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.122047 kubelet[2558]: W0702 00:00:57.121024 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.122047 kubelet[2558]: E0702 00:00:57.121055 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.122047 kubelet[2558]: E0702 00:00:57.121614 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.122047 kubelet[2558]: W0702 00:00:57.121632 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.122047 kubelet[2558]: E0702 00:00:57.121662 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.122047 kubelet[2558]: E0702 00:00:57.121936 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.122047 kubelet[2558]: W0702 00:00:57.121949 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.122047 kubelet[2558]: E0702 00:00:57.121972 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:57.122529 kubelet[2558]: E0702 00:00:57.122248 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.122529 kubelet[2558]: W0702 00:00:57.122261 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.122529 kubelet[2558]: E0702 00:00:57.122283 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.125262 kubelet[2558]: E0702 00:00:57.123822 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.125262 kubelet[2558]: W0702 00:00:57.123856 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.125262 kubelet[2558]: E0702 00:00:57.123887 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.125262 kubelet[2558]: E0702 00:00:57.123950 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.125262 kubelet[2558]: E0702 00:00:57.123981 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:57.125262 kubelet[2558]: E0702 00:00:57.124024 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.125262 kubelet[2558]: E0702 00:00:57.124053 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.125262 kubelet[2558]: E0702 00:00:57.124325 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.125262 kubelet[2558]: W0702 00:00:57.124340 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.125262 kubelet[2558]: E0702 00:00:57.124374 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.125843 kubelet[2558]: E0702 00:00:57.125102 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.125843 kubelet[2558]: W0702 00:00:57.125121 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.125843 kubelet[2558]: E0702 00:00:57.125149 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:57.125843 kubelet[2558]: E0702 00:00:57.125190 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.125843 kubelet[2558]: E0702 00:00:57.125678 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.125843 kubelet[2558]: W0702 00:00:57.125697 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.125843 kubelet[2558]: E0702 00:00:57.125723 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.126161 kubelet[2558]: E0702 00:00:57.126018 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.126161 kubelet[2558]: W0702 00:00:57.126032 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.126161 kubelet[2558]: E0702 00:00:57.126057 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.127383 kubelet[2558]: E0702 00:00:57.126362 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:57.127383 kubelet[2558]: E0702 00:00:57.126788 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.127383 kubelet[2558]: W0702 00:00:57.126808 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.127383 kubelet[2558]: E0702 00:00:57.126836 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.127383 kubelet[2558]: E0702 00:00:57.127177 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.127383 kubelet[2558]: W0702 00:00:57.127194 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.127383 kubelet[2558]: E0702 00:00:57.127219 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:57.139556 kubelet[2558]: E0702 00:00:57.139508 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.139719 kubelet[2558]: W0702 00:00:57.139547 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.139719 kubelet[2558]: E0702 00:00:57.139606 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.157835 kubelet[2558]: E0702 00:00:57.157781 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.157835 kubelet[2558]: W0702 00:00:57.157821 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.158004 kubelet[2558]: E0702 00:00:57.157969 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:57.161571 kubelet[2558]: E0702 00:00:57.161521 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:57.161721 kubelet[2558]: W0702 00:00:57.161580 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:57.161721 kubelet[2558]: E0702 00:00:57.161619 2558 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:57.276303 containerd[2064]: time="2024-07-02T00:00:57.275795679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmxk6,Uid:1a17091f-8500-4bfb-891f-81f42bbfe535,Namespace:kube-system,Attempt:0,}" Jul 2 00:00:57.278046 containerd[2064]: time="2024-07-02T00:00:57.277771435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cm7z4,Uid:f1cba04b-ed30-4318-ba8f-c76188671085,Namespace:calico-system,Attempt:0,}" Jul 2 00:00:57.912204 containerd[2064]: time="2024-07-02T00:00:57.912122965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:00:57.913926 containerd[2064]: time="2024-07-02T00:00:57.913855755Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:00:57.915783 containerd[2064]: time="2024-07-02T00:00:57.915733745Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:00:57.916988 containerd[2064]: time="2024-07-02T00:00:57.916930359Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 2 00:00:57.918755 containerd[2064]: time="2024-07-02T00:00:57.918679369Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:00:57.924355 containerd[2064]: time="2024-07-02T00:00:57.924249829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:00:57.927857 containerd[2064]: time="2024-07-02T00:00:57.927418897Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 649.521075ms" Jul 2 00:00:57.930213 containerd[2064]: time="2024-07-02T00:00:57.930121640Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 654.197412ms" Jul 2 00:00:57.964057 kubelet[2558]: E0702 00:00:57.963989 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:00:58.118282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628381854.mount: Deactivated successfully. Jul 2 00:00:58.176345 containerd[2064]: time="2024-07-02T00:00:58.175972333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:58.177145 containerd[2064]: time="2024-07-02T00:00:58.176508113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:58.178238 containerd[2064]: time="2024-07-02T00:00:58.177207871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:58.178238 containerd[2064]: time="2024-07-02T00:00:58.178156945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:58.191616 containerd[2064]: time="2024-07-02T00:00:58.182687265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:58.191616 containerd[2064]: time="2024-07-02T00:00:58.182782569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:58.191616 containerd[2064]: time="2024-07-02T00:00:58.182813712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:58.191616 containerd[2064]: time="2024-07-02T00:00:58.182837604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:58.408398 containerd[2064]: time="2024-07-02T00:00:58.408135959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cm7z4,Uid:f1cba04b-ed30-4318-ba8f-c76188671085,Namespace:calico-system,Attempt:0,} returns sandbox id \"cdf257b5c17b3c5dc6f1e3a7e128c6269f9741b80f081df89e83881ac5ef7f48\"" Jul 2 00:00:58.415021 containerd[2064]: time="2024-07-02T00:00:58.414904905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:00:58.419206 containerd[2064]: time="2024-07-02T00:00:58.418498397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmxk6,Uid:1a17091f-8500-4bfb-891f-81f42bbfe535,Namespace:kube-system,Attempt:0,} returns sandbox id \"843a7a2d36549d085027ea5fa7266ad2587d3e06d2c7fbb9fd5e0aa7262009d9\"" Jul 2 00:00:58.964748 kubelet[2558]: E0702 00:00:58.964688 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:00:59.120243 kubelet[2558]: E0702 00:00:59.120022 2558 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j69rn" podUID="30f2d765-6f0e-4f62-97b8-cf9269464124" Jul 2 00:00:59.545742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990006342.mount: Deactivated successfully. 
Jul 2 00:00:59.679371 containerd[2064]: time="2024-07-02T00:00:59.678813261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:59.680543 containerd[2064]: time="2024-07-02T00:00:59.680318657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=6282715" Jul 2 00:00:59.682316 containerd[2064]: time="2024-07-02T00:00:59.682252799Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:59.688738 containerd[2064]: time="2024-07-02T00:00:59.688629522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:59.690500 containerd[2064]: time="2024-07-02T00:00:59.690051525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.275084741s" Jul 2 00:00:59.690500 containerd[2064]: time="2024-07-02T00:00:59.690116381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jul 2 00:00:59.691842 containerd[2064]: time="2024-07-02T00:00:59.691757890Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 00:00:59.695494 containerd[2064]: time="2024-07-02T00:00:59.695395456Z" level=info msg="CreateContainer within sandbox 
\"cdf257b5c17b3c5dc6f1e3a7e128c6269f9741b80f081df89e83881ac5ef7f48\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:00:59.720416 containerd[2064]: time="2024-07-02T00:00:59.720362473Z" level=info msg="CreateContainer within sandbox \"cdf257b5c17b3c5dc6f1e3a7e128c6269f9741b80f081df89e83881ac5ef7f48\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"be612353da36ab6bcba06dc6feeab3bb624ad53f2f13885be55dab77cf9faabb\"" Jul 2 00:00:59.721690 containerd[2064]: time="2024-07-02T00:00:59.721544824Z" level=info msg="StartContainer for \"be612353da36ab6bcba06dc6feeab3bb624ad53f2f13885be55dab77cf9faabb\"" Jul 2 00:00:59.822218 containerd[2064]: time="2024-07-02T00:00:59.821690845Z" level=info msg="StartContainer for \"be612353da36ab6bcba06dc6feeab3bb624ad53f2f13885be55dab77cf9faabb\" returns successfully" Jul 2 00:00:59.964965 kubelet[2558]: E0702 00:00:59.964907 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:00:59.993066 containerd[2064]: time="2024-07-02T00:00:59.992963027Z" level=info msg="shim disconnected" id=be612353da36ab6bcba06dc6feeab3bb624ad53f2f13885be55dab77cf9faabb namespace=k8s.io Jul 2 00:00:59.993306 containerd[2064]: time="2024-07-02T00:00:59.993063878Z" level=warning msg="cleaning up after shim disconnected" id=be612353da36ab6bcba06dc6feeab3bb624ad53f2f13885be55dab77cf9faabb namespace=k8s.io Jul 2 00:00:59.993306 containerd[2064]: time="2024-07-02T00:00:59.993087769Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:01:00.503326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be612353da36ab6bcba06dc6feeab3bb624ad53f2f13885be55dab77cf9faabb-rootfs.mount: Deactivated successfully. 
Jul 2 00:01:00.965561 kubelet[2558]: E0702 00:01:00.965514 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:01.034078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318378202.mount: Deactivated successfully. Jul 2 00:01:01.121582 kubelet[2558]: E0702 00:01:01.121092 2558 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j69rn" podUID="30f2d765-6f0e-4f62-97b8-cf9269464124" Jul 2 00:01:01.561629 containerd[2064]: time="2024-07-02T00:01:01.561556867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:01.563767 containerd[2064]: time="2024-07-02T00:01:01.563700430Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772461" Jul 2 00:01:01.565600 containerd[2064]: time="2024-07-02T00:01:01.565556306Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:01.570284 containerd[2064]: time="2024-07-02T00:01:01.569793271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:01.571242 containerd[2064]: time="2024-07-02T00:01:01.571182894Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 1.879357975s" Jul 2 00:01:01.571367 containerd[2064]: time="2024-07-02T00:01:01.571241807Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jul 2 00:01:01.572763 containerd[2064]: time="2024-07-02T00:01:01.572539103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:01:01.576602 containerd[2064]: time="2024-07-02T00:01:01.576221236Z" level=info msg="CreateContainer within sandbox \"843a7a2d36549d085027ea5fa7266ad2587d3e06d2c7fbb9fd5e0aa7262009d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:01:01.597733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2549440095.mount: Deactivated successfully. Jul 2 00:01:01.605623 containerd[2064]: time="2024-07-02T00:01:01.605563636Z" level=info msg="CreateContainer within sandbox \"843a7a2d36549d085027ea5fa7266ad2587d3e06d2c7fbb9fd5e0aa7262009d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ae4907c0148681d5b8aa4764184d96a28daae6a8e5db53f147c2d988059be459\"" Jul 2 00:01:01.608457 containerd[2064]: time="2024-07-02T00:01:01.606704686Z" level=info msg="StartContainer for \"ae4907c0148681d5b8aa4764184d96a28daae6a8e5db53f147c2d988059be459\"" Jul 2 00:01:01.711578 containerd[2064]: time="2024-07-02T00:01:01.711491050Z" level=info msg="StartContainer for \"ae4907c0148681d5b8aa4764184d96a28daae6a8e5db53f147c2d988059be459\" returns successfully" Jul 2 00:01:01.966184 kubelet[2558]: E0702 00:01:01.966045 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:02.966688 kubelet[2558]: E0702 00:01:02.966618 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 
00:01:03.121656 kubelet[2558]: E0702 00:01:03.121580 2558 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j69rn" podUID="30f2d765-6f0e-4f62-97b8-cf9269464124" Jul 2 00:01:03.967668 kubelet[2558]: E0702 00:01:03.967589 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:04.719578 containerd[2064]: time="2024-07-02T00:01:04.719518049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:04.721148 containerd[2064]: time="2024-07-02T00:01:04.721096405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jul 2 00:01:04.723210 containerd[2064]: time="2024-07-02T00:01:04.722933479Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:04.728260 containerd[2064]: time="2024-07-02T00:01:04.727679223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:04.729242 containerd[2064]: time="2024-07-02T00:01:04.729184980Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 3.156478381s" Jul 2 00:01:04.729358 containerd[2064]: 
time="2024-07-02T00:01:04.729241036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 00:01:04.734171 containerd[2064]: time="2024-07-02T00:01:04.734104894Z" level=info msg="CreateContainer within sandbox \"cdf257b5c17b3c5dc6f1e3a7e128c6269f9741b80f081df89e83881ac5ef7f48\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:01:04.760859 containerd[2064]: time="2024-07-02T00:01:04.760786333Z" level=info msg="CreateContainer within sandbox \"cdf257b5c17b3c5dc6f1e3a7e128c6269f9741b80f081df89e83881ac5ef7f48\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d9c4b3dd02431430c8f20e97848949708a2c91094da497a848426e455c1b47d5\"" Jul 2 00:01:04.761830 containerd[2064]: time="2024-07-02T00:01:04.761732285Z" level=info msg="StartContainer for \"d9c4b3dd02431430c8f20e97848949708a2c91094da497a848426e455c1b47d5\"" Jul 2 00:01:04.882047 containerd[2064]: time="2024-07-02T00:01:04.881881851Z" level=info msg="StartContainer for \"d9c4b3dd02431430c8f20e97848949708a2c91094da497a848426e455c1b47d5\" returns successfully" Jul 2 00:01:04.968569 kubelet[2558]: E0702 00:01:04.968491 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:05.121341 kubelet[2558]: E0702 00:01:05.120850 2558 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j69rn" podUID="30f2d765-6f0e-4f62-97b8-cf9269464124" Jul 2 00:01:05.207042 kubelet[2558]: I0702 00:01:05.206827 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tmxk6" podStartSLOduration=6.055469399 podCreationTimestamp="2024-07-02 00:00:56 +0000 UTC" 
firstStartedPulling="2024-07-02 00:00:58.420512212 +0000 UTC m=+4.767614198" lastFinishedPulling="2024-07-02 00:01:01.57178347 +0000 UTC m=+7.918885456" observedRunningTime="2024-07-02 00:01:02.171835502 +0000 UTC m=+8.518937512" watchObservedRunningTime="2024-07-02 00:01:05.206740657 +0000 UTC m=+11.553842643" Jul 2 00:01:05.618165 containerd[2064]: time="2024-07-02T00:01:05.617915017Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:01:05.650911 kubelet[2558]: I0702 00:01:05.649812 2558 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 00:01:05.659981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9c4b3dd02431430c8f20e97848949708a2c91094da497a848426e455c1b47d5-rootfs.mount: Deactivated successfully. Jul 2 00:01:05.969789 kubelet[2558]: E0702 00:01:05.969588 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:06.970754 kubelet[2558]: E0702 00:01:06.970698 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:07.125460 containerd[2064]: time="2024-07-02T00:01:07.124592533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j69rn,Uid:30f2d765-6f0e-4f62-97b8-cf9269464124,Namespace:calico-system,Attempt:0,}" Jul 2 00:01:07.274582 containerd[2064]: time="2024-07-02T00:01:07.274128621Z" level=info msg="shim disconnected" id=d9c4b3dd02431430c8f20e97848949708a2c91094da497a848426e455c1b47d5 namespace=k8s.io Jul 2 00:01:07.274582 containerd[2064]: time="2024-07-02T00:01:07.274201233Z" level=warning msg="cleaning up after shim disconnected" id=d9c4b3dd02431430c8f20e97848949708a2c91094da497a848426e455c1b47d5 namespace=k8s.io 
Jul 2 00:01:07.274582 containerd[2064]: time="2024-07-02T00:01:07.274221584Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:01:07.392286 containerd[2064]: time="2024-07-02T00:01:07.392203634Z" level=error msg="Failed to destroy network for sandbox \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:01:07.395140 containerd[2064]: time="2024-07-02T00:01:07.395032848Z" level=error msg="encountered an error cleaning up failed sandbox \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:01:07.395269 containerd[2064]: time="2024-07-02T00:01:07.395192900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j69rn,Uid:30f2d765-6f0e-4f62-97b8-cf9269464124,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:01:07.395864 kubelet[2558]: E0702 00:01:07.395750 2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:01:07.395864 kubelet[2558]: 
E0702 00:01:07.395832 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j69rn" Jul 2 00:01:07.396060 kubelet[2558]: E0702 00:01:07.395872 2558 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j69rn" Jul 2 00:01:07.396060 kubelet[2558]: E0702 00:01:07.395958 2558 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j69rn_calico-system(30f2d765-6f0e-4f62-97b8-cf9269464124)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j69rn_calico-system(30f2d765-6f0e-4f62-97b8-cf9269464124)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j69rn" podUID="30f2d765-6f0e-4f62-97b8-cf9269464124" Jul 2 00:01:07.397165 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf-shm.mount: Deactivated successfully. 
Jul 2 00:01:07.970862 kubelet[2558]: E0702 00:01:07.970806 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:08.183845 containerd[2064]: time="2024-07-02T00:01:08.183508570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:01:08.184534 kubelet[2558]: I0702 00:01:08.184485 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Jul 2 00:01:08.186553 containerd[2064]: time="2024-07-02T00:01:08.185507521Z" level=info msg="StopPodSandbox for \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\"" Jul 2 00:01:08.186553 containerd[2064]: time="2024-07-02T00:01:08.185831719Z" level=info msg="Ensure that sandbox 889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf in task-service has been cleanup successfully" Jul 2 00:01:08.234398 containerd[2064]: time="2024-07-02T00:01:08.234121412Z" level=error msg="StopPodSandbox for \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\" failed" error="failed to destroy network for sandbox \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:01:08.235020 kubelet[2558]: E0702 00:01:08.234760 2558 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Jul 2 00:01:08.235020 
kubelet[2558]: E0702 00:01:08.234852 2558 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf"} Jul 2 00:01:08.235020 kubelet[2558]: E0702 00:01:08.234926 2558 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30f2d765-6f0e-4f62-97b8-cf9269464124\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:01:08.235020 kubelet[2558]: E0702 00:01:08.234977 2558 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30f2d765-6f0e-4f62-97b8-cf9269464124\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j69rn" podUID="30f2d765-6f0e-4f62-97b8-cf9269464124" Jul 2 00:01:08.971176 kubelet[2558]: E0702 00:01:08.971112 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:09.972028 kubelet[2558]: E0702 00:01:09.971967 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:10.430289 kubelet[2558]: I0702 00:01:10.430218 2558 topology_manager.go:215] "Topology Admit Handler" podUID="eeb96218-1cd9-4a49-a564-50235636a5c9" podNamespace="default" podName="nginx-deployment-6d5f899847-lmsq8" Jul 2 
00:01:10.585850 kubelet[2558]: I0702 00:01:10.585787 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmzzd\" (UniqueName: \"kubernetes.io/projected/eeb96218-1cd9-4a49-a564-50235636a5c9-kube-api-access-wmzzd\") pod \"nginx-deployment-6d5f899847-lmsq8\" (UID: \"eeb96218-1cd9-4a49-a564-50235636a5c9\") " pod="default/nginx-deployment-6d5f899847-lmsq8" Jul 2 00:01:10.742640 containerd[2064]: time="2024-07-02T00:01:10.741921371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-lmsq8,Uid:eeb96218-1cd9-4a49-a564-50235636a5c9,Namespace:default,Attempt:0,}" Jul 2 00:01:10.891041 containerd[2064]: time="2024-07-02T00:01:10.890978684Z" level=error msg="Failed to destroy network for sandbox \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:01:10.896471 containerd[2064]: time="2024-07-02T00:01:10.893978083Z" level=error msg="encountered an error cleaning up failed sandbox \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:01:10.896849 containerd[2064]: time="2024-07-02T00:01:10.896791461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-lmsq8,Uid:eeb96218-1cd9-4a49-a564-50235636a5c9,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jul 2 00:01:10.903129 kubelet[2558]: E0702 00:01:10.897692 2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:01:10.903129 kubelet[2558]: E0702 00:01:10.897771 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-lmsq8" Jul 2 00:01:10.903129 kubelet[2558]: E0702 00:01:10.897813 2558 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-lmsq8" Jul 2 00:01:10.901854 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4-shm.mount: Deactivated successfully. 
Jul 2 00:01:10.903891 kubelet[2558]: E0702 00:01:10.897906 2558 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-lmsq8_default(eeb96218-1cd9-4a49-a564-50235636a5c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-lmsq8_default(eeb96218-1cd9-4a49-a564-50235636a5c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-lmsq8" podUID="eeb96218-1cd9-4a49-a564-50235636a5c9" Jul 2 00:01:10.972779 kubelet[2558]: E0702 00:01:10.972639 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:11.193295 kubelet[2558]: I0702 00:01:11.192423 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Jul 2 00:01:11.194321 containerd[2064]: time="2024-07-02T00:01:11.193670320Z" level=info msg="StopPodSandbox for \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\"" Jul 2 00:01:11.194321 containerd[2064]: time="2024-07-02T00:01:11.193971034Z" level=info msg="Ensure that sandbox 8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4 in task-service has been cleanup successfully" Jul 2 00:01:11.284845 containerd[2064]: time="2024-07-02T00:01:11.284778224Z" level=error msg="StopPodSandbox for \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\" failed" error="failed to destroy network for sandbox \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:01:11.285878 kubelet[2558]: E0702 00:01:11.285845 2558 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Jul 2 00:01:11.286692 kubelet[2558]: E0702 00:01:11.286526 2558 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4"} Jul 2 00:01:11.286692 kubelet[2558]: E0702 00:01:11.286604 2558 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eeb96218-1cd9-4a49-a564-50235636a5c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:01:11.286692 kubelet[2558]: E0702 00:01:11.286658 2558 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eeb96218-1cd9-4a49-a564-50235636a5c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-6d5f899847-lmsq8" podUID="eeb96218-1cd9-4a49-a564-50235636a5c9" Jul 2 00:01:11.974323 kubelet[2558]: E0702 00:01:11.974238 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:12.641711 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 00:01:12.976830 kubelet[2558]: E0702 00:01:12.976153 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:13.451088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2345916682.mount: Deactivated successfully. Jul 2 00:01:13.513031 containerd[2064]: time="2024-07-02T00:01:13.512950949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:13.514485 containerd[2064]: time="2024-07-02T00:01:13.514396459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jul 2 00:01:13.516296 containerd[2064]: time="2024-07-02T00:01:13.516219474Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:13.520307 containerd[2064]: time="2024-07-02T00:01:13.520240224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:13.521846 containerd[2064]: time="2024-07-02T00:01:13.521682625Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 5.338107397s" Jul 2 00:01:13.521846 containerd[2064]: time="2024-07-02T00:01:13.521736952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jul 2 00:01:13.545510 containerd[2064]: time="2024-07-02T00:01:13.545345934Z" level=info msg="CreateContainer within sandbox \"cdf257b5c17b3c5dc6f1e3a7e128c6269f9741b80f081df89e83881ac5ef7f48\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:01:13.569144 containerd[2064]: time="2024-07-02T00:01:13.569068350Z" level=info msg="CreateContainer within sandbox \"cdf257b5c17b3c5dc6f1e3a7e128c6269f9741b80f081df89e83881ac5ef7f48\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b01d5a9db9f016d7a7b8247522d17fc73261e9fabd49c29dcd276215329ce623\"" Jul 2 00:01:13.570292 containerd[2064]: time="2024-07-02T00:01:13.570251829Z" level=info msg="StartContainer for \"b01d5a9db9f016d7a7b8247522d17fc73261e9fabd49c29dcd276215329ce623\"" Jul 2 00:01:13.672299 containerd[2064]: time="2024-07-02T00:01:13.672222726Z" level=info msg="StartContainer for \"b01d5a9db9f016d7a7b8247522d17fc73261e9fabd49c29dcd276215329ce623\" returns successfully" Jul 2 00:01:13.775668 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:01:13.775825 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 2 00:01:13.976904 kubelet[2558]: E0702 00:01:13.976843 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:14.960679 kubelet[2558]: E0702 00:01:14.960617 2558 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:14.977131 kubelet[2558]: E0702 00:01:14.977093 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:15.574799 kernel: Initializing XFRM netlink socket Jul 2 00:01:15.723248 (udev-worker)[3182]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:01:15.726651 systemd-networkd[1609]: vxlan.calico: Link UP Jul 2 00:01:15.726662 systemd-networkd[1609]: vxlan.calico: Gained carrier Jul 2 00:01:15.761736 (udev-worker)[3181]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:01:15.977936 kubelet[2558]: E0702 00:01:15.977806 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:16.978986 kubelet[2558]: E0702 00:01:16.978936 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:17.260902 systemd-networkd[1609]: vxlan.calico: Gained IPv6LL Jul 2 00:01:17.979821 kubelet[2558]: E0702 00:01:17.979755 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:18.980098 kubelet[2558]: E0702 00:01:18.980035 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:19.980293 kubelet[2558]: E0702 00:01:19.980223 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:20.180481 ntpd[2016]: Listen normally on 6 vxlan.calico 
192.168.39.64:123 Jul 2 00:01:20.180609 ntpd[2016]: Listen normally on 7 vxlan.calico [fe80::6443:e1ff:fe20:282e%3]:123 Jul 2 00:01:20.181189 ntpd[2016]: 2 Jul 00:01:20 ntpd[2016]: Listen normally on 6 vxlan.calico 192.168.39.64:123 Jul 2 00:01:20.181189 ntpd[2016]: 2 Jul 00:01:20 ntpd[2016]: Listen normally on 7 vxlan.calico [fe80::6443:e1ff:fe20:282e%3]:123 Jul 2 00:01:20.981291 kubelet[2558]: E0702 00:01:20.981231 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:21.981759 kubelet[2558]: E0702 00:01:21.981710 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:22.981940 kubelet[2558]: E0702 00:01:22.981882 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:23.982339 kubelet[2558]: E0702 00:01:23.982282 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:24.120881 containerd[2064]: time="2024-07-02T00:01:24.120797347Z" level=info msg="StopPodSandbox for \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\"" Jul 2 00:01:24.121422 containerd[2064]: time="2024-07-02T00:01:24.121284443Z" level=info msg="StopPodSandbox for \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\"" Jul 2 00:01:24.208293 kubelet[2558]: I0702 00:01:24.208247 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-cm7z4" podStartSLOduration=13.099871689 podCreationTimestamp="2024-07-02 00:00:56 +0000 UTC" firstStartedPulling="2024-07-02 00:00:58.413898022 +0000 UTC m=+4.760999996" lastFinishedPulling="2024-07-02 00:01:13.522215715 +0000 UTC m=+19.869317689" observedRunningTime="2024-07-02 00:01:14.225103688 +0000 UTC m=+20.572205674" watchObservedRunningTime="2024-07-02 00:01:24.208189382 +0000 
UTC m=+30.555291368" Jul 2 00:01:24.282764 containerd[2064]: 2024-07-02 00:01:24.211 [INFO][3437] k8s.go 608: Cleaning up netns ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Jul 2 00:01:24.282764 containerd[2064]: 2024-07-02 00:01:24.211 [INFO][3437] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" iface="eth0" netns="/var/run/netns/cni-056988e1-b1c4-8211-2782-4d3275477135" Jul 2 00:01:24.282764 containerd[2064]: 2024-07-02 00:01:24.212 [INFO][3437] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" iface="eth0" netns="/var/run/netns/cni-056988e1-b1c4-8211-2782-4d3275477135" Jul 2 00:01:24.282764 containerd[2064]: 2024-07-02 00:01:24.212 [INFO][3437] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" iface="eth0" netns="/var/run/netns/cni-056988e1-b1c4-8211-2782-4d3275477135" Jul 2 00:01:24.282764 containerd[2064]: 2024-07-02 00:01:24.212 [INFO][3437] k8s.go 615: Releasing IP address(es) ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Jul 2 00:01:24.282764 containerd[2064]: 2024-07-02 00:01:24.212 [INFO][3437] utils.go 188: Calico CNI releasing IP address ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Jul 2 00:01:24.282764 containerd[2064]: 2024-07-02 00:01:24.262 [INFO][3447] ipam_plugin.go 411: Releasing address using handleID ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" HandleID="k8s-pod-network.8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Workload="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:24.282764 containerd[2064]: 2024-07-02 00:01:24.263 [INFO][3447] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jul 2 00:01:24.282764 containerd[2064]: 2024-07-02 00:01:24.263 [INFO][3447] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:24.282764 containerd[2064]: 2024-07-02 00:01:24.275 [WARNING][3447] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" HandleID="k8s-pod-network.8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Workload="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:24.282764 containerd[2064]: 2024-07-02 00:01:24.275 [INFO][3447] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" HandleID="k8s-pod-network.8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Workload="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:24.282764 containerd[2064]: 2024-07-02 00:01:24.277 [INFO][3447] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:24.282764 containerd[2064]: 2024-07-02 00:01:24.280 [INFO][3437] k8s.go 621: Teardown processing complete. 
ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Jul 2 00:01:24.284564 containerd[2064]: time="2024-07-02T00:01:24.283828023Z" level=info msg="TearDown network for sandbox \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\" successfully" Jul 2 00:01:24.284564 containerd[2064]: time="2024-07-02T00:01:24.283874978Z" level=info msg="StopPodSandbox for \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\" returns successfully" Jul 2 00:01:24.287588 containerd[2064]: time="2024-07-02T00:01:24.285169897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-lmsq8,Uid:eeb96218-1cd9-4a49-a564-50235636a5c9,Namespace:default,Attempt:1,}" Jul 2 00:01:24.288348 systemd[1]: run-netns-cni\x2d056988e1\x2db1c4\x2d8211\x2d2782\x2d4d3275477135.mount: Deactivated successfully. Jul 2 00:01:24.308534 containerd[2064]: 2024-07-02 00:01:24.210 [INFO][3430] k8s.go 608: Cleaning up netns ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Jul 2 00:01:24.308534 containerd[2064]: 2024-07-02 00:01:24.210 [INFO][3430] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" iface="eth0" netns="/var/run/netns/cni-df481bcf-be38-e93c-10f1-3299ad61691b" Jul 2 00:01:24.308534 containerd[2064]: 2024-07-02 00:01:24.211 [INFO][3430] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" iface="eth0" netns="/var/run/netns/cni-df481bcf-be38-e93c-10f1-3299ad61691b" Jul 2 00:01:24.308534 containerd[2064]: 2024-07-02 00:01:24.211 [INFO][3430] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" iface="eth0" netns="/var/run/netns/cni-df481bcf-be38-e93c-10f1-3299ad61691b" Jul 2 00:01:24.308534 containerd[2064]: 2024-07-02 00:01:24.211 [INFO][3430] k8s.go 615: Releasing IP address(es) ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Jul 2 00:01:24.308534 containerd[2064]: 2024-07-02 00:01:24.212 [INFO][3430] utils.go 188: Calico CNI releasing IP address ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Jul 2 00:01:24.308534 containerd[2064]: 2024-07-02 00:01:24.262 [INFO][3446] ipam_plugin.go 411: Releasing address using handleID ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" HandleID="k8s-pod-network.889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Workload="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:24.308534 containerd[2064]: 2024-07-02 00:01:24.263 [INFO][3446] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:24.308534 containerd[2064]: 2024-07-02 00:01:24.278 [INFO][3446] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:24.308534 containerd[2064]: 2024-07-02 00:01:24.295 [WARNING][3446] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" HandleID="k8s-pod-network.889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Workload="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:24.308534 containerd[2064]: 2024-07-02 00:01:24.296 [INFO][3446] ipam_plugin.go 439: Releasing address using workloadID ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" HandleID="k8s-pod-network.889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Workload="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:24.308534 containerd[2064]: 2024-07-02 00:01:24.298 [INFO][3446] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:24.308534 containerd[2064]: 2024-07-02 00:01:24.305 [INFO][3430] k8s.go 621: Teardown processing complete. ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Jul 2 00:01:24.311562 systemd[1]: run-netns-cni\x2ddf481bcf\x2dbe38\x2de93c\x2d10f1\x2d3299ad61691b.mount: Deactivated successfully. Jul 2 00:01:24.312740 containerd[2064]: time="2024-07-02T00:01:24.311536791Z" level=info msg="TearDown network for sandbox \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\" successfully" Jul 2 00:01:24.312740 containerd[2064]: time="2024-07-02T00:01:24.311660032Z" level=info msg="StopPodSandbox for \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\" returns successfully" Jul 2 00:01:24.313486 containerd[2064]: time="2024-07-02T00:01:24.313097414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j69rn,Uid:30f2d765-6f0e-4f62-97b8-cf9269464124,Namespace:calico-system,Attempt:1,}" Jul 2 00:01:24.537190 systemd-networkd[1609]: cali953c40e1c3e: Link UP Jul 2 00:01:24.539124 systemd-networkd[1609]: cali953c40e1c3e: Gained carrier Jul 2 00:01:24.540401 (udev-worker)[3498]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.409 [INFO][3470] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.19.149-k8s-csi--node--driver--j69rn-eth0 csi-node-driver- calico-system 30f2d765-6f0e-4f62-97b8-cf9269464124 904 0 2024-07-02 00:00:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.19.149 csi-node-driver-j69rn eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali953c40e1c3e [] []}} ContainerID="bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" Namespace="calico-system" Pod="csi-node-driver-j69rn" WorkloadEndpoint="172.31.19.149-k8s-csi--node--driver--j69rn-" Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.409 [INFO][3470] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" Namespace="calico-system" Pod="csi-node-driver-j69rn" WorkloadEndpoint="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.472 [INFO][3485] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" HandleID="k8s-pod-network.bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" Workload="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.489 [INFO][3485] ipam_plugin.go 264: Auto assigning IP ContainerID="bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" HandleID="k8s-pod-network.bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" Workload="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cf60), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.19.149", "pod":"csi-node-driver-j69rn", "timestamp":"2024-07-02 00:01:24.472070951 +0000 UTC"}, Hostname:"172.31.19.149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.489 [INFO][3485] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.489 [INFO][3485] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.489 [INFO][3485] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.19.149' Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.491 [INFO][3485] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" host="172.31.19.149" Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.497 [INFO][3485] ipam.go 372: Looking up existing affinities for host host="172.31.19.149" Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.504 [INFO][3485] ipam.go 489: Trying affinity for 192.168.39.64/26 host="172.31.19.149" Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.507 [INFO][3485] ipam.go 155: Attempting to load block cidr=192.168.39.64/26 host="172.31.19.149" Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.510 [INFO][3485] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.64/26 host="172.31.19.149" Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.510 [INFO][3485] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.64/26 
handle="k8s-pod-network.bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" host="172.31.19.149" Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.512 [INFO][3485] ipam.go 1685: Creating new handle: k8s-pod-network.bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280 Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.517 [INFO][3485] ipam.go 1203: Writing block in order to claim IPs block=192.168.39.64/26 handle="k8s-pod-network.bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" host="172.31.19.149" Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.525 [INFO][3485] ipam.go 1216: Successfully claimed IPs: [192.168.39.65/26] block=192.168.39.64/26 handle="k8s-pod-network.bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" host="172.31.19.149" Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.525 [INFO][3485] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.65/26] handle="k8s-pod-network.bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" host="172.31.19.149" Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.525 [INFO][3485] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:01:24.564424 containerd[2064]: 2024-07-02 00:01:24.525 [INFO][3485] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.39.65/26] IPv6=[] ContainerID="bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" HandleID="k8s-pod-network.bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" Workload="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:24.566548 containerd[2064]: 2024-07-02 00:01:24.528 [INFO][3470] k8s.go 386: Populated endpoint ContainerID="bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" Namespace="calico-system" Pod="csi-node-driver-j69rn" WorkloadEndpoint="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149-k8s-csi--node--driver--j69rn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30f2d765-6f0e-4f62-97b8-cf9269464124", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.149", ContainerID:"", Pod:"csi-node-driver-j69rn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.39.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali953c40e1c3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:24.566548 containerd[2064]: 2024-07-02 00:01:24.529 [INFO][3470] k8s.go 387: Calico CNI using IPs: [192.168.39.65/32] ContainerID="bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" Namespace="calico-system" Pod="csi-node-driver-j69rn" WorkloadEndpoint="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:24.566548 containerd[2064]: 2024-07-02 00:01:24.529 [INFO][3470] dataplane_linux.go 68: Setting the host side veth name to cali953c40e1c3e ContainerID="bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" Namespace="calico-system" Pod="csi-node-driver-j69rn" WorkloadEndpoint="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:24.566548 containerd[2064]: 2024-07-02 00:01:24.540 [INFO][3470] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" Namespace="calico-system" Pod="csi-node-driver-j69rn" WorkloadEndpoint="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:24.566548 containerd[2064]: 2024-07-02 00:01:24.543 [INFO][3470] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" Namespace="calico-system" Pod="csi-node-driver-j69rn" WorkloadEndpoint="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149-k8s-csi--node--driver--j69rn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30f2d765-6f0e-4f62-97b8-cf9269464124", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.149", ContainerID:"bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280", Pod:"csi-node-driver-j69rn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.39.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali953c40e1c3e", MAC:"da:bb:04:3d:91:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:24.566548 containerd[2064]: 2024-07-02 00:01:24.561 [INFO][3470] k8s.go 500: Wrote updated endpoint to datastore ContainerID="bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280" Namespace="calico-system" Pod="csi-node-driver-j69rn" WorkloadEndpoint="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:24.603857 systemd-networkd[1609]: cali4b26e3371b7: Link UP Jul 2 00:01:24.605403 systemd-networkd[1609]: cali4b26e3371b7: Gained carrier Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.415 [INFO][3460] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0 nginx-deployment-6d5f899847- default eeb96218-1cd9-4a49-a564-50235636a5c9 905 0 2024-07-02 00:01:10 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] 
map[] [] [] []} {k8s 172.31.19.149 nginx-deployment-6d5f899847-lmsq8 eth0 default [] [] [kns.default ksa.default.default] cali4b26e3371b7 [] []}} ContainerID="cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" Namespace="default" Pod="nginx-deployment-6d5f899847-lmsq8" WorkloadEndpoint="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-" Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.416 [INFO][3460] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" Namespace="default" Pod="nginx-deployment-6d5f899847-lmsq8" WorkloadEndpoint="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.472 [INFO][3486] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" HandleID="k8s-pod-network.cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" Workload="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.495 [INFO][3486] ipam_plugin.go 264: Auto assigning IP ContainerID="cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" HandleID="k8s-pod-network.cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" Workload="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058dbd0), Attrs:map[string]string{"namespace":"default", "node":"172.31.19.149", "pod":"nginx-deployment-6d5f899847-lmsq8", "timestamp":"2024-07-02 00:01:24.472358494 +0000 UTC"}, Hostname:"172.31.19.149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.495 [INFO][3486] 
ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.525 [INFO][3486] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.526 [INFO][3486] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.19.149' Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.529 [INFO][3486] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" host="172.31.19.149" Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.542 [INFO][3486] ipam.go 372: Looking up existing affinities for host host="172.31.19.149" Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.550 [INFO][3486] ipam.go 489: Trying affinity for 192.168.39.64/26 host="172.31.19.149" Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.553 [INFO][3486] ipam.go 155: Attempting to load block cidr=192.168.39.64/26 host="172.31.19.149" Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.561 [INFO][3486] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.64/26 host="172.31.19.149" Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.562 [INFO][3486] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.64/26 handle="k8s-pod-network.cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" host="172.31.19.149" Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.572 [INFO][3486] ipam.go 1685: Creating new handle: k8s-pod-network.cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.579 [INFO][3486] ipam.go 1203: Writing block in order to claim IPs block=192.168.39.64/26 handle="k8s-pod-network.cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" host="172.31.19.149" Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.588 
[INFO][3486] ipam.go 1216: Successfully claimed IPs: [192.168.39.66/26] block=192.168.39.64/26 handle="k8s-pod-network.cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" host="172.31.19.149" Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.589 [INFO][3486] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.66/26] handle="k8s-pod-network.cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" host="172.31.19.149" Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.589 [INFO][3486] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:24.639764 containerd[2064]: 2024-07-02 00:01:24.589 [INFO][3486] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.39.66/26] IPv6=[] ContainerID="cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" HandleID="k8s-pod-network.cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" Workload="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:24.643212 containerd[2064]: 2024-07-02 00:01:24.594 [INFO][3460] k8s.go 386: Populated endpoint ContainerID="cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" Namespace="default" Pod="nginx-deployment-6d5f899847-lmsq8" WorkloadEndpoint="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"eeb96218-1cd9-4a49-a564-50235636a5c9", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.149", ContainerID:"", Pod:"nginx-deployment-6d5f899847-lmsq8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.39.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4b26e3371b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:24.643212 containerd[2064]: 2024-07-02 00:01:24.595 [INFO][3460] k8s.go 387: Calico CNI using IPs: [192.168.39.66/32] ContainerID="cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" Namespace="default" Pod="nginx-deployment-6d5f899847-lmsq8" WorkloadEndpoint="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:24.643212 containerd[2064]: 2024-07-02 00:01:24.595 [INFO][3460] dataplane_linux.go 68: Setting the host side veth name to cali4b26e3371b7 ContainerID="cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" Namespace="default" Pod="nginx-deployment-6d5f899847-lmsq8" WorkloadEndpoint="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:24.643212 containerd[2064]: 2024-07-02 00:01:24.606 [INFO][3460] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" Namespace="default" Pod="nginx-deployment-6d5f899847-lmsq8" WorkloadEndpoint="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:24.643212 containerd[2064]: 2024-07-02 00:01:24.607 [INFO][3460] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" Namespace="default" 
Pod="nginx-deployment-6d5f899847-lmsq8" WorkloadEndpoint="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"eeb96218-1cd9-4a49-a564-50235636a5c9", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.149", ContainerID:"cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca", Pod:"nginx-deployment-6d5f899847-lmsq8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.39.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4b26e3371b7", MAC:"a6:88:16:32:f0:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:24.643212 containerd[2064]: 2024-07-02 00:01:24.616 [INFO][3460] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca" Namespace="default" Pod="nginx-deployment-6d5f899847-lmsq8" WorkloadEndpoint="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:24.665466 containerd[2064]: time="2024-07-02T00:01:24.662191965Z" level=info msg="loading plugin 
\"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:24.665466 containerd[2064]: time="2024-07-02T00:01:24.662913393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:24.665466 containerd[2064]: time="2024-07-02T00:01:24.662967132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:24.665466 containerd[2064]: time="2024-07-02T00:01:24.663001590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:24.722981 containerd[2064]: time="2024-07-02T00:01:24.722832374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:24.724922 containerd[2064]: time="2024-07-02T00:01:24.722946287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:24.724922 containerd[2064]: time="2024-07-02T00:01:24.724619660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:24.724922 containerd[2064]: time="2024-07-02T00:01:24.724675920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:24.743015 containerd[2064]: time="2024-07-02T00:01:24.742952906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j69rn,Uid:30f2d765-6f0e-4f62-97b8-cf9269464124,Namespace:calico-system,Attempt:1,} returns sandbox id \"bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280\"" Jul 2 00:01:24.746677 containerd[2064]: time="2024-07-02T00:01:24.746625289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:01:24.815491 containerd[2064]: time="2024-07-02T00:01:24.813020379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-lmsq8,Uid:eeb96218-1cd9-4a49-a564-50235636a5c9,Namespace:default,Attempt:1,} returns sandbox id \"cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca\"" Jul 2 00:01:24.983053 kubelet[2558]: E0702 00:01:24.982980 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:25.901454 systemd-networkd[1609]: cali4b26e3371b7: Gained IPv6LL Jul 2 00:01:25.981492 containerd[2064]: time="2024-07-02T00:01:25.981402250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:25.982975 containerd[2064]: time="2024-07-02T00:01:25.982875314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 00:01:25.984155 kubelet[2558]: E0702 00:01:25.984103 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:25.984914 containerd[2064]: time="2024-07-02T00:01:25.984835533Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:25.988918 containerd[2064]: 
time="2024-07-02T00:01:25.988822006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:25.990509 containerd[2064]: time="2024-07-02T00:01:25.990283364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.243485993s" Jul 2 00:01:25.990509 containerd[2064]: time="2024-07-02T00:01:25.990335542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 00:01:25.992182 containerd[2064]: time="2024-07-02T00:01:25.991620052Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 00:01:25.993812 containerd[2064]: time="2024-07-02T00:01:25.993736566Z" level=info msg="CreateContainer within sandbox \"bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:01:26.020138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4158560436.mount: Deactivated successfully. 
Jul 2 00:01:26.025460 containerd[2064]: time="2024-07-02T00:01:26.024093365Z" level=info msg="CreateContainer within sandbox \"bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f96b33861ea3221f73ba0bf75000e6abf555ce6b44e4a13341c76eb1536bdfd9\"" Jul 2 00:01:26.025460 containerd[2064]: time="2024-07-02T00:01:26.025091652Z" level=info msg="StartContainer for \"f96b33861ea3221f73ba0bf75000e6abf555ce6b44e4a13341c76eb1536bdfd9\"" Jul 2 00:01:26.029314 systemd-networkd[1609]: cali953c40e1c3e: Gained IPv6LL Jul 2 00:01:26.138461 containerd[2064]: time="2024-07-02T00:01:26.137768179Z" level=info msg="StartContainer for \"f96b33861ea3221f73ba0bf75000e6abf555ce6b44e4a13341c76eb1536bdfd9\" returns successfully" Jul 2 00:01:26.163377 kubelet[2558]: I0702 00:01:26.163242 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:01:26.611176 update_engine[2035]: I0702 00:01:26.611104 2035 update_attempter.cc:509] Updating boot flags... 
Jul 2 00:01:26.687802 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3709) Jul 2 00:01:26.973476 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3713) Jul 2 00:01:26.984854 kubelet[2558]: E0702 00:01:26.984797 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:27.985028 kubelet[2558]: E0702 00:01:27.984987 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:28.180621 ntpd[2016]: Listen normally on 8 cali953c40e1c3e [fe80::ecee:eeff:feee:eeee%6]:123 Jul 2 00:01:28.180737 ntpd[2016]: Listen normally on 9 cali4b26e3371b7 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 00:01:28.181198 ntpd[2016]: 2 Jul 00:01:28 ntpd[2016]: Listen normally on 8 cali953c40e1c3e [fe80::ecee:eeff:feee:eeee%6]:123 Jul 2 00:01:28.181198 ntpd[2016]: 2 Jul 00:01:28 ntpd[2016]: Listen normally on 9 cali4b26e3371b7 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 00:01:28.792335 kubelet[2558]: I0702 00:01:28.792278 2558 topology_manager.go:215] "Topology Admit Handler" podUID="8815cd8c-8f24-487f-894b-9977b2ae75bc" podNamespace="calico-apiserver" podName="calico-apiserver-b47bfd8c7-jj8j4" Jul 2 00:01:28.892217 kubelet[2558]: I0702 00:01:28.892019 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn4zv\" (UniqueName: \"kubernetes.io/projected/8815cd8c-8f24-487f-894b-9977b2ae75bc-kube-api-access-sn4zv\") pod \"calico-apiserver-b47bfd8c7-jj8j4\" (UID: \"8815cd8c-8f24-487f-894b-9977b2ae75bc\") " pod="calico-apiserver/calico-apiserver-b47bfd8c7-jj8j4" Jul 2 00:01:28.892217 kubelet[2558]: I0702 00:01:28.892101 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/8815cd8c-8f24-487f-894b-9977b2ae75bc-calico-apiserver-certs\") pod \"calico-apiserver-b47bfd8c7-jj8j4\" (UID: \"8815cd8c-8f24-487f-894b-9977b2ae75bc\") " pod="calico-apiserver/calico-apiserver-b47bfd8c7-jj8j4" Jul 2 00:01:28.986077 kubelet[2558]: E0702 00:01:28.985968 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:28.995504 kubelet[2558]: E0702 00:01:28.994460 2558 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:01:28.995504 kubelet[2558]: E0702 00:01:28.994602 2558 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8815cd8c-8f24-487f-894b-9977b2ae75bc-calico-apiserver-certs podName:8815cd8c-8f24-487f-894b-9977b2ae75bc nodeName:}" failed. No retries permitted until 2024-07-02 00:01:29.494564191 +0000 UTC m=+35.841666165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8815cd8c-8f24-487f-894b-9977b2ae75bc-calico-apiserver-certs") pod "calico-apiserver-b47bfd8c7-jj8j4" (UID: "8815cd8c-8f24-487f-894b-9977b2ae75bc") : secret "calico-apiserver-certs" not found Jul 2 00:01:29.332955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4125535928.mount: Deactivated successfully. Jul 2 00:01:29.714279 containerd[2064]: time="2024-07-02T00:01:29.714132676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b47bfd8c7-jj8j4,Uid:8815cd8c-8f24-487f-894b-9977b2ae75bc,Namespace:calico-apiserver,Attempt:0,}" Jul 2 00:01:29.986932 kubelet[2558]: E0702 00:01:29.986720 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:30.000964 (udev-worker)[3711]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 00:01:30.005239 systemd-networkd[1609]: cali6a81d8bfefe: Link UP Jul 2 00:01:30.008846 systemd-networkd[1609]: cali6a81d8bfefe: Gained carrier Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.838 [INFO][3898] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.19.149-k8s-calico--apiserver--b47bfd8c7--jj8j4-eth0 calico-apiserver-b47bfd8c7- calico-apiserver 8815cd8c-8f24-487f-894b-9977b2ae75bc 974 0 2024-07-02 00:01:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b47bfd8c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172.31.19.149 calico-apiserver-b47bfd8c7-jj8j4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6a81d8bfefe [] []}} ContainerID="9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" Namespace="calico-apiserver" Pod="calico-apiserver-b47bfd8c7-jj8j4" WorkloadEndpoint="172.31.19.149-k8s-calico--apiserver--b47bfd8c7--jj8j4-" Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.838 [INFO][3898] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" Namespace="calico-apiserver" Pod="calico-apiserver-b47bfd8c7-jj8j4" WorkloadEndpoint="172.31.19.149-k8s-calico--apiserver--b47bfd8c7--jj8j4-eth0" Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.901 [INFO][3907] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" HandleID="k8s-pod-network.9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" Workload="172.31.19.149-k8s-calico--apiserver--b47bfd8c7--jj8j4-eth0" Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.928 [INFO][3907] ipam_plugin.go 264: Auto assigning IP 
ContainerID="9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" HandleID="k8s-pod-network.9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" Workload="172.31.19.149-k8s-calico--apiserver--b47bfd8c7--jj8j4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003187a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172.31.19.149", "pod":"calico-apiserver-b47bfd8c7-jj8j4", "timestamp":"2024-07-02 00:01:29.901499082 +0000 UTC"}, Hostname:"172.31.19.149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.928 [INFO][3907] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.928 [INFO][3907] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.929 [INFO][3907] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.19.149' Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.938 [INFO][3907] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" host="172.31.19.149" Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.947 [INFO][3907] ipam.go 372: Looking up existing affinities for host host="172.31.19.149" Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.956 [INFO][3907] ipam.go 489: Trying affinity for 192.168.39.64/26 host="172.31.19.149" Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.960 [INFO][3907] ipam.go 155: Attempting to load block cidr=192.168.39.64/26 host="172.31.19.149" Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.965 [INFO][3907] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.64/26 
host="172.31.19.149" Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.965 [INFO][3907] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.64/26 handle="k8s-pod-network.9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" host="172.31.19.149" Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.969 [INFO][3907] ipam.go 1685: Creating new handle: k8s-pod-network.9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.978 [INFO][3907] ipam.go 1203: Writing block in order to claim IPs block=192.168.39.64/26 handle="k8s-pod-network.9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" host="172.31.19.149" Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.990 [INFO][3907] ipam.go 1216: Successfully claimed IPs: [192.168.39.67/26] block=192.168.39.64/26 handle="k8s-pod-network.9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" host="172.31.19.149" Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.990 [INFO][3907] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.67/26] handle="k8s-pod-network.9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" host="172.31.19.149" Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.990 [INFO][3907] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:01:30.049545 containerd[2064]: 2024-07-02 00:01:29.990 [INFO][3907] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.39.67/26] IPv6=[] ContainerID="9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" HandleID="k8s-pod-network.9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" Workload="172.31.19.149-k8s-calico--apiserver--b47bfd8c7--jj8j4-eth0" Jul 2 00:01:30.052232 containerd[2064]: 2024-07-02 00:01:29.997 [INFO][3898] k8s.go 386: Populated endpoint ContainerID="9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" Namespace="calico-apiserver" Pod="calico-apiserver-b47bfd8c7-jj8j4" WorkloadEndpoint="172.31.19.149-k8s-calico--apiserver--b47bfd8c7--jj8j4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149-k8s-calico--apiserver--b47bfd8c7--jj8j4-eth0", GenerateName:"calico-apiserver-b47bfd8c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"8815cd8c-8f24-487f-894b-9977b2ae75bc", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b47bfd8c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.149", ContainerID:"", Pod:"calico-apiserver-b47bfd8c7-jj8j4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a81d8bfefe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:30.052232 containerd[2064]: 2024-07-02 00:01:29.997 [INFO][3898] k8s.go 387: Calico CNI using IPs: [192.168.39.67/32] ContainerID="9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" Namespace="calico-apiserver" Pod="calico-apiserver-b47bfd8c7-jj8j4" WorkloadEndpoint="172.31.19.149-k8s-calico--apiserver--b47bfd8c7--jj8j4-eth0" Jul 2 00:01:30.052232 containerd[2064]: 2024-07-02 00:01:29.997 [INFO][3898] dataplane_linux.go 68: Setting the host side veth name to cali6a81d8bfefe ContainerID="9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" Namespace="calico-apiserver" Pod="calico-apiserver-b47bfd8c7-jj8j4" WorkloadEndpoint="172.31.19.149-k8s-calico--apiserver--b47bfd8c7--jj8j4-eth0" Jul 2 00:01:30.052232 containerd[2064]: 2024-07-02 00:01:30.008 [INFO][3898] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" Namespace="calico-apiserver" Pod="calico-apiserver-b47bfd8c7-jj8j4" WorkloadEndpoint="172.31.19.149-k8s-calico--apiserver--b47bfd8c7--jj8j4-eth0" Jul 2 00:01:30.052232 containerd[2064]: 2024-07-02 00:01:30.008 [INFO][3898] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" Namespace="calico-apiserver" Pod="calico-apiserver-b47bfd8c7-jj8j4" WorkloadEndpoint="172.31.19.149-k8s-calico--apiserver--b47bfd8c7--jj8j4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149-k8s-calico--apiserver--b47bfd8c7--jj8j4-eth0", GenerateName:"calico-apiserver-b47bfd8c7-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"8815cd8c-8f24-487f-894b-9977b2ae75bc", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b47bfd8c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.149", ContainerID:"9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f", Pod:"calico-apiserver-b47bfd8c7-jj8j4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a81d8bfefe", MAC:"36:51:19:dc:8a:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:30.052232 containerd[2064]: 2024-07-02 00:01:30.041 [INFO][3898] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f" Namespace="calico-apiserver" Pod="calico-apiserver-b47bfd8c7-jj8j4" WorkloadEndpoint="172.31.19.149-k8s-calico--apiserver--b47bfd8c7--jj8j4-eth0" Jul 2 00:01:30.116352 containerd[2064]: time="2024-07-02T00:01:30.115956666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:30.116352 containerd[2064]: time="2024-07-02T00:01:30.116077699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:30.116352 containerd[2064]: time="2024-07-02T00:01:30.116120488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:30.117454 containerd[2064]: time="2024-07-02T00:01:30.117320392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:30.233738 containerd[2064]: time="2024-07-02T00:01:30.233684334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b47bfd8c7-jj8j4,Uid:8815cd8c-8f24-487f-894b-9977b2ae75bc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f\"" Jul 2 00:01:30.978720 containerd[2064]: time="2024-07-02T00:01:30.978643000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:30.980859 containerd[2064]: time="2024-07-02T00:01:30.980802916Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67659691" Jul 2 00:01:30.982805 containerd[2064]: time="2024-07-02T00:01:30.982727189Z" level=info msg="ImageCreate event name:\"sha256:2d3caadc252cc3b24921aae8c484cb83879b0b39cb20bb8d23a3a54872427653\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:30.986975 kubelet[2558]: E0702 00:01:30.986899 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:30.988726 containerd[2064]: time="2024-07-02T00:01:30.988567316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:30.991044 containerd[2064]: time="2024-07-02T00:01:30.990947001Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:2d3caadc252cc3b24921aae8c484cb83879b0b39cb20bb8d23a3a54872427653\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de\", size \"67659569\" in 4.999270449s" Jul 2 00:01:30.991502 containerd[2064]: time="2024-07-02T00:01:30.991176808Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:2d3caadc252cc3b24921aae8c484cb83879b0b39cb20bb8d23a3a54872427653\"" Jul 2 00:01:30.992035 containerd[2064]: time="2024-07-02T00:01:30.991953584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:01:30.996566 containerd[2064]: time="2024-07-02T00:01:30.996230205Z" level=info msg="CreateContainer within sandbox \"cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 2 00:01:31.024461 containerd[2064]: time="2024-07-02T00:01:31.024318867Z" level=info msg="CreateContainer within sandbox \"cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"33b1f97bcbaa1c78e25a5f69fcb1cf3bc1d453484554fd94eb7126b2556e72f6\"" Jul 2 00:01:31.026470 containerd[2064]: time="2024-07-02T00:01:31.025229666Z" level=info msg="StartContainer for \"33b1f97bcbaa1c78e25a5f69fcb1cf3bc1d453484554fd94eb7126b2556e72f6\"" Jul 2 00:01:31.126460 containerd[2064]: time="2024-07-02T00:01:31.124113808Z" level=info msg="StartContainer for \"33b1f97bcbaa1c78e25a5f69fcb1cf3bc1d453484554fd94eb7126b2556e72f6\" returns successfully" Jul 2 00:01:31.660685 systemd-networkd[1609]: cali6a81d8bfefe: Gained IPv6LL Jul 2 00:01:31.987419 kubelet[2558]: E0702 00:01:31.987261 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:32.291368 containerd[2064]: 
time="2024-07-02T00:01:32.291303351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:32.293367 containerd[2064]: time="2024-07-02T00:01:32.293287139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 00:01:32.295371 containerd[2064]: time="2024-07-02T00:01:32.295324833Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:32.301901 containerd[2064]: time="2024-07-02T00:01:32.300537370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:32.302597 containerd[2064]: time="2024-07-02T00:01:32.302533800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.310524952s" Jul 2 00:01:32.302742 containerd[2064]: time="2024-07-02T00:01:32.302602726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 00:01:32.303953 containerd[2064]: time="2024-07-02T00:01:32.303708239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 00:01:32.306468 containerd[2064]: time="2024-07-02T00:01:32.306105429Z" level=info msg="CreateContainer within sandbox 
\"bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:01:32.331587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3992017793.mount: Deactivated successfully. Jul 2 00:01:32.333636 containerd[2064]: time="2024-07-02T00:01:32.332315163Z" level=info msg="CreateContainer within sandbox \"bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3d5de31e51e006f88dd6e308287623bc8d69bc6180a8766060f863d0352b630c\"" Jul 2 00:01:32.335576 containerd[2064]: time="2024-07-02T00:01:32.334223409Z" level=info msg="StartContainer for \"3d5de31e51e006f88dd6e308287623bc8d69bc6180a8766060f863d0352b630c\"" Jul 2 00:01:32.447321 containerd[2064]: time="2024-07-02T00:01:32.446205365Z" level=info msg="StartContainer for \"3d5de31e51e006f88dd6e308287623bc8d69bc6180a8766060f863d0352b630c\" returns successfully" Jul 2 00:01:32.987748 kubelet[2558]: E0702 00:01:32.987678 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:33.171449 kubelet[2558]: I0702 00:01:33.171363 2558 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:01:33.171584 kubelet[2558]: I0702 00:01:33.171425 2558 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:01:33.293601 kubelet[2558]: I0702 00:01:33.293545 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-lmsq8" podStartSLOduration=17.118831199 podCreationTimestamp="2024-07-02 00:01:10 +0000 UTC" firstStartedPulling="2024-07-02 00:01:24.81717353 +0000 UTC m=+31.164275516" 
lastFinishedPulling="2024-07-02 00:01:30.991666701 +0000 UTC m=+37.338768687" observedRunningTime="2024-07-02 00:01:31.298064241 +0000 UTC m=+37.645166227" watchObservedRunningTime="2024-07-02 00:01:33.29332437 +0000 UTC m=+39.640426393" Jul 2 00:01:33.294129 kubelet[2558]: I0702 00:01:33.293955 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-j69rn" podStartSLOduration=29.735941199 podCreationTimestamp="2024-07-02 00:00:56 +0000 UTC" firstStartedPulling="2024-07-02 00:01:24.745223805 +0000 UTC m=+31.092325791" lastFinishedPulling="2024-07-02 00:01:32.303192677 +0000 UTC m=+38.650294651" observedRunningTime="2024-07-02 00:01:33.290902952 +0000 UTC m=+39.638004974" watchObservedRunningTime="2024-07-02 00:01:33.293910059 +0000 UTC m=+39.641012069" Jul 2 00:01:33.988871 kubelet[2558]: E0702 00:01:33.988710 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:34.181177 ntpd[2016]: Listen normally on 10 cali6a81d8bfefe [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 00:01:34.182115 ntpd[2016]: 2 Jul 00:01:34 ntpd[2016]: Listen normally on 10 cali6a81d8bfefe [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 00:01:34.369885 containerd[2064]: time="2024-07-02T00:01:34.369456051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:34.371418 containerd[2064]: time="2024-07-02T00:01:34.371348221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jul 2 00:01:34.373179 containerd[2064]: time="2024-07-02T00:01:34.373104974Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:34.377184 containerd[2064]: time="2024-07-02T00:01:34.377104449Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:34.378677 containerd[2064]: time="2024-07-02T00:01:34.378627879Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 2.07485644s" Jul 2 00:01:34.378940 containerd[2064]: time="2024-07-02T00:01:34.378810706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jul 2 00:01:34.382256 containerd[2064]: time="2024-07-02T00:01:34.382196290Z" level=info msg="CreateContainer within sandbox \"9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 00:01:34.404723 containerd[2064]: time="2024-07-02T00:01:34.404565413Z" level=info msg="CreateContainer within sandbox \"9090649745f6326625b1e3832b2c4fc39a4b4ea4c98be592ba4f4133cefad03f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2e0ecdfaf281540849e57f12636a1532824ae16175c6e04da3b011ef2f0ab9ff\"" Jul 2 00:01:34.405827 containerd[2064]: time="2024-07-02T00:01:34.405621209Z" level=info msg="StartContainer for \"2e0ecdfaf281540849e57f12636a1532824ae16175c6e04da3b011ef2f0ab9ff\"" Jul 2 00:01:34.525471 containerd[2064]: time="2024-07-02T00:01:34.523197372Z" level=info msg="StartContainer for \"2e0ecdfaf281540849e57f12636a1532824ae16175c6e04da3b011ef2f0ab9ff\" returns successfully" Jul 2 00:01:34.960919 kubelet[2558]: E0702 00:01:34.960864 2558 file.go:104] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:34.989247 kubelet[2558]: E0702 00:01:34.989210 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:35.314406 kubelet[2558]: I0702 00:01:35.314233 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b47bfd8c7-jj8j4" podStartSLOduration=3.172576448 podCreationTimestamp="2024-07-02 00:01:28 +0000 UTC" firstStartedPulling="2024-07-02 00:01:30.237770612 +0000 UTC m=+36.584872598" lastFinishedPulling="2024-07-02 00:01:34.379258554 +0000 UTC m=+40.726360552" observedRunningTime="2024-07-02 00:01:35.311347852 +0000 UTC m=+41.658449850" watchObservedRunningTime="2024-07-02 00:01:35.314064402 +0000 UTC m=+41.661166400" Jul 2 00:01:35.989839 kubelet[2558]: E0702 00:01:35.989771 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:36.291319 kubelet[2558]: I0702 00:01:36.291263 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:01:36.832065 kubelet[2558]: I0702 00:01:36.831989 2558 topology_manager.go:215] "Topology Admit Handler" podUID="7004433b-c482-4b99-9bc6-a2d0e872e337" podNamespace="default" podName="nfs-server-provisioner-0" Jul 2 00:01:36.844260 kubelet[2558]: I0702 00:01:36.844050 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7004433b-c482-4b99-9bc6-a2d0e872e337-data\") pod \"nfs-server-provisioner-0\" (UID: \"7004433b-c482-4b99-9bc6-a2d0e872e337\") " pod="default/nfs-server-provisioner-0" Jul 2 00:01:36.844260 kubelet[2558]: I0702 00:01:36.844136 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82clx\" (UniqueName: 
\"kubernetes.io/projected/7004433b-c482-4b99-9bc6-a2d0e872e337-kube-api-access-82clx\") pod \"nfs-server-provisioner-0\" (UID: \"7004433b-c482-4b99-9bc6-a2d0e872e337\") " pod="default/nfs-server-provisioner-0" Jul 2 00:01:36.990521 kubelet[2558]: E0702 00:01:36.990482 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:37.138831 containerd[2064]: time="2024-07-02T00:01:37.138684092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7004433b-c482-4b99-9bc6-a2d0e872e337,Namespace:default,Attempt:0,}" Jul 2 00:01:37.369295 systemd-networkd[1609]: cali60e51b789ff: Link UP Jul 2 00:01:37.371937 (udev-worker)[4164]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:01:37.373246 systemd-networkd[1609]: cali60e51b789ff: Gained carrier Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.234 [INFO][4148] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.19.149-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 7004433b-c482-4b99-9bc6-a2d0e872e337 1051 0 2024-07-02 00:01:36 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.19.149 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 
662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.149-k8s-nfs--server--provisioner--0-" Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.235 [INFO][4148] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.149-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.280 [INFO][4157] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" HandleID="k8s-pod-network.2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" Workload="172.31.19.149-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.306 [INFO][4157] ipam_plugin.go 264: Auto assigning IP ContainerID="2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" HandleID="k8s-pod-network.2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" Workload="172.31.19.149-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000263e20), Attrs:map[string]string{"namespace":"default", "node":"172.31.19.149", "pod":"nfs-server-provisioner-0", "timestamp":"2024-07-02 00:01:37.280272604 +0000 UTC"}, Hostname:"172.31.19.149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.306 [INFO][4157] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.306 [INFO][4157] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.306 [INFO][4157] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.19.149' Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.312 [INFO][4157] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" host="172.31.19.149" Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.321 [INFO][4157] ipam.go 372: Looking up existing affinities for host host="172.31.19.149" Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.330 [INFO][4157] ipam.go 489: Trying affinity for 192.168.39.64/26 host="172.31.19.149" Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.333 [INFO][4157] ipam.go 155: Attempting to load block cidr=192.168.39.64/26 host="172.31.19.149" Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.338 [INFO][4157] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.64/26 host="172.31.19.149" Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.338 [INFO][4157] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.64/26 handle="k8s-pod-network.2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" host="172.31.19.149" Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.341 [INFO][4157] ipam.go 1685: Creating new handle: k8s-pod-network.2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.347 [INFO][4157] ipam.go 1203: Writing block in order to claim IPs block=192.168.39.64/26 handle="k8s-pod-network.2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" host="172.31.19.149" Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.360 [INFO][4157] ipam.go 1216: Successfully claimed IPs: 
[192.168.39.68/26] block=192.168.39.64/26 handle="k8s-pod-network.2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" host="172.31.19.149" Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.360 [INFO][4157] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.68/26] handle="k8s-pod-network.2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" host="172.31.19.149" Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.361 [INFO][4157] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:37.393650 containerd[2064]: 2024-07-02 00:01:37.361 [INFO][4157] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.39.68/26] IPv6=[] ContainerID="2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" HandleID="k8s-pod-network.2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" Workload="172.31.19.149-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:01:37.395127 containerd[2064]: 2024-07-02 00:01:37.363 [INFO][4148] k8s.go 386: Populated endpoint ContainerID="2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.149-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7004433b-c482-4b99-9bc6-a2d0e872e337", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.149", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.39.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:37.395127 containerd[2064]: 2024-07-02 00:01:37.363 [INFO][4148] k8s.go 387: Calico CNI using IPs: [192.168.39.68/32] ContainerID="2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.149-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:01:37.395127 containerd[2064]: 2024-07-02 00:01:37.364 [INFO][4148] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.149-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:01:37.395127 containerd[2064]: 2024-07-02 00:01:37.373 [INFO][4148] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.149-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:01:37.395573 containerd[2064]: 2024-07-02 00:01:37.376 [INFO][4148] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.149-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7004433b-c482-4b99-9bc6-a2d0e872e337", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.149", ContainerID:"2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.39.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"72:f8:2c:a6:6a:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:37.395573 containerd[2064]: 2024-07-02 00:01:37.386 [INFO][4148] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.149-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:01:37.436486 containerd[2064]: time="2024-07-02T00:01:37.435968370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:37.436486 containerd[2064]: time="2024-07-02T00:01:37.436055234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:37.436486 containerd[2064]: time="2024-07-02T00:01:37.436085585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:37.436486 containerd[2064]: time="2024-07-02T00:01:37.436109189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:37.540480 containerd[2064]: time="2024-07-02T00:01:37.540377878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7004433b-c482-4b99-9bc6-a2d0e872e337,Namespace:default,Attempt:0,} returns sandbox id \"2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae\"" Jul 2 00:01:37.543849 containerd[2064]: time="2024-07-02T00:01:37.543423308Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 2 00:01:37.991694 kubelet[2558]: E0702 00:01:37.991625 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:38.508812 systemd-networkd[1609]: cali60e51b789ff: Gained IPv6LL Jul 2 00:01:38.993470 kubelet[2558]: E0702 00:01:38.992301 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:39.992948 kubelet[2558]: E0702 00:01:39.992842 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:40.370798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount39348971.mount: Deactivated successfully. 
Jul 2 00:01:40.994093 kubelet[2558]: E0702 00:01:40.993970 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:41.180860 ntpd[2016]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 00:01:41.183891 ntpd[2016]: 2 Jul 00:01:41 ntpd[2016]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 00:01:41.994673 kubelet[2558]: E0702 00:01:41.994609 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:42.995183 kubelet[2558]: E0702 00:01:42.995102 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:43.089243 containerd[2064]: time="2024-07-02T00:01:43.089165375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:43.091229 containerd[2064]: time="2024-07-02T00:01:43.091157267Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Jul 2 00:01:43.092268 containerd[2064]: time="2024-07-02T00:01:43.092181343Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:43.099445 containerd[2064]: time="2024-07-02T00:01:43.099344350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:43.101665 containerd[2064]: time="2024-07-02T00:01:43.101408962Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.557898215s" Jul 2 00:01:43.101665 containerd[2064]: time="2024-07-02T00:01:43.101493100Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jul 2 00:01:43.105156 containerd[2064]: time="2024-07-02T00:01:43.104916635Z" level=info msg="CreateContainer within sandbox \"2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 2 00:01:43.131894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2106187238.mount: Deactivated successfully. Jul 2 00:01:43.134181 containerd[2064]: time="2024-07-02T00:01:43.133970135Z" level=info msg="CreateContainer within sandbox \"2e6d62d87f244660c82382e9b2c4c0d63d8c0ccdc8e322ac8c919f9425a46eae\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"2fce096a06e34e59db265fd91ad79f90fcce5f365dd10017e005450f4ca23b54\"" Jul 2 00:01:43.135510 containerd[2064]: time="2024-07-02T00:01:43.134905066Z" level=info msg="StartContainer for \"2fce096a06e34e59db265fd91ad79f90fcce5f365dd10017e005450f4ca23b54\"" Jul 2 00:01:43.240918 containerd[2064]: time="2024-07-02T00:01:43.240748306Z" level=info msg="StartContainer for \"2fce096a06e34e59db265fd91ad79f90fcce5f365dd10017e005450f4ca23b54\" returns successfully" Jul 2 00:01:43.550913 kubelet[2558]: I0702 00:01:43.550586 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:01:43.578631 kubelet[2558]: I0702 00:01:43.578476 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="default/nfs-server-provisioner-0" podStartSLOduration=2.019195926 podCreationTimestamp="2024-07-02 00:01:36 +0000 UTC" firstStartedPulling="2024-07-02 00:01:37.54302071 +0000 UTC m=+43.890122696" lastFinishedPulling="2024-07-02 00:01:43.102228048 +0000 UTC m=+49.449330034" observedRunningTime="2024-07-02 00:01:43.37269672 +0000 UTC m=+49.719798718" watchObservedRunningTime="2024-07-02 00:01:43.578403264 +0000 UTC m=+49.925505250" Jul 2 00:01:43.996332 kubelet[2558]: E0702 00:01:43.996185 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:44.996846 kubelet[2558]: E0702 00:01:44.996794 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:45.997669 kubelet[2558]: E0702 00:01:45.997618 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:46.998598 kubelet[2558]: E0702 00:01:46.998535 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:47.998740 kubelet[2558]: E0702 00:01:47.998668 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:48.999709 kubelet[2558]: E0702 00:01:48.999652 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:50.000003 kubelet[2558]: E0702 00:01:49.999938 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:51.001159 kubelet[2558]: E0702 00:01:51.001066 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:52.001667 kubelet[2558]: E0702 00:01:52.001599 2558 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:53.002125 kubelet[2558]: E0702 00:01:53.002065 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:54.002574 kubelet[2558]: E0702 00:01:54.002522 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:54.961066 kubelet[2558]: E0702 00:01:54.961003 2558 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:54.989317 containerd[2064]: time="2024-07-02T00:01:54.989235536Z" level=info msg="StopPodSandbox for \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\"" Jul 2 00:01:55.004500 kubelet[2558]: E0702 00:01:55.003509 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:55.109930 containerd[2064]: 2024-07-02 00:01:55.054 [WARNING][4346] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149-k8s-csi--node--driver--j69rn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30f2d765-6f0e-4f62-97b8-cf9269464124", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.149", ContainerID:"bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280", Pod:"csi-node-driver-j69rn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.39.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali953c40e1c3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:55.109930 containerd[2064]: 2024-07-02 00:01:55.054 [INFO][4346] k8s.go 608: Cleaning up netns ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Jul 2 00:01:55.109930 containerd[2064]: 2024-07-02 00:01:55.055 [INFO][4346] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" iface="eth0" netns="" Jul 2 00:01:55.109930 containerd[2064]: 2024-07-02 00:01:55.055 [INFO][4346] k8s.go 615: Releasing IP address(es) ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Jul 2 00:01:55.109930 containerd[2064]: 2024-07-02 00:01:55.055 [INFO][4346] utils.go 188: Calico CNI releasing IP address ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Jul 2 00:01:55.109930 containerd[2064]: 2024-07-02 00:01:55.089 [INFO][4352] ipam_plugin.go 411: Releasing address using handleID ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" HandleID="k8s-pod-network.889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Workload="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:55.109930 containerd[2064]: 2024-07-02 00:01:55.089 [INFO][4352] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:55.109930 containerd[2064]: 2024-07-02 00:01:55.089 [INFO][4352] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:55.109930 containerd[2064]: 2024-07-02 00:01:55.101 [WARNING][4352] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" HandleID="k8s-pod-network.889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Workload="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:55.109930 containerd[2064]: 2024-07-02 00:01:55.101 [INFO][4352] ipam_plugin.go 439: Releasing address using workloadID ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" HandleID="k8s-pod-network.889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Workload="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:55.109930 containerd[2064]: 2024-07-02 00:01:55.105 [INFO][4352] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:01:55.109930 containerd[2064]: 2024-07-02 00:01:55.107 [INFO][4346] k8s.go 621: Teardown processing complete. ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Jul 2 00:01:55.111032 containerd[2064]: time="2024-07-02T00:01:55.110825713Z" level=info msg="TearDown network for sandbox \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\" successfully" Jul 2 00:01:55.111032 containerd[2064]: time="2024-07-02T00:01:55.110869067Z" level=info msg="StopPodSandbox for \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\" returns successfully" Jul 2 00:01:55.113108 containerd[2064]: time="2024-07-02T00:01:55.113050509Z" level=info msg="RemovePodSandbox for \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\"" Jul 2 00:01:55.113232 containerd[2064]: time="2024-07-02T00:01:55.113110719Z" level=info msg="Forcibly stopping sandbox \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\"" Jul 2 00:01:55.251905 containerd[2064]: 2024-07-02 00:01:55.196 [WARNING][4370] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149-k8s-csi--node--driver--j69rn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30f2d765-6f0e-4f62-97b8-cf9269464124", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.149", ContainerID:"bc684961f637446f91c4844d7aea8e8a575a84f4c079f225d30381ce7c3e9280", Pod:"csi-node-driver-j69rn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.39.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali953c40e1c3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:55.251905 containerd[2064]: 2024-07-02 00:01:55.196 [INFO][4370] k8s.go 608: Cleaning up netns ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Jul 2 00:01:55.251905 containerd[2064]: 2024-07-02 00:01:55.196 [INFO][4370] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" iface="eth0" netns="" Jul 2 00:01:55.251905 containerd[2064]: 2024-07-02 00:01:55.197 [INFO][4370] k8s.go 615: Releasing IP address(es) ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Jul 2 00:01:55.251905 containerd[2064]: 2024-07-02 00:01:55.197 [INFO][4370] utils.go 188: Calico CNI releasing IP address ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Jul 2 00:01:55.251905 containerd[2064]: 2024-07-02 00:01:55.233 [INFO][4379] ipam_plugin.go 411: Releasing address using handleID ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" HandleID="k8s-pod-network.889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Workload="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:55.251905 containerd[2064]: 2024-07-02 00:01:55.233 [INFO][4379] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:55.251905 containerd[2064]: 2024-07-02 00:01:55.234 [INFO][4379] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:55.251905 containerd[2064]: 2024-07-02 00:01:55.245 [WARNING][4379] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" HandleID="k8s-pod-network.889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Workload="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:55.251905 containerd[2064]: 2024-07-02 00:01:55.245 [INFO][4379] ipam_plugin.go 439: Releasing address using workloadID ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" HandleID="k8s-pod-network.889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Workload="172.31.19.149-k8s-csi--node--driver--j69rn-eth0" Jul 2 00:01:55.251905 containerd[2064]: 2024-07-02 00:01:55.247 [INFO][4379] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:01:55.251905 containerd[2064]: 2024-07-02 00:01:55.249 [INFO][4370] k8s.go 621: Teardown processing complete. ContainerID="889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf" Jul 2 00:01:55.251905 containerd[2064]: time="2024-07-02T00:01:55.251775662Z" level=info msg="TearDown network for sandbox \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\" successfully" Jul 2 00:01:55.257010 containerd[2064]: time="2024-07-02T00:01:55.256893508Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:01:55.257149 containerd[2064]: time="2024-07-02T00:01:55.257058890Z" level=info msg="RemovePodSandbox \"889d73e7e6afd30963d8bfdef4bdae636ab1088267195c6847e1e1fd2c83ebaf\" returns successfully" Jul 2 00:01:55.258098 containerd[2064]: time="2024-07-02T00:01:55.258050358Z" level=info msg="StopPodSandbox for \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\"" Jul 2 00:01:55.372404 containerd[2064]: 2024-07-02 00:01:55.318 [WARNING][4398] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"eeb96218-1cd9-4a49-a564-50235636a5c9", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.149", ContainerID:"cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca", Pod:"nginx-deployment-6d5f899847-lmsq8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.39.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4b26e3371b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:55.372404 containerd[2064]: 2024-07-02 00:01:55.318 [INFO][4398] k8s.go 608: Cleaning up netns ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Jul 2 00:01:55.372404 containerd[2064]: 2024-07-02 00:01:55.318 [INFO][4398] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" iface="eth0" netns="" Jul 2 00:01:55.372404 containerd[2064]: 2024-07-02 00:01:55.318 [INFO][4398] k8s.go 615: Releasing IP address(es) ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Jul 2 00:01:55.372404 containerd[2064]: 2024-07-02 00:01:55.318 [INFO][4398] utils.go 188: Calico CNI releasing IP address ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Jul 2 00:01:55.372404 containerd[2064]: 2024-07-02 00:01:55.353 [INFO][4404] ipam_plugin.go 411: Releasing address using handleID ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" HandleID="k8s-pod-network.8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Workload="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:55.372404 containerd[2064]: 2024-07-02 00:01:55.353 [INFO][4404] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:55.372404 containerd[2064]: 2024-07-02 00:01:55.353 [INFO][4404] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:55.372404 containerd[2064]: 2024-07-02 00:01:55.365 [WARNING][4404] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" HandleID="k8s-pod-network.8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Workload="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:55.372404 containerd[2064]: 2024-07-02 00:01:55.365 [INFO][4404] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" HandleID="k8s-pod-network.8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Workload="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:55.372404 containerd[2064]: 2024-07-02 00:01:55.367 [INFO][4404] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:55.372404 containerd[2064]: 2024-07-02 00:01:55.369 [INFO][4398] k8s.go 621: Teardown processing complete. ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Jul 2 00:01:55.373582 containerd[2064]: time="2024-07-02T00:01:55.373214012Z" level=info msg="TearDown network for sandbox \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\" successfully" Jul 2 00:01:55.373582 containerd[2064]: time="2024-07-02T00:01:55.373286132Z" level=info msg="StopPodSandbox for \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\" returns successfully" Jul 2 00:01:55.374005 containerd[2064]: time="2024-07-02T00:01:55.373960269Z" level=info msg="RemovePodSandbox for \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\"" Jul 2 00:01:55.374138 containerd[2064]: time="2024-07-02T00:01:55.374015136Z" level=info msg="Forcibly stopping sandbox \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\"" Jul 2 00:01:55.512146 containerd[2064]: 2024-07-02 00:01:55.454 [WARNING][4422] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"eeb96218-1cd9-4a49-a564-50235636a5c9", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.149", ContainerID:"cc66684546d9e06510dd8946d2d2df9f1a35bc96c525e9b4516cdc7a652538ca", Pod:"nginx-deployment-6d5f899847-lmsq8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.39.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4b26e3371b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:55.512146 containerd[2064]: 2024-07-02 00:01:55.455 [INFO][4422] k8s.go 608: Cleaning up netns ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Jul 2 00:01:55.512146 containerd[2064]: 2024-07-02 00:01:55.455 [INFO][4422] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" iface="eth0" netns="" Jul 2 00:01:55.512146 containerd[2064]: 2024-07-02 00:01:55.455 [INFO][4422] k8s.go 615: Releasing IP address(es) ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Jul 2 00:01:55.512146 containerd[2064]: 2024-07-02 00:01:55.455 [INFO][4422] utils.go 188: Calico CNI releasing IP address ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Jul 2 00:01:55.512146 containerd[2064]: 2024-07-02 00:01:55.491 [INFO][4429] ipam_plugin.go 411: Releasing address using handleID ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" HandleID="k8s-pod-network.8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Workload="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:55.512146 containerd[2064]: 2024-07-02 00:01:55.493 [INFO][4429] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:55.512146 containerd[2064]: 2024-07-02 00:01:55.493 [INFO][4429] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:55.512146 containerd[2064]: 2024-07-02 00:01:55.505 [WARNING][4429] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" HandleID="k8s-pod-network.8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Workload="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:55.512146 containerd[2064]: 2024-07-02 00:01:55.505 [INFO][4429] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" HandleID="k8s-pod-network.8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Workload="172.31.19.149-k8s-nginx--deployment--6d5f899847--lmsq8-eth0" Jul 2 00:01:55.512146 containerd[2064]: 2024-07-02 00:01:55.507 [INFO][4429] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:55.512146 containerd[2064]: 2024-07-02 00:01:55.509 [INFO][4422] k8s.go 621: Teardown processing complete. ContainerID="8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4" Jul 2 00:01:55.512146 containerd[2064]: time="2024-07-02T00:01:55.512047051Z" level=info msg="TearDown network for sandbox \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\" successfully" Jul 2 00:01:55.517231 containerd[2064]: time="2024-07-02T00:01:55.517158581Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:01:55.517970 containerd[2064]: time="2024-07-02T00:01:55.517243500Z" level=info msg="RemovePodSandbox \"8420af3c6dde5fea0925eaf75433320e15cce816cbd583ab6fbb66f527379da4\" returns successfully" Jul 2 00:01:56.003960 kubelet[2558]: E0702 00:01:56.003907 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:57.005102 kubelet[2558]: E0702 00:01:57.005046 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:58.005879 kubelet[2558]: E0702 00:01:58.005819 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:01:59.006481 kubelet[2558]: E0702 00:01:59.006409 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:00.007221 kubelet[2558]: E0702 00:02:00.007163 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:01.008170 kubelet[2558]: E0702 00:02:01.008111 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:02.009205 kubelet[2558]: E0702 00:02:02.009149 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:03.009958 kubelet[2558]: E0702 00:02:03.009884 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:04.010381 kubelet[2558]: E0702 00:02:04.010321 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:05.011151 kubelet[2558]: E0702 00:02:05.011084 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 2 00:02:06.011687 kubelet[2558]: E0702 00:02:06.011621 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:07.012157 kubelet[2558]: E0702 00:02:07.012038 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:08.012771 kubelet[2558]: E0702 00:02:08.012703 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:08.330850 kubelet[2558]: I0702 00:02:08.330805 2558 topology_manager.go:215] "Topology Admit Handler" podUID="46d9ecf2-1fd8-4b6a-9d78-bbc602d10216" podNamespace="default" podName="test-pod-1" Jul 2 00:02:08.439677 kubelet[2558]: I0702 00:02:08.439378 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b07d4616-3fa1-41d9-b0ca-32f03a91ca9e\" (UniqueName: \"kubernetes.io/nfs/46d9ecf2-1fd8-4b6a-9d78-bbc602d10216-pvc-b07d4616-3fa1-41d9-b0ca-32f03a91ca9e\") pod \"test-pod-1\" (UID: \"46d9ecf2-1fd8-4b6a-9d78-bbc602d10216\") " pod="default/test-pod-1" Jul 2 00:02:08.439677 kubelet[2558]: I0702 00:02:08.439482 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjvtc\" (UniqueName: \"kubernetes.io/projected/46d9ecf2-1fd8-4b6a-9d78-bbc602d10216-kube-api-access-fjvtc\") pod \"test-pod-1\" (UID: \"46d9ecf2-1fd8-4b6a-9d78-bbc602d10216\") " pod="default/test-pod-1" Jul 2 00:02:08.578605 kernel: FS-Cache: Loaded Jul 2 00:02:08.622908 kernel: RPC: Registered named UNIX socket transport module. Jul 2 00:02:08.623056 kernel: RPC: Registered udp transport module. Jul 2 00:02:08.623119 kernel: RPC: Registered tcp transport module. Jul 2 00:02:08.624916 kernel: RPC: Registered tcp-with-tls transport module. Jul 2 00:02:08.625008 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jul 2 00:02:08.961971 kernel: NFS: Registering the id_resolver key type Jul 2 00:02:08.962104 kernel: Key type id_resolver registered Jul 2 00:02:08.963033 kernel: Key type id_legacy registered Jul 2 00:02:09.000382 nfsidmap[4490]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jul 2 00:02:09.006210 nfsidmap[4491]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jul 2 00:02:09.013276 kubelet[2558]: E0702 00:02:09.013216 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:09.237582 containerd[2064]: time="2024-07-02T00:02:09.237309444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:46d9ecf2-1fd8-4b6a-9d78-bbc602d10216,Namespace:default,Attempt:0,}" Jul 2 00:02:09.425043 (udev-worker)[4483]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 00:02:09.425703 systemd-networkd[1609]: cali5ec59c6bf6e: Link UP Jul 2 00:02:09.427604 systemd-networkd[1609]: cali5ec59c6bf6e: Gained carrier Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.317 [INFO][4496] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.19.149-k8s-test--pod--1-eth0 default 46d9ecf2-1fd8-4b6a-9d78-bbc602d10216 1168 0 2024-07-02 00:01:38 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.19.149 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.149-k8s-test--pod--1-" Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.317 [INFO][4496] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.149-k8s-test--pod--1-eth0" Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.363 [INFO][4503] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" HandleID="k8s-pod-network.cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" Workload="172.31.19.149-k8s-test--pod--1-eth0" Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.379 [INFO][4503] ipam_plugin.go 264: Auto assigning IP ContainerID="cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" HandleID="k8s-pod-network.cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" Workload="172.31.19.149-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002911f0), Attrs:map[string]string{"namespace":"default", "node":"172.31.19.149", 
"pod":"test-pod-1", "timestamp":"2024-07-02 00:02:09.363092273 +0000 UTC"}, Hostname:"172.31.19.149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.380 [INFO][4503] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.380 [INFO][4503] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.380 [INFO][4503] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.19.149' Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.382 [INFO][4503] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" host="172.31.19.149" Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.388 [INFO][4503] ipam.go 372: Looking up existing affinities for host host="172.31.19.149" Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.395 [INFO][4503] ipam.go 489: Trying affinity for 192.168.39.64/26 host="172.31.19.149" Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.398 [INFO][4503] ipam.go 155: Attempting to load block cidr=192.168.39.64/26 host="172.31.19.149" Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.401 [INFO][4503] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.64/26 host="172.31.19.149" Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.401 [INFO][4503] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.64/26 handle="k8s-pod-network.cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" host="172.31.19.149" Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.404 [INFO][4503] ipam.go 1685: Creating new handle: 
k8s-pod-network.cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.408 [INFO][4503] ipam.go 1203: Writing block in order to claim IPs block=192.168.39.64/26 handle="k8s-pod-network.cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" host="172.31.19.149" Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.417 [INFO][4503] ipam.go 1216: Successfully claimed IPs: [192.168.39.69/26] block=192.168.39.64/26 handle="k8s-pod-network.cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" host="172.31.19.149" Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.417 [INFO][4503] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.69/26] handle="k8s-pod-network.cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" host="172.31.19.149" Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.417 [INFO][4503] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.417 [INFO][4503] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.39.69/26] IPv6=[] ContainerID="cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" HandleID="k8s-pod-network.cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" Workload="172.31.19.149-k8s-test--pod--1-eth0" Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.421 [INFO][4496] k8s.go 386: Populated endpoint ContainerID="cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.149-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"46d9ecf2-1fd8-4b6a-9d78-bbc602d10216", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2024, 
time.July, 2, 0, 1, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.149", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.39.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:09.441504 containerd[2064]: 2024-07-02 00:02:09.421 [INFO][4496] k8s.go 387: Calico CNI using IPs: [192.168.39.69/32] ContainerID="cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.149-k8s-test--pod--1-eth0" Jul 2 00:02:09.449345 containerd[2064]: 2024-07-02 00:02:09.421 [INFO][4496] dataplane_linux.go 68: Setting the host side veth name to cali5ec59c6bf6e ContainerID="cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.149-k8s-test--pod--1-eth0" Jul 2 00:02:09.449345 containerd[2064]: 2024-07-02 00:02:09.428 [INFO][4496] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.149-k8s-test--pod--1-eth0" Jul 2 00:02:09.449345 containerd[2064]: 2024-07-02 00:02:09.429 [INFO][4496] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" 
Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.149-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.149-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"46d9ecf2-1fd8-4b6a-9d78-bbc602d10216", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.149", ContainerID:"cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.39.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"8a:2b:5b:ec:17:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:09.449345 containerd[2064]: 2024-07-02 00:02:09.437 [INFO][4496] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.149-k8s-test--pod--1-eth0" Jul 2 00:02:09.490998 containerd[2064]: time="2024-07-02T00:02:09.490291846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:02:09.491296 containerd[2064]: time="2024-07-02T00:02:09.491178861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:09.491761 containerd[2064]: time="2024-07-02T00:02:09.491674132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:02:09.492952 containerd[2064]: time="2024-07-02T00:02:09.492727191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:09.581822 containerd[2064]: time="2024-07-02T00:02:09.581748920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:46d9ecf2-1fd8-4b6a-9d78-bbc602d10216,Namespace:default,Attempt:0,} returns sandbox id \"cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea\"" Jul 2 00:02:09.584943 containerd[2064]: time="2024-07-02T00:02:09.584828471Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 00:02:09.879447 containerd[2064]: time="2024-07-02T00:02:09.879347863Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:09.880997 containerd[2064]: time="2024-07-02T00:02:09.880934060Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jul 2 00:02:09.886904 containerd[2064]: time="2024-07-02T00:02:09.886756321Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:2d3caadc252cc3b24921aae8c484cb83879b0b39cb20bb8d23a3a54872427653\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de\", size \"67659569\" in 301.866559ms" Jul 2 00:02:09.886904 containerd[2064]: 
time="2024-07-02T00:02:09.886811801Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:2d3caadc252cc3b24921aae8c484cb83879b0b39cb20bb8d23a3a54872427653\"" Jul 2 00:02:09.889324 containerd[2064]: time="2024-07-02T00:02:09.889251084Z" level=info msg="CreateContainer within sandbox \"cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 2 00:02:09.919070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount537756040.mount: Deactivated successfully. Jul 2 00:02:09.923636 containerd[2064]: time="2024-07-02T00:02:09.923566838Z" level=info msg="CreateContainer within sandbox \"cee4ecbbc22416d7c02cf851ed34d986e9a74e014a10ecbc53a961f798ed99ea\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3b863b5a4b9ae9677f5638b5d8d07e80f5baf7c43395cd40f3257f233090793e\"" Jul 2 00:02:09.924326 containerd[2064]: time="2024-07-02T00:02:09.924269801Z" level=info msg="StartContainer for \"3b863b5a4b9ae9677f5638b5d8d07e80f5baf7c43395cd40f3257f233090793e\"" Jul 2 00:02:10.014360 kubelet[2558]: E0702 00:02:10.014290 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:10.021825 containerd[2064]: time="2024-07-02T00:02:10.021688623Z" level=info msg="StartContainer for \"3b863b5a4b9ae9677f5638b5d8d07e80f5baf7c43395cd40f3257f233090793e\" returns successfully" Jul 2 00:02:10.439502 kubelet[2558]: I0702 00:02:10.439076 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=32.136050037 podCreationTimestamp="2024-07-02 00:01:38 +0000 UTC" firstStartedPulling="2024-07-02 00:02:09.584159509 +0000 UTC m=+75.931261495" lastFinishedPulling="2024-07-02 00:02:09.887096391 +0000 UTC m=+76.234198377" observedRunningTime="2024-07-02 00:02:10.438861756 +0000 UTC m=+76.785963766" watchObservedRunningTime="2024-07-02 
00:02:10.438986919 +0000 UTC m=+76.786088905" Jul 2 00:02:10.509279 systemd-networkd[1609]: cali5ec59c6bf6e: Gained IPv6LL Jul 2 00:02:11.015011 kubelet[2558]: E0702 00:02:11.014942 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:12.015652 kubelet[2558]: E0702 00:02:12.015596 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:13.015976 kubelet[2558]: E0702 00:02:13.015916 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:13.180592 ntpd[2016]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 00:02:13.181710 ntpd[2016]: 2 Jul 00:02:13 ntpd[2016]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 00:02:14.016337 kubelet[2558]: E0702 00:02:14.016285 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:14.960752 kubelet[2558]: E0702 00:02:14.960685 2558 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:02:15.017363 kubelet[2558]: E0702 00:02:15.017301 2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"