Dec 13 01:54:48.293734 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Dec 13 01:54:48.293785 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:54:48.293811 kernel: KASLR disabled due to lack of seed Dec 13 01:54:48.293828 kernel: efi: EFI v2.7 by EDK II Dec 13 01:54:48.293844 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 Dec 13 01:54:48.293859 kernel: ACPI: Early table checksum verification disabled Dec 13 01:54:48.293877 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Dec 13 01:54:48.293893 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Dec 13 01:54:48.293909 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 13 01:54:48.293924 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Dec 13 01:54:48.293945 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 13 01:54:48.293961 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Dec 13 01:54:48.293976 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Dec 13 01:54:48.293995 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Dec 13 01:54:48.294014 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 13 01:54:48.294034 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Dec 13 01:54:48.294052 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Dec 13 01:54:48.294069 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Dec 13 01:54:48.294086 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Dec 13 01:54:48.294103 kernel: printk: bootconsole [uart0] enabled Dec 13 01:54:48.294121 kernel: NUMA: Failed to initialise from firmware Dec 13 01:54:48.294138 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Dec 13 01:54:48.294155 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Dec 13 01:54:48.294171 kernel: Zone ranges: Dec 13 01:54:48.294188 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 13 01:54:48.294205 kernel: DMA32 empty Dec 13 01:54:48.294226 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Dec 13 01:54:48.294243 kernel: Movable zone start for each node Dec 13 01:54:48.294260 kernel: Early memory node ranges Dec 13 01:54:48.294277 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Dec 13 01:54:48.294294 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Dec 13 01:54:48.294311 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Dec 13 01:54:48.294328 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Dec 13 01:54:48.294345 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Dec 13 01:54:48.294361 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Dec 13 01:54:48.294378 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Dec 13 01:54:48.294394 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Dec 13 01:54:48.294412 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Dec 13 01:54:48.294432 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Dec 13 01:54:48.294449 kernel: psci: probing for conduit method from ACPI. Dec 13 01:54:48.294473 kernel: psci: PSCIv1.0 detected in firmware. Dec 13 01:54:48.294491 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:54:48.294509 kernel: psci: Trusted OS migration not required Dec 13 01:54:48.294530 kernel: psci: SMC Calling Convention v1.1 Dec 13 01:54:48.294548 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:54:48.294565 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:54:48.294584 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 01:54:48.294601 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:54:48.294618 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:54:48.294636 kernel: CPU features: detected: Spectre-v2 Dec 13 01:54:48.294653 kernel: CPU features: detected: Spectre-v3a Dec 13 01:54:48.294696 kernel: CPU features: detected: Spectre-BHB Dec 13 01:54:48.294752 kernel: CPU features: detected: ARM erratum 1742098 Dec 13 01:54:48.294771 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Dec 13 01:54:48.294797 kernel: alternatives: applying boot alternatives Dec 13 01:54:48.294818 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:54:48.294838 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:54:48.294856 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:54:48.294874 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:54:48.294892 kernel: Fallback order for Node 0: 0 Dec 13 01:54:48.294910 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Dec 13 01:54:48.294928 kernel: Policy zone: Normal Dec 13 01:54:48.294946 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:54:48.294963 kernel: software IO TLB: area num 2. Dec 13 01:54:48.295018 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Dec 13 01:54:48.295051 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved) Dec 13 01:54:48.295070 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:54:48.295087 kernel: trace event string verifier disabled Dec 13 01:54:48.295105 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:54:48.295125 kernel: rcu: RCU event tracing is enabled. Dec 13 01:54:48.295144 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:54:48.295162 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:54:48.295181 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:54:48.295199 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
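
[Editor's note] The "Kernel command line:" entry above carries the Flatcar-specific boot parameters: mount.usr and verity.usr point /usr at a dm-verity protected USR-A partition, root=LABEL=ROOT selects the writable root, and flatcar.first_boot / flatcar.oem.id=ec2 steer Ignition. A minimal sketch of splitting such a line into flags and key=value pairs, assuming a readable /proc/cmdline; the helper is illustrative, not part of Flatcar's initrd.

    # Illustrative only: split a kernel command line like the one logged above
    # into bare flags and key=value pairs (assumes /proc/cmdline is readable).
    def parse_cmdline(text: str) -> dict:
        params = {}
        for token in text.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True
        return params

    with open("/proc/cmdline") as f:
        cmdline = parse_cmdline(f.read())

    print(cmdline.get("root"))            # e.g. LABEL=ROOT
    print(cmdline.get("verity.usrhash"))  # hash used to verify the USR-A partition
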
Dec 13 01:54:48.295216 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:54:48.295234 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:54:48.295256 kernel: GICv3: 96 SPIs implemented Dec 13 01:54:48.295274 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:54:48.295291 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:54:48.295308 kernel: GICv3: GICv3 features: 16 PPIs Dec 13 01:54:48.295325 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Dec 13 01:54:48.295343 kernel: ITS [mem 0x10080000-0x1009ffff] Dec 13 01:54:48.295360 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 01:54:48.295378 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Dec 13 01:54:48.295395 kernel: GICv3: using LPI property table @0x00000004000d0000 Dec 13 01:54:48.295413 kernel: ITS: Using hypervisor restricted LPI range [128] Dec 13 01:54:48.295430 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Dec 13 01:54:48.295447 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:54:48.295469 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Dec 13 01:54:48.295487 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Dec 13 01:54:48.295505 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Dec 13 01:54:48.295523 kernel: Console: colour dummy device 80x25 Dec 13 01:54:48.295541 kernel: printk: console [tty1] enabled Dec 13 01:54:48.295559 kernel: ACPI: Core revision 20230628 Dec 13 01:54:48.295578 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Dec 13 01:54:48.295596 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:54:48.295614 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:54:48.295635 kernel: landlock: Up and running. Dec 13 01:54:48.295653 kernel: SELinux: Initializing. Dec 13 01:54:48.295698 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:54:48.295730 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:54:48.295997 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:54:48.296017 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:54:48.296035 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:54:48.296054 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:54:48.296072 kernel: Platform MSI: ITS@0x10080000 domain created Dec 13 01:54:48.296098 kernel: PCI/MSI: ITS@0x10080000 domain created Dec 13 01:54:48.296116 kernel: Remapping and enabling EFI services. Dec 13 01:54:48.296133 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:54:48.296151 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:54:48.296169 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Dec 13 01:54:48.296188 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Dec 13 01:54:48.296206 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Dec 13 01:54:48.296224 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:54:48.296242 kernel: SMP: Total of 2 processors activated. 
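
[Editor's note] The "Calibrating delay loop (skipped)" line above shows the delay loop derived from the 83.33 MHz architected timer rather than measured. A worked check of the logged numbers, assuming the conventional relation bogomips = lpj / (500000 / HZ) and HZ=1000 (the tick rate is an assumption, not stated in the log).

    # Worked check of the BogoMIPS value logged above (HZ=1000 is assumed).
    timer_hz = 83_333_333          # arch timer, ~83.33 MHz as logged
    HZ = 1000                      # assumed scheduler tick rate
    lpj = timer_hz // HZ           # loops per jiffy -> 83333, matching the log
    bogomips = lpj / (500_000 / HZ)
    print(lpj, round(bogomips, 2)) # 83333 166.67 (printed truncated as 166.66)
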
Dec 13 01:54:48.296264 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:54:48.296282 kernel: CPU features: detected: 32-bit EL1 Support Dec 13 01:54:48.296300 kernel: CPU features: detected: CRC32 instructions Dec 13 01:54:48.296329 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:54:48.296352 kernel: alternatives: applying system-wide alternatives Dec 13 01:54:48.296370 kernel: devtmpfs: initialized Dec 13 01:54:48.296389 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:54:48.296407 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:54:48.296426 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:54:48.296445 kernel: SMBIOS 3.0.0 present. Dec 13 01:54:48.296468 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Dec 13 01:54:48.296487 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:54:48.296505 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:54:48.296524 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:54:48.296543 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:54:48.296561 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:54:48.296579 kernel: audit: type=2000 audit(0.359:1): state=initialized audit_enabled=0 res=1 Dec 13 01:54:48.296621 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:54:48.296642 kernel: cpuidle: using governor menu Dec 13 01:54:48.296661 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 01:54:48.296809 kernel: ASID allocator initialised with 65536 entries Dec 13 01:54:48.296832 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:54:48.296851 kernel: Serial: AMBA PL011 UART driver Dec 13 01:54:48.296869 kernel: Modules: 17520 pages in range for non-PLT usage Dec 13 01:54:48.296889 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:54:48.296907 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:54:48.296934 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:54:48.296953 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:54:48.296972 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:54:48.296990 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:54:48.297009 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:54:48.297028 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:54:48.297046 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:54:48.297065 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:54:48.297083 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:54:48.297106 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:54:48.297125 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:54:48.297144 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:54:48.297162 kernel: ACPI: Interpreter enabled Dec 13 01:54:48.297181 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:54:48.297199 kernel: ACPI: MCFG table detected, 1 entries Dec 13 01:54:48.297218 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Dec 13 01:54:48.297525 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:54:48.297776 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 01:54:48.297981 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 01:54:48.298195 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Dec 13 01:54:48.298402 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Dec 13 01:54:48.298429 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Dec 13 01:54:48.298448 kernel: acpiphp: Slot [1] registered Dec 13 01:54:48.298467 kernel: acpiphp: Slot [2] registered Dec 13 01:54:48.298486 kernel: acpiphp: Slot [3] registered Dec 13 01:54:48.298512 kernel: acpiphp: Slot [4] registered Dec 13 01:54:48.298531 kernel: acpiphp: Slot [5] registered Dec 13 01:54:48.298550 kernel: acpiphp: Slot [6] registered Dec 13 01:54:48.298568 kernel: acpiphp: Slot [7] registered Dec 13 01:54:48.298587 kernel: acpiphp: Slot [8] registered Dec 13 01:54:48.298605 kernel: acpiphp: Slot [9] registered Dec 13 01:54:48.298624 kernel: acpiphp: Slot [10] registered Dec 13 01:54:48.298643 kernel: acpiphp: Slot [11] registered Dec 13 01:54:48.298662 kernel: acpiphp: Slot [12] registered Dec 13 01:54:48.298725 kernel: acpiphp: Slot [13] registered Dec 13 01:54:48.299995 kernel: acpiphp: Slot [14] registered Dec 13 01:54:48.300021 kernel: acpiphp: Slot [15] registered Dec 13 01:54:48.300040 kernel: acpiphp: Slot [16] registered Dec 13 01:54:48.300059 kernel: acpiphp: Slot [17] registered Dec 13 01:54:48.300078 kernel: acpiphp: Slot [18] registered Dec 13 01:54:48.300098 kernel: acpiphp: Slot [19] registered Dec 13 01:54:48.300117 kernel: acpiphp: Slot [20] registered Dec 13 01:54:48.300136 kernel: acpiphp: Slot [21] registered Dec 13 01:54:48.300154 kernel: acpiphp: Slot [22] registered Dec 13 01:54:48.300185 kernel: acpiphp: Slot [23] registered Dec 13 01:54:48.300204 kernel: acpiphp: Slot [24] registered Dec 13 01:54:48.300222 kernel: acpiphp: Slot [25] registered Dec 13 01:54:48.300240 kernel: acpiphp: Slot [26] registered Dec 13 01:54:48.300259 kernel: acpiphp: Slot [27] registered Dec 13 01:54:48.300277 kernel: acpiphp: Slot [28] registered Dec 13 01:54:48.300296 kernel: acpiphp: Slot [29] registered Dec 13 01:54:48.300314 kernel: acpiphp: Slot [30] registered Dec 13 01:54:48.300333 kernel: acpiphp: Slot [31] registered Dec 13 01:54:48.300351 kernel: PCI host bridge to bus 0000:00 Dec 13 01:54:48.300642 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Dec 13 01:54:48.300894 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 01:54:48.301086 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Dec 13 01:54:48.301272 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Dec 13 01:54:48.301506 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Dec 13 01:54:48.301774 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Dec 13 01:54:48.302002 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Dec 13 01:54:48.302231 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Dec 13 01:54:48.302442 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Dec 13 01:54:48.302647 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 01:54:48.302929 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Dec 13 01:54:48.303141 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Dec 13 01:54:48.303359 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Dec 13 01:54:48.303564 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Dec 13 01:54:48.305895 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 01:54:48.306135 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Dec 13 01:54:48.306361 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Dec 13 01:54:48.306579 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Dec 13 01:54:48.306839 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Dec 13 01:54:48.307079 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Dec 13 01:54:48.307300 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Dec 13 01:54:48.307493 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 01:54:48.309919 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Dec 13 01:54:48.309979 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 01:54:48.310002 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 01:54:48.310022 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 01:54:48.310042 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 01:54:48.310063 kernel: iommu: Default domain type: Translated Dec 13 01:54:48.310096 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:54:48.310116 kernel: efivars: Registered efivars operations Dec 13 01:54:48.310136 kernel: vgaarb: loaded Dec 13 01:54:48.310154 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:54:48.310173 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:54:48.310193 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:54:48.310212 kernel: pnp: PnP ACPI init Dec 13 01:54:48.310465 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Dec 13 01:54:48.310502 kernel: pnp: PnP ACPI: found 1 devices Dec 13 01:54:48.310522 kernel: NET: Registered PF_INET protocol family Dec 13 01:54:48.310542 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:54:48.310562 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:54:48.310582 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:54:48.310602 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:54:48.310622 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:54:48.310642 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:54:48.310661 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:54:48.310717 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:54:48.310737 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:54:48.310757 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:54:48.310775 kernel: kvm [1]: HYP mode not available Dec 13 01:54:48.310794 kernel: Initialise system trusted keyrings Dec 13 01:54:48.310814 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:54:48.310833 kernel: Key type asymmetric registered Dec 13 01:54:48.310852 kernel: Asymmetric key parser 'x509' registered Dec 13 01:54:48.310871 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:54:48.310897 kernel: io scheduler mq-deadline registered Dec 13 
01:54:48.310916 kernel: io scheduler kyber registered Dec 13 01:54:48.310935 kernel: io scheduler bfq registered Dec 13 01:54:48.311192 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Dec 13 01:54:48.311229 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 01:54:48.311249 kernel: ACPI: button: Power Button [PWRB] Dec 13 01:54:48.311268 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Dec 13 01:54:48.311289 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 01:54:48.311318 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:54:48.311339 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 13 01:54:48.311572 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Dec 13 01:54:48.311603 kernel: printk: console [ttyS0] disabled Dec 13 01:54:48.311623 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Dec 13 01:54:48.311642 kernel: printk: console [ttyS0] enabled Dec 13 01:54:48.311661 kernel: printk: bootconsole [uart0] disabled Dec 13 01:54:48.312866 kernel: thunder_xcv, ver 1.0 Dec 13 01:54:48.312899 kernel: thunder_bgx, ver 1.0 Dec 13 01:54:48.312927 kernel: nicpf, ver 1.0 Dec 13 01:54:48.312946 kernel: nicvf, ver 1.0 Dec 13 01:54:48.313232 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:54:48.313429 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:54:47 UTC (1734054887) Dec 13 01:54:48.313457 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:54:48.313478 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Dec 13 01:54:48.313499 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:54:48.313518 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:54:48.313544 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:54:48.313563 kernel: Segment Routing with IPv6 Dec 13 01:54:48.313581 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:54:48.313601 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:54:48.313620 kernel: Key type dns_resolver registered Dec 13 01:54:48.313639 kernel: registered taskstats version 1 Dec 13 01:54:48.313657 kernel: Loading compiled-in X.509 certificates Dec 13 01:54:48.313703 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:54:48.313726 kernel: Key type .fscrypt registered Dec 13 01:54:48.314746 kernel: Key type fscrypt-provisioning registered Dec 13 01:54:48.314797 kernel: ima: No TPM chip found, activating TPM-bypass! 
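
[Editor's note] The rtc-efi line above sets the system clock to 2024-12-13T01:54:47 UTC and prints the corresponding Unix timestamp 1734054887 in parentheses; a one-line check of that correspondence.

    # Check that the epoch value printed by rtc-efi matches the UTC time it logs.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1734054887, tz=timezone.utc).isoformat())
    # -> 2024-12-13T01:54:47+00:00
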
Dec 13 01:54:48.314817 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:54:48.314837 kernel: ima: No architecture policies found Dec 13 01:54:48.314858 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:54:48.314877 kernel: clk: Disabling unused clocks Dec 13 01:54:48.314897 kernel: Freeing unused kernel memory: 39360K Dec 13 01:54:48.314916 kernel: Run /init as init process Dec 13 01:54:48.314935 kernel: with arguments: Dec 13 01:54:48.314954 kernel: /init Dec 13 01:54:48.314984 kernel: with environment: Dec 13 01:54:48.315003 kernel: HOME=/ Dec 13 01:54:48.315021 kernel: TERM=linux Dec 13 01:54:48.315040 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:54:48.315064 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:54:48.315088 systemd[1]: Detected virtualization amazon. Dec 13 01:54:48.315109 systemd[1]: Detected architecture arm64. Dec 13 01:54:48.315133 systemd[1]: Running in initrd. Dec 13 01:54:48.315154 systemd[1]: No hostname configured, using default hostname. Dec 13 01:54:48.315173 systemd[1]: Hostname set to . Dec 13 01:54:48.315194 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:54:48.315215 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:54:48.315235 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:54:48.315256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:54:48.315278 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:54:48.315303 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:54:48.315324 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:54:48.315345 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:54:48.315369 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:54:48.315390 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:54:48.315411 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:54:48.315432 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:54:48.315457 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:54:48.315477 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:54:48.315498 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:54:48.315518 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:54:48.315539 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:54:48.315559 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:54:48.315580 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:54:48.315600 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:54:48.315621 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 01:54:48.315646 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:54:48.315667 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:54:48.317772 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:54:48.317798 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:54:48.317820 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:54:48.317841 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:54:48.317862 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:54:48.317883 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:54:48.317916 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:54:48.317937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:48.317959 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:54:48.317981 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:54:48.318013 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:54:48.318064 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:54:48.318190 systemd-journald[251]: Collecting audit messages is disabled. Dec 13 01:54:48.318244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:48.318266 systemd-journald[251]: Journal started Dec 13 01:54:48.318309 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2d41bfa888ac746ccf8fbf1f369ba1) is 8.0M, max 75.3M, 67.3M free. Dec 13 01:54:48.322810 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:54:48.289445 systemd-modules-load[252]: Inserted module 'overlay' Dec 13 01:54:48.331221 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:48.331294 kernel: Bridge firewalling registered Dec 13 01:54:48.330562 systemd-modules-load[252]: Inserted module 'br_netfilter' Dec 13 01:54:48.334752 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:54:48.343403 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:54:48.344860 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:54:48.352952 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:54:48.358964 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:54:48.366112 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:54:48.413175 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:54:48.423206 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:54:48.433189 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:54:48.445199 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:54:48.456062 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:48.465642 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
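
[Editor's note] The bridge warning above ("Update your scripts to load br_netfilter if you need this") is answered a few lines later by systemd-modules-load inserting br_netfilter. For a persistent setup outside the initrd, the standard systemd mechanism is a modules-load.d drop-in; a sketch of writing one (the file path follows the usual convention, the exact file name is a choice, not taken from this log).

    # One way to make br_netfilter load on every boot, as the bridge warning
    # above suggests; systemd-modules-load reads /etc/modules-load.d/*.conf.
    from pathlib import Path
    Path("/etc/modules-load.d/br_netfilter.conf").write_text("br_netfilter\n")
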
Dec 13 01:54:48.506968 dracut-cmdline[290]: dracut-dracut-053 Dec 13 01:54:48.513844 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:54:48.540453 systemd-resolved[288]: Positive Trust Anchors: Dec 13 01:54:48.540480 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:54:48.540543 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:54:48.689718 kernel: SCSI subsystem initialized Dec 13 01:54:48.697748 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:54:48.712084 kernel: iscsi: registered transport (tcp) Dec 13 01:54:48.734722 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:54:48.735706 kernel: QLogic iSCSI HBA Driver Dec 13 01:54:48.815747 kernel: random: crng init done Dec 13 01:54:48.816164 systemd-resolved[288]: Defaulting to hostname 'linux'. Dec 13 01:54:48.819363 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:54:48.821836 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:54:48.858645 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:54:48.870241 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:54:48.915385 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:54:48.915480 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:54:48.917199 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:54:48.987746 kernel: raid6: neonx8 gen() 6634 MB/s Dec 13 01:54:49.004743 kernel: raid6: neonx4 gen() 6499 MB/s Dec 13 01:54:49.021760 kernel: raid6: neonx2 gen() 5418 MB/s Dec 13 01:54:49.038768 kernel: raid6: neonx1 gen() 3895 MB/s Dec 13 01:54:49.055743 kernel: raid6: int64x8 gen() 3764 MB/s Dec 13 01:54:49.072737 kernel: raid6: int64x4 gen() 3676 MB/s Dec 13 01:54:49.089741 kernel: raid6: int64x2 gen() 3584 MB/s Dec 13 01:54:49.107660 kernel: raid6: int64x1 gen() 2733 MB/s Dec 13 01:54:49.107811 kernel: raid6: using algorithm neonx8 gen() 6634 MB/s Dec 13 01:54:49.125656 kernel: raid6: .... 
xor() 4790 MB/s, rmw enabled Dec 13 01:54:49.125800 kernel: raid6: using neon recovery algorithm Dec 13 01:54:49.134755 kernel: xor: measuring software checksum speed Dec 13 01:54:49.134855 kernel: 8regs : 9488 MB/sec Dec 13 01:54:49.138120 kernel: 32regs : 9718 MB/sec Dec 13 01:54:49.138207 kernel: arm64_neon : 9503 MB/sec Dec 13 01:54:49.138240 kernel: xor: using function: 32regs (9718 MB/sec) Dec 13 01:54:49.230761 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:54:49.258359 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:54:49.274035 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:54:49.322308 systemd-udevd[471]: Using default interface naming scheme 'v255'. Dec 13 01:54:49.332442 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:54:49.359535 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:54:49.396896 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation Dec 13 01:54:49.465634 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:54:49.476139 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:54:49.608259 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:54:49.620955 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:54:49.666492 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:54:49.670572 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:54:49.682058 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:54:49.682542 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:54:49.698085 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:54:49.730778 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:54:49.810662 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 01:54:49.810776 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 13 01:54:49.845854 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 01:54:49.846233 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 01:54:49.846521 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:d0:d1:51:53:11 Dec 13 01:54:49.810027 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:54:49.810332 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:49.818609 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:49.822882 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:54:49.823325 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:49.825951 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:49.836205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:49.868522 (udev-worker)[518]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:54:49.882425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
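
[Editor's note] The raid6 and xor lines above show the kernel benchmarking every available implementation and then keeping the fastest one (neonx8 for raid6 gen() at 6634 MB/s, 32regs for checksumming at 9718 MB/s). A small sketch of that selection step; the throughput figures are taken from this boot, the helper itself is illustrative.

    # Selection mirrors what the kernel logs above: benchmark, then take the max.
    raid6_gen = {"neonx8": 6634, "neonx4": 6499, "neonx2": 5418, "neonx1": 3895,
                 "int64x8": 3764, "int64x4": 3676, "int64x2": 3584, "int64x1": 2733}
    xor_funcs = {"8regs": 9488, "32regs": 9718, "arm64_neon": 9503}

    best_gen = max(raid6_gen, key=raid6_gen.get)   # -> "neonx8"
    best_xor = max(xor_funcs, key=xor_funcs.get)   # -> "32regs"
    print(best_gen, best_xor)
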
Dec 13 01:54:49.889238 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 13 01:54:49.889295 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 01:54:49.899040 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:49.908773 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 01:54:49.912735 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:54:49.912829 kernel: GPT:9289727 != 16777215 Dec 13 01:54:49.912863 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:54:49.912891 kernel: GPT:9289727 != 16777215 Dec 13 01:54:49.912919 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:54:49.912946 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:54:49.963295 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:50.056792 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by (udev-worker) (516) Dec 13 01:54:50.063404 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/nvme0n1p3 scanned by (udev-worker) (524) Dec 13 01:54:50.153669 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 13 01:54:50.178407 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 13 01:54:50.226208 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:54:50.242627 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 13 01:54:50.247258 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 13 01:54:50.261249 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:54:50.284130 disk-uuid[660]: Primary Header is updated. Dec 13 01:54:50.284130 disk-uuid[660]: Secondary Entries is updated. Dec 13 01:54:50.284130 disk-uuid[660]: Secondary Header is updated. Dec 13 01:54:50.297734 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:54:50.310764 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:54:50.320729 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:54:51.326725 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:54:51.327350 disk-uuid[661]: The operation has completed successfully. Dec 13 01:54:51.539858 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:54:51.542300 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:54:51.604012 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:54:51.612119 sh[1004]: Success Dec 13 01:54:51.644855 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:54:51.756516 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:54:51.769895 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:54:51.774135 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
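
[Editor's note] The GPT warnings above ("GPT:9289727 != 16777215") mean the backup GPT header written by the Flatcar image sits at LBA 9289727 while the device's real last LBA is 16777215, i.e. the image is smaller than the EBS volume it was written to. disk-uuid.service then updates the headers ("Primary Header is updated..."), and the later partition rescans no longer print the warning. The arithmetic, assuming 512-byte sectors:

    # Size mismatch behind the GPT warnings logged above (512-byte sectors assumed).
    SECTOR = 512
    image_last_lba = 9289727        # where the image's backup GPT header landed
    disk_last_lba = 16777215        # actual last LBA of the EBS volume

    image_bytes = (image_last_lba + 1) * SECTOR
    disk_bytes = (disk_last_lba + 1) * SECTOR
    print(image_bytes / 2**30, disk_bytes / 2**30)  # ~4.43 GiB image on an 8.0 GiB disk
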
Dec 13 01:54:51.804284 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:54:51.804345 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:54:51.804372 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:54:51.805731 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:54:51.806768 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:54:51.857707 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:54:51.875718 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:54:51.879917 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:54:51.888004 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:54:51.893036 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:54:51.934929 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:51.935029 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:54:51.936319 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:54:51.942759 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:54:51.964459 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:54:51.967641 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:51.991881 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:54:52.004121 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:54:52.107285 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:54:52.131062 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:54:52.198402 systemd-networkd[1218]: lo: Link UP Dec 13 01:54:52.198811 systemd-networkd[1218]: lo: Gained carrier Dec 13 01:54:52.205488 systemd-networkd[1218]: Enumeration completed Dec 13 01:54:52.207847 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:54:52.210267 systemd[1]: Reached target network.target - Network. Dec 13 01:54:52.212091 systemd-networkd[1218]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:54:52.212099 systemd-networkd[1218]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:54:52.225010 systemd-networkd[1218]: eth0: Link UP Dec 13 01:54:52.225027 systemd-networkd[1218]: eth0: Gained carrier Dec 13 01:54:52.225046 systemd-networkd[1218]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:54:52.247825 systemd-networkd[1218]: eth0: DHCPv4 address 172.31.24.71/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:54:52.471464 ignition[1140]: Ignition 2.19.0 Dec 13 01:54:52.471495 ignition[1140]: Stage: fetch-offline Dec 13 01:54:52.473208 ignition[1140]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:52.473257 ignition[1140]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:52.476869 ignition[1140]: Ignition finished successfully Dec 13 01:54:52.496839 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:54:52.512276 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:54:52.542548 ignition[1230]: Ignition 2.19.0 Dec 13 01:54:52.542578 ignition[1230]: Stage: fetch Dec 13 01:54:52.543475 ignition[1230]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:52.543537 ignition[1230]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:52.543868 ignition[1230]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:52.582661 ignition[1230]: PUT result: OK Dec 13 01:54:52.587202 ignition[1230]: parsed url from cmdline: "" Dec 13 01:54:52.587237 ignition[1230]: no config URL provided Dec 13 01:54:52.587257 ignition[1230]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:54:52.587288 ignition[1230]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:54:52.587331 ignition[1230]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:52.595991 ignition[1230]: PUT result: OK Dec 13 01:54:52.596132 ignition[1230]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 01:54:52.600388 ignition[1230]: GET result: OK Dec 13 01:54:52.600617 ignition[1230]: parsing config with SHA512: 4bf751547542dbbb892b7617dc1b15b8a66025bd65a14087c418d2c27b780cd09906b4efc551e15de1d2b9be76375fc6a0628c13d8e5ac74bf95b521fcdc31f5 Dec 13 01:54:52.608311 unknown[1230]: fetched base config from "system" Dec 13 01:54:52.608335 unknown[1230]: fetched base config from "system" Dec 13 01:54:52.609092 ignition[1230]: fetch: fetch complete Dec 13 01:54:52.608350 unknown[1230]: fetched user config from "aws" Dec 13 01:54:52.609121 ignition[1230]: fetch: fetch passed Dec 13 01:54:52.609262 ignition[1230]: Ignition finished successfully Dec 13 01:54:52.620366 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:54:52.630062 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:54:52.664514 ignition[1237]: Ignition 2.19.0 Dec 13 01:54:52.664542 ignition[1237]: Stage: kargs Dec 13 01:54:52.666354 ignition[1237]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:52.666385 ignition[1237]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:52.666906 ignition[1237]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:52.671931 ignition[1237]: PUT result: OK Dec 13 01:54:52.678803 ignition[1237]: kargs: kargs passed Dec 13 01:54:52.678932 ignition[1237]: Ignition finished successfully Dec 13 01:54:52.684545 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:54:52.695065 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
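
[Editor's note] The Ignition fetch stage above talks to the EC2 instance metadata service with IMDSv2 semantics: a PUT to /latest/api/token first, then a GET of the user-data using that token. A minimal sketch of the same two requests with urllib; the header names are the standard IMDSv2 ones, the token TTL is an arbitrary choice here, and this is not Ignition's own implementation.

    # Minimal IMDSv2-style fetch mirroring the PUT-then-GET sequence Ignition logs above.
    import urllib.request

    IMDS = "http://169.254.169.254"

    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"})  # TTL chosen arbitrarily
    token = urllib.request.urlopen(req, timeout=5).read().decode()

    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token})
    user_data = urllib.request.urlopen(req, timeout=5).read()
    print(len(user_data), "bytes of user-data")
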
Dec 13 01:54:52.733573 ignition[1243]: Ignition 2.19.0 Dec 13 01:54:52.733599 ignition[1243]: Stage: disks Dec 13 01:54:52.735542 ignition[1243]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:52.735574 ignition[1243]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:52.735855 ignition[1243]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:52.743882 ignition[1243]: PUT result: OK Dec 13 01:54:52.750105 ignition[1243]: disks: disks passed Dec 13 01:54:52.750724 ignition[1243]: Ignition finished successfully Dec 13 01:54:52.756857 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:54:52.762134 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:54:52.767140 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:54:52.767359 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:54:52.776030 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:54:52.776362 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:54:52.795179 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:54:52.826744 systemd-fsck[1252]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:54:52.837598 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:54:52.847866 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:54:52.951729 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:54:52.953344 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:54:52.957706 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:54:52.982880 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:54:52.989358 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:54:52.991805 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:54:52.991905 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:54:52.991957 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:54:53.010730 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1271) Dec 13 01:54:53.015333 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:53.015431 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:54:53.016726 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:54:53.023241 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:54:53.026319 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:54:53.030749 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:54:53.040026 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
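
[Editor's note] The disks stage and the mounts above let the partition roles on this instance be read off directly: the verity-backed /usr filesystem (fsid 2893cd1e-...) was scanned on nvme0n1p3, OEM is the BTRFS volume on nvme0n1p6, and ROOT is the EXT4 filesystem on nvme0n1p9. A short summary in code form; the other partitions in the "p1 p2 p3 p4 p6 p7 p9" scan are not identified anywhere in this log, so they are left out.

    # Partition roles as they can be read off the scans and mounts logged above.
    layout = {
        "nvme0n1p3": "USR-A (BTRFS, read-only /usr behind dm-verity /dev/mapper/usr)",
        "nvme0n1p6": "OEM   (BTRFS, mounted at /sysroot/oem)",
        "nvme0n1p9": "ROOT  (EXT4, mounted read-write at /sysroot)",
    }
    for dev, role in layout.items():
        print(dev, role)
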
Dec 13 01:54:53.369328 initrd-setup-root[1295]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:54:53.378049 initrd-setup-root[1302]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:54:53.386517 initrd-setup-root[1309]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:54:53.408970 initrd-setup-root[1316]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:54:53.785824 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:54:53.800020 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:54:53.807054 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:54:53.831220 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:54:53.837802 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:53.879792 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:54:53.889593 ignition[1384]: INFO : Ignition 2.19.0 Dec 13 01:54:53.889593 ignition[1384]: INFO : Stage: mount Dec 13 01:54:53.893038 ignition[1384]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:53.893038 ignition[1384]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:53.897448 ignition[1384]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:53.900533 ignition[1384]: INFO : PUT result: OK Dec 13 01:54:53.905544 ignition[1384]: INFO : mount: mount passed Dec 13 01:54:53.905544 ignition[1384]: INFO : Ignition finished successfully Dec 13 01:54:53.912786 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:54:53.923897 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:54:53.957644 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:54:53.976045 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1395) Dec 13 01:54:53.976107 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:53.979357 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:54:53.980640 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:54:53.986750 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:54:53.988946 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:54:54.030751 ignition[1412]: INFO : Ignition 2.19.0 Dec 13 01:54:54.030751 ignition[1412]: INFO : Stage: files Dec 13 01:54:54.034971 ignition[1412]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:54.034971 ignition[1412]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:54.034971 ignition[1412]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:54.042155 ignition[1412]: INFO : PUT result: OK Dec 13 01:54:54.047235 ignition[1412]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:54:54.073074 ignition[1412]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:54:54.076090 ignition[1412]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:54:54.083655 ignition[1412]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:54:54.089438 ignition[1412]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:54:54.092946 unknown[1412]: wrote ssh authorized keys file for user: core Dec 13 01:54:54.097588 ignition[1412]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:54:54.097588 ignition[1412]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:54:54.097588 ignition[1412]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:54:54.097588 ignition[1412]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:54:54.097588 ignition[1412]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:54:54.097588 ignition[1412]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:54:54.120631 ignition[1412]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:54:54.120631 ignition[1412]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:54:54.120631 ignition[1412]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Dec 13 01:54:54.162898 systemd-networkd[1218]: eth0: Gained IPv6LL Dec 13 01:54:54.616925 ignition[1412]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 01:54:55.053909 ignition[1412]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:54:55.058299 ignition[1412]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:54:55.058299 ignition[1412]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:54:55.058299 ignition[1412]: INFO : files: files passed Dec 13 01:54:55.058299 ignition[1412]: INFO : Ignition finished successfully Dec 13 01:54:55.069655 systemd[1]: Finished 
ignition-files.service - Ignition (files). Dec 13 01:54:55.092249 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:54:55.098542 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:54:55.109607 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:54:55.114439 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:54:55.148865 initrd-setup-root-after-ignition[1440]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:55.148865 initrd-setup-root-after-ignition[1440]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:55.158666 initrd-setup-root-after-ignition[1444]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:55.165075 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:54:55.168930 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:54:55.182984 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:54:55.252372 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:54:55.253436 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:54:55.260082 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:54:55.262354 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:54:55.267025 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:54:55.278043 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:54:55.318782 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:54:55.329058 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:54:55.367419 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:54:55.372934 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:54:55.378029 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:54:55.380948 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:54:55.381259 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:54:55.384765 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:54:55.387273 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:54:55.390870 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:54:55.397049 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:54:55.404277 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:54:55.415454 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:54:55.418366 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:54:55.421918 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:54:55.430046 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:54:55.432475 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:54:55.435914 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
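
[Editor's note] The Ignition files stage logged above wrote SSH keys for the core user, /home/core/install.sh, /etc/flatcar/update.conf, a kubernetes sysext image fetched from the sysext-bakery release, and a symlink activating it under /etc/extensions. A rough sketch of an Ignition-style config that would produce those operations, built as a Python dict and serialized to JSON; the spec version, file modes, and file contents are illustrative guesses, not recovered from the actual user-data.

    # Illustrative Ignition-style config matching the files-stage operations logged
    # above; spec version, modes and contents are assumptions, not the real user-data.
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {"users": [{"name": "core",
                              "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}]},
        "storage": {
            "files": [
                {"path": "/home/core/install.sh", "mode": 0o755,
                 "contents": {"source": "data:,%23%21%2Fbin%2Fbash%0A"}},
                {"path": "/etc/flatcar/update.conf", "mode": 0o644,
                 "contents": {"source": "data:,GROUP%3Dstable%0A"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/"
                                        "releases/download/latest/kubernetes-v1.31.0-arm64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"}],
        },
    }
    print(json.dumps(config, indent=2))
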
Dec 13 01:54:55.436598 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:54:55.443400 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:54:55.446240 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:54:55.450204 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:54:55.455310 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:54:55.460349 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:54:55.460635 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:54:55.463261 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:54:55.463559 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:54:55.466271 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:54:55.466531 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:54:55.483592 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:54:55.495299 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:54:55.498257 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:54:55.499370 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:54:55.506055 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:54:55.506436 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:54:55.526479 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:54:55.533422 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:54:55.561039 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:54:55.574804 ignition[1464]: INFO : Ignition 2.19.0 Dec 13 01:54:55.574804 ignition[1464]: INFO : Stage: umount Dec 13 01:54:55.579236 ignition[1464]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:55.579236 ignition[1464]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:55.579236 ignition[1464]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:55.587633 ignition[1464]: INFO : PUT result: OK Dec 13 01:54:55.592222 ignition[1464]: INFO : umount: umount passed Dec 13 01:54:55.594284 ignition[1464]: INFO : Ignition finished successfully Dec 13 01:54:55.599650 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:54:55.600140 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:54:55.605557 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:54:55.605896 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:54:55.609142 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:54:55.609332 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:54:55.617963 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:54:55.618168 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:54:55.623450 systemd[1]: Stopped target network.target - Network. Dec 13 01:54:55.625811 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:54:55.626060 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:54:55.638236 systemd[1]: Stopped target paths.target - Path Units. 
Dec 13 01:54:55.640730 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:54:55.645109 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:54:55.648239 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:54:55.650577 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:54:55.653000 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:54:55.653096 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:54:55.653878 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:54:55.653966 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:54:55.654475 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:54:55.654591 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:54:55.676889 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:54:55.677044 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:54:55.683189 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:54:55.691369 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:54:55.694808 systemd-networkd[1218]: eth0: DHCPv6 lease lost Dec 13 01:54:55.704925 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:54:55.705207 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:54:55.710396 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:54:55.710643 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:54:55.728985 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:54:55.731151 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:54:55.731331 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:54:55.742076 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:54:55.756108 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:54:55.762347 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:54:55.779423 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:54:55.779814 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:54:55.796379 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:54:55.796500 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:54:55.798888 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:54:55.800803 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:54:55.803012 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:54:55.803111 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:54:55.814137 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:54:55.814259 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:54:55.816823 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:54:55.816916 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:55.833795 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Dec 13 01:54:55.838506 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:54:55.838647 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:54:55.850591 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:54:55.850755 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:54:55.861998 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:54:55.862327 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:54:55.874104 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:54:55.874266 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:54:55.880959 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:54:55.881070 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:54:55.887213 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:54:55.887336 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:54:55.902012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:54:55.902130 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:55.908281 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:54:55.908723 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:54:55.917263 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:54:55.918502 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:54:55.964478 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:54:55.966108 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:54:55.971382 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:54:55.973767 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:54:55.973916 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:54:55.992440 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:54:56.013145 systemd[1]: Switching root. Dec 13 01:54:56.069340 systemd-journald[251]: Journal stopped Dec 13 01:54:58.616595 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Dec 13 01:54:58.616794 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:54:58.616854 kernel: SELinux: policy capability open_perms=1 Dec 13 01:54:58.616889 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:54:58.616923 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:54:58.616978 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:54:58.617017 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:54:58.617051 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:54:58.617081 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:54:58.617111 kernel: audit: type=1403 audit(1734054896.683:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:54:58.617163 systemd[1]: Successfully loaded SELinux policy in 66.811ms. Dec 13 01:54:58.617215 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 33.716ms. 
Dec 13 01:54:58.617253 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:54:58.617286 systemd[1]: Detected virtualization amazon. Dec 13 01:54:58.617321 systemd[1]: Detected architecture arm64. Dec 13 01:54:58.617355 systemd[1]: Detected first boot. Dec 13 01:54:58.617393 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:54:58.617431 zram_generator::config[1506]: No configuration found. Dec 13 01:54:58.617469 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:54:58.617503 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:54:58.617536 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:54:58.617570 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:54:58.617604 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:54:58.617637 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:54:58.621809 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:54:58.621898 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:54:58.621932 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:54:58.621976 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:54:58.622008 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:54:58.622039 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:54:58.622072 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:54:58.622109 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:54:58.622140 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:54:58.622180 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:54:58.622215 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:54:58.622246 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:54:58.622278 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:54:58.622311 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:54:58.622342 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:54:58.622374 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:54:58.622408 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:54:58.622445 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:54:58.622475 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:54:58.622508 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:54:58.622539 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:54:58.622573 systemd[1]: Reached target swap.target - Swaps. 
Dec 13 01:54:58.622605 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:54:58.622635 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:54:58.622668 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:54:58.622776 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:54:58.622809 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:54:58.622844 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:54:58.622880 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:54:58.622911 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:54:58.622943 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:54:58.622974 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:54:58.623005 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:54:58.623040 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:54:58.623078 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:54:58.623112 systemd[1]: Reached target machines.target - Containers. Dec 13 01:54:58.623143 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:54:58.623174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:54:58.623206 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:54:58.623236 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:54:58.623266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:54:58.623296 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:54:58.623328 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:54:58.623364 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:54:58.623397 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:54:58.623429 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:54:58.623459 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:54:58.623490 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:54:58.623520 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:54:58.623550 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:54:58.623580 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:54:58.623615 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:54:58.623650 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:54:58.627310 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:54:58.627378 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:54:58.627415 systemd[1]: verity-setup.service: Deactivated successfully. 
Dec 13 01:54:58.627447 systemd[1]: Stopped verity-setup.service. Dec 13 01:54:58.627479 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:54:58.627510 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:54:58.627539 kernel: loop: module loaded Dec 13 01:54:58.627586 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:54:58.627620 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:54:58.627650 kernel: fuse: init (API version 7.39) Dec 13 01:54:58.627730 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:54:58.627770 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:54:58.627809 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:54:58.627841 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:54:58.627876 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:54:58.627924 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:54:58.627955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:54:58.627986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:54:58.628017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:54:58.628048 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:54:58.628079 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:54:58.628115 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:54:58.628146 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:54:58.628181 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:54:58.628214 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:54:58.628247 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:54:58.628283 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:54:58.628315 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:54:58.628346 kernel: ACPI: bus type drm_connector registered Dec 13 01:54:58.628376 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:54:58.628406 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:54:58.628439 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:54:58.628472 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:54:58.628504 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:54:58.628561 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:54:58.628606 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:54:58.628638 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:54:58.628762 systemd-journald[1588]: Collecting audit messages is disabled. Dec 13 01:54:58.628821 systemd[1]: Reached target local-fs.target - Local File Systems. 
Dec 13 01:54:58.628855 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:54:58.628886 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:54:58.628919 systemd-journald[1588]: Journal started Dec 13 01:54:58.628975 systemd-journald[1588]: Runtime Journal (/run/log/journal/ec2d41bfa888ac746ccf8fbf1f369ba1) is 8.0M, max 75.3M, 67.3M free. Dec 13 01:54:58.637761 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:54:58.637852 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:54:57.871994 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:54:57.922227 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 13 01:54:57.923223 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:54:58.658947 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:54:58.659036 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:54:58.670625 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:54:58.689169 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:54:58.709295 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:54:58.709393 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:54:58.716896 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:54:58.719738 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:54:58.740710 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:54:58.771438 kernel: loop0: detected capacity change from 0 to 189592 Dec 13 01:54:58.821715 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:54:58.811999 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:54:58.831165 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:54:58.847956 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:54:58.866231 systemd-tmpfiles[1612]: ACLs are not supported, ignoring. Dec 13 01:54:58.866267 systemd-tmpfiles[1612]: ACLs are not supported, ignoring. Dec 13 01:54:58.891422 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:54:58.927014 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:54:58.943258 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:54:58.950724 kernel: loop1: detected capacity change from 0 to 114432 Dec 13 01:54:58.951439 systemd-journald[1588]: Time spent on flushing to /var/log/journal/ec2d41bfa888ac746ccf8fbf1f369ba1 is 212.999ms for 904 entries. Dec 13 01:54:58.951439 systemd-journald[1588]: System Journal (/var/log/journal/ec2d41bfa888ac746ccf8fbf1f369ba1) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:54:59.184755 systemd-journald[1588]: Received client request to flush runtime journal. 
Dec 13 01:54:59.184960 kernel: loop2: detected capacity change from 0 to 114328 Dec 13 01:54:59.140903 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:54:59.153592 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:54:59.157584 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:54:59.178054 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:54:59.193293 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:54:59.206554 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:54:59.213485 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:54:59.266853 kernel: loop3: detected capacity change from 0 to 52536 Dec 13 01:54:59.271087 udevadm[1656]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:54:59.312569 systemd-tmpfiles[1652]: ACLs are not supported, ignoring. Dec 13 01:54:59.312624 systemd-tmpfiles[1652]: ACLs are not supported, ignoring. Dec 13 01:54:59.330792 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:54:59.367729 kernel: loop4: detected capacity change from 0 to 189592 Dec 13 01:54:59.395743 kernel: loop5: detected capacity change from 0 to 114432 Dec 13 01:54:59.411779 kernel: loop6: detected capacity change from 0 to 114328 Dec 13 01:54:59.430725 kernel: loop7: detected capacity change from 0 to 52536 Dec 13 01:54:59.443010 (sd-merge)[1665]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 13 01:54:59.444246 (sd-merge)[1665]: Merged extensions into '/usr'. Dec 13 01:54:59.450961 systemd[1]: Reloading requested from client PID 1619 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:54:59.450994 systemd[1]: Reloading... Dec 13 01:54:59.715749 zram_generator::config[1692]: No configuration found. Dec 13 01:55:00.027942 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:55:00.154065 systemd[1]: Reloading finished in 701 ms. Dec 13 01:55:00.213094 ldconfig[1614]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:55:00.216847 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:55:00.226063 systemd[1]: Starting ensure-sysext.service... Dec 13 01:55:00.231169 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:55:00.245149 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:55:00.285918 systemd-tmpfiles[1743]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:55:00.289617 systemd-tmpfiles[1743]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:55:00.292041 systemd-tmpfiles[1743]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:55:00.292905 systemd-tmpfiles[1743]: ACLs are not supported, ignoring. Dec 13 01:55:00.293076 systemd-tmpfiles[1743]: ACLs are not supported, ignoring. 
Dec 13 01:55:00.300295 systemd[1]: Reloading requested from client PID 1742 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:55:00.300296 systemd-tmpfiles[1743]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:55:00.300334 systemd[1]: Reloading... Dec 13 01:55:00.300341 systemd-tmpfiles[1743]: Skipping /boot Dec 13 01:55:00.326553 systemd-tmpfiles[1743]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:55:00.326897 systemd-tmpfiles[1743]: Skipping /boot Dec 13 01:55:00.515769 zram_generator::config[1775]: No configuration found. Dec 13 01:55:00.773634 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:55:00.897018 systemd[1]: Reloading finished in 595 ms. Dec 13 01:55:00.930897 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:55:00.941763 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:55:00.965210 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:55:00.973901 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:55:00.981246 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:55:00.995253 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:55:01.002006 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:55:01.017042 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:55:01.035817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:55:01.040402 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:55:01.046047 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:55:01.054350 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:55:01.056839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:55:01.063955 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:55:01.064419 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:55:01.072296 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:55:01.081829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:55:01.087341 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:55:01.090109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:55:01.090553 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:55:01.108701 systemd[1]: Finished ensure-sysext.service. Dec 13 01:55:01.139524 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:55:01.200304 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Dec 13 01:55:01.219155 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:55:01.232357 systemd-udevd[1830]: Using default interface naming scheme 'v255'. Dec 13 01:55:01.285716 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:55:01.297235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:55:01.299318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:55:01.302734 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:55:01.305341 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:55:01.308665 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:55:01.309038 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:55:01.316993 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:55:01.320171 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:55:01.325400 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:55:01.325586 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:55:01.325668 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:55:01.332223 augenrules[1858]: No rules Dec 13 01:55:01.342666 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:55:01.351147 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:55:01.354145 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:55:01.381148 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:55:01.382288 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:55:01.551532 systemd-networkd[1870]: lo: Link UP Dec 13 01:55:01.552226 systemd-networkd[1870]: lo: Gained carrier Dec 13 01:55:01.553599 systemd-networkd[1870]: Enumeration completed Dec 13 01:55:01.554469 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:55:01.571044 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:55:01.604712 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1887) Dec 13 01:55:01.609726 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1887) Dec 13 01:55:01.641220 systemd-resolved[1829]: Positive Trust Anchors: Dec 13 01:55:01.641858 systemd-resolved[1829]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:55:01.641935 systemd-resolved[1829]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:55:01.652713 systemd-resolved[1829]: Defaulting to hostname 'linux'. Dec 13 01:55:01.657291 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:55:01.660842 systemd[1]: Reached target network.target - Network. Dec 13 01:55:01.662610 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:55:01.667660 (udev-worker)[1877]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:01.693035 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:55:01.721038 systemd-networkd[1870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:55:01.721318 systemd-networkd[1870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:55:01.726159 systemd-networkd[1870]: eth0: Link UP Dec 13 01:55:01.727670 systemd-networkd[1870]: eth0: Gained carrier Dec 13 01:55:01.728429 systemd-networkd[1870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:55:01.741852 systemd-networkd[1870]: eth0: DHCPv4 address 172.31.24.71/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:55:01.896833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:55:01.916762 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1895) Dec 13 01:55:02.047468 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:55:02.127908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:55:02.132821 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:55:02.150192 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:55:02.157127 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:55:02.174274 lvm[1996]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:55:02.198743 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:55:02.218401 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:55:02.222662 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:55:02.225266 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:55:02.227659 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Dec 13 01:55:02.230314 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:55:02.233206 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:55:02.235913 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:55:02.238799 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:55:02.241696 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:55:02.241766 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:55:02.243617 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:55:02.247483 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:55:02.252433 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:55:02.267051 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:55:02.273077 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:55:02.277570 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:55:02.280326 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:55:02.282748 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:55:02.285063 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:55:02.285367 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:55:02.291725 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:55:02.299359 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:55:02.312243 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:55:02.319605 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:55:02.325481 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:55:02.330124 lvm[2003]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:55:02.328004 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:55:02.334159 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:55:02.341409 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:55:02.352940 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:55:02.359138 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:55:02.389389 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:55:02.403166 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:55:02.407740 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:55:02.408695 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:55:02.415976 systemd[1]: Starting update-engine.service - Update Engine... 
Dec 13 01:55:02.424996 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:55:02.451562 extend-filesystems[2008]: Found loop4 Dec 13 01:55:02.454614 extend-filesystems[2008]: Found loop5 Dec 13 01:55:02.454614 extend-filesystems[2008]: Found loop6 Dec 13 01:55:02.454614 extend-filesystems[2008]: Found loop7 Dec 13 01:55:02.454614 extend-filesystems[2008]: Found nvme0n1 Dec 13 01:55:02.454614 extend-filesystems[2008]: Found nvme0n1p1 Dec 13 01:55:02.454614 extend-filesystems[2008]: Found nvme0n1p2 Dec 13 01:55:02.454614 extend-filesystems[2008]: Found nvme0n1p3 Dec 13 01:55:02.454614 extend-filesystems[2008]: Found usr Dec 13 01:55:02.454614 extend-filesystems[2008]: Found nvme0n1p4 Dec 13 01:55:02.454614 extend-filesystems[2008]: Found nvme0n1p6 Dec 13 01:55:02.487615 extend-filesystems[2008]: Found nvme0n1p7 Dec 13 01:55:02.487615 extend-filesystems[2008]: Found nvme0n1p9 Dec 13 01:55:02.487615 extend-filesystems[2008]: Checking size of /dev/nvme0n1p9 Dec 13 01:55:02.515952 update_engine[2016]: I20241213 01:55:02.515750 2016 main.cc:92] Flatcar Update Engine starting Dec 13 01:55:02.531113 jq[2007]: false Dec 13 01:55:02.550235 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:55:02.550651 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:55:02.563557 ntpd[2010]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:55:02.577369 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:55:02.577369 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:55:02.577369 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: ---------------------------------------------------- Dec 13 01:55:02.577369 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:55:02.577369 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:55:02.577369 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: corporation. Support and training for ntp-4 are Dec 13 01:55:02.577369 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: available at https://www.nwtime.org/support Dec 13 01:55:02.577369 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: ---------------------------------------------------- Dec 13 01:55:02.577369 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: proto: precision = 0.096 usec (-23) Dec 13 01:55:02.577369 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: basedate set to 2024-11-30 Dec 13 01:55:02.577369 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: gps base set to 2024-12-01 (week 2343) Dec 13 01:55:02.563632 ntpd[2010]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:55:02.563656 ntpd[2010]: ---------------------------------------------------- Dec 13 01:55:02.563759 ntpd[2010]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:55:02.563806 ntpd[2010]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:55:02.563827 ntpd[2010]: corporation. 
Support and training for ntp-4 are Dec 13 01:55:02.563846 ntpd[2010]: available at https://www.nwtime.org/support Dec 13 01:55:02.563865 ntpd[2010]: ---------------------------------------------------- Dec 13 01:55:02.574477 ntpd[2010]: proto: precision = 0.096 usec (-23) Dec 13 01:55:02.575018 ntpd[2010]: basedate set to 2024-11-30 Dec 13 01:55:02.575059 ntpd[2010]: gps base set to 2024-12-01 (week 2343) Dec 13 01:55:02.601441 extend-filesystems[2008]: Resized partition /dev/nvme0n1p9 Dec 13 01:55:02.594854 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:55:02.602496 ntpd[2010]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:55:02.607177 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:55:02.607177 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:55:02.607177 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:55:02.607177 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: Listen normally on 3 eth0 172.31.24.71:123 Dec 13 01:55:02.607177 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: Listen normally on 4 lo [::1]:123 Dec 13 01:55:02.607177 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: bind(21) AF_INET6 fe80::4d0:d1ff:fe51:5311%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:55:02.607177 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: unable to create socket on eth0 (5) for fe80::4d0:d1ff:fe51:5311%2#123 Dec 13 01:55:02.607177 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: failed to init interface for address fe80::4d0:d1ff:fe51:5311%2 Dec 13 01:55:02.607177 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: Listening on routing socket on fd #21 for interface updates Dec 13 01:55:02.602606 ntpd[2010]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:55:02.604441 ntpd[2010]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:55:02.604593 ntpd[2010]: Listen normally on 3 eth0 172.31.24.71:123 Dec 13 01:55:02.604774 ntpd[2010]: Listen normally on 4 lo [::1]:123 Dec 13 01:55:02.604862 ntpd[2010]: bind(21) AF_INET6 fe80::4d0:d1ff:fe51:5311%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:55:02.604902 ntpd[2010]: unable to create socket on eth0 (5) for fe80::4d0:d1ff:fe51:5311%2#123 Dec 13 01:55:02.604932 ntpd[2010]: failed to init interface for address fe80::4d0:d1ff:fe51:5311%2 Dec 13 01:55:02.604992 ntpd[2010]: Listening on routing socket on fd #21 for interface updates Dec 13 01:55:02.615754 jq[2017]: true Dec 13 01:55:02.613175 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:55:02.616408 extend-filesystems[2038]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:55:02.614850 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:55:02.622588 ntpd[2010]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:55:02.623247 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:55:02.622665 ntpd[2010]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:55:02.624591 ntpd[2010]: 13 Dec 01:55:02 ntpd[2010]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:55:02.628157 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 01:55:02.658453 dbus-daemon[2006]: [system] SELinux support is enabled Dec 13 01:55:02.664947 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 13 01:55:02.666323 (ntainerd)[2035]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:55:02.676424 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:55:02.676558 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:55:02.679443 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:55:02.679507 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:55:02.697985 dbus-daemon[2006]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1870 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:55:02.702566 update_engine[2016]: I20241213 01:55:02.702072 2016 update_check_scheduler.cc:74] Next update check in 10m25s Dec 13 01:55:02.709347 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:55:02.713402 dbus-daemon[2006]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:55:02.719059 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:55:02.739612 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 01:55:02.747740 jq[2042]: true Dec 13 01:55:02.772951 extend-filesystems[2038]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 01:55:02.772951 extend-filesystems[2038]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:55:02.772951 extend-filesystems[2038]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 01:55:02.783458 extend-filesystems[2008]: Resized filesystem in /dev/nvme0n1p9 Dec 13 01:55:02.797073 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 01:55:02.800864 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:55:02.802852 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:55:02.806211 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:55:02.807958 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:55:02.984110 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1893) Dec 13 01:55:02.989758 coreos-metadata[2005]: Dec 13 01:55:02.989 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:55:03.005393 coreos-metadata[2005]: Dec 13 01:55:03.005 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 01:55:03.005946 systemd[1]: Finished setup-oem.service - Setup OEM. 
Dec 13 01:55:03.013561 coreos-metadata[2005]: Dec 13 01:55:03.009 INFO Fetch successful Dec 13 01:55:03.013561 coreos-metadata[2005]: Dec 13 01:55:03.009 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 01:55:03.020263 coreos-metadata[2005]: Dec 13 01:55:03.018 INFO Fetch successful Dec 13 01:55:03.020263 coreos-metadata[2005]: Dec 13 01:55:03.018 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 01:55:03.024842 coreos-metadata[2005]: Dec 13 01:55:03.022 INFO Fetch successful Dec 13 01:55:03.024842 coreos-metadata[2005]: Dec 13 01:55:03.023 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 01:55:03.026946 coreos-metadata[2005]: Dec 13 01:55:03.026 INFO Fetch successful Dec 13 01:55:03.026946 coreos-metadata[2005]: Dec 13 01:55:03.026 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 01:55:03.028896 coreos-metadata[2005]: Dec 13 01:55:03.027 INFO Fetch failed with 404: resource not found Dec 13 01:55:03.028896 coreos-metadata[2005]: Dec 13 01:55:03.027 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 01:55:03.035402 coreos-metadata[2005]: Dec 13 01:55:03.034 INFO Fetch successful Dec 13 01:55:03.035402 coreos-metadata[2005]: Dec 13 01:55:03.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 01:55:03.037045 coreos-metadata[2005]: Dec 13 01:55:03.036 INFO Fetch successful Dec 13 01:55:03.037045 coreos-metadata[2005]: Dec 13 01:55:03.036 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 01:55:03.037784 coreos-metadata[2005]: Dec 13 01:55:03.037 INFO Fetch successful Dec 13 01:55:03.037784 coreos-metadata[2005]: Dec 13 01:55:03.037 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 01:55:03.045759 coreos-metadata[2005]: Dec 13 01:55:03.045 INFO Fetch successful Dec 13 01:55:03.045759 coreos-metadata[2005]: Dec 13 01:55:03.045 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 01:55:03.046491 coreos-metadata[2005]: Dec 13 01:55:03.046 INFO Fetch successful Dec 13 01:55:03.103966 systemd-logind[2014]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:55:03.113436 systemd-logind[2014]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 01:55:03.115588 containerd[2035]: time="2024-12-13T01:55:03.113597109Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:55:03.115275 systemd-logind[2014]: New seat seat0. Dec 13 01:55:03.127927 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:55:03.199470 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:55:03.205102 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:55:03.223669 bash[2121]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:55:03.228817 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:55:03.250229 systemd[1]: Starting sshkeys.service... Dec 13 01:55:03.304720 containerd[2035]: time="2024-12-13T01:55:03.302797138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:55:03.322842 containerd[2035]: time="2024-12-13T01:55:03.322743550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:03.323290 containerd[2035]: time="2024-12-13T01:55:03.323199958Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:55:03.323806 containerd[2035]: time="2024-12-13T01:55:03.323474782Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:55:03.326219 containerd[2035]: time="2024-12-13T01:55:03.326133826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:55:03.326460 containerd[2035]: time="2024-12-13T01:55:03.326408050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:03.326895 containerd[2035]: time="2024-12-13T01:55:03.326822506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:03.327299 containerd[2035]: time="2024-12-13T01:55:03.327048394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:03.330079 containerd[2035]: time="2024-12-13T01:55:03.328099042Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:03.330079 containerd[2035]: time="2024-12-13T01:55:03.329768938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:03.330079 containerd[2035]: time="2024-12-13T01:55:03.329853430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:03.330079 containerd[2035]: time="2024-12-13T01:55:03.329906722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:03.330733 containerd[2035]: time="2024-12-13T01:55:03.330603178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:03.335799 containerd[2035]: time="2024-12-13T01:55:03.335737402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:03.339327 containerd[2035]: time="2024-12-13T01:55:03.336232594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:03.339327 containerd[2035]: time="2024-12-13T01:55:03.336284530Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Dec 13 01:55:03.339327 containerd[2035]: time="2024-12-13T01:55:03.336631174Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:55:03.339327 containerd[2035]: time="2024-12-13T01:55:03.336878194Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:55:03.355241 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:55:03.367550 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:55:03.373391 containerd[2035]: time="2024-12-13T01:55:03.372430390Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:55:03.373391 containerd[2035]: time="2024-12-13T01:55:03.372546238Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:55:03.373391 containerd[2035]: time="2024-12-13T01:55:03.372593038Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:55:03.373391 containerd[2035]: time="2024-12-13T01:55:03.372644674Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:55:03.373391 containerd[2035]: time="2024-12-13T01:55:03.372839470Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:55:03.374981 containerd[2035]: time="2024-12-13T01:55:03.374619094Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:55:03.382914 containerd[2035]: time="2024-12-13T01:55:03.379097602Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:55:03.382914 containerd[2035]: time="2024-12-13T01:55:03.379569286Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:55:03.382914 containerd[2035]: time="2024-12-13T01:55:03.379631266Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:55:03.382914 containerd[2035]: time="2024-12-13T01:55:03.379714750Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:55:03.382914 containerd[2035]: time="2024-12-13T01:55:03.379771894Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:55:03.382914 containerd[2035]: time="2024-12-13T01:55:03.379815130Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:55:03.382914 containerd[2035]: time="2024-12-13T01:55:03.379851070Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:55:03.382914 containerd[2035]: time="2024-12-13T01:55:03.379894042Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:55:03.382914 containerd[2035]: time="2024-12-13T01:55:03.379937290Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:55:03.382914 containerd[2035]: time="2024-12-13T01:55:03.379984246Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Dec 13 01:55:03.382914 containerd[2035]: time="2024-12-13T01:55:03.380022478Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:55:03.382914 containerd[2035]: time="2024-12-13T01:55:03.380058658Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:55:03.382914 containerd[2035]: time="2024-12-13T01:55:03.380124574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.382914 containerd[2035]: time="2024-12-13T01:55:03.380162422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.379865 systemd-networkd[1870]: eth0: Gained IPv6LL Dec 13 01:55:03.384101 containerd[2035]: time="2024-12-13T01:55:03.380194366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.384101 containerd[2035]: time="2024-12-13T01:55:03.380228842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.384101 containerd[2035]: time="2024-12-13T01:55:03.380270710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.384101 containerd[2035]: time="2024-12-13T01:55:03.380306002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.384101 containerd[2035]: time="2024-12-13T01:55:03.380336206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.384101 containerd[2035]: time="2024-12-13T01:55:03.380368138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.384101 containerd[2035]: time="2024-12-13T01:55:03.380399194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.384101 containerd[2035]: time="2024-12-13T01:55:03.380442790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.384101 containerd[2035]: time="2024-12-13T01:55:03.380484550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.384101 containerd[2035]: time="2024-12-13T01:55:03.380549062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.384101 containerd[2035]: time="2024-12-13T01:55:03.380583706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.384101 containerd[2035]: time="2024-12-13T01:55:03.380621266Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:55:03.397415 containerd[2035]: time="2024-12-13T01:55:03.395848006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.397415 containerd[2035]: time="2024-12-13T01:55:03.395980594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.397415 containerd[2035]: time="2024-12-13T01:55:03.396068614Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Dec 13 01:55:03.397415 containerd[2035]: time="2024-12-13T01:55:03.396392962Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:55:03.397415 containerd[2035]: time="2024-12-13T01:55:03.396493270Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:55:03.397415 containerd[2035]: time="2024-12-13T01:55:03.396584830Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:55:03.397415 containerd[2035]: time="2024-12-13T01:55:03.396663454Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:55:03.397415 containerd[2035]: time="2024-12-13T01:55:03.396740134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.397415 containerd[2035]: time="2024-12-13T01:55:03.396802282Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:55:03.397415 containerd[2035]: time="2024-12-13T01:55:03.396834586Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:55:03.397415 containerd[2035]: time="2024-12-13T01:55:03.396886186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:55:03.397772 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:55:03.403203 systemd[1]: Reached target network-online.target - Network is Online. 
Dec 13 01:55:03.407405 containerd[2035]: time="2024-12-13T01:55:03.406373795Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:55:03.407405 containerd[2035]: time="2024-12-13T01:55:03.406586939Z" level=info msg="Connect containerd service" Dec 13 01:55:03.407405 containerd[2035]: time="2024-12-13T01:55:03.406708475Z" level=info msg="using legacy CRI server" Dec 13 01:55:03.407405 containerd[2035]: time="2024-12-13T01:55:03.406758539Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:55:03.407405 containerd[2035]: time="2024-12-13T01:55:03.407070191Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:55:03.413734 containerd[2035]: time="2024-12-13T01:55:03.411799451Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:55:03.423186 containerd[2035]: 
time="2024-12-13T01:55:03.419631467Z" level=info msg="Start subscribing containerd event" Dec 13 01:55:03.423186 containerd[2035]: time="2024-12-13T01:55:03.421946363Z" level=info msg="Start recovering state" Dec 13 01:55:03.423186 containerd[2035]: time="2024-12-13T01:55:03.422217035Z" level=info msg="Start event monitor" Dec 13 01:55:03.423186 containerd[2035]: time="2024-12-13T01:55:03.422266739Z" level=info msg="Start snapshots syncer" Dec 13 01:55:03.423186 containerd[2035]: time="2024-12-13T01:55:03.422329547Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:55:03.423186 containerd[2035]: time="2024-12-13T01:55:03.422366075Z" level=info msg="Start streaming server" Dec 13 01:55:03.424166 containerd[2035]: time="2024-12-13T01:55:03.423976679Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:55:03.424828 containerd[2035]: time="2024-12-13T01:55:03.424756235Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:55:03.432280 containerd[2035]: time="2024-12-13T01:55:03.432185207Z" level=info msg="containerd successfully booted in 0.326992s" Dec 13 01:55:03.460088 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 01:55:03.471320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:03.481349 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:55:03.485080 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:55:03.610988 dbus-daemon[2006]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:55:03.614560 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 01:55:03.617502 dbus-daemon[2006]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2056 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:55:03.666374 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:55:03.744591 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:55:03.761448 locksmithd[2050]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: Initializing new seelog logger Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: New Seelog Logger Creation Complete Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: 2024/12/13 01:55:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: 2024/12/13 01:55:03 processing appconfig overrides Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: 2024/12/13 01:55:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: 2024/12/13 01:55:03 processing appconfig overrides Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: 2024/12/13 01:55:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: 2024/12/13 01:55:03 processing appconfig overrides Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO Proxy environment variables: Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: 2024/12/13 01:55:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:03.774725 amazon-ssm-agent[2169]: 2024/12/13 01:55:03 processing appconfig overrides Dec 13 01:55:03.818469 polkitd[2197]: Started polkitd version 121 Dec 13 01:55:03.856651 polkitd[2197]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:55:03.865582 polkitd[2197]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:55:03.869123 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO https_proxy: Dec 13 01:55:03.875751 polkitd[2197]: Finished loading, compiling and executing 2 rules Dec 13 01:55:03.883511 dbus-daemon[2006]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:55:03.883896 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:55:03.890179 polkitd[2197]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:55:03.925548 coreos-metadata[2161]: Dec 13 01:55:03.925 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:55:03.925548 coreos-metadata[2161]: Dec 13 01:55:03.925 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 01:55:03.932729 coreos-metadata[2161]: Dec 13 01:55:03.927 INFO Fetch successful Dec 13 01:55:03.932729 coreos-metadata[2161]: Dec 13 01:55:03.927 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 01:55:03.932729 coreos-metadata[2161]: Dec 13 01:55:03.928 INFO Fetch successful Dec 13 01:55:03.940245 systemd-hostnamed[2056]: Hostname set to (transient) Dec 13 01:55:03.942155 unknown[2161]: wrote ssh authorized keys file for user: core Dec 13 01:55:03.942372 systemd-resolved[1829]: System hostname changed to 'ip-172-31-24-71'. Dec 13 01:55:03.975708 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO http_proxy: Dec 13 01:55:04.007063 update-ssh-keys[2224]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:55:04.009801 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:55:04.019781 systemd[1]: Finished sshkeys.service. 
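Both metadata agents above (coreos-metadata[2005] for the instance facts, coreos-metadata[2161] for the core user's SSH keys) talk to the EC2 instance metadata service at 169.254.169.254: the sshkeys agent first PUTs /latest/api/token and then GETs the 2021-01-03 meta-data paths, and a 404 response (as for ipv6 here) is treated as "attribute not present" rather than a failure. A standard-library Python sketch of that request pattern; the token TTL and the exact error handling are illustrative assumptions, not the agent's real implementation:

import urllib.error
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl=300):
    # IMDSv2: obtain a session token with a PUT, as in "Putting .../latest/api/token" above.
    req = urllib.request.Request(
        IMDS + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    req = urllib.request.Request(IMDS + path,
                                 headers={"X-aws-ec2-metadata-token": token})
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as err:
        if err.code == 404:        # e.g. meta-data/ipv6 on an instance without an IPv6 address
            return None
        raise

token = imds_token()
for path in ("/2021-01-03/meta-data/instance-type",
             "/2021-01-03/meta-data/local-ipv4",
             "/2021-01-03/meta-data/public-ipv4",
             "/2021-01-03/meta-data/ipv6",
             "/2021-01-03/meta-data/placement/availability-zone",
             "/2021-01-03/meta-data/public-keys/0/openssh-key"):
    print(path, "->", imds_get(path, token))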
Dec 13 01:55:04.073665 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO no_proxy: Dec 13 01:55:04.175886 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:55:04.273128 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:55:04.373789 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO Agent will take identity from EC2 Dec 13 01:55:04.474117 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:04.574239 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:04.677504 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:04.778703 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:55:04.883615 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 13 01:55:04.928406 sshd_keygen[2040]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:55:04.962482 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 01:55:04.965478 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] registrar detected. Attempting registration Dec 13 01:55:04.965478 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO [Registrar] Starting registrar module Dec 13 01:55:04.965478 amazon-ssm-agent[2169]: 2024-12-13 01:55:03 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 01:55:04.965478 amazon-ssm-agent[2169]: 2024-12-13 01:55:04 INFO [EC2Identity] EC2 registration was successful. Dec 13 01:55:04.965478 amazon-ssm-agent[2169]: 2024-12-13 01:55:04 INFO [CredentialRefresher] credentialRefresher has started Dec 13 01:55:04.965478 amazon-ssm-agent[2169]: 2024-12-13 01:55:04 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 01:55:04.965478 amazon-ssm-agent[2169]: 2024-12-13 01:55:04 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 01:55:04.986118 amazon-ssm-agent[2169]: 2024-12-13 01:55:04 INFO [CredentialRefresher] Next credential rotation will be in 31.333302644933333 minutes Dec 13 01:55:04.999164 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:55:05.012283 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:55:05.030447 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:55:05.030940 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:55:05.043203 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:55:05.073846 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:55:05.086853 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:55:05.107341 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:55:05.110053 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:55:05.196024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:05.199521 systemd[1]: Reached target multi-user.target - Multi-User System. 
Dec 13 01:55:05.199543 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:55:05.205848 systemd[1]: Startup finished in 1.313s (kernel) + 8.854s (initrd) + 8.585s (userspace) = 18.754s. Dec 13 01:55:05.564530 ntpd[2010]: Listen normally on 6 eth0 [fe80::4d0:d1ff:fe51:5311%2]:123 Dec 13 01:55:05.566088 ntpd[2010]: 13 Dec 01:55:05 ntpd[2010]: Listen normally on 6 eth0 [fe80::4d0:d1ff:fe51:5311%2]:123 Dec 13 01:55:05.930707 kubelet[2250]: E1213 01:55:05.930453 2250 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:55:05.935456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:55:05.935993 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:55:05.936739 systemd[1]: kubelet.service: Consumed 1.347s CPU time. Dec 13 01:55:05.995260 amazon-ssm-agent[2169]: 2024-12-13 01:55:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 01:55:06.096719 amazon-ssm-agent[2169]: 2024-12-13 01:55:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2262) started Dec 13 01:55:06.196808 amazon-ssm-agent[2169]: 2024-12-13 01:55:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 01:55:11.992798 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:55:11.999263 systemd[1]: Started sshd@0-172.31.24.71:22-139.178.68.195:52184.service - OpenSSH per-connection server daemon (139.178.68.195:52184). Dec 13 01:55:12.214860 sshd[2272]: Accepted publickey for core from 139.178.68.195 port 52184 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:12.219906 sshd[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:12.239535 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:55:12.246270 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:55:12.251801 systemd-logind[2014]: New session 1 of user core. Dec 13 01:55:12.281849 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:55:12.292383 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:55:12.311224 (systemd)[2276]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:12.546997 systemd[2276]: Queued start job for default target default.target. Dec 13 01:55:12.559501 systemd[2276]: Created slice app.slice - User Application Slice. Dec 13 01:55:12.559609 systemd[2276]: Reached target paths.target - Paths. Dec 13 01:55:12.559650 systemd[2276]: Reached target timers.target - Timers. Dec 13 01:55:12.563242 systemd[2276]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:55:12.601244 systemd[2276]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:55:12.601669 systemd[2276]: Reached target sockets.target - Sockets. Dec 13 01:55:12.601797 systemd[2276]: Reached target basic.target - Basic System. 
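The first kubelet start above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; the unit also references KUBELET_KUBEADM_ARGS, which suggests the file is meant to be generated during cluster join rather than shipped in the image, and the kubelet does come up cleanly on a later attempt in this log. Purely as an illustration of what that path holds, a Python sketch that writes a minimal KubeletConfiguration carrying only the two fields the later, successful start corroborates (cgroup driver and static pod path); this is a hedged example, not the file this host actually ends up with:

import pathlib

# Minimal illustrative KubeletConfiguration (kubelet.config.k8s.io/v1beta1);
# real deployments carry many more fields.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                      # matches "CgroupDriver":"systemd" later in this log
staticPodPath: /etc/kubernetes/manifests   # the static pod path the kubelet watches
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
print("wrote", path)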
Dec 13 01:55:12.601928 systemd[2276]: Reached target default.target - Main User Target. Dec 13 01:55:12.602022 systemd[2276]: Startup finished in 277ms. Dec 13 01:55:12.602148 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:55:12.612054 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:55:12.775356 systemd[1]: Started sshd@1-172.31.24.71:22-139.178.68.195:52194.service - OpenSSH per-connection server daemon (139.178.68.195:52194). Dec 13 01:55:12.961155 sshd[2287]: Accepted publickey for core from 139.178.68.195 port 52194 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:12.964518 sshd[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:12.974384 systemd-logind[2014]: New session 2 of user core. Dec 13 01:55:12.981151 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:55:13.113560 sshd[2287]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:13.119499 systemd[1]: sshd@1-172.31.24.71:22-139.178.68.195:52194.service: Deactivated successfully. Dec 13 01:55:13.123920 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:55:13.128173 systemd-logind[2014]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:55:13.131104 systemd-logind[2014]: Removed session 2. Dec 13 01:55:13.159326 systemd[1]: Started sshd@2-172.31.24.71:22-139.178.68.195:52204.service - OpenSSH per-connection server daemon (139.178.68.195:52204). Dec 13 01:55:13.339649 sshd[2294]: Accepted publickey for core from 139.178.68.195 port 52204 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:13.342187 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:13.352494 systemd-logind[2014]: New session 3 of user core. Dec 13 01:55:13.360072 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:55:13.480391 sshd[2294]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:13.487618 systemd[1]: sshd@2-172.31.24.71:22-139.178.68.195:52204.service: Deactivated successfully. Dec 13 01:55:13.491206 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:55:13.493923 systemd-logind[2014]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:55:13.496774 systemd-logind[2014]: Removed session 3. Dec 13 01:55:13.531380 systemd[1]: Started sshd@3-172.31.24.71:22-139.178.68.195:52220.service - OpenSSH per-connection server daemon (139.178.68.195:52220). Dec 13 01:55:13.703973 sshd[2301]: Accepted publickey for core from 139.178.68.195 port 52220 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:13.706641 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:13.716911 systemd-logind[2014]: New session 4 of user core. Dec 13 01:55:13.722950 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:55:13.850282 sshd[2301]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:13.855501 systemd[1]: sshd@3-172.31.24.71:22-139.178.68.195:52220.service: Deactivated successfully. Dec 13 01:55:13.859648 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:55:13.865013 systemd-logind[2014]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:55:13.867582 systemd-logind[2014]: Removed session 4. 
Dec 13 01:55:13.890263 systemd[1]: Started sshd@4-172.31.24.71:22-139.178.68.195:52232.service - OpenSSH per-connection server daemon (139.178.68.195:52232). Dec 13 01:55:14.074005 sshd[2308]: Accepted publickey for core from 139.178.68.195 port 52232 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:14.077667 sshd[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:14.086655 systemd-logind[2014]: New session 5 of user core. Dec 13 01:55:14.097002 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:55:14.219033 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:55:14.220121 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:14.239729 sudo[2311]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:14.263526 sshd[2308]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:14.270473 systemd[1]: sshd@4-172.31.24.71:22-139.178.68.195:52232.service: Deactivated successfully. Dec 13 01:55:14.274457 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:55:14.279054 systemd-logind[2014]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:55:14.282276 systemd-logind[2014]: Removed session 5. Dec 13 01:55:14.307519 systemd[1]: Started sshd@5-172.31.24.71:22-139.178.68.195:52238.service - OpenSSH per-connection server daemon (139.178.68.195:52238). Dec 13 01:55:14.490953 sshd[2316]: Accepted publickey for core from 139.178.68.195 port 52238 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:14.494152 sshd[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:14.503139 systemd-logind[2014]: New session 6 of user core. Dec 13 01:55:14.511978 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:55:14.623818 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:55:14.625503 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:14.633181 sudo[2320]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:14.647564 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:55:14.648391 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:14.676298 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:55:14.694534 auditctl[2323]: No rules Dec 13 01:55:14.695546 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:55:14.695971 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:55:14.709377 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:55:14.779458 augenrules[2341]: No rules Dec 13 01:55:14.783651 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:55:14.787394 sudo[2319]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:14.812366 sshd[2316]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:14.820156 systemd[1]: sshd@5-172.31.24.71:22-139.178.68.195:52238.service: Deactivated successfully. Dec 13 01:55:14.825067 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:55:14.827469 systemd-logind[2014]: Session 6 logged out. Waiting for processes to exit. 
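The session-6 commands above deliberately empty the audit ruleset: the two rules.d files are removed, audit-rules is restarted, and both auditctl and augenrules then report "No rules". A small sketch of verifying that end state from Python by shelling out to auditctl; it assumes the audit userspace tools are installed and the caller is root, and auditctl -l itself prints "No rules" when the kernel ruleset is empty:

import subprocess

# List the audit rules currently loaded in the kernel; after the rules.d files are
# removed and audit-rules is restarted, the output is just "No rules".
result = subprocess.run(["auditctl", "-l"], capture_output=True, text=True, check=True)
print(result.stdout.strip())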
Dec 13 01:55:14.830112 systemd-logind[2014]: Removed session 6. Dec 13 01:55:14.855281 systemd[1]: Started sshd@6-172.31.24.71:22-139.178.68.195:52246.service - OpenSSH per-connection server daemon (139.178.68.195:52246). Dec 13 01:55:15.039308 sshd[2349]: Accepted publickey for core from 139.178.68.195 port 52246 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:15.041941 sshd[2349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:15.051008 systemd-logind[2014]: New session 7 of user core. Dec 13 01:55:15.062949 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:55:15.169867 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:55:15.170771 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:15.977513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:55:15.990285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:16.201233 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:55:16.201533 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:55:16.202228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:16.215051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:16.265511 systemd[1]: Reloading requested from client PID 2388 ('systemctl') (unit session-7.scope)... Dec 13 01:55:16.265759 systemd[1]: Reloading... Dec 13 01:55:16.522958 zram_generator::config[2431]: No configuration found. Dec 13 01:55:16.814203 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:55:17.004547 systemd[1]: Reloading finished in 737 ms. Dec 13 01:55:17.116226 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:55:17.116431 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:55:17.117004 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:17.123372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:17.576935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:17.593295 (kubelet)[2491]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:55:17.670550 kubelet[2491]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:55:17.670550 kubelet[2491]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:55:17.670550 kubelet[2491]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:55:17.671192 kubelet[2491]: I1213 01:55:17.670718 2491 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:55:19.330348 kubelet[2491]: I1213 01:55:19.330229 2491 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:55:19.330348 kubelet[2491]: I1213 01:55:19.330321 2491 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:55:19.331080 kubelet[2491]: I1213 01:55:19.330836 2491 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:55:19.369442 kubelet[2491]: I1213 01:55:19.369104 2491 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:55:19.387514 kubelet[2491]: E1213 01:55:19.387452 2491 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:55:19.387785 kubelet[2491]: I1213 01:55:19.387726 2491 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:55:19.395649 kubelet[2491]: I1213 01:55:19.395601 2491 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:55:19.396733 kubelet[2491]: I1213 01:55:19.396244 2491 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:55:19.396733 kubelet[2491]: I1213 01:55:19.396579 2491 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:55:19.397740 kubelet[2491]: I1213 01:55:19.396641 2491 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.24.71","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:55:19.397740 kubelet[2491]: I1213 01:55:19.397229 2491 topology_manager.go:138] "Creating 
topology manager with none policy" Dec 13 01:55:19.397740 kubelet[2491]: I1213 01:55:19.397256 2491 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:55:19.397740 kubelet[2491]: I1213 01:55:19.397511 2491 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:55:19.400263 kubelet[2491]: I1213 01:55:19.400158 2491 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:55:19.400263 kubelet[2491]: I1213 01:55:19.400259 2491 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:55:19.400496 kubelet[2491]: I1213 01:55:19.400380 2491 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:55:19.400496 kubelet[2491]: I1213 01:55:19.400417 2491 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:55:19.401530 kubelet[2491]: E1213 01:55:19.400942 2491 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:19.401530 kubelet[2491]: E1213 01:55:19.401035 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:19.405958 kubelet[2491]: I1213 01:55:19.405877 2491 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:55:19.409354 kubelet[2491]: I1213 01:55:19.409311 2491 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:55:19.410097 kubelet[2491]: W1213 01:55:19.409695 2491 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:55:19.411372 kubelet[2491]: I1213 01:55:19.411017 2491 server.go:1269] "Started kubelet" Dec 13 01:55:19.412648 kubelet[2491]: W1213 01:55:19.412415 2491 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.24.71" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 01:55:19.412648 kubelet[2491]: E1213 01:55:19.412552 2491 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.24.71\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 01:55:19.413834 kubelet[2491]: W1213 01:55:19.413020 2491 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 01:55:19.413834 kubelet[2491]: E1213 01:55:19.413091 2491 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 01:55:19.418255 kubelet[2491]: I1213 01:55:19.418080 2491 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:55:19.419969 kubelet[2491]: I1213 01:55:19.419859 2491 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:55:19.423433 kubelet[2491]: I1213 01:55:19.423376 2491 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:55:19.434795 
kubelet[2491]: I1213 01:55:19.434654 2491 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:55:19.435266 kubelet[2491]: E1213 01:55:19.435192 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.24.71\" not found" Dec 13 01:55:19.435266 kubelet[2491]: I1213 01:55:19.434916 2491 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:55:19.439186 kubelet[2491]: I1213 01:55:19.439128 2491 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:55:19.439547 kubelet[2491]: I1213 01:55:19.439256 2491 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:55:19.439547 kubelet[2491]: I1213 01:55:19.439460 2491 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:55:19.445278 kubelet[2491]: I1213 01:55:19.445015 2491 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:55:19.450710 kubelet[2491]: E1213 01:55:19.449337 2491 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:55:19.452040 kubelet[2491]: I1213 01:55:19.434868 2491 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:55:19.452914 kubelet[2491]: I1213 01:55:19.452847 2491 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:55:19.455574 kubelet[2491]: E1213 01:55:19.453966 2491 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.24.71\" not found" node="172.31.24.71" Dec 13 01:55:19.459721 kubelet[2491]: I1213 01:55:19.457941 2491 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:55:19.501482 kubelet[2491]: I1213 01:55:19.501438 2491 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:55:19.504336 kubelet[2491]: I1213 01:55:19.504268 2491 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:55:19.504336 kubelet[2491]: I1213 01:55:19.504341 2491 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:55:19.510834 kubelet[2491]: I1213 01:55:19.510796 2491 policy_none.go:49] "None policy: Start" Dec 13 01:55:19.512240 kubelet[2491]: I1213 01:55:19.512186 2491 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:55:19.512528 kubelet[2491]: I1213 01:55:19.512502 2491 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:55:19.532513 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:55:19.535929 kubelet[2491]: E1213 01:55:19.535716 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.24.71\" not found" Dec 13 01:55:19.554398 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:55:19.567289 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:55:19.570937 kubelet[2491]: I1213 01:55:19.570877 2491 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 13 01:55:19.579026 kubelet[2491]: I1213 01:55:19.578137 2491 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:55:19.579026 kubelet[2491]: I1213 01:55:19.578510 2491 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:55:19.579026 kubelet[2491]: I1213 01:55:19.578537 2491 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:55:19.579564 kubelet[2491]: I1213 01:55:19.579487 2491 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:55:19.579564 kubelet[2491]: I1213 01:55:19.579549 2491 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:55:19.579849 kubelet[2491]: I1213 01:55:19.579603 2491 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:55:19.579921 kubelet[2491]: E1213 01:55:19.579870 2491 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 01:55:19.585658 kubelet[2491]: I1213 01:55:19.585527 2491 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:55:19.594574 kubelet[2491]: E1213 01:55:19.594536 2491 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.24.71\" not found" Dec 13 01:55:19.680408 kubelet[2491]: I1213 01:55:19.680371 2491 kubelet_node_status.go:72] "Attempting to register node" node="172.31.24.71" Dec 13 01:55:19.690909 kubelet[2491]: I1213 01:55:19.690851 2491 kubelet_node_status.go:75] "Successfully registered node" node="172.31.24.71" Dec 13 01:55:19.691173 kubelet[2491]: E1213 01:55:19.691138 2491 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.24.71\": node \"172.31.24.71\" not found" Dec 13 01:55:19.708377 kubelet[2491]: E1213 01:55:19.708274 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.24.71\" not found" Dec 13 01:55:19.808940 kubelet[2491]: E1213 01:55:19.808875 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.24.71\" not found" Dec 13 01:55:19.909140 kubelet[2491]: E1213 01:55:19.909072 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.24.71\" not found" Dec 13 01:55:20.009625 kubelet[2491]: E1213 01:55:20.009551 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.24.71\" not found" Dec 13 01:55:20.109859 kubelet[2491]: E1213 01:55:20.109777 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.24.71\" not found" Dec 13 01:55:20.137743 sudo[2352]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:20.163160 sshd[2349]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:20.170080 systemd[1]: sshd@6-172.31.24.71:22-139.178.68.195:52246.service: Deactivated successfully. Dec 13 01:55:20.173943 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:55:20.177850 systemd-logind[2014]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:55:20.181309 systemd-logind[2014]: Removed session 7. 
Dec 13 01:55:20.210819 kubelet[2491]: E1213 01:55:20.210726 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.24.71\" not found" Dec 13 01:55:20.311526 kubelet[2491]: E1213 01:55:20.311447 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.24.71\" not found" Dec 13 01:55:20.335883 kubelet[2491]: I1213 01:55:20.335796 2491 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 01:55:20.336632 kubelet[2491]: W1213 01:55:20.336387 2491 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:55:20.336632 kubelet[2491]: W1213 01:55:20.336493 2491 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:55:20.401412 kubelet[2491]: E1213 01:55:20.401329 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:20.412164 kubelet[2491]: E1213 01:55:20.412091 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.24.71\" not found" Dec 13 01:55:20.513213 kubelet[2491]: E1213 01:55:20.512327 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.24.71\" not found" Dec 13 01:55:20.613319 kubelet[2491]: E1213 01:55:20.613246 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.24.71\" not found" Dec 13 01:55:20.713825 kubelet[2491]: E1213 01:55:20.713773 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.24.71\" not found" Dec 13 01:55:20.815318 kubelet[2491]: I1213 01:55:20.815175 2491 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 01:55:20.816607 containerd[2035]: time="2024-12-13T01:55:20.816351412Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:55:20.818139 kubelet[2491]: I1213 01:55:20.817110 2491 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 01:55:21.402283 kubelet[2491]: I1213 01:55:21.401953 2491 apiserver.go:52] "Watching apiserver" Dec 13 01:55:21.402973 kubelet[2491]: E1213 01:55:21.402819 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:21.407130 kubelet[2491]: E1213 01:55:21.406148 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkk48" podUID="029015f5-5b9b-4497-b440-744f1cbc3e91" Dec 13 01:55:21.420592 systemd[1]: Created slice kubepods-besteffort-podd11894fc_3414_4fd4_ac07_ce63101754b9.slice - libcontainer container kubepods-besteffort-podd11894fc_3414_4fd4_ac07_ce63101754b9.slice. 
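The entries above show the kubelet handing the node's pod CIDR, 192.168.1.0/24, down to containerd through the CRI runtime-config update; whether pod addresses are actually allocated from that range depends on the CNI provider's IPAM (Calico may use its own pools), so the arithmetic below is purely illustrative. A tiny standard-library sketch of what the range covers; the sample address is hypothetical:

import ipaddress

pod_cidr = ipaddress.ip_network("192.168.1.0/24")     # newPodCIDR from the log above
print(pod_cidr.num_addresses, "addresses,",
      "hosts from", pod_cidr[1], "to", pod_cidr[-2])
print(ipaddress.ip_address("192.168.1.57") in pod_cidr)   # True - a hypothetical pod IP in this range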
Dec 13 01:55:21.438037 kubelet[2491]: I1213 01:55:21.436525 2491 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:55:21.448997 systemd[1]: Created slice kubepods-besteffort-pod73cea603_865f_49e5_a3d9_4f3c393b01e1.slice - libcontainer container kubepods-besteffort-pod73cea603_865f_49e5_a3d9_4f3c393b01e1.slice. Dec 13 01:55:21.468991 kubelet[2491]: I1213 01:55:21.468914 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/73cea603-865f-49e5-a3d9-4f3c393b01e1-cni-bin-dir\") pod \"calico-node-lhrst\" (UID: \"73cea603-865f-49e5-a3d9-4f3c393b01e1\") " pod="calico-system/calico-node-lhrst" Dec 13 01:55:21.469415 kubelet[2491]: I1213 01:55:21.469271 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/029015f5-5b9b-4497-b440-744f1cbc3e91-kubelet-dir\") pod \"csi-node-driver-tkk48\" (UID: \"029015f5-5b9b-4497-b440-744f1cbc3e91\") " pod="calico-system/csi-node-driver-tkk48" Dec 13 01:55:21.469415 kubelet[2491]: I1213 01:55:21.469331 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6fq6\" (UniqueName: \"kubernetes.io/projected/029015f5-5b9b-4497-b440-744f1cbc3e91-kube-api-access-h6fq6\") pod \"csi-node-driver-tkk48\" (UID: \"029015f5-5b9b-4497-b440-744f1cbc3e91\") " pod="calico-system/csi-node-driver-tkk48" Dec 13 01:55:21.469415 kubelet[2491]: I1213 01:55:21.469377 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d11894fc-3414-4fd4-ac07-ce63101754b9-xtables-lock\") pod \"kube-proxy-9rj5n\" (UID: \"d11894fc-3414-4fd4-ac07-ce63101754b9\") " pod="kube-system/kube-proxy-9rj5n" Dec 13 01:55:21.470281 kubelet[2491]: I1213 01:55:21.469434 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5ph8\" (UniqueName: \"kubernetes.io/projected/d11894fc-3414-4fd4-ac07-ce63101754b9-kube-api-access-n5ph8\") pod \"kube-proxy-9rj5n\" (UID: \"d11894fc-3414-4fd4-ac07-ce63101754b9\") " pod="kube-system/kube-proxy-9rj5n" Dec 13 01:55:21.470281 kubelet[2491]: I1213 01:55:21.469525 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73cea603-865f-49e5-a3d9-4f3c393b01e1-tigera-ca-bundle\") pod \"calico-node-lhrst\" (UID: \"73cea603-865f-49e5-a3d9-4f3c393b01e1\") " pod="calico-system/calico-node-lhrst" Dec 13 01:55:21.470281 kubelet[2491]: I1213 01:55:21.469588 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/73cea603-865f-49e5-a3d9-4f3c393b01e1-node-certs\") pod \"calico-node-lhrst\" (UID: \"73cea603-865f-49e5-a3d9-4f3c393b01e1\") " pod="calico-system/calico-node-lhrst" Dec 13 01:55:21.470281 kubelet[2491]: I1213 01:55:21.469625 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/73cea603-865f-49e5-a3d9-4f3c393b01e1-cni-net-dir\") pod \"calico-node-lhrst\" (UID: \"73cea603-865f-49e5-a3d9-4f3c393b01e1\") " pod="calico-system/calico-node-lhrst" Dec 13 01:55:21.470281 kubelet[2491]: I1213 01:55:21.469664 2491 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/73cea603-865f-49e5-a3d9-4f3c393b01e1-cni-log-dir\") pod \"calico-node-lhrst\" (UID: \"73cea603-865f-49e5-a3d9-4f3c393b01e1\") " pod="calico-system/calico-node-lhrst" Dec 13 01:55:21.470720 kubelet[2491]: I1213 01:55:21.469737 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hr8h\" (UniqueName: \"kubernetes.io/projected/73cea603-865f-49e5-a3d9-4f3c393b01e1-kube-api-access-6hr8h\") pod \"calico-node-lhrst\" (UID: \"73cea603-865f-49e5-a3d9-4f3c393b01e1\") " pod="calico-system/calico-node-lhrst" Dec 13 01:55:21.470720 kubelet[2491]: I1213 01:55:21.469776 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/029015f5-5b9b-4497-b440-744f1cbc3e91-varrun\") pod \"csi-node-driver-tkk48\" (UID: \"029015f5-5b9b-4497-b440-744f1cbc3e91\") " pod="calico-system/csi-node-driver-tkk48" Dec 13 01:55:21.470720 kubelet[2491]: I1213 01:55:21.469813 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d11894fc-3414-4fd4-ac07-ce63101754b9-kube-proxy\") pod \"kube-proxy-9rj5n\" (UID: \"d11894fc-3414-4fd4-ac07-ce63101754b9\") " pod="kube-system/kube-proxy-9rj5n" Dec 13 01:55:21.470720 kubelet[2491]: I1213 01:55:21.469870 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73cea603-865f-49e5-a3d9-4f3c393b01e1-lib-modules\") pod \"calico-node-lhrst\" (UID: \"73cea603-865f-49e5-a3d9-4f3c393b01e1\") " pod="calico-system/calico-node-lhrst" Dec 13 01:55:21.470720 kubelet[2491]: I1213 01:55:21.469909 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/029015f5-5b9b-4497-b440-744f1cbc3e91-socket-dir\") pod \"csi-node-driver-tkk48\" (UID: \"029015f5-5b9b-4497-b440-744f1cbc3e91\") " pod="calico-system/csi-node-driver-tkk48" Dec 13 01:55:21.471087 kubelet[2491]: I1213 01:55:21.469943 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/73cea603-865f-49e5-a3d9-4f3c393b01e1-policysync\") pod \"calico-node-lhrst\" (UID: \"73cea603-865f-49e5-a3d9-4f3c393b01e1\") " pod="calico-system/calico-node-lhrst" Dec 13 01:55:21.471087 kubelet[2491]: I1213 01:55:21.469979 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/73cea603-865f-49e5-a3d9-4f3c393b01e1-var-run-calico\") pod \"calico-node-lhrst\" (UID: \"73cea603-865f-49e5-a3d9-4f3c393b01e1\") " pod="calico-system/calico-node-lhrst" Dec 13 01:55:21.471087 kubelet[2491]: I1213 01:55:21.470014 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/73cea603-865f-49e5-a3d9-4f3c393b01e1-var-lib-calico\") pod \"calico-node-lhrst\" (UID: \"73cea603-865f-49e5-a3d9-4f3c393b01e1\") " pod="calico-system/calico-node-lhrst" Dec 13 01:55:21.471087 kubelet[2491]: I1213 01:55:21.470049 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/73cea603-865f-49e5-a3d9-4f3c393b01e1-flexvol-driver-host\") pod \"calico-node-lhrst\" (UID: \"73cea603-865f-49e5-a3d9-4f3c393b01e1\") " pod="calico-system/calico-node-lhrst" Dec 13 01:55:21.471087 kubelet[2491]: I1213 01:55:21.470088 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/029015f5-5b9b-4497-b440-744f1cbc3e91-registration-dir\") pod \"csi-node-driver-tkk48\" (UID: \"029015f5-5b9b-4497-b440-744f1cbc3e91\") " pod="calico-system/csi-node-driver-tkk48" Dec 13 01:55:21.471372 kubelet[2491]: I1213 01:55:21.470130 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d11894fc-3414-4fd4-ac07-ce63101754b9-lib-modules\") pod \"kube-proxy-9rj5n\" (UID: \"d11894fc-3414-4fd4-ac07-ce63101754b9\") " pod="kube-system/kube-proxy-9rj5n" Dec 13 01:55:21.471372 kubelet[2491]: I1213 01:55:21.470169 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73cea603-865f-49e5-a3d9-4f3c393b01e1-xtables-lock\") pod \"calico-node-lhrst\" (UID: \"73cea603-865f-49e5-a3d9-4f3c393b01e1\") " pod="calico-system/calico-node-lhrst" Dec 13 01:55:21.578871 kubelet[2491]: E1213 01:55:21.576918 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:21.578871 kubelet[2491]: W1213 01:55:21.576986 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:21.578871 kubelet[2491]: E1213 01:55:21.577906 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:21.579138 kubelet[2491]: E1213 01:55:21.578876 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:21.579138 kubelet[2491]: W1213 01:55:21.578931 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:21.579138 kubelet[2491]: E1213 01:55:21.578964 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:21.593220 kubelet[2491]: E1213 01:55:21.593029 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:21.593405 kubelet[2491]: W1213 01:55:21.593288 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:21.593405 kubelet[2491]: E1213 01:55:21.593336 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:55:21.615754 kubelet[2491]: E1213 01:55:21.613809 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:21.615754 kubelet[2491]: W1213 01:55:21.613862 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:21.615754 kubelet[2491]: E1213 01:55:21.613903 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:21.637797 kubelet[2491]: E1213 01:55:21.636758 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:21.637797 kubelet[2491]: W1213 01:55:21.637534 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:21.639134 kubelet[2491]: E1213 01:55:21.637914 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:21.645021 kubelet[2491]: E1213 01:55:21.644942 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:21.645021 kubelet[2491]: W1213 01:55:21.645007 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:21.645239 kubelet[2491]: E1213 01:55:21.645045 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:55:21.739715 containerd[2035]: time="2024-12-13T01:55:21.739249846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rj5n,Uid:d11894fc-3414-4fd4-ac07-ce63101754b9,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:21.759220 containerd[2035]: time="2024-12-13T01:55:21.759067065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lhrst,Uid:73cea603-865f-49e5-a3d9-4f3c393b01e1,Namespace:calico-system,Attempt:0,}" Dec 13 01:55:22.377733 containerd[2035]: time="2024-12-13T01:55:22.375855650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:22.378578 containerd[2035]: time="2024-12-13T01:55:22.378108015Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:22.380720 containerd[2035]: time="2024-12-13T01:55:22.380545508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:55:22.381463 containerd[2035]: time="2024-12-13T01:55:22.381365058Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:55:22.383257 containerd[2035]: time="2024-12-13T01:55:22.383144500Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:22.390323 containerd[2035]: time="2024-12-13T01:55:22.390195616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:22.392657 containerd[2035]: time="2024-12-13T01:55:22.392271610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 633.063975ms" Dec 13 01:55:22.397373 containerd[2035]: time="2024-12-13T01:55:22.397277318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 657.896317ms" Dec 13 01:55:22.403220 kubelet[2491]: E1213 01:55:22.403143 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:22.588116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151230661.mount: Deactivated successfully. Dec 13 01:55:22.642841 containerd[2035]: time="2024-12-13T01:55:22.641138139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:22.642841 containerd[2035]: time="2024-12-13T01:55:22.641271704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:22.642841 containerd[2035]: time="2024-12-13T01:55:22.641312040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:22.646340 containerd[2035]: time="2024-12-13T01:55:22.645862480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:22.646902 containerd[2035]: time="2024-12-13T01:55:22.646495691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:22.647242 containerd[2035]: time="2024-12-13T01:55:22.646956645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:22.648899 containerd[2035]: time="2024-12-13T01:55:22.648565808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:22.649199 containerd[2035]: time="2024-12-13T01:55:22.649046527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:22.812238 systemd[1]: Started cri-containerd-e07a933be9952b3f27f4222916a6c0d8b5daea1c9059ab8a88fb2eb1f882c693.scope - libcontainer container e07a933be9952b3f27f4222916a6c0d8b5daea1c9059ab8a88fb2eb1f882c693. Dec 13 01:55:22.816963 systemd[1]: Started cri-containerd-f8378acb4982121498225a256b4f704741d2d06c7662df267a1cd37d8503586f.scope - libcontainer container f8378acb4982121498225a256b4f704741d2d06c7662df267a1cd37d8503586f. Dec 13 01:55:22.887820 containerd[2035]: time="2024-12-13T01:55:22.887755146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rj5n,Uid:d11894fc-3414-4fd4-ac07-ce63101754b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e07a933be9952b3f27f4222916a6c0d8b5daea1c9059ab8a88fb2eb1f882c693\"" Dec 13 01:55:22.893853 containerd[2035]: time="2024-12-13T01:55:22.893539648Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 01:55:22.903310 containerd[2035]: time="2024-12-13T01:55:22.903162506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lhrst,Uid:73cea603-865f-49e5-a3d9-4f3c393b01e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8378acb4982121498225a256b4f704741d2d06c7662df267a1cd37d8503586f\"" Dec 13 01:55:23.403500 kubelet[2491]: E1213 01:55:23.403399 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:23.584752 kubelet[2491]: E1213 01:55:23.583158 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkk48" podUID="029015f5-5b9b-4497-b440-744f1cbc3e91" Dec 13 01:55:24.403873 kubelet[2491]: E1213 01:55:24.403803 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:24.458492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4286858271.mount: Deactivated successfully. 
Dec 13 01:55:25.078010 containerd[2035]: time="2024-12-13T01:55:25.077938209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:25.080032 containerd[2035]: time="2024-12-13T01:55:25.079740403Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771426" Dec 13 01:55:25.083236 containerd[2035]: time="2024-12-13T01:55:25.083132582Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:25.089004 containerd[2035]: time="2024-12-13T01:55:25.088887436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:25.090906 containerd[2035]: time="2024-12-13T01:55:25.090817558Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 2.197059955s" Dec 13 01:55:25.090906 containerd[2035]: time="2024-12-13T01:55:25.090894548Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Dec 13 01:55:25.095102 containerd[2035]: time="2024-12-13T01:55:25.094895770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:55:25.098537 containerd[2035]: time="2024-12-13T01:55:25.098313172Z" level=info msg="CreateContainer within sandbox \"e07a933be9952b3f27f4222916a6c0d8b5daea1c9059ab8a88fb2eb1f882c693\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:55:25.136293 containerd[2035]: time="2024-12-13T01:55:25.136200263Z" level=info msg="CreateContainer within sandbox \"e07a933be9952b3f27f4222916a6c0d8b5daea1c9059ab8a88fb2eb1f882c693\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3c885063ffbfbc269208422ef1e45c994ae23203b0e0d768e23f8cd2bb60d208\"" Dec 13 01:55:25.145737 containerd[2035]: time="2024-12-13T01:55:25.143598846Z" level=info msg="StartContainer for \"3c885063ffbfbc269208422ef1e45c994ae23203b0e0d768e23f8cd2bb60d208\"" Dec 13 01:55:25.214111 systemd[1]: Started cri-containerd-3c885063ffbfbc269208422ef1e45c994ae23203b0e0d768e23f8cd2bb60d208.scope - libcontainer container 3c885063ffbfbc269208422ef1e45c994ae23203b0e0d768e23f8cd2bb60d208. 
Dec 13 01:55:25.279826 containerd[2035]: time="2024-12-13T01:55:25.279547694Z" level=info msg="StartContainer for \"3c885063ffbfbc269208422ef1e45c994ae23203b0e0d768e23f8cd2bb60d208\" returns successfully" Dec 13 01:55:25.404849 kubelet[2491]: E1213 01:55:25.403979 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:25.581314 kubelet[2491]: E1213 01:55:25.580777 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkk48" podUID="029015f5-5b9b-4497-b440-744f1cbc3e91" Dec 13 01:55:25.637431 kubelet[2491]: I1213 01:55:25.637297 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9rj5n" podStartSLOduration=4.435098771 podStartE2EDuration="6.637267774s" podCreationTimestamp="2024-12-13 01:55:19 +0000 UTC" firstStartedPulling="2024-12-13 01:55:22.891418617 +0000 UTC m=+5.291639953" lastFinishedPulling="2024-12-13 01:55:25.093587608 +0000 UTC m=+7.493808956" observedRunningTime="2024-12-13 01:55:25.637053465 +0000 UTC m=+8.037274837" watchObservedRunningTime="2024-12-13 01:55:25.637267774 +0000 UTC m=+8.037489122" Dec 13 01:55:25.671507 kubelet[2491]: E1213 01:55:25.671031 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.671507 kubelet[2491]: W1213 01:55:25.671071 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.671507 kubelet[2491]: E1213 01:55:25.671104 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.672852 kubelet[2491]: E1213 01:55:25.671992 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.672852 kubelet[2491]: W1213 01:55:25.672563 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.672852 kubelet[2491]: E1213 01:55:25.672615 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.673957 kubelet[2491]: E1213 01:55:25.673492 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.673957 kubelet[2491]: W1213 01:55:25.673532 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.673957 kubelet[2491]: E1213 01:55:25.673573 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:55:25.675281 kubelet[2491]: E1213 01:55:25.674955 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.675281 kubelet[2491]: W1213 01:55:25.674992 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.675281 kubelet[2491]: E1213 01:55:25.675026 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.676436 kubelet[2491]: E1213 01:55:25.676147 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.676436 kubelet[2491]: W1213 01:55:25.676197 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.676436 kubelet[2491]: E1213 01:55:25.676240 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.677551 kubelet[2491]: E1213 01:55:25.677065 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.677551 kubelet[2491]: W1213 01:55:25.677102 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.677551 kubelet[2491]: E1213 01:55:25.677136 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.678453 kubelet[2491]: E1213 01:55:25.678141 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.678453 kubelet[2491]: W1213 01:55:25.678188 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.678453 kubelet[2491]: E1213 01:55:25.678224 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.678911 kubelet[2491]: E1213 01:55:25.678876 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.679288 kubelet[2491]: W1213 01:55:25.679020 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.679288 kubelet[2491]: E1213 01:55:25.679063 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:55:25.680131 kubelet[2491]: E1213 01:55:25.680078 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.680581 kubelet[2491]: W1213 01:55:25.680315 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.680581 kubelet[2491]: E1213 01:55:25.680368 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.681450 kubelet[2491]: E1213 01:55:25.681143 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.681450 kubelet[2491]: W1213 01:55:25.681179 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.681450 kubelet[2491]: E1213 01:55:25.681216 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.682539 kubelet[2491]: E1213 01:55:25.682277 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.682539 kubelet[2491]: W1213 01:55:25.682311 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.682539 kubelet[2491]: E1213 01:55:25.682344 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.683514 kubelet[2491]: E1213 01:55:25.683101 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.683514 kubelet[2491]: W1213 01:55:25.683160 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.683514 kubelet[2491]: E1213 01:55:25.683208 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.684510 kubelet[2491]: E1213 01:55:25.684238 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.684510 kubelet[2491]: W1213 01:55:25.684281 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.684510 kubelet[2491]: E1213 01:55:25.684314 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:55:25.685505 kubelet[2491]: E1213 01:55:25.685165 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.685505 kubelet[2491]: W1213 01:55:25.685212 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.685505 kubelet[2491]: E1213 01:55:25.685253 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.686661 kubelet[2491]: E1213 01:55:25.686291 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.686661 kubelet[2491]: W1213 01:55:25.686327 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.686661 kubelet[2491]: E1213 01:55:25.686361 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.687960 kubelet[2491]: E1213 01:55:25.687197 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.687960 kubelet[2491]: W1213 01:55:25.687263 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.687960 kubelet[2491]: E1213 01:55:25.687318 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.688646 kubelet[2491]: E1213 01:55:25.688363 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.688646 kubelet[2491]: W1213 01:55:25.688397 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.688646 kubelet[2491]: E1213 01:55:25.688429 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.689151 kubelet[2491]: E1213 01:55:25.689107 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.689355 kubelet[2491]: W1213 01:55:25.689285 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.689991 kubelet[2491]: E1213 01:55:25.689663 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:55:25.690470 kubelet[2491]: E1213 01:55:25.690434 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.690945 kubelet[2491]: W1213 01:55:25.690597 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.690945 kubelet[2491]: E1213 01:55:25.690638 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.691734 kubelet[2491]: E1213 01:55:25.691514 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.691734 kubelet[2491]: W1213 01:55:25.691562 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.691734 kubelet[2491]: E1213 01:55:25.691603 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.706354 kubelet[2491]: E1213 01:55:25.706094 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.706354 kubelet[2491]: W1213 01:55:25.706131 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.706354 kubelet[2491]: E1213 01:55:25.706164 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.707241 kubelet[2491]: E1213 01:55:25.707190 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.707539 kubelet[2491]: W1213 01:55:25.707239 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.707539 kubelet[2491]: E1213 01:55:25.707293 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.707996 kubelet[2491]: E1213 01:55:25.707962 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.707996 kubelet[2491]: W1213 01:55:25.707993 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.708183 kubelet[2491]: E1213 01:55:25.708040 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:55:25.709166 kubelet[2491]: E1213 01:55:25.709098 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.709166 kubelet[2491]: W1213 01:55:25.709150 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.709755 kubelet[2491]: E1213 01:55:25.709250 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.709755 kubelet[2491]: E1213 01:55:25.709574 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.709755 kubelet[2491]: W1213 01:55:25.709595 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.709755 kubelet[2491]: E1213 01:55:25.709634 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.710360 kubelet[2491]: E1213 01:55:25.710108 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.710360 kubelet[2491]: W1213 01:55:25.710136 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.710360 kubelet[2491]: E1213 01:55:25.710179 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.711048 kubelet[2491]: E1213 01:55:25.710587 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.711048 kubelet[2491]: W1213 01:55:25.710618 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.711048 kubelet[2491]: E1213 01:55:25.710666 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.711656 kubelet[2491]: E1213 01:55:25.711229 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.711656 kubelet[2491]: W1213 01:55:25.711260 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.711656 kubelet[2491]: E1213 01:55:25.711406 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:55:25.711656 kubelet[2491]: E1213 01:55:25.711613 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.711656 kubelet[2491]: W1213 01:55:25.711631 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.711656 kubelet[2491]: E1213 01:55:25.711653 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.713172 kubelet[2491]: E1213 01:55:25.712091 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.713172 kubelet[2491]: W1213 01:55:25.712124 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.713172 kubelet[2491]: E1213 01:55:25.712150 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.713172 kubelet[2491]: E1213 01:55:25.712996 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.713172 kubelet[2491]: W1213 01:55:25.713030 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.713172 kubelet[2491]: E1213 01:55:25.713062 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:25.714147 kubelet[2491]: E1213 01:55:25.714113 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:25.714379 kubelet[2491]: W1213 01:55:25.714273 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:25.714379 kubelet[2491]: E1213 01:55:25.714316 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:26.355864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount759389892.mount: Deactivated successfully. 
Dec 13 01:55:26.404129 kubelet[2491]: E1213 01:55:26.404077 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:26.502770 containerd[2035]: time="2024-12-13T01:55:26.501903842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:26.504365 containerd[2035]: time="2024-12-13T01:55:26.504236639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Dec 13 01:55:26.506591 containerd[2035]: time="2024-12-13T01:55:26.506471889Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:26.512424 containerd[2035]: time="2024-12-13T01:55:26.512249315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:26.515764 containerd[2035]: time="2024-12-13T01:55:26.515412181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.41978943s" Dec 13 01:55:26.515764 containerd[2035]: time="2024-12-13T01:55:26.515513350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 01:55:26.525016 containerd[2035]: time="2024-12-13T01:55:26.524639633Z" level=info msg="CreateContainer within sandbox \"f8378acb4982121498225a256b4f704741d2d06c7662df267a1cd37d8503586f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:55:26.560350 containerd[2035]: time="2024-12-13T01:55:26.560251834Z" level=info msg="CreateContainer within sandbox \"f8378acb4982121498225a256b4f704741d2d06c7662df267a1cd37d8503586f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f9cf38efceb5d5959f69ae8060dd8536a03865d8f2705da44c35f462fc525a05\"" Dec 13 01:55:26.561737 containerd[2035]: time="2024-12-13T01:55:26.561642802Z" level=info msg="StartContainer for \"f9cf38efceb5d5959f69ae8060dd8536a03865d8f2705da44c35f462fc525a05\"" Dec 13 01:55:26.640074 systemd[1]: Started cri-containerd-f9cf38efceb5d5959f69ae8060dd8536a03865d8f2705da44c35f462fc525a05.scope - libcontainer container f9cf38efceb5d5959f69ae8060dd8536a03865d8f2705da44c35f462fc525a05. 
Dec 13 01:55:26.689362 containerd[2035]: time="2024-12-13T01:55:26.689185511Z" level=info msg="StartContainer for \"f9cf38efceb5d5959f69ae8060dd8536a03865d8f2705da44c35f462fc525a05\" returns successfully" Dec 13 01:55:26.699001 kubelet[2491]: E1213 01:55:26.698613 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.699001 kubelet[2491]: W1213 01:55:26.698773 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.699001 kubelet[2491]: E1213 01:55:26.698813 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:26.700632 kubelet[2491]: E1213 01:55:26.700350 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.700632 kubelet[2491]: W1213 01:55:26.700385 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.700632 kubelet[2491]: E1213 01:55:26.700419 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:26.701528 kubelet[2491]: E1213 01:55:26.700826 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.701528 kubelet[2491]: W1213 01:55:26.700846 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.701528 kubelet[2491]: E1213 01:55:26.700869 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:26.702155 kubelet[2491]: E1213 01:55:26.702101 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.702554 kubelet[2491]: W1213 01:55:26.702456 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.702554 kubelet[2491]: E1213 01:55:26.702521 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:55:26.703630 kubelet[2491]: E1213 01:55:26.703290 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.703630 kubelet[2491]: W1213 01:55:26.703334 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.703630 kubelet[2491]: E1213 01:55:26.703363 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:26.704309 kubelet[2491]: E1213 01:55:26.704018 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.704309 kubelet[2491]: W1213 01:55:26.704056 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.704309 kubelet[2491]: E1213 01:55:26.704104 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:26.705094 kubelet[2491]: E1213 01:55:26.704886 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.705094 kubelet[2491]: W1213 01:55:26.704942 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.705094 kubelet[2491]: E1213 01:55:26.704977 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:26.705957 kubelet[2491]: E1213 01:55:26.705659 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.705957 kubelet[2491]: W1213 01:55:26.705705 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.705957 kubelet[2491]: E1213 01:55:26.705785 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:26.706625 kubelet[2491]: E1213 01:55:26.706425 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.706625 kubelet[2491]: W1213 01:55:26.706453 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.706625 kubelet[2491]: E1213 01:55:26.706477 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:55:26.707314 kubelet[2491]: E1213 01:55:26.707038 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.707314 kubelet[2491]: W1213 01:55:26.707058 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.707314 kubelet[2491]: E1213 01:55:26.707082 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:26.707743 kubelet[2491]: E1213 01:55:26.707665 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.708081 kubelet[2491]: W1213 01:55:26.707868 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.708081 kubelet[2491]: E1213 01:55:26.707903 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:26.708362 kubelet[2491]: E1213 01:55:26.708339 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.708470 kubelet[2491]: W1213 01:55:26.708443 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.708576 kubelet[2491]: E1213 01:55:26.708553 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:26.709201 kubelet[2491]: E1213 01:55:26.709016 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.709201 kubelet[2491]: W1213 01:55:26.709039 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.709201 kubelet[2491]: E1213 01:55:26.709061 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:26.709796 kubelet[2491]: E1213 01:55:26.709551 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.709796 kubelet[2491]: W1213 01:55:26.709578 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.709796 kubelet[2491]: E1213 01:55:26.709602 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:55:26.710244 kubelet[2491]: E1213 01:55:26.710219 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.710555 kubelet[2491]: W1213 01:55:26.710352 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.710555 kubelet[2491]: E1213 01:55:26.710386 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:26.710846 kubelet[2491]: E1213 01:55:26.710823 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:26.710950 kubelet[2491]: W1213 01:55:26.710926 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:26.711071 kubelet[2491]: E1213 01:55:26.711046 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:26.712838 systemd[1]: cri-containerd-f9cf38efceb5d5959f69ae8060dd8536a03865d8f2705da44c35f462fc525a05.scope: Deactivated successfully. Dec 13 01:55:27.181129 containerd[2035]: time="2024-12-13T01:55:27.181031674Z" level=info msg="shim disconnected" id=f9cf38efceb5d5959f69ae8060dd8536a03865d8f2705da44c35f462fc525a05 namespace=k8s.io Dec 13 01:55:27.181129 containerd[2035]: time="2024-12-13T01:55:27.181117132Z" level=warning msg="cleaning up after shim disconnected" id=f9cf38efceb5d5959f69ae8060dd8536a03865d8f2705da44c35f462fc525a05 namespace=k8s.io Dec 13 01:55:27.181988 containerd[2035]: time="2024-12-13T01:55:27.181140364Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:55:27.301321 systemd[1]: run-containerd-runc-k8s.io-f9cf38efceb5d5959f69ae8060dd8536a03865d8f2705da44c35f462fc525a05-runc.aOOpvf.mount: Deactivated successfully. Dec 13 01:55:27.301514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9cf38efceb5d5959f69ae8060dd8536a03865d8f2705da44c35f462fc525a05-rootfs.mount: Deactivated successfully. 
Dec 13 01:55:27.405264 kubelet[2491]: E1213 01:55:27.405205 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:27.581643 kubelet[2491]: E1213 01:55:27.581053 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkk48" podUID="029015f5-5b9b-4497-b440-744f1cbc3e91" Dec 13 01:55:27.636846 containerd[2035]: time="2024-12-13T01:55:27.636787760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:55:28.405940 kubelet[2491]: E1213 01:55:28.405870 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:29.406591 kubelet[2491]: E1213 01:55:29.406513 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:29.583302 kubelet[2491]: E1213 01:55:29.583200 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkk48" podUID="029015f5-5b9b-4497-b440-744f1cbc3e91" Dec 13 01:55:30.408396 kubelet[2491]: E1213 01:55:30.407476 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:31.408207 kubelet[2491]: E1213 01:55:31.408129 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:31.581660 kubelet[2491]: E1213 01:55:31.581581 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkk48" podUID="029015f5-5b9b-4497-b440-744f1cbc3e91" Dec 13 01:55:31.635028 containerd[2035]: time="2024-12-13T01:55:31.634927450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:31.636925 containerd[2035]: time="2024-12-13T01:55:31.636823473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 01:55:31.638932 containerd[2035]: time="2024-12-13T01:55:31.638819731Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:31.645808 containerd[2035]: time="2024-12-13T01:55:31.645721282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:31.648728 containerd[2035]: time="2024-12-13T01:55:31.648077000Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size 
\"91072777\" in 4.010595891s" Dec 13 01:55:31.648728 containerd[2035]: time="2024-12-13T01:55:31.648169881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 01:55:31.653088 containerd[2035]: time="2024-12-13T01:55:31.653028603Z" level=info msg="CreateContainer within sandbox \"f8378acb4982121498225a256b4f704741d2d06c7662df267a1cd37d8503586f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:55:31.688348 containerd[2035]: time="2024-12-13T01:55:31.687902849Z" level=info msg="CreateContainer within sandbox \"f8378acb4982121498225a256b4f704741d2d06c7662df267a1cd37d8503586f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a24d31a97d98a4bb6891cdd9cb65de1fa00599735dabbe427c70839b3c95d577\"" Dec 13 01:55:31.690733 containerd[2035]: time="2024-12-13T01:55:31.690368432Z" level=info msg="StartContainer for \"a24d31a97d98a4bb6891cdd9cb65de1fa00599735dabbe427c70839b3c95d577\"" Dec 13 01:55:31.770046 systemd[1]: Started cri-containerd-a24d31a97d98a4bb6891cdd9cb65de1fa00599735dabbe427c70839b3c95d577.scope - libcontainer container a24d31a97d98a4bb6891cdd9cb65de1fa00599735dabbe427c70839b3c95d577. Dec 13 01:55:31.826785 containerd[2035]: time="2024-12-13T01:55:31.826301351Z" level=info msg="StartContainer for \"a24d31a97d98a4bb6891cdd9cb65de1fa00599735dabbe427c70839b3c95d577\" returns successfully" Dec 13 01:55:32.408720 kubelet[2491]: E1213 01:55:32.408322 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:33.409708 kubelet[2491]: E1213 01:55:33.409576 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:33.455755 containerd[2035]: time="2024-12-13T01:55:33.455380697Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:55:33.459191 systemd[1]: cri-containerd-a24d31a97d98a4bb6891cdd9cb65de1fa00599735dabbe427c70839b3c95d577.scope: Deactivated successfully. Dec 13 01:55:33.460035 systemd[1]: cri-containerd-a24d31a97d98a4bb6891cdd9cb65de1fa00599735dabbe427c70839b3c95d577.scope: Consumed 1.009s CPU time. Dec 13 01:55:33.491783 kubelet[2491]: I1213 01:55:33.491723 2491 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 01:55:33.504325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a24d31a97d98a4bb6891cdd9cb65de1fa00599735dabbe427c70839b3c95d577-rootfs.mount: Deactivated successfully. Dec 13 01:55:33.596818 systemd[1]: Created slice kubepods-besteffort-pod029015f5_5b9b_4497_b440_744f1cbc3e91.slice - libcontainer container kubepods-besteffort-pod029015f5_5b9b_4497_b440_744f1cbc3e91.slice. 
Dec 13 01:55:33.603295 containerd[2035]: time="2024-12-13T01:55:33.602997057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkk48,Uid:029015f5-5b9b-4497-b440-744f1cbc3e91,Namespace:calico-system,Attempt:0,}" Dec 13 01:55:33.900484 containerd[2035]: time="2024-12-13T01:55:33.900190326Z" level=error msg="Failed to destroy network for sandbox \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:33.904235 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd-shm.mount: Deactivated successfully. Dec 13 01:55:33.907605 containerd[2035]: time="2024-12-13T01:55:33.905722079Z" level=error msg="encountered an error cleaning up failed sandbox \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:33.907605 containerd[2035]: time="2024-12-13T01:55:33.906130583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkk48,Uid:029015f5-5b9b-4497-b440-744f1cbc3e91,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:33.908946 kubelet[2491]: E1213 01:55:33.908301 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:33.908946 kubelet[2491]: E1213 01:55:33.908452 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkk48" Dec 13 01:55:33.908946 kubelet[2491]: E1213 01:55:33.908496 2491 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkk48" Dec 13 01:55:33.909358 kubelet[2491]: E1213 01:55:33.908574 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tkk48_calico-system(029015f5-5b9b-4497-b440-744f1cbc3e91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-tkk48_calico-system(029015f5-5b9b-4497-b440-744f1cbc3e91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tkk48" podUID="029015f5-5b9b-4497-b440-744f1cbc3e91" Dec 13 01:55:33.977060 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 01:55:34.366530 containerd[2035]: time="2024-12-13T01:55:34.366355812Z" level=info msg="shim disconnected" id=a24d31a97d98a4bb6891cdd9cb65de1fa00599735dabbe427c70839b3c95d577 namespace=k8s.io Dec 13 01:55:34.366530 containerd[2035]: time="2024-12-13T01:55:34.366431986Z" level=warning msg="cleaning up after shim disconnected" id=a24d31a97d98a4bb6891cdd9cb65de1fa00599735dabbe427c70839b3c95d577 namespace=k8s.io Dec 13 01:55:34.366530 containerd[2035]: time="2024-12-13T01:55:34.366454283Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:55:34.410464 kubelet[2491]: E1213 01:55:34.410378 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:34.664450 containerd[2035]: time="2024-12-13T01:55:34.664343337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:55:34.665238 kubelet[2491]: I1213 01:55:34.664994 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Dec 13 01:55:34.669774 containerd[2035]: time="2024-12-13T01:55:34.666911552Z" level=info msg="StopPodSandbox for \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\"" Dec 13 01:55:34.669774 containerd[2035]: time="2024-12-13T01:55:34.667641111Z" level=info msg="Ensure that sandbox 30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd in task-service has been cleanup successfully" Dec 13 01:55:34.726207 containerd[2035]: time="2024-12-13T01:55:34.726112521Z" level=error msg="StopPodSandbox for \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\" failed" error="failed to destroy network for sandbox \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:34.726641 kubelet[2491]: E1213 01:55:34.726554 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Dec 13 01:55:34.726793 kubelet[2491]: E1213 01:55:34.726669 2491 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd"} Dec 13 01:55:34.726866 kubelet[2491]: E1213 01:55:34.726845 2491 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"029015f5-5b9b-4497-b440-744f1cbc3e91\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:34.726985 kubelet[2491]: E1213 01:55:34.726918 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"029015f5-5b9b-4497-b440-744f1cbc3e91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tkk48" podUID="029015f5-5b9b-4497-b440-744f1cbc3e91" Dec 13 01:55:35.410638 kubelet[2491]: E1213 01:55:35.410554 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:36.411709 kubelet[2491]: E1213 01:55:36.411585 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:37.412053 kubelet[2491]: E1213 01:55:37.411939 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:38.328933 kubelet[2491]: W1213 01:55:38.328850 2491 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172.31.24.71" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '172.31.24.71' and this object Dec 13 01:55:38.328933 kubelet[2491]: E1213 01:55:38.328924 2491 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172.31.24.71\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node '172.31.24.71' and this object" logger="UnhandledError" Dec 13 01:55:38.340497 systemd[1]: Created slice kubepods-besteffort-pod69a3915f_9873_4ee1_80f3_587390904df5.slice - libcontainer container kubepods-besteffort-pod69a3915f_9873_4ee1_80f3_587390904df5.slice. 
Dec 13 01:55:38.412320 kubelet[2491]: E1213 01:55:38.412172 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:38.498731 kubelet[2491]: I1213 01:55:38.498480 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k47x\" (UniqueName: \"kubernetes.io/projected/69a3915f-9873-4ee1-80f3-587390904df5-kube-api-access-8k47x\") pod \"nginx-deployment-8587fbcb89-pgvfp\" (UID: \"69a3915f-9873-4ee1-80f3-587390904df5\") " pod="default/nginx-deployment-8587fbcb89-pgvfp" Dec 13 01:55:39.401631 kubelet[2491]: E1213 01:55:39.401469 2491 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:39.412851 kubelet[2491]: E1213 01:55:39.412576 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:39.556071 containerd[2035]: time="2024-12-13T01:55:39.555464636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pgvfp,Uid:69a3915f-9873-4ee1-80f3-587390904df5,Namespace:default,Attempt:0,}" Dec 13 01:55:39.823302 containerd[2035]: time="2024-12-13T01:55:39.822796072Z" level=error msg="Failed to destroy network for sandbox \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:39.827993 containerd[2035]: time="2024-12-13T01:55:39.825454182Z" level=error msg="encountered an error cleaning up failed sandbox \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:39.828367 containerd[2035]: time="2024-12-13T01:55:39.828302241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pgvfp,Uid:69a3915f-9873-4ee1-80f3-587390904df5,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:39.828470 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94-shm.mount: Deactivated successfully. 
Dec 13 01:55:39.830050 kubelet[2491]: E1213 01:55:39.829951 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:39.830211 kubelet[2491]: E1213 01:55:39.830047 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pgvfp" Dec 13 01:55:39.830211 kubelet[2491]: E1213 01:55:39.830084 2491 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pgvfp" Dec 13 01:55:39.830211 kubelet[2491]: E1213 01:55:39.830147 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-pgvfp_default(69a3915f-9873-4ee1-80f3-587390904df5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-pgvfp_default(69a3915f-9873-4ee1-80f3-587390904df5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-pgvfp" podUID="69a3915f-9873-4ee1-80f3-587390904df5" Dec 13 01:55:40.413458 kubelet[2491]: E1213 01:55:40.412959 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:40.689135 kubelet[2491]: I1213 01:55:40.688460 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Dec 13 01:55:40.690180 containerd[2035]: time="2024-12-13T01:55:40.690122646Z" level=info msg="StopPodSandbox for \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\"" Dec 13 01:55:40.691997 containerd[2035]: time="2024-12-13T01:55:40.691516901Z" level=info msg="Ensure that sandbox f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94 in task-service has been cleanup successfully" Dec 13 01:55:40.786986 containerd[2035]: time="2024-12-13T01:55:40.786643246Z" level=error msg="StopPodSandbox for \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\" failed" error="failed to destroy network for sandbox \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
01:55:40.787827 kubelet[2491]: E1213 01:55:40.787297 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Dec 13 01:55:40.787827 kubelet[2491]: E1213 01:55:40.787645 2491 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94"} Dec 13 01:55:40.787827 kubelet[2491]: E1213 01:55:40.787755 2491 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69a3915f-9873-4ee1-80f3-587390904df5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:40.787827 kubelet[2491]: E1213 01:55:40.787805 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69a3915f-9873-4ee1-80f3-587390904df5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-pgvfp" podUID="69a3915f-9873-4ee1-80f3-587390904df5" Dec 13 01:55:41.414321 kubelet[2491]: E1213 01:55:41.414210 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:41.756631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558982931.mount: Deactivated successfully. 
Dec 13 01:55:41.843354 containerd[2035]: time="2024-12-13T01:55:41.843278823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:41.845878 containerd[2035]: time="2024-12-13T01:55:41.845656837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 01:55:41.847166 containerd[2035]: time="2024-12-13T01:55:41.847018216Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:41.854282 containerd[2035]: time="2024-12-13T01:55:41.854126137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:41.856297 containerd[2035]: time="2024-12-13T01:55:41.855946598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 7.191513126s" Dec 13 01:55:41.856297 containerd[2035]: time="2024-12-13T01:55:41.856060601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 01:55:41.883634 containerd[2035]: time="2024-12-13T01:55:41.882074160Z" level=info msg="CreateContainer within sandbox \"f8378acb4982121498225a256b4f704741d2d06c7662df267a1cd37d8503586f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:55:41.916035 containerd[2035]: time="2024-12-13T01:55:41.915936088Z" level=info msg="CreateContainer within sandbox \"f8378acb4982121498225a256b4f704741d2d06c7662df267a1cd37d8503586f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"27072da0eea3d5abf6729e2e6872cd715d95a8d9ddb465def6d6fd41bb97e58f\"" Dec 13 01:55:41.917054 containerd[2035]: time="2024-12-13T01:55:41.916934241Z" level=info msg="StartContainer for \"27072da0eea3d5abf6729e2e6872cd715d95a8d9ddb465def6d6fd41bb97e58f\"" Dec 13 01:55:41.970087 systemd[1]: Started cri-containerd-27072da0eea3d5abf6729e2e6872cd715d95a8d9ddb465def6d6fd41bb97e58f.scope - libcontainer container 27072da0eea3d5abf6729e2e6872cd715d95a8d9ddb465def6d6fd41bb97e58f. Dec 13 01:55:42.033190 containerd[2035]: time="2024-12-13T01:55:42.033013817Z" level=info msg="StartContainer for \"27072da0eea3d5abf6729e2e6872cd715d95a8d9ddb465def6d6fd41bb97e58f\" returns successfully" Dec 13 01:55:42.155778 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:55:42.156549 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 01:55:42.415437 kubelet[2491]: E1213 01:55:42.415350 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:43.415717 kubelet[2491]: E1213 01:55:43.415583 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:44.085719 kernel: bpftool[3285]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:55:44.402249 (udev-worker)[3140]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:44.409458 systemd-networkd[1870]: vxlan.calico: Link UP Dec 13 01:55:44.409477 systemd-networkd[1870]: vxlan.calico: Gained carrier Dec 13 01:55:44.416267 kubelet[2491]: E1213 01:55:44.416165 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:44.452409 (udev-worker)[3141]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:45.417470 kubelet[2491]: E1213 01:55:45.417394 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:46.131222 systemd-networkd[1870]: vxlan.calico: Gained IPv6LL Dec 13 01:55:46.418051 kubelet[2491]: E1213 01:55:46.417866 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:46.581973 containerd[2035]: time="2024-12-13T01:55:46.581287524Z" level=info msg="StopPodSandbox for \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\"" Dec 13 01:55:46.681192 kubelet[2491]: I1213 01:55:46.680916 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lhrst" podStartSLOduration=8.727661968 podStartE2EDuration="27.680888878s" podCreationTimestamp="2024-12-13 01:55:19 +0000 UTC" firstStartedPulling="2024-12-13 01:55:22.905269708 +0000 UTC m=+5.305491044" lastFinishedPulling="2024-12-13 01:55:41.858496618 +0000 UTC m=+24.258717954" observedRunningTime="2024-12-13 01:55:42.725354995 +0000 UTC m=+25.125576355" watchObservedRunningTime="2024-12-13 01:55:46.680888878 +0000 UTC m=+29.081110226" Dec 13 01:55:46.768317 containerd[2035]: 2024-12-13 01:55:46.680 [INFO][3369] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Dec 13 01:55:46.768317 containerd[2035]: 2024-12-13 01:55:46.682 [INFO][3369] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" iface="eth0" netns="/var/run/netns/cni-00f8991c-737a-3f07-a7de-343e99238940" Dec 13 01:55:46.768317 containerd[2035]: 2024-12-13 01:55:46.683 [INFO][3369] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" iface="eth0" netns="/var/run/netns/cni-00f8991c-737a-3f07-a7de-343e99238940" Dec 13 01:55:46.768317 containerd[2035]: 2024-12-13 01:55:46.684 [INFO][3369] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" iface="eth0" netns="/var/run/netns/cni-00f8991c-737a-3f07-a7de-343e99238940" Dec 13 01:55:46.768317 containerd[2035]: 2024-12-13 01:55:46.684 [INFO][3369] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Dec 13 01:55:46.768317 containerd[2035]: 2024-12-13 01:55:46.684 [INFO][3369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Dec 13 01:55:46.768317 containerd[2035]: 2024-12-13 01:55:46.743 [INFO][3375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" HandleID="k8s-pod-network.30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Workload="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:55:46.768317 containerd[2035]: 2024-12-13 01:55:46.743 [INFO][3375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:46.768317 containerd[2035]: 2024-12-13 01:55:46.743 [INFO][3375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:46.768317 containerd[2035]: 2024-12-13 01:55:46.756 [WARNING][3375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" HandleID="k8s-pod-network.30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Workload="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:55:46.768317 containerd[2035]: 2024-12-13 01:55:46.757 [INFO][3375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" HandleID="k8s-pod-network.30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Workload="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:55:46.768317 containerd[2035]: 2024-12-13 01:55:46.759 [INFO][3375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:46.768317 containerd[2035]: 2024-12-13 01:55:46.764 [INFO][3369] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Dec 13 01:55:46.769443 containerd[2035]: time="2024-12-13T01:55:46.768898056Z" level=info msg="TearDown network for sandbox \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\" successfully" Dec 13 01:55:46.769443 containerd[2035]: time="2024-12-13T01:55:46.769006146Z" level=info msg="StopPodSandbox for \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\" returns successfully" Dec 13 01:55:46.772905 systemd[1]: run-netns-cni\x2d00f8991c\x2d737a\x2d3f07\x2da7de\x2d343e99238940.mount: Deactivated successfully. Dec 13 01:55:46.775096 containerd[2035]: time="2024-12-13T01:55:46.774001431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkk48,Uid:029015f5-5b9b-4497-b440-744f1cbc3e91,Namespace:calico-system,Attempt:1,}" Dec 13 01:55:47.029185 systemd-networkd[1870]: cali1ea77be5e6a: Link UP Dec 13 01:55:47.030773 systemd-networkd[1870]: cali1ea77be5e6a: Gained carrier Dec 13 01:55:47.033404 (udev-worker)[3318]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.872 [INFO][3382] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.24.71-k8s-csi--node--driver--tkk48-eth0 csi-node-driver- calico-system 029015f5-5b9b-4497-b440-744f1cbc3e91 1047 0 2024-12-13 01:55:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.24.71 csi-node-driver-tkk48 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1ea77be5e6a [] []}} ContainerID="db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" Namespace="calico-system" Pod="csi-node-driver-tkk48" WorkloadEndpoint="172.31.24.71-k8s-csi--node--driver--tkk48-" Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.873 [INFO][3382] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" Namespace="calico-system" Pod="csi-node-driver-tkk48" WorkloadEndpoint="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.927 [INFO][3392] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" HandleID="k8s-pod-network.db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" Workload="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.947 [INFO][3392] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" HandleID="k8s-pod-network.db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" Workload="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001f80d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.24.71", "pod":"csi-node-driver-tkk48", "timestamp":"2024-12-13 01:55:46.92696492 +0000 UTC"}, Hostname:"172.31.24.71", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.948 [INFO][3392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.948 [INFO][3392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.948 [INFO][3392] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.24.71' Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.952 [INFO][3392] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" host="172.31.24.71" Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.959 [INFO][3392] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.24.71" Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.968 [INFO][3392] ipam/ipam.go 489: Trying affinity for 192.168.114.192/26 host="172.31.24.71" Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.972 [INFO][3392] ipam/ipam.go 155: Attempting to load block cidr=192.168.114.192/26 host="172.31.24.71" Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.976 [INFO][3392] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="172.31.24.71" Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.977 [INFO][3392] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" host="172.31.24.71" Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.980 [INFO][3392] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929 Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:46.989 [INFO][3392] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" host="172.31.24.71" Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:47.016 [INFO][3392] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.114.193/26] block=192.168.114.192/26 handle="k8s-pod-network.db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" host="172.31.24.71" Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:47.016 [INFO][3392] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.114.193/26] handle="k8s-pod-network.db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" host="172.31.24.71" Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:47.017 [INFO][3392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:55:47.077174 containerd[2035]: 2024-12-13 01:55:47.017 [INFO][3392] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.193/26] IPv6=[] ContainerID="db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" HandleID="k8s-pod-network.db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" Workload="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:55:47.081629 containerd[2035]: 2024-12-13 01:55:47.020 [INFO][3382] cni-plugin/k8s.go 386: Populated endpoint ContainerID="db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" Namespace="calico-system" Pod="csi-node-driver-tkk48" WorkloadEndpoint="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.71-k8s-csi--node--driver--tkk48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"029015f5-5b9b-4497-b440-744f1cbc3e91", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.24.71", ContainerID:"", Pod:"csi-node-driver-tkk48", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1ea77be5e6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:47.081629 containerd[2035]: 2024-12-13 01:55:47.021 [INFO][3382] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.114.193/32] ContainerID="db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" Namespace="calico-system" Pod="csi-node-driver-tkk48" WorkloadEndpoint="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:55:47.081629 containerd[2035]: 2024-12-13 01:55:47.021 [INFO][3382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ea77be5e6a ContainerID="db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" Namespace="calico-system" Pod="csi-node-driver-tkk48" WorkloadEndpoint="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:55:47.081629 containerd[2035]: 2024-12-13 01:55:47.030 [INFO][3382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" Namespace="calico-system" Pod="csi-node-driver-tkk48" WorkloadEndpoint="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:55:47.081629 containerd[2035]: 2024-12-13 01:55:47.033 [INFO][3382] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" Namespace="calico-system" Pod="csi-node-driver-tkk48" 
WorkloadEndpoint="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.71-k8s-csi--node--driver--tkk48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"029015f5-5b9b-4497-b440-744f1cbc3e91", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.24.71", ContainerID:"db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929", Pod:"csi-node-driver-tkk48", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1ea77be5e6a", MAC:"b6:c1:36:12:23:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:47.081629 containerd[2035]: 2024-12-13 01:55:47.074 [INFO][3382] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929" Namespace="calico-system" Pod="csi-node-driver-tkk48" WorkloadEndpoint="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:55:47.125113 containerd[2035]: time="2024-12-13T01:55:47.124807773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:47.125113 containerd[2035]: time="2024-12-13T01:55:47.124937740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:47.126137 containerd[2035]: time="2024-12-13T01:55:47.125024313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:47.127803 containerd[2035]: time="2024-12-13T01:55:47.127459862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:47.179566 systemd[1]: Started cri-containerd-db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929.scope - libcontainer container db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929. 
Dec 13 01:55:47.220433 containerd[2035]: time="2024-12-13T01:55:47.220329453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkk48,Uid:029015f5-5b9b-4497-b440-744f1cbc3e91,Namespace:calico-system,Attempt:1,} returns sandbox id \"db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929\"" Dec 13 01:55:47.223319 containerd[2035]: time="2024-12-13T01:55:47.223012607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:55:47.418701 kubelet[2491]: E1213 01:55:47.418589 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:48.371578 systemd-networkd[1870]: cali1ea77be5e6a: Gained IPv6LL Dec 13 01:55:48.419533 kubelet[2491]: E1213 01:55:48.419413 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:48.455087 update_engine[2016]: I20241213 01:55:48.453791 2016 update_attempter.cc:509] Updating boot flags... Dec 13 01:55:48.570835 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3141) Dec 13 01:55:48.575744 containerd[2035]: time="2024-12-13T01:55:48.575104446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:48.578935 containerd[2035]: time="2024-12-13T01:55:48.577852332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 01:55:48.583665 containerd[2035]: time="2024-12-13T01:55:48.583204978Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:48.594437 containerd[2035]: time="2024-12-13T01:55:48.594362517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:48.599063 containerd[2035]: time="2024-12-13T01:55:48.598360980Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.375284278s" Dec 13 01:55:48.599063 containerd[2035]: time="2024-12-13T01:55:48.598552812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 01:55:48.606098 containerd[2035]: time="2024-12-13T01:55:48.605428997Z" level=info msg="CreateContainer within sandbox \"db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:55:48.650627 containerd[2035]: time="2024-12-13T01:55:48.649116470Z" level=info msg="CreateContainer within sandbox \"db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"acf9f4363d8abeef1e6d0e5aaa31498a7bdb7aafda08897484caf011eea121bd\"" Dec 13 01:55:48.652934 containerd[2035]: time="2024-12-13T01:55:48.651879984Z" level=info msg="StartContainer for 
\"acf9f4363d8abeef1e6d0e5aaa31498a7bdb7aafda08897484caf011eea121bd\"" Dec 13 01:55:48.743097 systemd[1]: Started cri-containerd-acf9f4363d8abeef1e6d0e5aaa31498a7bdb7aafda08897484caf011eea121bd.scope - libcontainer container acf9f4363d8abeef1e6d0e5aaa31498a7bdb7aafda08897484caf011eea121bd. Dec 13 01:55:48.926799 containerd[2035]: time="2024-12-13T01:55:48.925524258Z" level=info msg="StartContainer for \"acf9f4363d8abeef1e6d0e5aaa31498a7bdb7aafda08897484caf011eea121bd\" returns successfully" Dec 13 01:55:48.933400 containerd[2035]: time="2024-12-13T01:55:48.933037575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:55:49.068920 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3141) Dec 13 01:55:49.420721 kubelet[2491]: E1213 01:55:49.419609 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:50.353763 containerd[2035]: time="2024-12-13T01:55:50.353323953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:50.355764 containerd[2035]: time="2024-12-13T01:55:50.355652792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 01:55:50.358055 containerd[2035]: time="2024-12-13T01:55:50.357958123Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:50.363640 containerd[2035]: time="2024-12-13T01:55:50.362874715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:50.364819 containerd[2035]: time="2024-12-13T01:55:50.364728316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.431606447s" Dec 13 01:55:50.364819 containerd[2035]: time="2024-12-13T01:55:50.364803183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 01:55:50.370021 containerd[2035]: time="2024-12-13T01:55:50.369935523Z" level=info msg="CreateContainer within sandbox \"db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:55:50.402304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495554661.mount: Deactivated successfully. 
Dec 13 01:55:50.405255 containerd[2035]: time="2024-12-13T01:55:50.404326123Z" level=info msg="CreateContainer within sandbox \"db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1286634899e5446bed26a4d36ae052befe71aa42c53860782df3c3c68758820c\"" Dec 13 01:55:50.406504 containerd[2035]: time="2024-12-13T01:55:50.406266105Z" level=info msg="StartContainer for \"1286634899e5446bed26a4d36ae052befe71aa42c53860782df3c3c68758820c\"" Dec 13 01:55:50.421594 kubelet[2491]: E1213 01:55:50.419860 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:50.476491 systemd[1]: run-containerd-runc-k8s.io-1286634899e5446bed26a4d36ae052befe71aa42c53860782df3c3c68758820c-runc.SPgaJu.mount: Deactivated successfully. Dec 13 01:55:50.493058 systemd[1]: Started cri-containerd-1286634899e5446bed26a4d36ae052befe71aa42c53860782df3c3c68758820c.scope - libcontainer container 1286634899e5446bed26a4d36ae052befe71aa42c53860782df3c3c68758820c. Dec 13 01:55:50.552935 containerd[2035]: time="2024-12-13T01:55:50.552425805Z" level=info msg="StartContainer for \"1286634899e5446bed26a4d36ae052befe71aa42c53860782df3c3c68758820c\" returns successfully" Dec 13 01:55:50.564523 ntpd[2010]: Listen normally on 7 vxlan.calico 192.168.114.192:123 Dec 13 01:55:50.564984 ntpd[2010]: Listen normally on 8 vxlan.calico [fe80::643a:e0ff:fec2:5cc6%3]:123 Dec 13 01:55:50.565457 ntpd[2010]: 13 Dec 01:55:50 ntpd[2010]: Listen normally on 7 vxlan.calico 192.168.114.192:123 Dec 13 01:55:50.565457 ntpd[2010]: 13 Dec 01:55:50 ntpd[2010]: Listen normally on 8 vxlan.calico [fe80::643a:e0ff:fec2:5cc6%3]:123 Dec 13 01:55:50.565457 ntpd[2010]: 13 Dec 01:55:50 ntpd[2010]: Listen normally on 9 cali1ea77be5e6a [fe80::ecee:eeff:feee:eeee%6]:123 Dec 13 01:55:50.565108 ntpd[2010]: Listen normally on 9 cali1ea77be5e6a [fe80::ecee:eeff:feee:eeee%6]:123 Dec 13 01:55:50.616667 kubelet[2491]: I1213 01:55:50.615804 2491 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:55:50.616667 kubelet[2491]: I1213 01:55:50.615859 2491 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:55:50.802423 kubelet[2491]: I1213 01:55:50.802071 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tkk48" podStartSLOduration=28.657195371 podStartE2EDuration="31.802049683s" podCreationTimestamp="2024-12-13 01:55:19 +0000 UTC" firstStartedPulling="2024-12-13 01:55:47.222513128 +0000 UTC m=+29.622734464" lastFinishedPulling="2024-12-13 01:55:50.36736744 +0000 UTC m=+32.767588776" observedRunningTime="2024-12-13 01:55:50.800536964 +0000 UTC m=+33.200758336" watchObservedRunningTime="2024-12-13 01:55:50.802049683 +0000 UTC m=+33.202271043" Dec 13 01:55:50.931208 kubelet[2491]: I1213 01:55:50.931053 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:55:51.420525 kubelet[2491]: E1213 01:55:51.420447 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:52.420751 kubelet[2491]: E1213 01:55:52.420650 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Dec 13 01:55:52.581726 containerd[2035]: time="2024-12-13T01:55:52.581599205Z" level=info msg="StopPodSandbox for \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\"" Dec 13 01:55:52.735419 containerd[2035]: 2024-12-13 01:55:52.666 [INFO][3776] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Dec 13 01:55:52.735419 containerd[2035]: 2024-12-13 01:55:52.667 [INFO][3776] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" iface="eth0" netns="/var/run/netns/cni-c9f2c917-e693-0470-2e2d-9f0ed1d335e3" Dec 13 01:55:52.735419 containerd[2035]: 2024-12-13 01:55:52.667 [INFO][3776] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" iface="eth0" netns="/var/run/netns/cni-c9f2c917-e693-0470-2e2d-9f0ed1d335e3" Dec 13 01:55:52.735419 containerd[2035]: 2024-12-13 01:55:52.668 [INFO][3776] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" iface="eth0" netns="/var/run/netns/cni-c9f2c917-e693-0470-2e2d-9f0ed1d335e3" Dec 13 01:55:52.735419 containerd[2035]: 2024-12-13 01:55:52.668 [INFO][3776] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Dec 13 01:55:52.735419 containerd[2035]: 2024-12-13 01:55:52.668 [INFO][3776] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Dec 13 01:55:52.735419 containerd[2035]: 2024-12-13 01:55:52.706 [INFO][3782] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" HandleID="k8s-pod-network.f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Workload="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:55:52.735419 containerd[2035]: 2024-12-13 01:55:52.706 [INFO][3782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:52.735419 containerd[2035]: 2024-12-13 01:55:52.706 [INFO][3782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:52.735419 containerd[2035]: 2024-12-13 01:55:52.725 [WARNING][3782] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" HandleID="k8s-pod-network.f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Workload="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:55:52.735419 containerd[2035]: 2024-12-13 01:55:52.725 [INFO][3782] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" HandleID="k8s-pod-network.f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Workload="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:55:52.735419 containerd[2035]: 2024-12-13 01:55:52.729 [INFO][3782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:52.735419 containerd[2035]: 2024-12-13 01:55:52.732 [INFO][3776] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Dec 13 01:55:52.740222 containerd[2035]: time="2024-12-13T01:55:52.736932012Z" level=info msg="TearDown network for sandbox \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\" successfully" Dec 13 01:55:52.740222 containerd[2035]: time="2024-12-13T01:55:52.736986033Z" level=info msg="StopPodSandbox for \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\" returns successfully" Dec 13 01:55:52.740222 containerd[2035]: time="2024-12-13T01:55:52.738003472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pgvfp,Uid:69a3915f-9873-4ee1-80f3-587390904df5,Namespace:default,Attempt:1,}" Dec 13 01:55:52.741919 systemd[1]: run-netns-cni\x2dc9f2c917\x2de693\x2d0470\x2d2e2d\x2d9f0ed1d335e3.mount: Deactivated successfully. Dec 13 01:55:53.082446 systemd-networkd[1870]: cali24ab358b5f6: Link UP Dec 13 01:55:53.083068 systemd-networkd[1870]: cali24ab358b5f6: Gained carrier Dec 13 01:55:53.091835 (udev-worker)[3806]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:52.848 [INFO][3789] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0 nginx-deployment-8587fbcb89- default 69a3915f-9873-4ee1-80f3-587390904df5 1084 0 2024-12-13 01:55:38 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.24.71 nginx-deployment-8587fbcb89-pgvfp eth0 default [] [] [kns.default ksa.default.default] cali24ab358b5f6 [] []}} ContainerID="a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" Namespace="default" Pod="nginx-deployment-8587fbcb89-pgvfp" WorkloadEndpoint="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-" Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:52.849 [INFO][3789] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" Namespace="default" Pod="nginx-deployment-8587fbcb89-pgvfp" WorkloadEndpoint="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:52.911 [INFO][3799] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" HandleID="k8s-pod-network.a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" Workload="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:52.931 [INFO][3799] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" HandleID="k8s-pod-network.a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" Workload="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ea450), Attrs:map[string]string{"namespace":"default", "node":"172.31.24.71", "pod":"nginx-deployment-8587fbcb89-pgvfp", "timestamp":"2024-12-13 01:55:52.911128239 +0000 UTC"}, Hostname:"172.31.24.71", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:52.931 [INFO][3799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:52.931 [INFO][3799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:52.931 [INFO][3799] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.24.71' Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:52.935 [INFO][3799] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" host="172.31.24.71" Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:53.027 [INFO][3799] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.24.71" Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:53.036 [INFO][3799] ipam/ipam.go 489: Trying affinity for 192.168.114.192/26 host="172.31.24.71" Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:53.040 [INFO][3799] ipam/ipam.go 155: Attempting to load block cidr=192.168.114.192/26 host="172.31.24.71" Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:53.045 [INFO][3799] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="172.31.24.71" Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:53.045 [INFO][3799] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" host="172.31.24.71" Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:53.048 [INFO][3799] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:53.055 [INFO][3799] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" host="172.31.24.71" Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:53.068 [INFO][3799] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.114.194/26] block=192.168.114.192/26 handle="k8s-pod-network.a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" host="172.31.24.71" Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:53.068 [INFO][3799] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.114.194/26] handle="k8s-pod-network.a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" host="172.31.24.71" Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:53.068 [INFO][3799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:55:53.114085 containerd[2035]: 2024-12-13 01:55:53.068 [INFO][3799] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.194/26] IPv6=[] ContainerID="a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" HandleID="k8s-pod-network.a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" Workload="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:55:53.115366 containerd[2035]: 2024-12-13 01:55:53.072 [INFO][3789] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" Namespace="default" Pod="nginx-deployment-8587fbcb89-pgvfp" WorkloadEndpoint="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"69a3915f-9873-4ee1-80f3-587390904df5", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.24.71", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-pgvfp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.114.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali24ab358b5f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:53.115366 containerd[2035]: 2024-12-13 01:55:53.072 [INFO][3789] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.114.194/32] ContainerID="a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" Namespace="default" Pod="nginx-deployment-8587fbcb89-pgvfp" WorkloadEndpoint="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:55:53.115366 containerd[2035]: 2024-12-13 01:55:53.072 [INFO][3789] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24ab358b5f6 ContainerID="a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" Namespace="default" Pod="nginx-deployment-8587fbcb89-pgvfp" WorkloadEndpoint="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:55:53.115366 containerd[2035]: 2024-12-13 01:55:53.081 [INFO][3789] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" Namespace="default" Pod="nginx-deployment-8587fbcb89-pgvfp" WorkloadEndpoint="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:55:53.115366 containerd[2035]: 2024-12-13 01:55:53.085 [INFO][3789] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" Namespace="default" Pod="nginx-deployment-8587fbcb89-pgvfp" WorkloadEndpoint="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"69a3915f-9873-4ee1-80f3-587390904df5", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.24.71", ContainerID:"a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd", Pod:"nginx-deployment-8587fbcb89-pgvfp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.114.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali24ab358b5f6", MAC:"76:38:fc:d2:f1:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:53.115366 containerd[2035]: 2024-12-13 01:55:53.103 [INFO][3789] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd" Namespace="default" Pod="nginx-deployment-8587fbcb89-pgvfp" WorkloadEndpoint="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:55:53.166891 containerd[2035]: time="2024-12-13T01:55:53.165316530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:53.166891 containerd[2035]: time="2024-12-13T01:55:53.165412698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:53.166891 containerd[2035]: time="2024-12-13T01:55:53.165440416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:53.166891 containerd[2035]: time="2024-12-13T01:55:53.165598113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:53.217305 systemd[1]: Started cri-containerd-a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd.scope - libcontainer container a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd. 
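A minimal sketch (Python, not part of the captured log) of the IPAM arithmetic the Calico plugin reports above: the address assigned to nginx-deployment-8587fbcb89-pgvfp, 192.168.114.194, falls inside the /26 block 192.168.114.192/26 for which node 172.31.24.71 already holds an affinity.

```python
import ipaddress

# Values copied from the ipam/ipam.go entries above.
block = ipaddress.ip_network("192.168.114.192/26")
assigned = ipaddress.ip_address("192.168.114.194")

assert assigned in block          # the claimed IP belongs to the affine block
print(block.num_addresses)        # 64 addresses per /26 block
print(next(block.hosts()))        # 192.168.114.193, the first usable address in the block
```

The /32 recorded in the endpoint's IPNetworks field is the same address, just written as a single-host route.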
Dec 13 01:55:53.294741 containerd[2035]: time="2024-12-13T01:55:53.294608564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pgvfp,Uid:69a3915f-9873-4ee1-80f3-587390904df5,Namespace:default,Attempt:1,} returns sandbox id \"a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd\"" Dec 13 01:55:53.298836 containerd[2035]: time="2024-12-13T01:55:53.298296467Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:55:53.421036 kubelet[2491]: E1213 01:55:53.420960 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:54.195584 systemd-networkd[1870]: cali24ab358b5f6: Gained IPv6LL Dec 13 01:55:54.421782 kubelet[2491]: E1213 01:55:54.421733 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:55.423492 kubelet[2491]: E1213 01:55:55.423431 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:56.425061 kubelet[2491]: E1213 01:55:56.425009 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:56.564501 ntpd[2010]: Listen normally on 10 cali24ab358b5f6 [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:55:56.566460 ntpd[2010]: 13 Dec 01:55:56 ntpd[2010]: Listen normally on 10 cali24ab358b5f6 [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:55:56.905188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3666111051.mount: Deactivated successfully. Dec 13 01:55:57.427006 kubelet[2491]: E1213 01:55:57.426934 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:58.428022 kubelet[2491]: E1213 01:55:58.427710 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:58.548761 containerd[2035]: time="2024-12-13T01:55:58.548513329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:58.550792 containerd[2035]: time="2024-12-13T01:55:58.550718318Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67696939" Dec 13 01:55:58.552357 containerd[2035]: time="2024-12-13T01:55:58.552236806Z" level=info msg="ImageCreate event name:\"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:58.560273 containerd[2035]: time="2024-12-13T01:55:58.560150975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:58.562882 containerd[2035]: time="2024-12-13T01:55:58.562624750Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 5.264263348s" Dec 13 01:55:58.562882 containerd[2035]: time="2024-12-13T01:55:58.562721074Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference 
\"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 01:55:58.567924 containerd[2035]: time="2024-12-13T01:55:58.567833660Z" level=info msg="CreateContainer within sandbox \"a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 01:55:58.595134 containerd[2035]: time="2024-12-13T01:55:58.595017581Z" level=info msg="CreateContainer within sandbox \"a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"752846a579319b968041c2979ee12a3c15328a42d85d35ea9ea67ae60c2b2b4b\"" Dec 13 01:55:58.596407 containerd[2035]: time="2024-12-13T01:55:58.596159494Z" level=info msg="StartContainer for \"752846a579319b968041c2979ee12a3c15328a42d85d35ea9ea67ae60c2b2b4b\"" Dec 13 01:55:58.665039 systemd[1]: Started cri-containerd-752846a579319b968041c2979ee12a3c15328a42d85d35ea9ea67ae60c2b2b4b.scope - libcontainer container 752846a579319b968041c2979ee12a3c15328a42d85d35ea9ea67ae60c2b2b4b. Dec 13 01:55:58.710295 containerd[2035]: time="2024-12-13T01:55:58.709532324Z" level=info msg="StartContainer for \"752846a579319b968041c2979ee12a3c15328a42d85d35ea9ea67ae60c2b2b4b\" returns successfully" Dec 13 01:55:59.401230 kubelet[2491]: E1213 01:55:59.401149 2491 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:59.428246 kubelet[2491]: E1213 01:55:59.428160 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:00.428997 kubelet[2491]: E1213 01:56:00.428917 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:01.430140 kubelet[2491]: E1213 01:56:01.430016 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:02.430707 kubelet[2491]: E1213 01:56:02.430577 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:03.430900 kubelet[2491]: E1213 01:56:03.430845 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:04.432135 kubelet[2491]: E1213 01:56:04.432069 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:05.432712 kubelet[2491]: E1213 01:56:05.432582 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:05.567540 kubelet[2491]: I1213 01:56:05.567387 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-pgvfp" podStartSLOduration=22.300607025 podStartE2EDuration="27.567359979s" podCreationTimestamp="2024-12-13 01:55:38 +0000 UTC" firstStartedPulling="2024-12-13 01:55:53.29788608 +0000 UTC m=+35.698107428" lastFinishedPulling="2024-12-13 01:55:58.564639034 +0000 UTC m=+40.964860382" observedRunningTime="2024-12-13 01:55:58.827773146 +0000 UTC m=+41.227994518" watchObservedRunningTime="2024-12-13 01:56:05.567359979 +0000 UTC m=+47.967581327" Dec 13 01:56:05.581831 systemd[1]: Created slice kubepods-besteffort-pod865cb3eb_d0af_49d8_9f19_71504448dbff.slice - libcontainer container kubepods-besteffort-pod865cb3eb_d0af_49d8_9f19_71504448dbff.slice. 
Dec 13 01:56:05.691565 kubelet[2491]: I1213 01:56:05.691326 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzzn6\" (UniqueName: \"kubernetes.io/projected/865cb3eb-d0af-49d8-9f19-71504448dbff-kube-api-access-bzzn6\") pod \"nfs-server-provisioner-0\" (UID: \"865cb3eb-d0af-49d8-9f19-71504448dbff\") " pod="default/nfs-server-provisioner-0" Dec 13 01:56:05.691565 kubelet[2491]: I1213 01:56:05.691500 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/865cb3eb-d0af-49d8-9f19-71504448dbff-data\") pod \"nfs-server-provisioner-0\" (UID: \"865cb3eb-d0af-49d8-9f19-71504448dbff\") " pod="default/nfs-server-provisioner-0" Dec 13 01:56:05.890421 containerd[2035]: time="2024-12-13T01:56:05.889935060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:865cb3eb-d0af-49d8-9f19-71504448dbff,Namespace:default,Attempt:0,}" Dec 13 01:56:06.114992 systemd-networkd[1870]: cali60e51b789ff: Link UP Dec 13 01:56:06.118486 systemd-networkd[1870]: cali60e51b789ff: Gained carrier Dec 13 01:56:06.118760 (udev-worker)[3968]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:05.974 [INFO][3972] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.24.71-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 865cb3eb-d0af-49d8-9f19-71504448dbff 1147 0 2024-12-13 01:56:05 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.24.71 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.24.71-k8s-nfs--server--provisioner--0-" Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:05.975 [INFO][3972] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.24.71-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.028 [INFO][3983] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" HandleID="k8s-pod-network.0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" Workload="172.31.24.71-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.052 [INFO][3983] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" 
HandleID="k8s-pod-network.0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" Workload="172.31.24.71-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000409330), Attrs:map[string]string{"namespace":"default", "node":"172.31.24.71", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 01:56:06.028578516 +0000 UTC"}, Hostname:"172.31.24.71", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.052 [INFO][3983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.052 [INFO][3983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.052 [INFO][3983] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.24.71' Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.056 [INFO][3983] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" host="172.31.24.71" Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.063 [INFO][3983] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.24.71" Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.071 [INFO][3983] ipam/ipam.go 489: Trying affinity for 192.168.114.192/26 host="172.31.24.71" Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.074 [INFO][3983] ipam/ipam.go 155: Attempting to load block cidr=192.168.114.192/26 host="172.31.24.71" Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.079 [INFO][3983] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="172.31.24.71" Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.079 [INFO][3983] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" host="172.31.24.71" Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.083 [INFO][3983] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8 Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.096 [INFO][3983] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" host="172.31.24.71" Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.107 [INFO][3983] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.114.195/26] block=192.168.114.192/26 handle="k8s-pod-network.0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" host="172.31.24.71" Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.107 [INFO][3983] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.114.195/26] handle="k8s-pod-network.0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" host="172.31.24.71" Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.108 [INFO][3983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:56:06.150859 containerd[2035]: 2024-12-13 01:56:06.108 [INFO][3983] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.195/26] IPv6=[] ContainerID="0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" HandleID="k8s-pod-network.0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" Workload="172.31.24.71-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:56:06.152594 containerd[2035]: 2024-12-13 01:56:06.111 [INFO][3972] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.24.71-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.71-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"865cb3eb-d0af-49d8-9f19-71504448dbff", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.24.71", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.114.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:06.152594 containerd[2035]: 2024-12-13 01:56:06.111 [INFO][3972] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.114.195/32] ContainerID="0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.24.71-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:56:06.152594 containerd[2035]: 2024-12-13 01:56:06.111 [INFO][3972] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.24.71-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:56:06.152594 containerd[2035]: 2024-12-13 01:56:06.116 [INFO][3972] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.24.71-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:56:06.153321 containerd[2035]: 2024-12-13 01:56:06.120 [INFO][3972] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.24.71-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.71-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"865cb3eb-d0af-49d8-9f19-71504448dbff", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.24.71", ContainerID:"0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.114.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"1e:b2:b0:4f:4c:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:06.153321 containerd[2035]: 2024-12-13 01:56:06.147 [INFO][3972] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.24.71-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:56:06.199824 containerd[2035]: time="2024-12-13T01:56:06.199627205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:06.200095 containerd[2035]: time="2024-12-13T01:56:06.199849802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:06.200095 containerd[2035]: time="2024-12-13T01:56:06.199915529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:06.200430 containerd[2035]: time="2024-12-13T01:56:06.200335559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:06.262161 systemd[1]: Started cri-containerd-0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8.scope - libcontainer container 0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8. 
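A minimal sketch (Python, not from the log) decoding the hexadecimal Port values in the WorkloadEndpointPort dump above back to the decimal ports listed earlier for the nfs-server-provisioner endpoint (nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662).

```python
# Hex values copied from the endpoint dump above; printing them recovers the decimal ports.
ports = {
    "nfs": 0x801, "nlockmgr": 0x8023, "mountd": 0x4e50,
    "rquotad": 0x36b, "rpcbind": 0x6f, "statd": 0x296,
}
for name, value in ports.items():
    print(f"{name:<10} {value}")   # nfs 2049, nlockmgr 32803, mountd 20048, ...
```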
Dec 13 01:56:06.329733 containerd[2035]: time="2024-12-13T01:56:06.329632342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:865cb3eb-d0af-49d8-9f19-71504448dbff,Namespace:default,Attempt:0,} returns sandbox id \"0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8\"" Dec 13 01:56:06.334389 containerd[2035]: time="2024-12-13T01:56:06.334213559Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 01:56:06.433921 kubelet[2491]: E1213 01:56:06.433811 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:07.434799 kubelet[2491]: E1213 01:56:07.434648 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:08.083320 systemd-networkd[1870]: cali60e51b789ff: Gained IPv6LL Dec 13 01:56:08.435211 kubelet[2491]: E1213 01:56:08.435135 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:09.435975 kubelet[2491]: E1213 01:56:09.435863 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:09.717043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703704993.mount: Deactivated successfully. Dec 13 01:56:10.436631 kubelet[2491]: E1213 01:56:10.436550 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:10.564810 ntpd[2010]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:56:10.566302 ntpd[2010]: 13 Dec 01:56:10 ntpd[2010]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:56:11.437290 kubelet[2491]: E1213 01:56:11.437199 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:12.437475 kubelet[2491]: E1213 01:56:12.437392 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:13.107881 containerd[2035]: time="2024-12-13T01:56:13.107249679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:13.109786 containerd[2035]: time="2024-12-13T01:56:13.109653960Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Dec 13 01:56:13.111669 containerd[2035]: time="2024-12-13T01:56:13.111582019Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:13.117492 containerd[2035]: time="2024-12-13T01:56:13.117345892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:13.120158 containerd[2035]: time="2024-12-13T01:56:13.119950857Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest 
\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 6.785570221s" Dec 13 01:56:13.120158 containerd[2035]: time="2024-12-13T01:56:13.120022977Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Dec 13 01:56:13.125310 containerd[2035]: time="2024-12-13T01:56:13.124971833Z" level=info msg="CreateContainer within sandbox \"0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 01:56:13.154301 containerd[2035]: time="2024-12-13T01:56:13.154213037Z" level=info msg="CreateContainer within sandbox \"0333a665028e4336d9de89ecceb3d515426f2ef3e39021872ebd27032395eae8\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"017c042c1fb3b5cafec80ae699b63437112c802448a1e9ebed342b8aa212e6e6\"" Dec 13 01:56:13.155505 containerd[2035]: time="2024-12-13T01:56:13.155306170Z" level=info msg="StartContainer for \"017c042c1fb3b5cafec80ae699b63437112c802448a1e9ebed342b8aa212e6e6\"" Dec 13 01:56:13.218056 systemd[1]: Started cri-containerd-017c042c1fb3b5cafec80ae699b63437112c802448a1e9ebed342b8aa212e6e6.scope - libcontainer container 017c042c1fb3b5cafec80ae699b63437112c802448a1e9ebed342b8aa212e6e6. Dec 13 01:56:13.276608 containerd[2035]: time="2024-12-13T01:56:13.276454422Z" level=info msg="StartContainer for \"017c042c1fb3b5cafec80ae699b63437112c802448a1e9ebed342b8aa212e6e6\" returns successfully" Dec 13 01:56:13.437867 kubelet[2491]: E1213 01:56:13.437756 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:13.882557 kubelet[2491]: I1213 01:56:13.882404 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.093524748 podStartE2EDuration="8.882380198s" podCreationTimestamp="2024-12-13 01:56:05 +0000 UTC" firstStartedPulling="2024-12-13 01:56:06.333575418 +0000 UTC m=+48.733796766" lastFinishedPulling="2024-12-13 01:56:13.12243088 +0000 UTC m=+55.522652216" observedRunningTime="2024-12-13 01:56:13.882217859 +0000 UTC m=+56.282439243" watchObservedRunningTime="2024-12-13 01:56:13.882380198 +0000 UTC m=+56.282601534" Dec 13 01:56:14.438578 kubelet[2491]: E1213 01:56:14.438500 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:15.438820 kubelet[2491]: E1213 01:56:15.438748 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:16.439328 kubelet[2491]: E1213 01:56:16.439255 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:17.439902 kubelet[2491]: E1213 01:56:17.439794 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:18.440811 kubelet[2491]: E1213 01:56:18.440753 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:19.401552 kubelet[2491]: E1213 01:56:19.401448 2491 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:19.441322 kubelet[2491]: 
E1213 01:56:19.441230 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:19.444749 containerd[2035]: time="2024-12-13T01:56:19.444636035Z" level=info msg="StopPodSandbox for \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\"" Dec 13 01:56:19.593226 containerd[2035]: 2024-12-13 01:56:19.527 [WARNING][4148] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.71-k8s-csi--node--driver--tkk48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"029015f5-5b9b-4497-b440-744f1cbc3e91", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.24.71", ContainerID:"db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929", Pod:"csi-node-driver-tkk48", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1ea77be5e6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:19.593226 containerd[2035]: 2024-12-13 01:56:19.527 [INFO][4148] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Dec 13 01:56:19.593226 containerd[2035]: 2024-12-13 01:56:19.527 [INFO][4148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" iface="eth0" netns="" Dec 13 01:56:19.593226 containerd[2035]: 2024-12-13 01:56:19.527 [INFO][4148] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Dec 13 01:56:19.593226 containerd[2035]: 2024-12-13 01:56:19.527 [INFO][4148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Dec 13 01:56:19.593226 containerd[2035]: 2024-12-13 01:56:19.569 [INFO][4154] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" HandleID="k8s-pod-network.30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Workload="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:56:19.593226 containerd[2035]: 2024-12-13 01:56:19.569 [INFO][4154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
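A minimal sketch (Python, not from the log) of where the fe80::ecee:eeff:feee:eeee link-local address in the two ntpd "Listen normally" entries above comes from. The assumption, not stated in the log, is that the host-side cali* veths carry Calico's conventional MAC ee:ee:ee:ee:ee:ee; the code simply applies the standard EUI-64 mapping to that MAC.

```python
# Assumed MAC for the host-side cali* veth (Calico convention, not shown in the log).
mac = [0xEE] * 6

# Standard EUI-64 link-local derivation: insert ff:fe in the middle, flip the universal/local bit.
eui64 = mac[:3] + [0xFF, 0xFE] + mac[3:]
eui64[0] ^= 0x02
groups = [f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2)]
print("fe80::" + ":".join(groups))   # fe80::ecee:eeff:feee:eeee, matching the ntpd entries
```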
Dec 13 01:56:19.593226 containerd[2035]: 2024-12-13 01:56:19.569 [INFO][4154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:19.593226 containerd[2035]: 2024-12-13 01:56:19.582 [WARNING][4154] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" HandleID="k8s-pod-network.30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Workload="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:56:19.593226 containerd[2035]: 2024-12-13 01:56:19.583 [INFO][4154] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" HandleID="k8s-pod-network.30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Workload="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:56:19.593226 containerd[2035]: 2024-12-13 01:56:19.587 [INFO][4154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:19.593226 containerd[2035]: 2024-12-13 01:56:19.590 [INFO][4148] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Dec 13 01:56:19.595069 containerd[2035]: time="2024-12-13T01:56:19.593237162Z" level=info msg="TearDown network for sandbox \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\" successfully" Dec 13 01:56:19.595069 containerd[2035]: time="2024-12-13T01:56:19.593287369Z" level=info msg="StopPodSandbox for \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\" returns successfully" Dec 13 01:56:19.595069 containerd[2035]: time="2024-12-13T01:56:19.594586187Z" level=info msg="RemovePodSandbox for \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\"" Dec 13 01:56:19.595069 containerd[2035]: time="2024-12-13T01:56:19.594651878Z" level=info msg="Forcibly stopping sandbox \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\"" Dec 13 01:56:19.739618 containerd[2035]: 2024-12-13 01:56:19.681 [WARNING][4174] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.71-k8s-csi--node--driver--tkk48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"029015f5-5b9b-4497-b440-744f1cbc3e91", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.24.71", ContainerID:"db384c220ed4c43f8dde2e196147422c758342b7d093c483a5822bf04a3c3929", Pod:"csi-node-driver-tkk48", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1ea77be5e6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:19.739618 containerd[2035]: 2024-12-13 01:56:19.682 [INFO][4174] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Dec 13 01:56:19.739618 containerd[2035]: 2024-12-13 01:56:19.682 [INFO][4174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" iface="eth0" netns="" Dec 13 01:56:19.739618 containerd[2035]: 2024-12-13 01:56:19.682 [INFO][4174] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Dec 13 01:56:19.739618 containerd[2035]: 2024-12-13 01:56:19.682 [INFO][4174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Dec 13 01:56:19.739618 containerd[2035]: 2024-12-13 01:56:19.719 [INFO][4180] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" HandleID="k8s-pod-network.30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Workload="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:56:19.739618 containerd[2035]: 2024-12-13 01:56:19.719 [INFO][4180] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:19.739618 containerd[2035]: 2024-12-13 01:56:19.720 [INFO][4180] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:19.739618 containerd[2035]: 2024-12-13 01:56:19.731 [WARNING][4180] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" HandleID="k8s-pod-network.30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Workload="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:56:19.739618 containerd[2035]: 2024-12-13 01:56:19.731 [INFO][4180] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" HandleID="k8s-pod-network.30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Workload="172.31.24.71-k8s-csi--node--driver--tkk48-eth0" Dec 13 01:56:19.739618 containerd[2035]: 2024-12-13 01:56:19.734 [INFO][4180] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:19.739618 containerd[2035]: 2024-12-13 01:56:19.736 [INFO][4174] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd" Dec 13 01:56:19.739618 containerd[2035]: time="2024-12-13T01:56:19.738759980Z" level=info msg="TearDown network for sandbox \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\" successfully" Dec 13 01:56:19.743163 containerd[2035]: time="2024-12-13T01:56:19.743096831Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:56:19.743163 containerd[2035]: time="2024-12-13T01:56:19.743198720Z" level=info msg="RemovePodSandbox \"30d65644660d5fae02a01ee12a0d56fab27192cc265648906f4d2e346adc79bd\" returns successfully" Dec 13 01:56:19.743943 containerd[2035]: time="2024-12-13T01:56:19.743890894Z" level=info msg="StopPodSandbox for \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\"" Dec 13 01:56:19.900452 containerd[2035]: 2024-12-13 01:56:19.832 [WARNING][4198] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"69a3915f-9873-4ee1-80f3-587390904df5", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.24.71", ContainerID:"a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd", Pod:"nginx-deployment-8587fbcb89-pgvfp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.114.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali24ab358b5f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:19.900452 containerd[2035]: 2024-12-13 01:56:19.833 [INFO][4198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Dec 13 01:56:19.900452 containerd[2035]: 2024-12-13 01:56:19.833 [INFO][4198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" iface="eth0" netns="" Dec 13 01:56:19.900452 containerd[2035]: 2024-12-13 01:56:19.833 [INFO][4198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Dec 13 01:56:19.900452 containerd[2035]: 2024-12-13 01:56:19.833 [INFO][4198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Dec 13 01:56:19.900452 containerd[2035]: 2024-12-13 01:56:19.874 [INFO][4206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" HandleID="k8s-pod-network.f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Workload="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:56:19.900452 containerd[2035]: 2024-12-13 01:56:19.875 [INFO][4206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:19.900452 containerd[2035]: 2024-12-13 01:56:19.875 [INFO][4206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:19.900452 containerd[2035]: 2024-12-13 01:56:19.891 [WARNING][4206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" HandleID="k8s-pod-network.f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Workload="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:56:19.900452 containerd[2035]: 2024-12-13 01:56:19.891 [INFO][4206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" HandleID="k8s-pod-network.f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Workload="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:56:19.900452 containerd[2035]: 2024-12-13 01:56:19.894 [INFO][4206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:19.900452 containerd[2035]: 2024-12-13 01:56:19.897 [INFO][4198] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Dec 13 01:56:19.902454 containerd[2035]: time="2024-12-13T01:56:19.900446609Z" level=info msg="TearDown network for sandbox \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\" successfully" Dec 13 01:56:19.902454 containerd[2035]: time="2024-12-13T01:56:19.900513884Z" level=info msg="StopPodSandbox for \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\" returns successfully" Dec 13 01:56:19.902454 containerd[2035]: time="2024-12-13T01:56:19.901534105Z" level=info msg="RemovePodSandbox for \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\"" Dec 13 01:56:19.902454 containerd[2035]: time="2024-12-13T01:56:19.901615808Z" level=info msg="Forcibly stopping sandbox \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\"" Dec 13 01:56:20.051535 containerd[2035]: 2024-12-13 01:56:19.982 [WARNING][4225] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"69a3915f-9873-4ee1-80f3-587390904df5", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.24.71", ContainerID:"a5675cb40d16440e978ea45cc8f6ba3f858cfc6d96ebfec41b47dc4af79a8ebd", Pod:"nginx-deployment-8587fbcb89-pgvfp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.114.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali24ab358b5f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:20.051535 containerd[2035]: 2024-12-13 01:56:19.982 [INFO][4225] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Dec 13 01:56:20.051535 containerd[2035]: 2024-12-13 01:56:19.982 [INFO][4225] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" iface="eth0" netns="" Dec 13 01:56:20.051535 containerd[2035]: 2024-12-13 01:56:19.982 [INFO][4225] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Dec 13 01:56:20.051535 containerd[2035]: 2024-12-13 01:56:19.982 [INFO][4225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Dec 13 01:56:20.051535 containerd[2035]: 2024-12-13 01:56:20.026 [INFO][4231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" HandleID="k8s-pod-network.f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Workload="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:56:20.051535 containerd[2035]: 2024-12-13 01:56:20.026 [INFO][4231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:20.051535 containerd[2035]: 2024-12-13 01:56:20.027 [INFO][4231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:20.051535 containerd[2035]: 2024-12-13 01:56:20.042 [WARNING][4231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" HandleID="k8s-pod-network.f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Workload="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:56:20.051535 containerd[2035]: 2024-12-13 01:56:20.042 [INFO][4231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" HandleID="k8s-pod-network.f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Workload="172.31.24.71-k8s-nginx--deployment--8587fbcb89--pgvfp-eth0" Dec 13 01:56:20.051535 containerd[2035]: 2024-12-13 01:56:20.046 [INFO][4231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:20.051535 containerd[2035]: 2024-12-13 01:56:20.048 [INFO][4225] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94" Dec 13 01:56:20.052748 containerd[2035]: time="2024-12-13T01:56:20.051502668Z" level=info msg="TearDown network for sandbox \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\" successfully" Dec 13 01:56:20.057205 containerd[2035]: time="2024-12-13T01:56:20.057110594Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:56:20.057461 containerd[2035]: time="2024-12-13T01:56:20.057208010Z" level=info msg="RemovePodSandbox \"f68af58b4da7c85ed6f50b194c615d9ca17c873d6524141f470f54fefa79fe94\" returns successfully" Dec 13 01:56:20.442243 kubelet[2491]: E1213 01:56:20.442161 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:21.443172 kubelet[2491]: E1213 01:56:21.443082 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:22.444122 kubelet[2491]: E1213 01:56:22.444016 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:23.445374 kubelet[2491]: E1213 01:56:23.445282 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:24.446411 kubelet[2491]: E1213 01:56:24.446329 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:25.447426 kubelet[2491]: E1213 01:56:25.447339 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:26.448092 kubelet[2491]: E1213 01:56:26.448003 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:27.449064 kubelet[2491]: E1213 01:56:27.449003 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:28.450198 kubelet[2491]: E1213 01:56:28.450110 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:29.450547 kubelet[2491]: E1213 01:56:29.450475 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:30.451449 
kubelet[2491]: E1213 01:56:30.451358 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:31.452615 kubelet[2491]: E1213 01:56:31.452521 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:32.453879 kubelet[2491]: E1213 01:56:32.453799 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:33.454947 kubelet[2491]: E1213 01:56:33.454845 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:34.455984 kubelet[2491]: E1213 01:56:34.455893 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:35.456769 kubelet[2491]: E1213 01:56:35.456692 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:36.457799 kubelet[2491]: E1213 01:56:36.457717 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:37.458887 kubelet[2491]: E1213 01:56:37.458803 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:38.459904 kubelet[2491]: E1213 01:56:38.459832 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:38.550894 systemd[1]: Created slice kubepods-besteffort-podf7bba7a2_3fb9_459c_8409_5d3d525be012.slice - libcontainer container kubepods-besteffort-podf7bba7a2_3fb9_459c_8409_5d3d525be012.slice. Dec 13 01:56:38.717967 kubelet[2491]: I1213 01:56:38.716995 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5jv6\" (UniqueName: \"kubernetes.io/projected/f7bba7a2-3fb9-459c-8409-5d3d525be012-kube-api-access-h5jv6\") pod \"test-pod-1\" (UID: \"f7bba7a2-3fb9-459c-8409-5d3d525be012\") " pod="default/test-pod-1" Dec 13 01:56:38.717967 kubelet[2491]: I1213 01:56:38.717180 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ec85cc24-f7fd-4afa-bbd2-ef13101f0cfa\" (UniqueName: \"kubernetes.io/nfs/f7bba7a2-3fb9-459c-8409-5d3d525be012-pvc-ec85cc24-f7fd-4afa-bbd2-ef13101f0cfa\") pod \"test-pod-1\" (UID: \"f7bba7a2-3fb9-459c-8409-5d3d525be012\") " pod="default/test-pod-1" Dec 13 01:56:38.857724 kernel: FS-Cache: Loaded Dec 13 01:56:38.904719 kernel: RPC: Registered named UNIX socket transport module. Dec 13 01:56:38.904893 kernel: RPC: Registered udp transport module. Dec 13 01:56:38.904942 kernel: RPC: Registered tcp transport module. Dec 13 01:56:38.904985 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 01:56:38.905661 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 01:56:39.249526 kernel: NFS: Registering the id_resolver key type Dec 13 01:56:39.249658 kernel: Key type id_resolver registered Dec 13 01:56:39.249762 kernel: Key type id_legacy registered Dec 13 01:56:39.288406 nfsidmap[4299]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Dec 13 01:56:39.294972 nfsidmap[4300]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Dec 13 01:56:39.400512 kubelet[2491]: E1213 01:56:39.400447 2491 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:39.458837 containerd[2035]: time="2024-12-13T01:56:39.458538269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f7bba7a2-3fb9-459c-8409-5d3d525be012,Namespace:default,Attempt:0,}" Dec 13 01:56:39.460970 kubelet[2491]: E1213 01:56:39.460873 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:39.689009 systemd-networkd[1870]: cali5ec59c6bf6e: Link UP Dec 13 01:56:39.691226 (udev-worker)[4294]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:56:39.694320 systemd-networkd[1870]: cali5ec59c6bf6e: Gained carrier Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.546 [INFO][4302] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.24.71-k8s-test--pod--1-eth0 default f7bba7a2-3fb9-459c-8409-5d3d525be012 1243 0 2024-12-13 01:56:06 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.24.71 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.24.71-k8s-test--pod--1-" Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.546 [INFO][4302] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.24.71-k8s-test--pod--1-eth0" Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.600 [INFO][4312] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" HandleID="k8s-pod-network.83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" Workload="172.31.24.71-k8s-test--pod--1-eth0" Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.628 [INFO][4312] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" HandleID="k8s-pod-network.83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" Workload="172.31.24.71-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002214a0), Attrs:map[string]string{"namespace":"default", "node":"172.31.24.71", "pod":"test-pod-1", "timestamp":"2024-12-13 01:56:39.600058534 +0000 UTC"}, Hostname:"172.31.24.71", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 
01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.629 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.629 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.629 [INFO][4312] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.24.71' Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.634 [INFO][4312] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" host="172.31.24.71" Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.641 [INFO][4312] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.24.71" Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.649 [INFO][4312] ipam/ipam.go 489: Trying affinity for 192.168.114.192/26 host="172.31.24.71" Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.653 [INFO][4312] ipam/ipam.go 155: Attempting to load block cidr=192.168.114.192/26 host="172.31.24.71" Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.658 [INFO][4312] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="172.31.24.71" Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.658 [INFO][4312] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" host="172.31.24.71" Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.661 [INFO][4312] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.667 [INFO][4312] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" host="172.31.24.71" Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.679 [INFO][4312] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.114.196/26] block=192.168.114.192/26 handle="k8s-pod-network.83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" host="172.31.24.71" Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.679 [INFO][4312] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.114.196/26] handle="k8s-pod-network.83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" host="172.31.24.71" Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.679 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.679 [INFO][4312] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.196/26] IPv6=[] ContainerID="83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" HandleID="k8s-pod-network.83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" Workload="172.31.24.71-k8s-test--pod--1-eth0" Dec 13 01:56:39.714100 containerd[2035]: 2024-12-13 01:56:39.683 [INFO][4302] cni-plugin/k8s.go 386: Populated endpoint ContainerID="83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.24.71-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.71-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f7bba7a2-3fb9-459c-8409-5d3d525be012", ResourceVersion:"1243", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.24.71", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.114.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:39.719972 containerd[2035]: 2024-12-13 01:56:39.683 [INFO][4302] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.114.196/32] ContainerID="83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.24.71-k8s-test--pod--1-eth0" Dec 13 01:56:39.719972 containerd[2035]: 2024-12-13 01:56:39.683 [INFO][4302] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.24.71-k8s-test--pod--1-eth0" Dec 13 01:56:39.719972 containerd[2035]: 2024-12-13 01:56:39.691 [INFO][4302] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.24.71-k8s-test--pod--1-eth0" Dec 13 01:56:39.719972 containerd[2035]: 2024-12-13 01:56:39.695 [INFO][4302] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.24.71-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.71-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f7bba7a2-3fb9-459c-8409-5d3d525be012", ResourceVersion:"1243", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 6, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.24.71", ContainerID:"83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.114.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"a2:17:f3:26:ba:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:39.719972 containerd[2035]: 2024-12-13 01:56:39.711 [INFO][4302] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.24.71-k8s-test--pod--1-eth0" Dec 13 01:56:39.758096 containerd[2035]: time="2024-12-13T01:56:39.755665896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:39.758096 containerd[2035]: time="2024-12-13T01:56:39.755919353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:39.758096 containerd[2035]: time="2024-12-13T01:56:39.755965902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:39.758096 containerd[2035]: time="2024-12-13T01:56:39.756129464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:39.785107 systemd[1]: Started cri-containerd-83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f.scope - libcontainer container 83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f. 
Dec 13 01:56:39.876285 containerd[2035]: time="2024-12-13T01:56:39.876200643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f7bba7a2-3fb9-459c-8409-5d3d525be012,Namespace:default,Attempt:0,} returns sandbox id \"83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f\"" Dec 13 01:56:39.882154 containerd[2035]: time="2024-12-13T01:56:39.881811088Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:56:40.194166 containerd[2035]: time="2024-12-13T01:56:40.194093422Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:40.195928 containerd[2035]: time="2024-12-13T01:56:40.195836078Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 01:56:40.202853 containerd[2035]: time="2024-12-13T01:56:40.202772304Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 320.890103ms" Dec 13 01:56:40.203266 containerd[2035]: time="2024-12-13T01:56:40.203063255Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 01:56:40.207709 containerd[2035]: time="2024-12-13T01:56:40.207634144Z" level=info msg="CreateContainer within sandbox \"83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 01:56:40.241816 containerd[2035]: time="2024-12-13T01:56:40.241569308Z" level=info msg="CreateContainer within sandbox \"83d78d5b79c14492cf66a3d95e126e3906d943957bc2ff29b7fab4a9b357e87f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"0166044391b6e73becb4a5ec542db5025c0102d226de915f5bef6de23cefde5b\"" Dec 13 01:56:40.242886 containerd[2035]: time="2024-12-13T01:56:40.242372007Z" level=info msg="StartContainer for \"0166044391b6e73becb4a5ec542db5025c0102d226de915f5bef6de23cefde5b\"" Dec 13 01:56:40.315088 systemd[1]: Started cri-containerd-0166044391b6e73becb4a5ec542db5025c0102d226de915f5bef6de23cefde5b.scope - libcontainer container 0166044391b6e73becb4a5ec542db5025c0102d226de915f5bef6de23cefde5b. Dec 13 01:56:40.365264 containerd[2035]: time="2024-12-13T01:56:40.365190939Z" level=info msg="StartContainer for \"0166044391b6e73becb4a5ec542db5025c0102d226de915f5bef6de23cefde5b\" returns successfully" Dec 13 01:56:40.462018 kubelet[2491]: E1213 01:56:40.461807 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:40.839897 systemd[1]: run-containerd-runc-k8s.io-0166044391b6e73becb4a5ec542db5025c0102d226de915f5bef6de23cefde5b-runc.f05xxM.mount: Deactivated successfully. 
Dec 13 01:56:40.971332 kubelet[2491]: I1213 01:56:40.971230 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=34.647499056 podStartE2EDuration="34.971205626s" podCreationTimestamp="2024-12-13 01:56:06 +0000 UTC" firstStartedPulling="2024-12-13 01:56:39.880766831 +0000 UTC m=+82.280988179" lastFinishedPulling="2024-12-13 01:56:40.204473402 +0000 UTC m=+82.604694749" observedRunningTime="2024-12-13 01:56:40.970567305 +0000 UTC m=+83.370788653" watchObservedRunningTime="2024-12-13 01:56:40.971205626 +0000 UTC m=+83.371426974" Dec 13 01:56:41.462775 kubelet[2491]: E1213 01:56:41.462704 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:41.555129 systemd-networkd[1870]: cali5ec59c6bf6e: Gained IPv6LL Dec 13 01:56:42.463372 kubelet[2491]: E1213 01:56:42.463291 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:43.463919 kubelet[2491]: E1213 01:56:43.463841 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:43.564580 ntpd[2010]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:56:43.565264 ntpd[2010]: 13 Dec 01:56:43 ntpd[2010]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:56:44.465060 kubelet[2491]: E1213 01:56:44.464983 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:45.465400 kubelet[2491]: E1213 01:56:45.465328 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:46.465911 kubelet[2491]: E1213 01:56:46.465818 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:47.466469 kubelet[2491]: E1213 01:56:47.466398 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:48.467095 kubelet[2491]: E1213 01:56:48.467022 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:49.467485 kubelet[2491]: E1213 01:56:49.467396 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:50.468056 kubelet[2491]: E1213 01:56:50.467987 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:51.468232 kubelet[2491]: E1213 01:56:51.468163 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:52.468560 kubelet[2491]: E1213 01:56:52.468473 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:53.469335 kubelet[2491]: E1213 01:56:53.469266 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:54.470090 kubelet[2491]: E1213 01:56:54.470011 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:55.470549 kubelet[2491]: E1213 01:56:55.470426 2491 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:56.471079 kubelet[2491]: E1213 01:56:56.470999 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:57.472037 kubelet[2491]: E1213 01:56:57.471928 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:58.472784 kubelet[2491]: E1213 01:56:58.472661 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:59.401208 kubelet[2491]: E1213 01:56:59.401015 2491 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:59.473349 kubelet[2491]: E1213 01:56:59.473239 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:00.474180 kubelet[2491]: E1213 01:57:00.474078 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:01.475221 kubelet[2491]: E1213 01:57:01.475113 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:02.475955 kubelet[2491]: E1213 01:57:02.475842 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:03.477152 kubelet[2491]: E1213 01:57:03.477078 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:04.478267 kubelet[2491]: E1213 01:57:04.478194 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:05.478980 kubelet[2491]: E1213 01:57:05.478897 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:06.480903 kubelet[2491]: E1213 01:57:06.480820 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:07.481299 kubelet[2491]: E1213 01:57:07.481221 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:08.482564 kubelet[2491]: E1213 01:57:08.482472 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:09.483236 kubelet[2491]: E1213 01:57:09.483154 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:10.484498 kubelet[2491]: E1213 01:57:10.484380 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:11.484618 kubelet[2491]: E1213 01:57:11.484543 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:12.207490 kubelet[2491]: E1213 01:57:12.207137 2491 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.71?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 01:57:12.486100 kubelet[2491]: E1213 
01:57:12.485635 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:13.486873 kubelet[2491]: E1213 01:57:13.486775 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:14.487607 kubelet[2491]: E1213 01:57:14.487538 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:15.488236 kubelet[2491]: E1213 01:57:15.488172 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:16.489228 kubelet[2491]: E1213 01:57:16.489119 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:17.490307 kubelet[2491]: E1213 01:57:17.490223 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:18.490606 kubelet[2491]: E1213 01:57:18.490524 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:19.401174 kubelet[2491]: E1213 01:57:19.401098 2491 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:19.490912 kubelet[2491]: E1213 01:57:19.490850 2491 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"